id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
4,923,933 | https://en.wikipedia.org/wiki/Terraforming%20of%20Mars | The terraforming of Mars or the terraformation of Mars is a hypothetical procedure that would consist of a planetary engineering project or concurrent projects aspiring to transform Mars from a planet hostile to life to one that could sustainably host humans and other lifeforms free of protection or mediation. The process would involve the modification of the planet's extant climate, atmosphere, and surface through a variety of resource-intensive initiatives, as well as the installation of a novel ecological system or systems.
Justifications for choosing Mars over other potential terraforming targets include the presence of water and a geological history that suggests it once harbored a dense atmosphere similar to Earth's. Hazards and difficulties include low gravity, toxic soil, low light levels relative to Earth's, and the lack of a magnetic field.
The terraforming of Mars is considered to be infeasible using present-day technology. Disagreement exists about whether terraforming the planet would be feasible with future technology and whether it should be attempted. Reasons for supporting terraforming the planet include allaying concerns about resource consumption and depletion on Earth and arguments that the altering and subsequent or concurrent settlement of other planets decreases the odds of humanity's extinction. Reasons for objecting to terraforming the planet include the ethical concerns of terraforming and the considerable energy and resource costs that such an undertaking would involve.
Motivation and side effects
Future population growth, demand for resources, and an alternate solution to the doomsday argument may require human colonization of bodies other than Earth, such as Mars, the Moon, and other objects. Space colonization would facilitate harvesting the Solar System's energy and material resources.
In many aspects, Mars is the most Earth-like of all the other planets in the Solar System. It is thought that Mars had a more Earth-like environment early in its geological history, with a thicker atmosphere and abundant water that was lost over the course of hundreds of millions of years through atmospheric escape. Given the foundations of similarity and proximity, Mars would make one of the most plausible terraforming targets in the Solar System.
Side effects of terraforming include the potential displacement or destruction of any indigenous life if such life exists.
Challenges and limitations
The Martian environment presents several terraforming challenges to overcome and the extent of terraforming may be limited by certain key environmental factors. The process of terraforming aims to mitigate the following distinctions between Mars and Earth, among others:
Reduced light levels (about 60% of Earth's)
Low surface gravity (38% of Earth's)
Unbreathable atmosphere
Low atmospheric pressure (about 1% of Earth's; well below the Armstrong limit)
Ionizing solar and cosmic radiation at the surface
Low average surface temperature of about −63 °C (210 K), compared to Earth's average of about 14 °C (287 K)
Molecular instability — bonds between atoms break down in critical molecules such as organic compounds
Global dust storms
No natural food source
Toxic soil
No global magnetic field to shield against the solar wind
Countering the effects of space weather
Mars has no intrinsic global magnetic field; instead, the solar wind interacts directly with the atmosphere of Mars, leading to the formation of an induced magnetosphere from magnetic field tubes. This poses challenges for mitigating solar radiation and retaining an atmosphere.
The lack of a magnetic field, its relatively small mass, and its atmospheric photochemistry all would have contributed to the evaporation and loss of its surface liquid water over time. Solar wind–induced ejection of Martian atmospheric atoms has been detected by Mars-orbiting probes, indicating that the solar wind has stripped the Martian atmosphere over time. For comparison, while Venus has a dense atmosphere, it has only traces of water vapor (20 ppm), as it lacks a large, dipole-induced magnetic field.
Earth's ozone layer provides additional protection. Ultraviolet light is blocked before it can dissociate water into hydrogen and oxygen.
Low gravity and pressure
The surface gravity on Mars is 38% of that on Earth. It is not known if this is enough to prevent the health problems associated with weightlessness.
Mars's atmosphere has about 1% the pressure of the Earth's at sea level. It is estimated that there is sufficient ice in the regolith and the south polar cap to form a substantial atmosphere if it is released by planetary warming. The reappearance of liquid water on the Martian surface would add to the warming effects and atmospheric density, but the lower gravity of Mars requires 2.6 times Earth's column airmass to obtain the optimum pressure at the surface. Additional volatiles to increase the atmosphere's density would have to be supplied from an external source, such as redirecting several massive asteroids (40–400 billion tonnes total) containing ammonia (NH3) as a source of nitrogen.
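As a back-of-envelope check of the 2.6× figure, hydrostatic balance gives surface pressure as column mass times surface gravity, P = σg, so matching Earth's pressure under 38% gravity needs 1/0.38 ≈ 2.6 times the column mass. A minimal sketch in Python, with standard constants assumed rather than taken from the article:

```python
# Hydrostatic check of the 2.6x column-mass factor: P = sigma * g.
G_EARTH = 9.81            # m/s^2
G_MARS = 0.38 * G_EARTH   # m/s^2 (38% of Earth's)
P_TARGET = 101_325        # Pa, Earth sea-level pressure as the target

sigma_earth = P_TARGET / G_EARTH   # air column mass per m^2 on Earth
sigma_mars = P_TARGET / G_MARS     # column mass needed on Mars for same P

print(f"Earth column: {sigma_earth:,.0f} kg/m^2")
print(f"Mars column:  {sigma_mars:,.0f} kg/m^2")
print(f"Ratio: {sigma_mars / sigma_earth:.2f}")   # ~2.63
```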
Breathing on Mars
Current conditions in the Martian atmosphere, at less than 1 kPa of atmospheric pressure, are significantly below the Armstrong limit of about 6 kPa, below which very low pressure causes exposed bodily liquids such as saliva, tears, and the liquids wetting the alveoli within the lungs to boil away. Without a pressure suit, no amount of breathable oxygen delivered by any means will sustain oxygen-breathing life for more than a few minutes. In the NASA technical report Rapid (Explosive) Decompression Emergencies in Pressure-Suited Subjects, after exposure to pressure below the Armstrong limit, a survivor reported that his "last conscious memory was of the water on his tongue beginning to boil". In these conditions humans die within minutes unless a pressure suit provides life support.
If Mars' atmospheric pressure could rise above the Armstrong limit, a pressure suit would no longer be required: visitors would only need to wear a mask that supplied 100% oxygen under positive pressure. A further increase in atmospheric pressure would allow a simple mask supplying pure oxygen. This might look similar to mountain climbers who venture into pressures below about a third of sea-level pressure, also called the death zone, where an insufficient amount of bottled oxygen has often resulted in hypoxia with fatalities. However, if the increase in atmospheric pressure were achieved by increasing the concentration of CO2 (or another toxic gas), the mask would have to ensure that the external atmosphere did not enter the breathing apparatus. CO2 concentrations as low as 1% cause drowsiness in humans. Concentrations of 7% to 10% may cause suffocation, even in the presence of sufficient oxygen. (See Carbon dioxide toxicity.)
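The pressure thresholds above can be estimated with the standard alveolar gas equation from respiratory physiology; the sketch below is illustrative only, using typical textbook values that are not taken from this article:

```python
# Illustrative minimum ambient pressure for breathing pure O2 without
# positive pressure, via the alveolar gas equation:
#   PA_O2 = FiO2 * (Pb - P_H2O) - Pa_CO2 / R
# All values are typical physiology-textbook numbers (assumptions).
P_H2O = 6.3      # kPa, water vapor at body temperature (47 mmHg)
PA_CO2 = 5.3     # kPa, typical alveolar CO2 (40 mmHg)
RER = 0.8        # respiratory exchange ratio
PA_O2_MIN = 13.3 # kPa, roughly sea-level alveolar O2

# With FiO2 = 1 (pure oxygen), solve for the ambient pressure Pb:
pb_min = PA_O2_MIN + P_H2O + PA_CO2 / RER
print(f"Minimum ambient pressure on pure O2: ~{pb_min:.0f} kPa")  # ~26 kPa
```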
In 2021, the NASA Mars rover Perseverance made oxygen on Mars with its MOXIE experiment. However, the process is complex and takes a considerable amount of time to produce a small amount of oxygen.
Advantages
According to scientists, Mars exists on the outer edge of the habitable zone, a region of the Solar System where liquid water on the surface may be supported if concentrated greenhouse gases could increase the atmospheric pressure. The lack of both a magnetic field and geologic activity on Mars may be a result of its relatively small size, which allowed the interior to cool more quickly than Earth's, although the details of such a process are still not well understood.
There are strong indications that Mars once had an atmosphere as thick as Earth's during an earlier stage in its development, and that its pressure supported abundant liquid water at the surface. Although water appears to have once been present on the Martian surface, ground ice currently exists from mid-latitudes to the poles. The soil and atmosphere of Mars contain many of the main elements crucial to life, including sulfur, nitrogen, hydrogen, oxygen, phosphorus and carbon.
Any climate change induced in the near term is likely to be driven by greenhouse warming produced by an increase in atmospheric carbon dioxide (CO2) and a consequent increase in atmospheric water vapor. These two gases are the only likely sources of greenhouse warming that are available in large quantities in Mars' environment. Large amounts of water ice exist below the Martian surface, as well as on the surface at the poles, where it is mixed with dry ice (frozen CO2). Significant amounts of water are located at the south pole of Mars, which, if melted, would correspond to a planetwide ocean 5–11 meters deep. Frozen carbon dioxide at the poles sublimes into the atmosphere during the Martian summers, and small amounts of water residue are left behind, which fast winds sweep off the poles. This seasonal occurrence transports large amounts of dust and water ice into the atmosphere, forming Earth-like ice clouds.
Most of the oxygen in the Martian atmosphere is bound up in carbon dioxide (CO2), the main atmospheric component. Molecular oxygen (O2) exists only in trace amounts. Large amounts of oxygen can also be found in metal oxides on the Martian surface, and in the soil in the form of per-nitrates. An analysis of soil samples taken by the Phoenix lander indicated the presence of perchlorate, which has been used to liberate oxygen in chemical oxygen generators. Electrolysis could be employed to separate water on Mars into oxygen and hydrogen if sufficient liquid water and electricity were available; the hydrogen, however, if vented into the atmosphere, would quickly escape into space.
Proposed methods and strategies
Terraforming Mars would entail three major interlaced changes: building up the magnetosphere, building up the atmosphere, and raising the temperature. The atmosphere of Mars is relatively thin and has a very low surface pressure. Because its atmosphere consists mainly of CO2, a known greenhouse gas, once Mars begins to heat, the CO2 may help to keep thermal energy near the surface. Moreover, as it heats, more CO2 should enter the atmosphere from the frozen reserves on the poles, enhancing the greenhouse effect. This means that the two processes of building the atmosphere and heating it would augment each other, favoring terraforming. However, it would be difficult to keep the atmosphere together because of the lack of a protective global magnetic field against erosion by the solar wind.
Importing ammonia
One method of augmenting the Martian atmosphere is to introduce ammonia (NH3). Large amounts of ammonia are likely to exist in frozen form on minor planets orbiting in the outer Solar System. It might be possible to redirect the orbits of these or smaller ammonia-rich objects so that they collide with Mars, thereby transferring the ammonia into the Martian atmosphere. Ammonia is not stable in the Martian atmosphere, however. It breaks down into (diatomic) nitrogen and hydrogen after a few hours. Thus, though ammonia is a powerful greenhouse gas, it is unlikely to generate much planetary warming. Presumably, the nitrogen gas would eventually be depleted by the same processes that stripped Mars of much of its original atmosphere, but these processes are thought to have required hundreds of millions of years. Being much lighter, the hydrogen would be removed much more quickly. Carbon dioxide is 2.5 times the density of ammonia, and nitrogen gas, which Mars barely holds on to, is more than 1.5 times the density, so any imported ammonia that did not break down would also be lost quickly into space.
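The density comparisons at the end of this paragraph follow directly from molar masses, since ideal gases at the same temperature and pressure have densities proportional to molar mass; a quick check:

```python
# Density ratios of ideal gases at equal T and P reduce to molar-mass ratios.
MOLAR_MASS = {"NH3": 17.03, "N2": 28.01, "CO2": 44.01}  # g/mol

print(f"CO2/NH3: {MOLAR_MASS['CO2'] / MOLAR_MASS['NH3']:.2f}")  # ~2.6
print(f"N2/NH3:  {MOLAR_MASS['N2'] / MOLAR_MASS['NH3']:.2f}")   # ~1.6
```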
Importing hydrocarbons
Another way to create a Martian atmosphere would be to import methane (CH4) or other hydrocarbons, which are common in Titan's atmosphere and on its surface; the methane could be vented into the atmosphere where it would act to compound the greenhouse effect. However, like ammonia (NH3), methane is a relatively light gas. It is in fact even less dense than ammonia and so would similarly be lost into space if it were introduced, and at a faster rate than ammonia. Even if a method could be found to prevent it escaping into space, methane can exist in the Martian atmosphere for only a limited period before it is destroyed. Estimates of its lifetime range from 0.6 to 4 years.
Use of fluorine compounds
Especially powerful greenhouse gases, such as sulfur hexafluoride, chlorofluorocarbons (CFCs), or perfluorocarbons (PFCs), have been suggested both as a means of initially warming Mars and of maintaining long-term climate stability. These gases are proposed for introduction because they generate a greenhouse effect thousands of times stronger than that of CO2. Fluorine-based compounds such as sulfur hexafluoride and perfluorocarbons are preferable to chlorine-based ones, as the latter destroy ozone. It has been estimated that approximately 0.3 microbars of CFCs would need to be introduced into Mars' atmosphere to sublimate the south polar glaciers. This is equivalent to a mass of approximately 39 million tonnes, that is, about three times the amount of CFCs manufactured on Earth from 1972 to 1992 (when CFC production was banned by international treaty). Maintaining the temperature would require continual production of such compounds, as they are destroyed by photolysis. It has been estimated that introducing 170 kilotons of optimal greenhouse compounds (CF3CF2CF3, CF3SCF2CF3, SF6, SF5CF3, SF4(CF3)2) annually would be sufficient to maintain a 70 K greenhouse effect, given a terraformed atmosphere with Earth-like pressure and composition.
Typical proposals envision producing the gases on Mars using locally extracted materials, nuclear power, and a significant industrial effort. The potential for mining fluorine-containing minerals to obtain the raw material necessary for the production of CFCs and PFCs is supported by mineralogical surveys of Mars that estimate the elemental presence of fluorine in the bulk composition of Mars at 32 ppm by mass (as compared to 19.4 ppm for the Earth).
Alternatively, CFCs might be introduced by sending rockets with payloads of compressed CFCs on collision courses with Mars. When the rockets crashed into the surface they would release their payloads into the atmosphere. A steady barrage of these "CFC rockets" would need to be sustained for a little over a decade while Mars is changed chemically and becomes warmer.
Use of conductive nanorods
A 2024 study proposed using nanorods consisting of a conductive material, such as aluminum or iron, made by processing Martian minerals. These nanorods would scatter and absorb the thermal infrared upwelling from the surface, thus warming the planet. This process is claimed to be over 5,000 times more effective (in terms of warming per unit mass) than warming using fluorine compounds.
Use of orbital mirrors
Mirrors made of thin aluminized PET film could be placed in orbit around Mars to increase the total insolation it receives. This would direct sunlight onto the surface and could increase Mars's surface temperature directly. A mirror of 125 km radius could be positioned as a statite, using its effectiveness as a solar sail to hover in a stationary position relative to Mars, near the poles, to sublimate the ice sheet and contribute to the warming greenhouse effect. However, certain problems have been found with this approach; the main concern is the difficulty of launching such large mirrors from Earth.
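For scale, a hedged estimate of the sunlight such a mirror would intercept, assuming a solar flux at Mars of roughly 590 W/m² (an assumed value, not given in the text):

```python
import math

# Power intercepted by a 125 km radius mirror near Mars.
S_MARS = 590.0     # W/m^2, assumed solar flux at Mars's distance
R_MIRROR = 125e3   # m

power_w = math.pi * R_MIRROR**2 * S_MARS
print(f"Intercepted sunlight: {power_w:.2e} W")   # ~2.9e13 W (~29 TW)
```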
Use of nuclear weapons
Elon Musk has proposed terraforming Mars by detonating nuclear weapons on the Martian polar ice caps to vaporize them and release carbon dioxide and water vapor into the atmosphere. Carbon dioxide and water vapor are greenhouse gases, and the resultant thicker atmosphere would trap heat from the Sun, increasing the planet's temperature. The formation of liquid water could be very favorable for oxygen-producing plants, and thus, human survival.
Studies suggest that even if all the CO2 trapped in Mars' polar ice and regolith were released, it would not be enough to provide significant greenhouse warming to turn Mars into an Earth-like planet.
Another criticism is that it would stir up enough dust and particles to block out a significant portion of the incoming sunlight, causing a nuclear winter, the opposite of the goal.
Albedo reduction
Reducing the albedo of the Martian surface would also make more efficient use of incoming sunlight in terms of heat absorption. This could be done by spreading dark dust from Mars's moons, Phobos and Deimos, which are among the blackest bodies in the Solar System, or by introducing dark extremophile microbial life forms such as lichens, algae and bacteria. The ground would then absorb more sunlight, warming the atmosphere. However, Mars is already the second-darkest planet in the Solar System, absorbing over 70% of incoming sunlight, so the scope for darkening it further is small.
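The limited payoff can be illustrated with the standard radiative-equilibrium formula T = [S(1 − A)/4σ]⁻ raised to the 1/4 power; the sketch below, with an assumed solar flux at Mars, shows that darkening the planet from albedo 0.25 to 0.15 gains only a few kelvin before any greenhouse feedback:

```python
# Radiative-equilibrium temperature vs. Bond albedo (no greenhouse effect):
#   T = (S * (1 - A) / (4 * sigma))**0.25
SIGMA = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
S_MARS = 590.0     # W/m^2, assumed solar flux at Mars

def t_eq(albedo):
    return (S_MARS * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

for a in (0.25, 0.15):
    print(f"albedo {a:.2f}: T_eq = {t_eq(a):.0f} K")
# ~210 K at albedo 0.25; darkening to 0.15 adds only ~7 K
```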
If algae or other green life were established, it would also contribute a small amount of oxygen to the atmosphere, though not enough to allow humans to breathe. The conversion process to produce oxygen is highly reliant upon water; without it, the CO2 is mostly converted to carbohydrates. In addition, because on Mars atmospheric oxygen is lost into space (unless an artificial magnetosphere were to be created; see "Protecting the atmosphere" below), such life would need to be cultivated inside a closed system.
On April 26, 2012, scientists reported that a lichen survived a 34-day simulation under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR), showing remarkable adaptation of its photosynthetic activity.
One final issue with albedo reduction is the common Martian dust storms. These cover the entire planet for weeks, and not only increase the albedo, but block sunlight from reaching the surface. This has been observed to cause a surface temperature drop which the planet takes months to recover from. Once the dust settles it then covers whatever it lands on, effectively erasing the albedo reduction material from the view of the Sun.
Funded research: ecopoiesis
Since 2014, the NASA Institute for Advanced Concepts (NIAC) program and Techshot Inc have been working together to develop sealed biodomes that would employ colonies of oxygen-producing cyanobacteria and algae to produce molecular oxygen (O2) in Martian soil. But first they need to test whether it works on a small scale on Mars. The proposal is called the Mars Ecopoiesis Test Bed. Eugene Boland is the chief scientist at Techshot, a company located in Greenville, Indiana. They intend to send small canisters of extremophile photosynthetic algae and cyanobacteria aboard a future rover mission. The rover would corkscrew the canisters into selected sites likely to experience transients of liquid water, drawing in some Martian soil, and then release the oxygen-producing microorganisms to grow within the sealed soil. The hardware would use Martian subsurface ice as it phase-changes into liquid water. The system would then look for oxygen given off as a metabolic byproduct and report the results to a Mars-orbiting relay satellite.
If this experiment works on Mars, they will propose building several large sealed structures called biodomes to produce and harvest oxygen for the life-support systems of a future human mission to Mars. Being able to create oxygen there would provide considerable cost savings to NASA and allow for longer human visits to Mars than would be possible if astronauts had to transport their own heavy oxygen tanks. This biological process, called ecopoiesis, would be isolated in contained areas and is not meant as a type of global planetary engineering for terraforming of Mars's atmosphere, but NASA states that "This will be the first major leap from laboratory studies into the implementation of experimental (as opposed to analytical) planetary in situ research of greatest interest to planetary biology, ecopoiesis, and terraforming."
Research at the University of Arkansas presented in June 2015 suggested that some methanogens could survive in Mars's low pressure. Rebecca Mickol found that in her laboratory, four species of methanogens survived low-pressure conditions that were similar to a subsurface liquid aquifer on Mars. The four species that she tested were Methanothermobacter wolfeii, Methanosarcina barkeri, Methanobacterium formicicum, and Methanococcus maripaludis. Methanogens do not require oxygen or organic nutrients, are non-photosynthetic, use hydrogen as their energy source and carbon dioxide (CO2) as their carbon source, so they could exist in subsurface environments on Mars.
Protecting the atmosphere
One key aspect of terraforming Mars is to protect the atmosphere (both present and future-built) from being lost into space. Some scientists hypothesize that creating a planet-wide artificial magnetosphere would be helpful in resolving this issue. According to two Japanese scientists at the National Institute for Fusion Science (NIFS), it is feasible to do that with current technology by building a system of refrigerated latitudinal superconducting rings, each carrying a sufficient amount of direct current.
In the same report, it is claimed that the economic impact of the system can be minimized by using it also as a planetary energy transfer and storage system (SMES).
Magnetic shield at L1 orbit
During the Planetary Science Vision 2050 Workshop in late February 2017, NASA scientist Jim Green proposed a concept of placing a magnetic dipole field between the planet and the Sun to protect it from high-energy solar particles. It would be located at the Sun–Mars L1 Lagrange point, at about 320 Mars radii, creating a partial and distant artificial magnetosphere. The field would need to be "Earth comparable" as measured at 1 Earth radius. The paper abstract states that this could be achieved with a sufficiently strong dipole magnet. If constructed, the shield may allow the planet to partially restore its atmosphere.
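The quoted distance of about 320 Mars radii is consistent with the Hill-sphere approximation for the Sun–Mars L1 point; a quick numerical check, using standard constants rather than figures from the paper:

```python
# Hill-sphere estimate of the Sun-Mars L1 distance:
#   r_L1 ~ d * (m / (3 * M))**(1/3)
M_SUN = 1.989e30       # kg
M_MARS = 6.417e23      # kg
D_SUN_MARS = 2.279e11  # m, Mars's semi-major axis
R_MARS = 3.390e6       # m

r_l1 = D_SUN_MARS * (M_MARS / (3.0 * M_SUN)) ** (1.0 / 3.0)
print(f"L1 distance: {r_l1:.2e} m = {r_l1 / R_MARS:.0f} Mars radii")  # ~320
```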
Plasma torus along the orbit of Phobos
A plasma torus created along the orbit of Phobos, by ionizing and accelerating particles from the moon, may be sufficient to create a magnetic field strong enough to protect a terraformed Mars.
Oxygen from electrolysis of water
An abundance of groundwater on Mars was discovered in 2024. It is estimated that 7 zettawatt-hours of electricity would need to be produced from nuclear fusion or fission to produce oxygen levels equivalent to Earth's atmosphere by splitting water into hydrogen and oxygen through electrolysis. 120 trillion tons of hydrogen and 880 trillion tons of oxygen would be produced in the process, along with water vapor from the power plants.
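These figures can be sanity-checked against the thermodynamics of water splitting; the sketch below assumes 10^18 kg of water (the quoted hydrogen-plus-oxygen total) and the standard ~286 kJ/mol enthalpy, and lands in the right ballpark:

```python
# Order-of-magnitude check of the electrolysis figures quoted above.
M_WATER = 1e18          # kg of water, assumed from the 120 + 880 Tt split
MM_H2O = 18.015e-3      # kg/mol
DH_SPLIT = 286e3        # J/mol, standard enthalpy of water splitting (HHV)

moles = M_WATER / MM_H2O
energy_zwh = moles * DH_SPLIT / 3600 / 1e21   # J -> Wh -> zettawatt-hours
mass_h = M_WATER * (2 * 1.008) / 18.015       # hydrogen mass fraction
mass_o = M_WATER - mass_h

print(f"Ideal energy: {energy_zwh:.1f} ZWh")  # ~4.4; the quoted ~7 ZWh
                                              # plausibly includes losses
print(f"H2: {mass_h:.2e} kg, O2: {mass_o:.2e} kg")  # ~1.1e17 / ~8.9e17 kg
```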
Paraterraforming and GMO designer plants
Paraterraforming is a concept of building habitable greenhouses or bio-domes to establish plant life on other planets. NASA's NIAC is sponsoring work at NC State on designer plants and trees: genetically modified vegetation that could survive better on Mars. CRISPR gene editing, using genes from extremophiles on Earth, could help the vegetation withstand the harsh Martian regolith and atmosphere, including ultraviolet radiation, extreme cold, low atmospheric pressure, perchlorates, and drought. The plants could be tested outdoors to try to start an ecosystem for the full terraforming of Mars.
Thermodynamics of terraforming
The overall energy required to sublimate the CO2 from the south polar ice cap was modeled by Zubrin and McKay in 1993. If using orbital mirrors, an estimated 120 MW-years of electrical energy would be required to produce mirrors large enough to vaporize the ice caps. This is considered the most effective method, though the least practical. If using powerful halocarbon greenhouse gases, on the order of 1,000 MW-years of electrical energy would be required to accomplish this heating. However, even if all of this CO2 were put into the atmosphere, it would only double the current atmospheric pressure from 6 mbar to 12 mbar, about 1.2% of Earth's mean sea-level pressure. The amount of warming that could be produced today by putting even 100 mbar of CO2 into the atmosphere is small. Additionally, once in the atmosphere, the CO2 likely would be removed quickly, either by diffusion into the subsurface and adsorption or by re-condensing onto the polar caps.
The surface or atmospheric temperature required to allow liquid water to exist has not been determined, and liquid water conceivably could persist even at atmospheric temperatures somewhat below 273 K. However, the warming achievable by these means is much less than is thought necessary to produce liquid water.
See also
Areography (geography of Mars)
References
External links
Recent Arthur C Clarke interview mentions terraforming
Red Colony
Terraformers Society of Canada
Research Paper: Technological Requirements for Terraforming Mars
Peter Ahrens The Terraformation of Worlds
Climate of Mars
Exploration of Mars
Mars
Science fiction | Terraforming of Mars | [
"Engineering"
] | 4,930 | [
"Planetary engineering",
"Terraforming"
] |
4,923,982 | https://en.wikipedia.org/wiki/Terraforming%20of%20Venus | The terraforming of Venus or the terraformation of Venus is the hypothetical process of engineering the global environment of the planet Venus in order to make it suitable for human habitation. Adjustments to the existing environment of Venus to support human life would require at least three major changes to the planet's atmosphere:
Reducing Venus's surface temperature of about 462 °C (737 K)
Eliminating most of the planet's dense carbon dioxide and sulfur dioxide atmosphere via removal or conversion to some other form
The addition of breathable oxygen to the atmosphere.
These three changes are closely interrelated because Venus's extreme temperature is due to the high pressure of its dense atmosphere and the greenhouse effect.
The simplest proposal is to "veil" the planet from the Sun, dropping the temperature low enough to condense or solidify carbon dioxide, which would then need to be removed or stored in some way.
History of the idea
Poul Anderson, a successful science fiction writer, proposed the idea in his 1954 novelette "The Big Rain", a story belonging to his Psychotechnic League future history.
The first known suggestion to terraform Venus in a scholarly context was by the astronomer Carl Sagan in 1961.
Prior to the early 1960s, many astronomers believed the atmosphere of Venus to have an Earth-like temperature. When Venus was understood to have a thick carbon dioxide atmosphere, with the consequence of a very large greenhouse effect, some scientists began to contemplate altering the atmosphere to make the surface more Earth-like. This hypothetical prospect, known as terraforming, was first proposed by Carl Sagan in 1961, in the final section of his classic article in the journal Science discussing the atmosphere and greenhouse effect of Venus. Sagan proposed injecting photosynthetic bacteria into the Venus atmosphere, which would convert the carbon dioxide into reduced carbon in organic form, thus removing carbon dioxide from the atmosphere.
The knowledge of Venus's atmosphere was still inexact in 1961, when Sagan made his original proposal. Thirty-three years after his original proposal, in his 1994 book Pale Blue Dot, Sagan conceded his original proposal for terraforming would not work because the atmosphere of Venus is far denser than was known in 1961:
"Here's the fatal flaw: In 1961, I thought the atmospheric pressure at the surface of Venus was a few bars ... We now know it to be 90 bars, so if the scheme worked, the result would be a surface buried in hundreds of meters of fine graphite, and an atmosphere made of 65 bars of almost pure molecular oxygen. Whether we would first implode under the atmospheric pressure or spontaneously burst into flames in all that oxygen is open to question. However, long before so much oxygen could build up, the graphite would spontaneously burn back into CO2, short-circuiting the process."
Following Sagan's paper, there was little scientific discussion of the concept until a resurgence of interest in the 1980s.
Proposed approaches to terraforming
A number of approaches to terraforming are reviewed by Martyn J. Fogg (1995) and by Geoffrey A. Landis (2011).
Eliminating the dense carbon dioxide atmosphere
The main problem with Venus today, from a terraformation standpoint, is the very thick carbon dioxide atmosphere. The ground-level pressure of Venus is about 9.2 MPa (92 bar). This, through the greenhouse effect, also causes the surface temperature to be several hundred degrees too hot for any significant organisms. Therefore, all approaches to the terraforming of Venus include somehow removing almost all the carbon dioxide in the atmosphere.
Biological approaches
The method proposed in 1961 by Carl Sagan involves the use of genetically engineered algae to fix carbon into organic compounds. Although this method is still proposed in discussions of Venus terraforming, later discoveries showed that biological means alone would not be successful.
Difficulties include the fact that the production of organic molecules from carbon dioxide requires hydrogen, which is very rare on Venus. Because Venus lacks a protective magnetosphere, the upper atmosphere is exposed to direct erosion by the solar wind and has lost most of its original hydrogen to space. And, as Sagan noted, any carbon that was bound up in organic molecules would quickly be converted to carbon dioxide again by the hot surface environment. Venus would not begin to cool down until after most of the carbon dioxide had already been removed.
Although it is generally conceded that Venus could not be terraformed by introduction of photosynthetic biota alone, use of photosynthetic organisms to produce oxygen in the atmosphere continues to be a component of other proposed methods of terraforming.
Capture in carbonates
On Earth nearly all carbon is sequestered in the form of carbonate minerals or in different stages of the carbon cycle, while very little is present in the atmosphere in the form of carbon dioxide. On Venus, the situation is the opposite. Much of the carbon is present in the atmosphere, while comparatively little is sequestered in the lithosphere. Many approaches to terraforming therefore focus on getting rid of carbon dioxide by chemical reactions trapping and stabilising it in the form of carbonate minerals.
Modelling by astrobiologists Mark Bullock and David Grinspoon of Venus's atmospheric evolution suggests that the equilibrium between the current 92-bar atmosphere and existing surface minerals, particularly calcium and magnesium oxides, is quite unstable, and that the latter could serve as a sink of carbon dioxide and sulfur dioxide through conversion to carbonates. If these surface minerals were fully converted and saturated, then the atmospheric pressure would decline and the planet would cool somewhat. One of the possible end states modelled by Bullock and Grinspoon was a much thinner, though still dense, atmosphere and a substantially cooler surface. To convert the rest of the carbon dioxide in the atmosphere, a larger portion of the crust would have to be artificially exposed to the atmosphere to allow more extensive carbonate conversion. In 1989, Alexander G. Smith proposed that Venus could be terraformed by lithosphere overturn, allowing crust to be converted into carbonates. Landis 2011 calculated that it would require the involvement of the entire surface crust down to a depth of over 1 km to produce enough rock surface area to convert enough of the atmosphere.
Natural formation of carbonate rock from minerals and carbon dioxide is a very slow process. Recent research into sequestering carbon dioxide into carbonate minerals in the context of mitigating global warming on Earth however points out that this process can be considerably accelerated (from hundreds or thousands of years to just 75 days) through the use of catalysts such as polystyrene microspheres. It could therefore be theorised that similar technologies might also be used in the context of terraformation on Venus. It can also be noted that the chemical reaction that converts minerals and carbon dioxide into carbonates is exothermic, in essence producing more energy than is consumed by the reaction. This opens up the possibility of creating self-reinforcing conversion processes with potential for exponential growth of the conversion rate until most of the atmospheric carbon dioxide can be converted.
Bombardment of Venus with refined magnesium and calcium from off-world could also sequester carbon dioxide in the form of calcium and magnesium carbonates. About 8 × 10^20 kg of calcium or 5 × 10^20 kg of magnesium would be required to convert all the carbon dioxide in the atmosphere, which would entail a great deal of mining and mineral refining (perhaps on Mercury, which is notably mineral-rich). 8 × 10^20 kg is a few times the mass of the asteroid 4 Vesta (more than 500 km in diameter).
Injection into volcanic basalt rock
Research projects in Iceland and the US state of Washington have shown that potentially large amounts of carbon dioxide could be removed from the atmosphere by high-pressure injection into subsurface porous basalt formations, where carbon dioxide is rapidly transformed into solid inert minerals.
Other studies predict that one cubic meter of porous basalt has the potential to sequester 47 kilograms of injected carbon dioxide. According to these estimates a volume of about 9.86 × 109 km3 of basalt rock would be needed to sequester all the carbon dioxide in the Venusian atmosphere. This is equal to the entire crust of Venus down to a depth of about 21.4 kilometers. Another study concluded that under optimal conditions, on average, 1 cubic meter of basalt rock can sequester 260 kg of carbon dioxide. Venus's crust appears to be thick and the planet is dominated by volcanic features. The surface is about 90% basalt, and about 65% consists of a mosaic of volcanic lava plains. There should therefore be ample volumes of basalt rock strata on the planet with very promising potential for carbon dioxide sequestration.
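The quoted volume and depth follow from dividing the CO2 mass by the sequestration rate; a minimal check, assuming a Venusian CO2 inventory of about 4.8 × 10^20 kg (a commonly cited figure, not stated in this section):

```python
# Basalt volume implied by sequestering Venus's CO2 at 47 kg per m^3.
M_CO2 = 4.8e20     # kg, assumed mass of CO2 in Venus's atmosphere
RATE = 47.0        # kg CO2 per m^3 of basalt (figure cited above)
A_VENUS = 4.6e14   # m^2, surface area of Venus

volume_m3 = M_CO2 / RATE
print(f"Volume: {volume_m3 / 1e9:.2e} km^3")                # ~1.0e10 km^3
print(f"Global depth: {volume_m3 / A_VENUS / 1e3:.1f} km")  # ~22 km
```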
Research has also demonstrated that under the high temperature and high pressure conditions in the mantle, silicon dioxide, the most abundant mineral in the mantle (on Earth and probably also on Venus) can form carbonates that are stable under these conditions. This opens up the possibility of carbon dioxide sequestration in the mantle.
Introduction of hydrogen
According to Birch, bombarding Venus with hydrogen and reacting it with carbon dioxide could produce elemental carbon (graphite) and water by the Bosch reaction. It would take about 4 × 1019 kg of hydrogen to convert the whole Venusian atmosphere, and such a large amount of hydrogen could be obtained from the gas giants or their moons' ice. Another possible source of hydrogen could be somehow extracting it from possible reservoirs in the interior of the planet itself. According to some researchers, the Earth's mantle and/or core might hold large quantities of hydrogen left there since the original formation of Earth from the nebular cloud. Since the original formation and inner structure of Earth and Venus are generally believed to be somewhat similar, the same might be true for Venus.
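The hydrogen figure matches simple Bosch-reaction stoichiometry; a hedged check, again assuming a CO2 inventory of about 4.8 × 10^20 kg:

```python
# Bosch-reaction stoichiometry: CO2 + 2 H2 -> C + 2 H2O
M_CO2 = 4.8e20      # kg, assumed CO2 inventory of Venus's atmosphere
MM_CO2 = 44.01e-3   # kg/mol
MM_H2 = 2.016e-3    # kg/mol
MM_H2O = 18.015e-3  # kg/mol

moles_co2 = M_CO2 / MM_CO2
print(f"H2 needed:  {2 * moles_co2 * MM_H2:.1e} kg")   # ~4.4e19, vs ~4e19 quoted
print(f"H2O formed: {2 * moles_co2 * MM_H2O:.1e} kg")
```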
Iron aerosol in the atmosphere will also be required for the reaction to work, and iron can come from Mercury, asteroids, or the Moon. (Loss of hydrogen due to the solar wind is unlikely to be significant on the timescale of terraforming.) Due to the planet's relatively flat surface, this water would cover about 80% of the surface, compared to 70% for Earth, even though it would amount to only roughly 10% of the water found on Earth.
The remaining atmosphere, at around 3 bars (about three times that of Earth), would mainly be composed of nitrogen, some of which would dissolve into the new oceans of water, reducing atmospheric pressure in accordance with Henry's law. To reduce the pressure even further, nitrogen could also be fixed into nitrates.
Futurist Isaac Arthur has suggested using the hypothesized processes of starlifting and stellasing to create a particle beam of ionized hydrogen from the sun, tentatively dubbed a "hydro-cannon". This device could be used both to thin the dense atmosphere of Venus, but also to introduce hydrogen to react with carbon dioxide to create water, thereby further lowering the atmospheric pressure.
Direct removal of atmosphere
The thinning of the Venusian atmosphere could be attempted by a variety of methods, possibly in combination. Directly lifting atmospheric gas from Venus into space would probably prove difficult. Venus has sufficiently high escape velocity to make blasting it away with asteroid impacts impractical. Pollack and Sagan calculated in 1994 that an impactor of 700 km diameter striking Venus at greater than 20 km/s would eject all the atmosphere above the horizon as seen from the point of impact, but because this is less than a thousandth of the total atmosphere and there would be diminishing returns as the atmosphere's density decreases, a very great number of such giant impactors would be required. Landis calculated that to lower the pressure from 92 bar to 1 bar would require a minimum of 2,000 impacts, even if the efficiency of atmosphere removal was perfect. Smaller objects would not work, either, because more would be required. The violence of the bombardment could well result in significant outgassing that would replace removed atmosphere. Most of the ejected atmosphere would go into solar orbit near Venus, and, without further intervention, could be captured by the Venerian gravitational field and become part of the atmosphere once again.
Another variant method involving bombardment would be to perturb a massive Kuiper belt object to put its orbit onto a collision path with Venus. If the object, made of mostly ices, had enough velocity to penetrate just a few kilometers past the Venusian surface, the resulting forces from the vaporization of ice from the impactor and the impact itself could stir the lithosphere and mantle thus ejecting a proportional amount of matter (as magma and gas) from Venus. A by-product of this method would be either a new moon for Venus or a new impactor-body of debris that would fall back to the surface at a later time.
Removal of atmospheric gas in a more controlled manner could also prove difficult. Venus's extremely slow rotation means that space elevators would be very difficult to construct because the planet's geostationary orbit lies an impractical distance above the surface, and the very thick atmosphere to be removed makes mass drivers useless for removing payloads from the planet's surface. Possible workarounds include placing mass drivers on high-altitude balloons or balloon-supported towers extending above the bulk of the atmosphere, using space fountains, or rotovators.
In addition, if the density of the atmosphere (and corresponding greenhouse effect) were dramatically reduced, the surface temperature (now effectively constant) would probably vary widely between day side and night side. Another side effect to atmospheric-density reduction could be the creation of zones of dramatic weather activity or storms at the terminator because large volumes of atmosphere would undergo rapid heating or cooling.
Cooling planet by solar shades
Venus receives about twice the sunlight that Earth does, which is thought to have contributed to its runaway greenhouse effect. One means of terraforming Venus could involve reducing the insolation at Venus's surface to prevent the planet from heating up again.
Space-based
Solar shades could be used to reduce the total insolation received by Venus, cooling the planet somewhat. A shade placed at the Sun–Venus L1 Lagrangian point would also serve to block the solar wind, removing the radiation exposure problem on Venus.
A suitably large solar shade would be four times the diameter of Venus itself if placed at the L1 point. This would necessitate construction in space. There would also be the difficulty of balancing a thin-film shade perpendicular to the Sun's rays at the Sun–Venus Lagrange point against the incoming radiation pressure, which would tend to turn the shade into a huge solar sail. If the shade were simply left at the L1 point, the pressure would add force to the sunward side and the shade would accelerate and drift out of orbit. The shade could instead be positioned nearer to the Sun, using the solar pressure to balance the gravitational forces, in practice becoming a statite.
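The statite condition sets a hard limit on the shade's areal density: because both solar gravity and radiation pressure fall off as 1/r², the balance is independent of distance. A sketch with standard solar constants (values assumed, not from the text):

```python
import math

# Areal density at which radiation pressure balances solar gravity for a
# statite; both scale as 1/r^2, so the result is distance-independent:
#   sigma = (1 + reflectivity) * L_sun / (4 * pi * c * G * M_sun)
L_SUN = 3.828e26   # W
M_SUN = 1.989e30   # kg
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s

for refl in (0.0, 1.0):   # perfectly absorbing vs. perfectly reflecting
    sigma = (1 + refl) * L_SUN / (4 * math.pi * C * G * M_SUN)
    print(f"reflectivity {refl:.0f}: {sigma * 1e3:.2f} g/m^2")
# ~0.77 g/m^2 absorbing, ~1.53 g/m^2 reflecting: film plus structure must
# stay below this to hover on light pressure alone
```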
Other modifications to the solar shade design have also been suggested to solve the solar-sail problem. One suggested method is to use polar-orbiting, solar-synchronous mirrors that reflect light toward the back of the sunshade, from the non-sunward side of Venus. Photon pressure would push the support mirrors to an angle of 30 degrees away from the sunward side.
Paul Birch proposed a slatted system of mirrors near the L1 point between Venus and the Sun. The shade's panels would not be perpendicular to the Sun's rays, but instead at an angle of 30 degrees, such that the reflected light would strike the next panel, negating the photon pressure. Each successive row of panels would be ±1 degree off the 30-degree deflection angle, causing the reflected light to be skewed 4 degrees from striking Venus.
Solar shades could also serve as solar power generators. Space-based solar shade techniques, and thin-film solar sails in general, are only in an early stage of development. The vast sizes require a quantity of material that is many orders of magnitude greater than any human-made object that has ever been brought into space or constructed in space.
Atmospheric or surface-based
Venus could also be cooled by placing reflectors in the atmosphere. Reflective balloons floating in the upper atmosphere could create shade. The number and/or size of the balloons would necessarily be great. Geoffrey A. Landis has suggested that if enough floating cities were built, they could form a solar shield around the planet, and could simultaneously be used to process the atmosphere into a more desirable form, thus combining the solar shield theory and the atmospheric processing theory with a scalable technology that would immediately provide living space in the Venusian atmosphere. If made from carbon nanotubes or graphene (a sheet-like carbon allotrope), then the major structural materials can be produced using carbon dioxide gathered in situ from the atmosphere. The recently synthesised amorphous carbonia might prove a useful structural material if it can be quenched to Standard Temperature and Pressure (STP) conditions, perhaps in a mixture with regular silica glass. According to Birch's analysis, such colonies and materials would provide an immediate economic return from colonizing Venus, funding further terraforming efforts.
Increasing the planet's albedo by deploying light-colored or reflective material on the surface (or at any level below the cloud tops) would not be useful, because the Venerian surface is already completely enshrouded by clouds, and almost no sunlight reaches the surface. Thus, it would be unlikely to be able to reflect more light than Venus's already-reflective clouds, with Bond albedo of 0.77.
Combination of solar shades and atmospheric condensation
Birch proposed that solar shades could be used not merely to cool the planet but also to reduce atmospheric pressure, by freezing out the carbon dioxide. This requires Venus's temperature to be reduced, first past carbon dioxide's liquefaction point, requiring a temperature below the critical temperature of 304 K (31 °C) as CO2 partial pressures fall toward the critical pressure of 73.8 bar, and from there reducing the temperature below carbon dioxide's triple point at 216.6 K (−56.6 °C). Below that temperature, freezing of atmospheric carbon dioxide into dry ice will cause it to deposit onto the surface. He then proposed that the frozen CO2 could be buried and maintained in that condition by pressure, or even shipped off-world (perhaps to provide the greenhouse gas needed for terraforming of Mars or the moons of Jupiter). After this process was complete, the shades could be removed or solettas added, allowing the planet to partially warm again to temperatures comfortable for Earth life. A source of hydrogen or water would still be needed, and some of the remaining 3.5 bar of atmospheric nitrogen would need to be fixed into the soil. Birch suggests disrupting an icy moon of Saturn, for example Hyperion, and bombarding Venus with its fragments.
Cooling planet by heat pipes, atmospheric vortex engines or radiative cooling
Paul Birch suggests that, in addition to cooling the planet with a sunshade at L1, "heat pipes" could be built on the planet to accelerate the cooling. The proposed mechanism would transport heat from the surface to colder regions higher up in the atmosphere, similar to a solar updraft tower, thereby facilitating radiation of excess heat out into space. A newly proposed variation of this technology is the atmospheric vortex engine, where instead of physical chimney pipes the atmospheric updraft is achieved through the creation of a vortex, similar to a stationary tornado. This method is less material-intensive and potentially more cost-effective, and the process produces a net surplus of energy, which could be used to power Venusian colonies or other aspects of the terraforming effort while contributing to the cooling of the planet. Another method of cooling the planet could be radiative cooling. This technology would exploit the fact that at certain wavelengths thermal radiation from the lower atmosphere of Venus can "escape" to space through partially transparent atmospheric "windows" (spectral gaps between the strong CO2 and H2O absorption bands in the near-infrared range). The altitude from which the outgoing thermal radiation originates is wavelength dependent, ranging from the very surface at some wavelengths to higher atmospheric layers at others. Nanophotonics and the construction of metamaterials open up new possibilities to tailor the emittance spectrum of a surface by properly designing periodic nano/micro-structures.
Recently there have been proposals for a device named an "emissive energy harvester" that can transfer heat to space through radiative cooling and convert part of the heat flow into surplus energy, opening up the possibility of a self-replicating system that could exponentially cool the planet.
Introduction of water
Since Venus has only a fraction of the water of Earth (less than half the Earth's water content in the atmosphere, and none on the surface), water would have to be introduced either by the aforementioned method of introduction of hydrogen, or from some other interplanetary or extraplanetary source.
Capture the Ice Moons
Paul Birch suggests the possibility of colliding Venus with one of the ice moons from the outer solar system, thereby bringing in all the water needed for terraformation in one go. This could be achieved through gravity assisted capture of Saturn's moons Enceladus and Hyperion or the Uranian moon Miranda. Simply changing the velocity of these moons enough to move them from their current orbit and enable gravity-assisted transport to Venus would require large amounts of energy. However, through complex gravity-assisted chain reactions the propulsion requirements could be reduced by several orders of magnitude. As Birch puts it, "[t]heoretically one could flick a pebble into the asteroid belt and end up dumping Mars into the Sun."
Outgassing from the mantle
Studies have shown that substantial amounts of water (in the form of hydrogen) might be present in the mantle of terrestrial planets. It has therefore been speculated that it would be technically possible to extract this water from the mantle to the surface even if no feasible method to accomplish this exists currently.
Altering day–night cycle
Venus rotates once every 243 Earth days, by far the slowest rotation period of any major planet in the Solar System. A Venusian sidereal day thus lasts longer than a Venusian year (243 versus 224.7 Earth days). However, the length of a solar day on Venus is significantly shorter than the sidereal day; to an observer on the surface of Venus, the time from one sunrise to the next would be 116.75 days. Therefore, the slow Venerian rotation rate would result in extremely long days and nights, similar to the day–night cycles in the polar regions of Earth (shorter, but global). The exact period of a solar day is very important for terraforming, since 117 days of daytime would be the equivalent of a summer in the more temperate regions of Alaska, whereas 58 days of daytime would result in the very short growing season found in the high Arctic. It could mean the difference between permafrost and perpetual ice, or green lush boreal forests. The slow rotation might also account for the lack of a significant magnetic field.
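The 116.75-day figure follows from combining the retrograde spin with the orbital motion; a one-line check:

```python
# Venus's solar day from sidereal day and year; retrograde spin means the
# rotation and orbital angular rates add:
#   1/T_solar = 1/T_rot + 1/T_year
T_ROT = 243.0    # Earth days (retrograde sidereal rotation)
T_YEAR = 224.7   # Earth days (orbital period)

t_solar = 1.0 / (1.0 / T_ROT + 1.0 / T_YEAR)
print(f"Solar day: {t_solar:.2f} Earth days")   # ~116.75
```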
Arguments for keeping the current day-night cycle unchanged
It has until recently been assumed that the rotation rate or day-night cycle of Venus would have to be increased for successful terraformation to be achieved. More recent research has shown, however, that the current slow rotation rate of Venus is not at all detrimental to the planet's capability to support an Earth-like climate. Rather, the slow rotation rate would, given an Earth-like atmosphere, enable the formation of thick cloud layers on the side of the planet facing the sun. This in turn would raise planetary albedo and act to cool the global temperature to Earth-like levels, despite the greater proximity to the Sun. According to calculations, maximum temperatures would be just around 35 °C (95 °F), given an Earth-like atmosphere. Speeding up the rotation rate would therefore be both impractical and detrimental to the terraforming effort. A terraformed Venus with the current slow rotation would result in a global climate with "day" and "night" periods each roughly 2 months (58 days) long, resembling the seasons at higher latitudes on Earth. The "day" would resemble a short summer with a warm, humid climate, a heavy overcast sky and ample rainfall. The "night" would resemble a short, very dark winter with quite cold temperature and snowfall. There would be periods with more temperate climate and clear weather at sunrise and sunset resembling a "spring" and "autumn".
Space mirrors
The problem of very dark conditions during the roughly two-month long "night" period could be solved through the use of a space mirror in a 24-hour orbit (the same distance as a geostationary orbit on Earth) similar to the Znamya (satellite) project experiments. Extrapolating the numbers from those experiments and applying them to Venerian conditions would mean that a space mirror just under 1700 meters in diameter could illuminate the entire nightside of the planet with the luminosity of 10-20 full moons and create an artificial 24-hour light cycle. An even bigger mirror could potentially create even stronger illumination conditions. Further extrapolation suggests that to achieve illumination levels of about 400 lux (similar to normal office lighting or a sunrise on a clear day on earth) a circular mirror about 55 kilometers across would be needed.
Paul Birch suggested keeping the entire planet protected from sunlight by a permanent system of slatted shades at L1, with the surface illuminated by a rotating soletta mirror in a polar orbit, which would produce a 24-hour light cycle.
Changing rotation speed
If increasing the rotation speed of the planet were desired (despite the above-mentioned potentially positive climatic effects of the current rotational speed), it would require energy many orders of magnitude greater than the construction of orbiting solar mirrors, or even than the removal of the Venerian atmosphere. Birch calculates that increasing the rotation of Venus to an Earth-like solar cycle would require about 1.6 × 10^29 joules (50 billion petawatt-hours).
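Birch's energy figure is roughly the rotational kinetic energy of Venus spun at an Earth-like rate; a sketch assuming an Earth-like moment-of-inertia factor (an assumption, not a figure from the text):

```python
import math

# Rotational energy of Venus spun up to a 24-hour day: E = I * omega^2 / 2.
# (The current spin energy is negligible by comparison.)
M_VENUS = 4.867e24   # kg
R_VENUS = 6.052e6    # m
I_FACTOR = 0.337     # assumed moment-of-inertia factor (Earth's is ~0.33)

inertia = I_FACTOR * M_VENUS * R_VENUS**2
omega = 2 * math.pi / 86400          # rad/s for a 24-hour day
print(f"E ~ {0.5 * inertia * omega**2:.1e} J")   # ~1.6e29 J, as quoted
```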
Scientific research suggests that close flybys of asteroids or cometary bodies larger than 100 kilometres (60 mi) across could be used to move a planet in its orbit, or increase the speed of rotation. The energy required to do this is large. In his book on terraforming, one of the concepts Fogg discusses is to increase the spin of Venus using three quadrillion objects circulating between Venus and the Sun every 2 hours, each traveling at 10% of the speed of light.
G. David Nordley has suggested, in fiction, that Venus might be spun up to a day length of 30 Earth days by exporting the atmosphere of Venus into space via mass drivers. A proposal by Birch involves the use of dynamic compression members to transfer energy and momentum via high-velocity mass streams to a band around the equator of Venus. He calculated that a sufficiently high-velocity mass stream, at about 10% of the speed of light, could give Venus a day of 24 hours in 30 years.
Creating an artificial magnetosphere
Protecting the new atmosphere from the solar wind, to avoid the loss of hydrogen, would require an artificial magnetosphere. Venus presently lacks an intrinsic magnetic field, therefore creating an artificial planetary magnetic field is needed to form a magnetosphere via its interaction with the solar wind. According to two NIFS Japanese scientists, it is feasible to do that with current technology by building a system of refrigerated latitudinal superconducting rings, each carrying a sufficient amount of direct current. In the same report, it is claimed that the economic impact of the system can be minimized by using it also as a planetary energy transfer and storage system (SMES).
Another study proposes the possibility of deployment of a magnetic dipole shield at the L1 Lagrange point, thereby creating an artificial magnetosphere that would protect the whole planet from solar wind and radiation.
See also
Terraforming
Colonization of Venus
Terraforming of Mars
Space sunshade
References
External links
Visualizing the steps of solar system terraforming
A fictional account of the terraformation of Venus
Terraform Venus (discussion on the New Mars forum)
Terraforming Venus - The Latest Thinking (discussion on the New Mars forum)
Planetary engineering
Space colonization
Venus
Venus | Terraforming of Venus | [
"Engineering"
] | 5,750 | [
"Planetary engineering",
"Terraforming"
] |
4,924,578 | https://en.wikipedia.org/wiki/Structure%20factor | In condensed matter physics and crystallography, the static structure factor (or structure factor for short) is a mathematical description of how a material scatters incident radiation. The structure factor is a critical tool in the interpretation of scattering patterns (interference patterns) obtained in X-ray, electron and neutron diffraction experiments.
Confusingly, there are two different mathematical expressions in use, both called 'structure factor'. One is usually written $S(\mathbf{q})$; it is more generally valid, and relates the observed diffracted intensity per atom to that produced by a single scattering unit. The other is usually written $F$ or $F_{hkl}$ and is only valid for systems with long-range positional order — crystals. This expression relates the amplitude and phase of the beam diffracted by the $(hkl)$ planes of the crystal ($hkl$ are the Miller indices of the planes) to that produced by a single scattering unit at the vertices of the primitive unit cell. $F_{hkl}$ is not a special case of $S(\mathbf{q})$; $S(\mathbf{q})$ gives the scattering intensity, but $F_{hkl}$ gives the amplitude. It is the modulus squared $|F_{hkl}|^2$ that gives the scattering intensity. $F_{hkl}$ is defined for a perfect crystal, and is used in crystallography, while $S(\mathbf{q})$ is most useful for disordered systems. For partially ordered systems such as crystalline polymers there is obviously overlap, and experts will switch from one expression to the other as needed.
The static structure factor is measured without resolving the energy of scattered photons/electrons/neutrons. Energy-resolved measurements yield the dynamic structure factor.
Derivation of $S(\mathbf{q})$
Consider the scattering of a beam of wavelength $\lambda$ by an assembly of $N$ particles or atoms stationary at positions $\mathbf{R}_j$, $j = 1, \dots, N$. Assume that the scattering is weak, so that the amplitude of the incident beam is constant throughout the sample volume (Born approximation), and absorption, refraction and multiple scattering can be neglected (kinematic diffraction). The direction of any scattered wave is defined by its scattering vector $\mathbf{q} = \mathbf{k}_s - \mathbf{k}_i$, where $\mathbf{k}_s$ and $\mathbf{k}_i$ ($|\mathbf{k}_s| = |\mathbf{k}_i| = 2\pi/\lambda$) are the scattered and incident beam wavevectors, and $\theta$ is the angle between them. For elastic scattering, $|\mathbf{k}_s| = |\mathbf{k}_i|$ and $q = |\mathbf{q}| = (4\pi/\lambda)\sin(\theta/2)$, limiting the possible range of $q$ (see Ewald sphere). The amplitude and phase of this scattered wave will be the vector sum of the scattered waves from all the atoms:

$$\Psi_s(\mathbf{q}) = \sum_{j=1}^{N} f_j \, \mathrm{e}^{-i \mathbf{q} \cdot \mathbf{R}_j} \qquad (1)$$
For an assembly of atoms, $f_j$ is the atomic form factor of the $j$-th atom. The scattered intensity is obtained by multiplying this function by its complex conjugate:

$$I(\mathbf{q}) = \Psi_s(\mathbf{q}) \, \Psi_s^*(\mathbf{q}) = \sum_{j=1}^{N} \sum_{k=1}^{N} f_j f_k \, \mathrm{e}^{-i \mathbf{q} \cdot (\mathbf{R}_j - \mathbf{R}_k)} \qquad (2)$$
The structure factor is defined as this intensity normalized by $\sum_{j=1}^{N} f_j^2$:

$S(\mathbf{q}) = \frac{1}{\sum_{j=1}^{N} f_j^2} \sum_{j=1}^{N} \sum_{k=1}^{N} f_j f_k \, \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot(\mathbf{R}_j - \mathbf{R}_k)} \qquad (3)$
If all the atoms are identical, then Equation (1) becomes $\Psi_\mathrm{s}(\mathbf{q}) = f \sum_{j=1}^{N} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot\mathbf{R}_j}$ and $\sum_{j=1}^{N} f_j^2 = N f^2$, and so

$S(\mathbf{q}) = \frac{1}{N} \sum_{j=1}^{N} \sum_{k=1}^{N} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot(\mathbf{R}_j - \mathbf{R}_k)} \qquad (4)$
Another useful simplification is if the material is isotropic, like a powder or a simple liquid. In that case, the intensity depends on $q = |\mathbf{q}|$ and $r_{jk} = |\mathbf{r}_j - \mathbf{r}_k|$. In three dimensions, Equation (4) then simplifies to the Debye scattering equation:

$S(q) = \frac{1}{N} \sum_{j=1}^{N} \sum_{k=1}^{N} \frac{\sin(q r_{jk})}{q r_{jk}} \qquad (5)$
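Equation (5) is straightforward to evaluate numerically from a list of particle coordinates; the following minimal sketch (an illustration written here, not part of the article) computes it for identical atoms:

```python
# Sketch: Debye scattering equation S(q) = (1/N) sum_jk sin(q r_jk)/(q r_jk)
# for identical atoms; the j = k diagonal terms each contribute 1.
import numpy as np

def debye_S(positions, q):
    pos = np.asarray(positions, dtype=float)
    n = len(pos)
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r), sinc(0) = 1
    return np.sinc(q * r / np.pi).sum() / n

# Example: four atoms on a line with unit spacing
chain = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]]
for q in (0.5, 2.0, 7.0):
    print(q, debye_S(chain, q))
```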
An alternative derivation gives good insight, but uses Fourier transforms and convolution. To be general, consider a scalar (real) quantity $\rho(\mathbf{r})$ defined in a volume $V$; this may correspond, for instance, to a mass or charge distribution or to the refractive index of an inhomogeneous medium. If the scalar function is integrable, we can write its Fourier transform as $\rho(\mathbf{q}) = \int_V \rho(\mathbf{r}) \, \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot\mathbf{r}} \, \mathrm{d}^3 r$. In the Born approximation the amplitude of the scattered wave corresponding to the scattering vector $\mathbf{q}$ is proportional to the Fourier transform $\rho(\mathbf{q})$. When the system under study is composed of a number $N$ of identical constituents (atoms, molecules, colloidal particles, etc.) each of which has a distribution of mass or charge $\rho_0(\mathbf{r})$ then the total distribution can be considered the convolution of this function with a set of delta functions:
$\rho(\mathbf{r}) = \sum_{j=1}^{N} \rho_0(\mathbf{r} - \mathbf{R}_j) = \rho_0(\mathbf{r}) \ast \sum_{j=1}^{N} \delta(\mathbf{r} - \mathbf{R}_j),$

with the particle positions $\mathbf{R}_j$ as before. Using the property that the Fourier transform of a convolution product is simply the product of the Fourier transforms of the two factors, we have $\rho(\mathbf{q}) = \rho_0(\mathbf{q}) \times \sum_{j=1}^{N} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot\mathbf{R}_j}$, so that:

$I(\mathbf{q}) \propto \left| \rho_0(\mathbf{q}) \right|^2 \left| \sum_{j=1}^{N} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot\mathbf{R}_j} \right|^2 \qquad (6)$
This is clearly the same as Equation (2) with all particles identical, except that here the form factor $f(\mathbf{q}) = \rho_0(\mathbf{q})$ is shown explicitly as a function of $\mathbf{q}$.
In general, the particle positions are not fixed and the measurement takes place over a finite exposure time and with a macroscopic sample (much larger than the interparticle distance). The experimentally accessible intensity is thus an averaged one $\langle I(\mathbf{q}) \rangle$; we need not specify whether $\langle \cdot \rangle$ denotes a time or ensemble average. To take this into account we can rewrite Equation (4) as:

$S(\mathbf{q}) = \frac{1}{N} \left\langle \sum_{j=1}^{N} \sum_{k=1}^{N} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot(\mathbf{R}_j - \mathbf{R}_k)} \right\rangle \qquad (7)$
Perfect crystals
In a crystal, the constitutive particles are arranged periodically, with translational symmetry forming a lattice. The crystal structure can be described as a Bravais lattice with a group of atoms, called the basis, placed at every lattice point; that is, [crystal structure] = [lattice] ∗ [basis]. If the lattice is infinite and completely regular, the system is a perfect crystal. For such a system, only a set of specific values for $\mathbf{q}$ can give scattering, and the scattering amplitude for all other values is zero. This set of values forms a lattice, called the reciprocal lattice, which is the Fourier transform of the real-space crystal lattice.
In principle the scattering factor $S(\mathbf{q})$ can be used to determine the scattering from a perfect crystal; in the simple case when the basis is a single atom at the origin (and again neglecting all thermal motion, so that there is no need for averaging) all the atoms have identical environments. Equation (4) can be written as

$S(\mathbf{q}) = \frac{1}{N} \left| \sum_{j=1}^{N} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot\mathbf{R}_j} \right|^2,$

and $S(\mathbf{q}) = N$ when $\mathbf{q}$ is a reciprocal lattice vector, with $S(\mathbf{q}) = 0$ otherwise.
The structure factor is then simply the squared modulus of the Fourier transform of the lattice, and shows the directions in which scattering can have non-zero intensity. At these values of $\mathbf{q}$ the wave from every lattice point is in phase. The value of the structure factor is the same for all these reciprocal lattice points, and the intensity varies only due to changes in $f(\mathbf{q})$ with $\mathbf{q}$.
Units
The units of the structure-factor amplitude depend on the incident radiation. For X-ray crystallography they are multiples of the unit of scattering by a single electron ($2.82 \times 10^{-15}$ m); for neutron scattering by atomic nuclei the unit of scattering length of $10^{-14}$ m is commonly used.
The above discussion uses the wave vectors $|\mathbf{k}| = 2\pi/\lambda$ and $\mathbf{q} = \mathbf{k}_\mathrm{s} - \mathbf{k}_\mathrm{o}$. However, crystallography often uses wave vectors $|\mathbf{k}| = 1/\lambda$. Therefore, when comparing equations from different sources, the factor $2\pi$ may appear and disappear, and care to maintain consistent quantities is required to get correct numerical results.
Definition of $F_{hk\ell}$
In crystallography, the basis and lattice are treated separately. For a perfect crystal the lattice gives the reciprocal lattice, which determines the positions (angles) of diffracted beams, and the basis gives the structure factor $F_{hk\ell}$ which determines the amplitude and phase of the diffracted beams:

$F_{hk\ell} = \sum_{j=1}^{N} f_j \, \mathrm{e}^{-2\pi\mathrm{i}(h x_j + k y_j + \ell z_j)} \qquad (8)$
where the sum is over all atoms in the unit cell, $x_j, y_j, z_j$ are the positional coordinates of the $j$-th atom, and $f_j$ is the scattering factor of the $j$-th atom. The coordinates have the directions and dimensions of the lattice vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$. That is, (0,0,0) is at the lattice point, the origin of position in the unit cell; (1,0,0) is at the next lattice point along $\mathbf{a}$ and (1/2, 1/2, 1/2) is at the body center of the unit cell. $(hk\ell)$ defines a reciprocal lattice point at $\mathbf{g}_{hk\ell} = h\mathbf{a}^* + k\mathbf{b}^* + \ell\mathbf{c}^*$ which corresponds to the real-space plane defined by the Miller indices (see Bragg's law).
$F_{hk\ell}$ is the vector sum of waves from all atoms within the unit cell. An atom at any lattice point has the reference phase angle zero for all $hk\ell$ since then $(h x_j + k y_j + \ell z_j)$ is always an integer. A wave scattered from an atom at (1/2, 0, 0) will be in phase if $h$ is even, out of phase if $h$ is odd.
Again an alternative view using convolution can be helpful. Since [crystal structure] = [lattice] ∗ [basis], FT[crystal structure] = FT[lattice] × FT[basis]; that is, scattering ∝ [reciprocal lattice] × [structure factor].
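The sum in Equation (8) is easy to evaluate directly; the minimal sketch below (an illustration, not part of the article) is reused to check the examples in the next section:

```python
# Sketch: F_hkl = sum_j f_j * exp(-2*pi*i*(h x_j + k y_j + l z_j)), with the
# basis supplied as (form factor, fractional coordinates) pairs.
import numpy as np

def structure_factor(hkl, basis):
    h, k, l = hkl
    return sum(f * np.exp(-2j * np.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in basis)
```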
Examples of $F_{hk\ell}$ in 3-D
Body-centered cubic (BCC)
For the body-centered cubic Bravais lattice (cI), we use the points (0,0,0) and (1/2,1/2,1/2), which leads us to

$F_{hk\ell} = f \left[ 1 + \mathrm{e}^{-\mathrm{i}\pi(h+k+\ell)} \right] = f \left[ 1 + (-1)^{h+k+\ell} \right]$

and hence

$F_{hk\ell} = \begin{cases} 2f, & h + k + \ell \text{ even} \\ 0, & h + k + \ell \text{ odd} \end{cases}$
Face-centered cubic (FCC)
The FCC lattice is a Bravais lattice, and its Fourier transform is a body-centered cubic lattice. However to obtain $F_{hk\ell}$ without this shortcut, consider an FCC crystal with one atom at each lattice point as a primitive or simple cubic with a basis of 4 atoms, at the origin (0,0,0) and at the three adjacent face centers, (1/2,1/2,0), (0,1/2,1/2) and (1/2,0,1/2). Equation (8) becomes

$F_{hk\ell} = f \left[ 1 + \mathrm{e}^{-\mathrm{i}\pi(h+k)} + \mathrm{e}^{-\mathrm{i}\pi(k+\ell)} + \mathrm{e}^{-\mathrm{i}\pi(h+\ell)} \right]$

with the result

$F_{hk\ell} = \begin{cases} 4f, & h, k, \ell \text{ all even or all odd} \\ 0, & h, k, \ell \text{ of mixed parity} \end{cases}$
The most intense diffraction peak from a material that crystallizes in the FCC structure is typically the (111). Films of FCC materials like gold tend to grow in a (111) orientation with a triangular surface symmetry. A zero diffracted intensity for a group of diffracted beams (here, $hk\ell$ of mixed parity) is called a systematic absence.
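The FCC selection rule can be verified numerically with the structure_factor sketch above (taking f = 1 for simplicity):

```python
# FCC basis: |F| = 4f for unmixed-parity (hkl), 0 for mixed parity.
fcc = [(1.0, (0, 0, 0)), (1.0, (0.5, 0.5, 0)),
       (1.0, (0, 0.5, 0.5)), (1.0, (0.5, 0, 0.5))]
for hkl in [(1, 1, 1), (2, 0, 0), (1, 1, 0)]:
    print(hkl, round(abs(structure_factor(hkl, fcc)), 6))
# (1,1,1) -> 4.0, (2,0,0) -> 4.0, (1,1,0) -> 0.0 (systematic absence)
```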
Diamond crystal structure
The diamond cubic crystal structure occurs in, for example, diamond (carbon), tin, and most semiconductors. There are 8 atoms in the cubic unit cell. We can consider the structure as a simple cubic with a basis of 8 atoms, at positions (0,0,0), (1/2,1/2,0), (0,1/2,1/2), (1/2,0,1/2), (1/4,1/4,1/4), (3/4,3/4,1/4), (1/4,3/4,3/4) and (3/4,1/4,3/4).
But comparing this to the FCC above, we see that it is simpler to describe the structure as FCC with a basis of two atoms at (0, 0, 0) and (1/4, 1/4, 1/4). For this basis, Equation (8) becomes:

$F_\text{basis} = f \left[ 1 + \mathrm{e}^{-\mathrm{i}\frac{\pi}{2}(h+k+\ell)} \right]$
And then the structure factor for the diamond cubic structure is the product of this and the structure factor for FCC above, (only including the atomic form factor once)

$F_{hk\ell} = f \left[ 1 + \mathrm{e}^{-\mathrm{i}\frac{\pi}{2}(h+k+\ell)} \right] \left[ 1 + \mathrm{e}^{-\mathrm{i}\pi(h+k)} + \mathrm{e}^{-\mathrm{i}\pi(k+\ell)} + \mathrm{e}^{-\mathrm{i}\pi(h+\ell)} \right]$

with the result:
If h, k, ℓ are of mixed parity (odd and even values combined) the first (FCC) term is zero, so $|F_{hk\ell}|^2 = 0$.
If h, k, ℓ are all even or all odd then the first (FCC) term is 4, and:
if h+k+ℓ is odd then $F_{hk\ell} = 4f(1 \pm \mathrm{i})$ and $|F_{hk\ell}|^2 = 32 f^2$;
if h+k+ℓ is even and exactly divisible by 4 ($h+k+\ell = 4n$) then $F_{hk\ell} = 8f$ and $|F_{hk\ell}|^2 = 64 f^2$;
if h+k+ℓ is even but not exactly divisible by 4 ($h+k+\ell = 4n+2$) the second term is zero and $|F_{hk\ell}|^2 = 0$.
These points are encapsulated by the following equations:

$|F_{hk\ell}|^2 = \begin{cases} 32 f^2, & h, k, \ell \text{ all odd} \\ 64 f^2, & h, k, \ell \text{ all even and } h+k+\ell = 4n \\ 0, & \text{otherwise} \end{cases}$

where $n$ is an integer.
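These selection rules can likewise be checked against the full 8-atom cell with the structure_factor sketch above (again with f = 1):

```python
# Diamond cubic: 8 atoms per cubic cell; intensities follow the rules above.
diamond = [(1.0, p) for p in
           [(0, 0, 0), (0.5, 0.5, 0), (0, 0.5, 0.5), (0.5, 0, 0.5),
            (0.25, 0.25, 0.25), (0.75, 0.75, 0.25),
            (0.25, 0.75, 0.75), (0.75, 0.25, 0.75)]]
for hkl in [(1, 1, 1), (2, 2, 0), (2, 0, 0), (4, 0, 0)]:
    F = structure_factor(hkl, diamond)
    print(hkl, round(abs(F) ** 2, 6))
# (1,1,1) -> 32, (2,2,0) -> 64, (2,0,0) -> 0, (4,0,0) -> 64
```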
Zincblende crystal structure
The zincblende structure is similar to the diamond structure except that it is a compound of two distinct interpenetrating fcc lattices, rather than all the same element. Denoting the two elements in the compound by $A$ and $B$, the resulting structure factor is

$F_{hk\ell} = \left[ f_A + f_B \, \mathrm{e}^{-\mathrm{i}\frac{\pi}{2}(h+k+\ell)} \right] \left[ 1 + \mathrm{e}^{-\mathrm{i}\pi(h+k)} + \mathrm{e}^{-\mathrm{i}\pi(k+\ell)} + \mathrm{e}^{-\mathrm{i}\pi(h+\ell)} \right]$
Cesium chloride
Cesium chloride is a simple cubic crystal lattice with a basis of Cs at (0,0,0) and Cl at (1/2, 1/2, 1/2) (or the other way around, it makes no difference). Equation (8) becomes

$F_{hk\ell} = f_\mathrm{Cs} + f_\mathrm{Cl} \, \mathrm{e}^{-\mathrm{i}\pi(h+k+\ell)} = f_\mathrm{Cs} + f_\mathrm{Cl}(-1)^{h+k+\ell}$

We then arrive at the following result for the structure factor for scattering from a plane $(hk\ell)$:

$F_{hk\ell} = \begin{cases} f_\mathrm{Cs} + f_\mathrm{Cl}, & h + k + \ell \text{ even} \\ f_\mathrm{Cs} - f_\mathrm{Cl}, & h + k + \ell \text{ odd} \end{cases}$

and for scattered intensity,

$|F_{hk\ell}|^2 = \begin{cases} (f_\mathrm{Cs} + f_\mathrm{Cl})^2, & h + k + \ell \text{ even} \\ (f_\mathrm{Cs} - f_\mathrm{Cl})^2, & h + k + \ell \text{ odd} \end{cases}$
Hexagonal close-packed (HCP)
In an HCP crystal such as graphite, the two coordinates include the origin (0,0,0) and the next plane up the $c$ axis located at $c/2$, and hence (1/3, 2/3, 1/2), which gives us

$F_{hk\ell} = f \left[ 1 + \mathrm{e}^{2\pi\mathrm{i}\left(\frac{h}{3} + \frac{2k}{3} + \frac{\ell}{2}\right)} \right].$

From this it is convenient to define the dummy variable $X \equiv \frac{h}{3} + \frac{2k}{3} + \frac{\ell}{2}$, and from there consider the modulus squared:

$|F|^2 = f^2 \left( 1 + \mathrm{e}^{2\pi\mathrm{i}X} \right) \left( 1 + \mathrm{e}^{-2\pi\mathrm{i}X} \right) = f^2 \left[ 2 + 2\cos(2\pi X) \right] = 4 f^2 \cos^2(\pi X).$

This leads us to the following conditions for the structure factor:

$|F|^2 = \begin{cases} 0, & h + 2k = 3n \text{ and } \ell \text{ odd} \\ 4f^2, & h + 2k = 3n \text{ and } \ell \text{ even} \\ 3f^2, & h + 2k = 3n \pm 1 \text{ and } \ell \text{ odd} \\ f^2, & h + 2k = 3n \pm 1 \text{ and } \ell \text{ even} \end{cases}$
Perfect crystals in one and two dimensions
The reciprocal lattice is easily constructed in one dimension: for particles on a line with a period $a$, the reciprocal lattice is an infinite array of points with spacing $2\pi/a$. In two dimensions, there are only five Bravais lattices. The corresponding reciprocal lattices have the same symmetry as the direct lattice. 2-D lattices are excellent for demonstrating simple diffraction geometry on a flat screen, as below.
Equations (1)–(7) for the structure factor apply with a scattering vector of limited dimensionality, and a crystallographic structure factor can be defined in 2-D as $F_{hk}$.
However, recall that real 2-D crystals such as graphene exist in 3-D. The reciprocal lattice of a 2-D hexagonal sheet that exists in 3-D space in the $z = 0$ plane is a hexagonal array of lines parallel to the $z$ or $q_z$ axis that extend to $\pm\infty$ and intersect any plane of constant $z$ in a hexagonal array of points.
The Figure shows the construction of one vector of a 2-D reciprocal lattice and its relation to a scattering experiment.
A parallel beam, with wave vector $\mathbf{k}_\mathrm{i}$, is incident on a square lattice of parameter $a$. The scattered wave is detected at a certain angle, which defines the wave vector of the outgoing beam, $\mathbf{k}_\mathrm{o}$ (under the assumption of elastic scattering, $|\mathbf{k}_\mathrm{o}| = |\mathbf{k}_\mathrm{i}|$). One can equally define the scattering vector $\mathbf{q} = \mathbf{k}_\mathrm{o} - \mathbf{k}_\mathrm{i}$ and construct the harmonic pattern $\exp(\mathrm{i}\mathbf{q}\cdot\mathbf{r})$. In the depicted example, the spacing of this pattern coincides with the distance between particle rows: $q = 2\pi/a$, so that contributions to the scattering from all particles are in phase (constructive interference). Thus, the total signal in direction $\mathbf{k}_\mathrm{o}$ is strong, and $\mathbf{q}$ belongs to the reciprocal lattice. It is easily shown that this configuration fulfills Bragg's law.
Imperfect crystals
Technically a perfect crystal must be infinite, so a finite size is an imperfection. Real crystals always exhibit imperfections of their order besides their finite size, and these imperfections can have profound effects on the properties of the material. André Guinier proposed a widely employed distinction between imperfections that preserve the long-range order of the crystal that he called disorder of the first kind and those that destroy it called disorder of the second kind. An example of the first is thermal vibration; an example of the second is some density of dislocations.
The generally applicable structure factor can be used to include the effect of any imperfection. In crystallography, these effects are treated as separate from the structure factor , so separate factors for size or thermal effects are introduced into the expressions for scattered intensity, leaving the perfect crystal structure factor unchanged. Therefore, a detailed description of these factors in crystallographic structure modeling and structure determination by diffraction is not appropriate in this article.
Finite-size effects
For a finite crystal, the sums in Equations (1)–(7) are now over a finite $N$. The effect is most easily demonstrated with a 1-D lattice of $N$ points. The sum of the phase factors is a geometric series and the structure factor becomes:

$S(q) = \frac{1}{N} \, \frac{\sin^2(Nqa/2)}{\sin^2(qa/2)}.$

This function is shown in the Figure for different values of $N$.
When the scattering from every particle is in phase, which is when the scattering is at a reciprocal lattice point $q = 2\pi m/a$ ($m$ an integer), the sum of the amplitudes must be $N$ and so the maxima in intensity are $S = N$. Taking the above expression for $S(q)$ and estimating the limit $q \to 2\pi m/a$ (using, for instance, L'Hôpital's rule) shows that $S(q = 2\pi m/a) = N$ as seen in the Figure. At the midpoint $S(q = \pi(2m+1)/a) = 1/N$ (by direct evaluation) and the peak width decreases like $1/N$. In the large $N$ limit, the peaks become infinitely sharp Dirac delta functions, the reciprocal lattice of the perfect 1-D lattice.
In crystallography when $F_{hk\ell}$ is used, $N$ is large, and the formal size effect on diffraction is taken as $\frac{\sin^2(Nqa/2)}{(qa/2)^2}$, which is the same as the expression for $S(q)$ above near to the reciprocal lattice points, $q \approx 2\pi m/a$. Using convolution, we can describe the finite real crystal structure as [lattice] ∗ [basis] × [rectangular function], where the rectangular function has a value 1 inside the crystal and 0 outside it. Then FT[crystal structure] = FT[lattice] × FT[basis] ∗ FT[rectangular function]; that is, scattering ∝ [reciprocal lattice] × [structure factor] ∗ [squared sinc function]. Thus the intensity, which is a delta function of position for the perfect crystal, becomes a $\mathrm{sinc}^2$ function around every reciprocal lattice point with a maximum proportional to $N^2$, a width proportional to $1/N$, and an area proportional to $N$.
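The interference function above is simple to tabulate; a minimal numerical sketch (illustrative, not from the article):

```python
# Sketch: S(q) = (1/N) sin^2(N q a/2) / sin^2(q a/2) for a finite 1-D lattice.
import numpy as np

def finite_S(q, N, a=1.0):
    den = np.sin(q * a / 2.0) ** 2
    num = np.sin(N * q * a / 2.0) ** 2
    # at Bragg points (den -> 0) the limit is S = N
    return np.where(den < 1e-15, float(N), num / np.where(den < 1e-15, 1.0, den) / N)

for N in (5, 51):  # odd N, so the midpoint value is exactly 1/N
    peak = finite_S(np.array([2 * np.pi]), N)[0]   # Bragg point for a = 1
    mid = finite_S(np.array([np.pi]), N)[0]        # midpoint between peaks
    print(N, peak, mid)
```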
Disorder of the first kind
This model for disorder in a crystal starts with the structure factor of a perfect crystal. In one dimension for simplicity and with $N$ planes, we then start with the expression above for a perfect finite lattice, and then this disorder only changes $S(q)$ by a multiplicative factor, to give

$S(q) = \frac{1}{N} \, \frac{\sin^2(Nqa/2)}{\sin^2(qa/2)} \, \mathrm{e}^{-q^2\sigma^2},$

where the disorder is measured by the mean-square displacement of the positions $x_j$ from their positions in a perfect one-dimensional lattice: $\sigma^2 = \langle (\delta x)^2 \rangle$, i.e., $x_j = ja + \delta x_j$, where $\delta x_j$ is a small (much less than $a$) random displacement. For disorder of the first kind, each random displacement $\delta x_j$ is independent of the others, and with respect to a perfect lattice. Thus the displacements $\delta x_j$ do not destroy the translational order of the crystal. This has the consequence that for infinite crystals ($N \to \infty$) the structure factor still has delta-function Bragg peaks – the peak width still goes to zero as $N \to \infty$, with this kind of disorder. However, it does reduce the amplitude of the peaks, and due to the factor of $q^2$ in the exponential factor, it reduces peaks at large $q$ much more than peaks at small $q$.
The structure factor is simply reduced by a $q$- and disorder-dependent term because all disorder of the first kind does is smear out the scattering planes, effectively reducing the form factor.
In three dimensions the effect is the same, the structure factor is again reduced by a multiplicative factor, and this factor is often called the Debye–Waller factor. Note that the Debye–Waller factor is often ascribed to thermal motion, i.e., the $\delta x_j$ are due to thermal motion, but any random displacements about a perfect lattice, not just thermal ones, will contribute to the Debye–Waller factor.
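A quick numerical illustration of how such a multiplicative factor suppresses high-order peaks (a sketch; the exponent follows the 1-D convention used above, and sigma is an assumed r.m.s. displacement):

```python
# Sketch: Bragg-peak heights N * exp(-q^2 sigma^2) at q_m = 2*pi*m/a for
# disorder of the first kind; higher-order peaks are damped much more.
import numpy as np

def damped_peaks(orders, a=1.0, sigma=0.05, N=100):
    q = 2 * np.pi * np.asarray(orders, dtype=float) / a
    return N * np.exp(-q ** 2 * sigma ** 2)

print(damped_peaks([1, 2, 3, 4]))  # heights shrink rapidly with peak order
```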
Disorder of the second kind
However, fluctuations that cause the correlations between pairs of atoms to decrease as their separation increases cause the Bragg peaks in the structure factor of a crystal to broaden. To see how this works, we consider a one-dimensional toy model: a stack of plates with mean spacing $a$. The derivation follows that in chapter 9 of Guinier's textbook. This model was pioneered and applied to a number of materials by Hosemann and collaborators over a number of years. Guinier and Hosemann termed this disorder of the second kind, and Hosemann in particular referred to this imperfect crystalline ordering as paracrystalline ordering. Disorder of the first kind is the source of the Debye–Waller factor.
To derive the model we start with the definition (in one dimension) of the structure factor:

$S(q) = \frac{1}{N} \sum_{j,k} \left\langle \mathrm{e}^{-\mathrm{i}q(x_j - x_k)} \right\rangle$
To start with we will consider, for simplicity, an infinite crystal, i.e., $N \to \infty$. We will consider a finite crystal with disorder of the second kind below.
For our infinite crystal, we want to consider pairs of lattice sites. For each plane of an infinite crystal, there are two neighbours $m$ planes away, so the above double sum becomes a single sum over pairs of neighbours either side of an atom, at positions $-m$ and $+m$ lattice spacings away, times $N$. So, then

$S(q) = 1 + 2 \sum_{m=1}^{\infty} \int_{-\infty}^{\infty} p_m(\Delta x) \cos(q\,\Delta x)\,\mathrm{d}(\Delta x),$
where $p_m(\Delta x)$ is the probability density function for the separation $\Delta x$ of a pair of planes $m$ lattice spacings apart. For the separation of neighbouring planes we assume for simplicity that the fluctuations around the mean neighbour spacing of $a$ are Gaussian, i.e., that

$p_1(\Delta x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(\Delta x - a)^2}{2\sigma^2} \right],$
and we also assume that the fluctuations between a plane and its neighbour, and between this neighbour and the next plane, are independent. Then $p_2(\Delta x)$ is just the convolution of two $p_1(\Delta x)$s, etc. As the convolution of two Gaussians is just another Gaussian, we have that

$p_m(\Delta x) = \frac{1}{\sqrt{2\pi m\sigma^2}} \exp\left[ -\frac{(\Delta x - ma)^2}{2m\sigma^2} \right].$
The sum in $S(q)$ is then just a sum of Fourier transforms of Gaussians, and so

$S(q) = 1 + 2 \sum_{m=1}^{\infty} r^m \cos(mqa),$
for $r = \mathrm{e}^{-q^2\sigma^2/2}$. The sum is just the real part of the sum $\sum_{m=1}^{\infty} \left[ r\,\mathrm{e}^{\mathrm{i}qa} \right]^m$ and so the structure factor of the infinite but disordered crystal is

$S(q) = \frac{1 - r^2}{1 - 2r\cos(qa) + r^2}.$
This has peaks at maxima $q_P = 2\pi P/a$, where $\cos(q_P a) = 1$. These peaks have heights

$S(q_P) = \frac{1+r}{1-r} \simeq \frac{4}{q_P^2\sigma^2} = \frac{a^2}{\pi^2\sigma^2 P^2},$
i.e., the height of successive peaks drops off as the order of the peak (and so $q_P$) squared. Unlike finite-size effects that broaden peaks but do not decrease their height, disorder lowers peak heights. Note that here we are assuming that the disorder is relatively weak, so that we still have relatively well-defined peaks. This is the limit $q_P\sigma \ll 1$, where $r \simeq 1 - q^2\sigma^2/2$. In this limit, near a peak we can approximate $q = q_P + \Delta q$, with $\Delta q$ small, and obtain

$S(q) \simeq \frac{S(q_P)}{1 + \left[ 2a\,\Delta q / (q_P^2\sigma^2) \right]^2},$
which is a Lorentzian or Cauchy function, of FWHM $q_P^2\sigma^2/a$, i.e., the FWHM increases as the square of the order of the peak, and so as the square of the wave vector $q_P$ at the peak.
Finally, the product of the peak height and the FWHM is constant and equals $4/a$, in the $q_P\sigma \ll 1$ limit. For the first few peaks where $P$ is not large, this is just the $\sigma/a \ll 1$ limit.
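The closed form above is easy to evaluate; a small sketch (with illustrative parameter values):

```python
# Sketch: S(q) of an infinite 1-D paracrystal (disorder of the second kind),
# S(q) = (1 - r^2) / (1 - 2 r cos(q a) + r^2) with r = exp(-q^2 sigma^2 / 2).
import numpy as np

def paracrystal_S(q, a=1.0, sigma=0.05):
    r = np.exp(-q ** 2 * sigma ** 2 / 2.0)
    return (1 - r ** 2) / (1 - 2 * r * np.cos(q * a) + r ** 2)

q_peaks = 2 * np.pi * np.arange(1, 4)   # first three peak positions for a = 1
print(paracrystal_S(q_peaks))           # heights fall off roughly as 1/P^2
```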
Finite crystals with disorder of the second kind
For a one-dimensional crystal of size $N$:

$S(q) = 1 + \frac{2}{N} \sum_{m=1}^{N-1} (N - m)\, r^m \cos(mqa),$
where the factor in parentheses $(N - m)$ comes from the fact that the sum is over nearest-neighbour pairs ($m = 1$), next-nearest neighbours ($m = 2$), ..., and for a crystal of $N$ planes, there are $N - 1$ pairs of nearest neighbours, $N - 2$ pairs of next-nearest neighbours, etc.
Liquids
In contrast with crystals, liquids have no long-range order (in particular, there is no regular lattice), so the structure factor does not exhibit sharp peaks. They do however show a certain degree of short-range order, depending on their density and on the strength of the interaction between particles. Liquids are isotropic, so that, after the averaging operation in Equation (7), the structure factor only depends on the absolute magnitude of the scattering vector $q = |\mathbf{q}|$. For further evaluation, it is convenient to separate the diagonal terms $j = k$ in the double sum, whose phase is identically zero, and therefore each contribute a unit constant:

$S(q) = 1 + \frac{1}{N} \left\langle \sum_{j \neq k} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot(\mathbf{R}_j - \mathbf{R}_k)} \right\rangle \qquad (9)$
One can obtain an alternative expression for $S(q)$ in terms of the radial distribution function $g(r)$:

$S(q) = 1 + \rho \int_V g(r)\,\frac{\sin(qr)}{qr}\,4\pi r^2\,\mathrm{d}r \qquad (10)$
Ideal gas
In the limiting case of no interaction, the system is an ideal gas and the structure factor is completely featureless: $S(q) = 1$, because there is no correlation between the positions $\mathbf{R}_j$ and $\mathbf{R}_k$ of different particles (they are independent random variables), so the off-diagonal terms in Equation (9) average to zero: $\langle \exp[-\mathrm{i}\mathbf{q}\cdot(\mathbf{R}_j - \mathbf{R}_k)] \rangle = \langle \exp(-\mathrm{i}\mathbf{q}\cdot\mathbf{R}_j) \rangle \, \langle \exp(\mathrm{i}\mathbf{q}\cdot\mathbf{R}_k) \rangle = 0$.
High-$q$ limit
Even for interacting particles, at high scattering vector the structure factor goes to 1. This result follows from Equation (10), since $S(q) - 1$ is the Fourier transform of the "regular" function $g(r)$ and thus goes to zero for high values of the argument $q$. This reasoning does not hold for a perfect crystal, where the distribution function exhibits infinitely sharp peaks.
Low-$q$ limit
In the low-$q$ limit, as the system is probed over large length scales, the structure factor contains thermodynamic information, being related to the isothermal compressibility $\chi_T$ of the liquid by the compressibility equation:

$\lim_{q \to 0} S(q) = \rho\,k_\mathrm{B}T\,\chi_T.$
Hard-sphere liquids
In the hard sphere model, the particles are described as impenetrable spheres with radius $R$; thus, their center-to-center distance $r \geq 2R$ and they experience no interaction beyond this distance. Their interaction potential can be written as:

$V(r) = \begin{cases} \infty, & r < 2R \\ 0, & r \geq 2R \end{cases}$
This model has an analytical solution in the Percus–Yevick approximation. Although highly simplified, it provides a good description for systems ranging from liquid metals to colloidal suspensions. In an illustration, the structure factor for a hard-sphere fluid is shown in the Figure, for volume fractions from 1% to 40%.
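A numerical sketch of the Percus–Yevick result, using the known closed-form direct correlation function inside the core and a numerical Fourier transform (the packing fraction and diameter below are illustrative assumptions):

```python
# Sketch: Percus-Yevick S(q) for hard spheres of diameter sigma (= 2R) at
# packing fraction eta, via S(q) = 1 / (1 - rho * c_hat(q)).
import numpy as np

def py_hard_sphere_S(q, eta, sigma=1.0, nr=4000):
    l1 = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
    l2 = -6 * eta * (1 + eta / 2) ** 2 / (1 - eta) ** 4
    l3 = eta * (1 + 2 * eta) ** 2 / (2 * (1 - eta) ** 4)
    r = np.linspace(1e-6, sigma, nr)
    c = -(l1 + l2 * (r / sigma) + l3 * (r / sigma) ** 3)  # c(r) = 0 outside core
    rho = 6 * eta / (np.pi * sigma ** 3)                  # number density
    q = np.atleast_1d(np.asarray(q, dtype=float))
    # c_hat(q) = 4 pi * integral of c(r) r^2 sin(qr)/(qr) dr
    kern = np.sinc(q[:, None] * r[None, :] / np.pi)
    c_hat = 4 * np.pi * np.trapz(c[None, :] * r ** 2 * kern, r, axis=1)
    return 1.0 / (1.0 - rho * c_hat)

print(py_hard_sphere_S([0.1, 2 * np.pi], eta=0.3))  # S is suppressed at low q
```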
Polymers
In polymer systems, the general definition of $S(q)$, Equation (7), holds; the elementary constituents are now the monomers making up the chains. However, the structure factor being a measure of the correlation between particle positions, one can reasonably expect that this correlation will be different for monomers belonging to the same chain or to different chains.
Let us assume that the volume $V$ contains $N_\mathrm{c}$ identical molecules, each composed of $N_\mathrm{p}$ monomers, such that $N_\mathrm{c} N_\mathrm{p} = N$ ($N_\mathrm{p}$ is also known as the degree of polymerization). We can rewrite Equation (7) as:

$S(q) = \frac{1}{N_\mathrm{c} N_\mathrm{p}} \left\langle \sum_{\alpha=1}^{N_\mathrm{c}} \sum_{\beta=1}^{N_\mathrm{c}} \sum_{i=1}^{N_\mathrm{p}} \sum_{j=1}^{N_\mathrm{p}} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot(\mathbf{R}_{\alpha i} - \mathbf{R}_{\beta j})} \right\rangle,$
where indices $\alpha, \beta$ label the different molecules and $i, j$ the different monomers along each molecule. On the right-hand side we separated intramolecular ($\alpha = \beta$) and intermolecular ($\alpha \neq \beta$) terms. Using the equivalence of the chains, the expression can be simplified:

$S(q) = S_1(q) + \frac{1}{N_\mathrm{c} N_\mathrm{p}} \left\langle \sum_{\alpha \neq \beta} \sum_{i,j} \mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot(\mathbf{R}_{\alpha i} - \mathbf{R}_{\beta j})} \right\rangle,$
where $S_1(q)$ is the single-chain structure factor.
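As a concrete example of a single-chain structure factor, the Debye function for an ideal (Gaussian) chain is a standard model; the sketch below uses it purely as an illustration (the text above does not commit to a specific chain model):

```python
# Sketch: Debye function for an ideal chain, S1(q) = Np * 2 (exp(-x) - 1 + x)/x^2
# with x = (q Rg)^2; tends to Np at q -> 0 and decays as 1/q^2 at high q.
import numpy as np

def debye_chain(q, Rg, Np=1.0):
    x = (np.asarray(q, dtype=float) * Rg) ** 2
    x = np.maximum(x, 1e-6)  # clamp tiny x to avoid cancellation; S1 -> Np as q -> 0
    return Np * 2.0 * (np.exp(-x) - 1.0 + x) / x ** 2

print(debye_chain([0.0, 0.5, 2.0, 10.0], Rg=1.0))
```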
See also
R-factor (crystallography)
Patterson function
Ornstein–Zernike equation
Notes
References
Als-Nielsen, J. and McMorrow, D. (2011). Elements of Modern X-ray Physics (2nd edition). John Wiley & Sons.
Guinier, A. (1963). X-ray Diffraction in Crystals, Imperfect Crystals, and Amorphous Bodies. W. H. Freeman and Co.
Chandler, D. (1987). Introduction to Modern Statistical Mechanics. Oxford University Press.
Hansen, J. P. and McDonald, I. R. (2005). Theory of Simple Liquids (3rd edition). Academic Press.
Teraoka, I. (2002). Polymer Solutions: An Introduction to Physical Properties. John Wiley & Sons.
External links
Structure Factor Tutorial located at the University of York.
Definition of by IUCr
Learning Crystallography, from the CSIC
Crystallography | Structure factor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,890 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
4,924,962 | https://en.wikipedia.org/wiki/Vertex%20model | A vertex model is a type of statistical mechanics model in which the Boltzmann weights are associated with a vertex in the model (representing an atom or particle). This contrasts with a nearest-neighbour model, such as the Ising model, in which the energy, and thus the Boltzmann weight of a statistical microstate is attributed to the bonds connecting two neighbouring particles. The energy associated with a vertex in the lattice of particles is thus dependent on the state of the bonds which connect it to adjacent vertices. It turns out that every solution of the Yang–Baxter equation with spectral parameters in a tensor product of vector spaces yields an exactly-solvable vertex model.
Although the model can be applied to various geometries in any number of dimensions, with any number of possible states for a given bond, the most fundamental examples occur for two-dimensional lattices, the simplest being a square lattice where each bond has two possible states. In this model, every particle is connected to four other particles, and each of the four bonds adjacent to the particle has two possible states, indicated by the direction of an arrow on the bond. In this model, each vertex can adopt $2^4 = 16$ possible configurations. The energy for a given vertex can be given by $\varepsilon(i,j,k,l)$; a state of the lattice is an assignment of a state to each bond, with the total energy of the state being the sum of the vertex energies. As the energy is often divergent for an infinite lattice, the model is studied for a finite lattice as the lattice approaches infinite size. Periodic or domain wall boundary conditions may be imposed on the model.
Discussion
For a given state of the lattice, the Boltzmann weight can be written as the product over the vertices of the Boltzmann weights of the corresponding vertex states
where the Boltzmann weights for the vertices are written

$R_{ij}^{kl} = \mathrm{e}^{-\beta\varepsilon(i,j,k,l)},$
and the i, j, k, l range over the possible statuses of each of the four edges attached to the vertex. The vertex states of adjacent vertices must satisfy compatibility conditions along the connecting edges (bonds) in order for the state to be admissible.
The probability of the system being in any given state at a particular time, and hence the properties of the system, are determined by the partition function, for which an analytic form is desired:

$Z = \sum_\text{states} \mathrm{e}^{-\beta E(\text{state})}$

where β = 1/kT, T is temperature and k is the Boltzmann constant. The probability that the system is in any given state (microstate) is given by

$P(\text{state}) = \frac{\mathrm{e}^{-\beta E(\text{state})}}{Z}$

so that the average value of the energy of the system is given by

$\langle E \rangle = \sum_\text{states} E\,P(\text{state}) = -\frac{\partial}{\partial\beta}\ln Z.$
In order to evaluate the partition function, firstly examine the states of a row of vertices.
The external edges are free variables, with summation over the internal bonds. Hence, form the row partition function $T$.
This can be reformulated in terms of an auxiliary $n$-dimensional vector space $V$, with a basis $\{v_1, \dots, v_n\}$, by writing $R \in \operatorname{End}(V \otimes V)$ as

$R(v_i \otimes v_j) = \sum_{k,l} R_{ij}^{kl}\,v_k \otimes v_l,$

and the row partition function as an operator on the auxiliary space $V_0$ and the row spaces $V_1 \otimes \cdots \otimes V_N$, thereby implying that $T$ can be written as

$T = R_{01} R_{02} \cdots R_{0N},$
where the indices indicate the factors of the tensor product on which $R$ operates. Summing over the states of the bonds in the first row with the periodic boundary conditions gives

$\tau = \operatorname{tr}_{V_0}(T),$
where $\tau$ is the row-transfer matrix.
By summing the contributions over two rows, the result is the product of their row partition functions, which upon summation over the vertical bonds connecting the first two rows gives $\tau^2$; for $M$ rows, this gives $\tau^M$,
and then applying the periodic boundary conditions to the vertical columns, the partition function can be expressed in terms of the transfer matrix $\tau$ as

$Z = \operatorname{tr}(\tau^M) \approx \lambda_\text{max}^M,$

where $\lambda_\text{max}$ is the largest eigenvalue of $\tau$. The approximation follows from the fact that the eigenvalues of $\tau^M$ are the eigenvalues of $\tau$ to the power of $M$, and as $M \to \infty$, the power of the largest eigenvalue becomes much larger than the others. As the trace is the sum of the eigenvalues, the problem of calculating $Z$ reduces to the problem of finding the maximum eigenvalue of $\tau$. This in itself is another field of study. However, a standard approach to the problem of finding the largest eigenvalue of $\tau$ is to find a large family of operators which commute with $\tau$. This implies that the eigenspaces are common, and restricts the possible space of solutions. Such a family of commuting operators is usually found by means of the Yang–Baxter equation, which thus relates statistical mechanics to the study of quantum groups.
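A brute-force numerical illustration of this construction for small strips follows (a toy sketch; the weights and sizes are illustrative, and the diagonalization is done directly rather than via commuting families):

```python
# Toy sketch: row-transfer matrix of a 2-state vertex model on an N-column
# strip with periodic boundaries, and its largest eigenvalue.
import itertools
import numpy as np

def six_vertex_R(a, b, c):
    # R[h_in, v_in, h_out, v_out]: weights of the six allowed arrow configurations
    R = np.zeros((2, 2, 2, 2))
    R[0, 0, 0, 0] = R[1, 1, 1, 1] = a
    R[0, 1, 0, 1] = R[1, 0, 1, 0] = b
    R[0, 1, 1, 0] = R[1, 0, 0, 1] = c
    return R

def transfer_matrix(R, N):
    dim = 2 ** N
    T = np.zeros((dim, dim))
    for rows in itertools.product((0, 1), repeat=N):      # incoming vertical bonds
        for cols in itertools.product((0, 1), repeat=N):  # outgoing vertical bonds
            M = np.eye(2)
            for v_in, v_out in zip(rows, cols):
                M = M @ R[:, v_in, :, v_out]              # 2x2 in horizontal bonds
            i = int("".join(map(str, rows)), 2)
            j = int("".join(map(str, cols)), 2)
            T[i, j] = np.trace(M)                         # periodic horizontal bonds
    return T

R = six_vertex_R(1.0, 1.0, 1.0)                           # "ice point" weights
for N in (2, 4, 6):
    lam = np.max(np.linalg.eigvals(transfer_matrix(R, N)).real)
    print(N, lam ** (1.0 / N))                            # per-site growth rate
```

At the ice point a = b = c the per-site value is known exactly (Lieb's (4/3)^(3/2) ≈ 1.5396), which the small-N estimates printed above already approach as N grows.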
Integrability
Definition: A vertex model is integrable if, for all $u, v$, there exists $w$ such that

$R_{12}(w)\,R_{13}(u)\,R_{23}(v) = R_{23}(v)\,R_{13}(u)\,R_{12}(w).$
This is a parameterized version of the Yang–Baxter equation, corresponding to the possible dependence of the vertex energies, and hence the Boltzmann weights R on external parameters, such as temperature, external fields, etc.
The integrability condition implies the following relation.
Proposition: For an integrable vertex model, with $R(u)$ and $T(u)$ defined as above,

$R(w)\,\big(T(u) \otimes T(v)\big) = \big(T(v) \otimes T(u)\big)\,R(w)$

as endomorphisms of $V \otimes V \otimes (V_1 \otimes \cdots \otimes V_N)$, where $R(w)$ acts on the first two vectors of the tensor product.
It follows by multiplying both sides of the above equation on the right by $R(w)^{-1}$ and using the cyclic property of the trace operator that the following corollary holds.

Corollary: For an integrable vertex model for which $R(w)$ is invertible, the transfer matrix $\tau(u)$ commutes with $\tau(v)$.
This illustrates the role of the Yang–Baxter equation in the solution of solvable lattice models. Since the transfer matrices commute for all $u$ and $v$, the eigenvectors of $\tau(u)$ are common, and hence independent of the parameterization. It is a recurring theme which appears in many other types of statistical mechanical models to look for these commuting transfer matrices.
From the definition of R above, it follows that for every solution of the Yang–Baxter equation in the tensor product of two n-dimensional vector spaces, there is a corresponding 2-dimensional solvable vertex model where each of the bonds can be in the possible states $\{1, \dots, n\}$, where $R$ is an endomorphism of the space $V \otimes V$, with $V$ spanned by $\{v_1, \dots, v_n\}$. This motivates the classification of all the finite-dimensional irreducible representations of a given quantum algebra in order to find solvable models corresponding to it.
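Whether a given constant R-matrix satisfies the (parameter-free) Yang–Baxter equation can be checked numerically; a small sketch on (C²)⊗³, using the swap operator as a well-known trivial solution:

```python
# Sketch: check R12 R13 R23 = R23 R13 R12 for a 4x4 R-matrix on C^2 (x) C^2.
import numpy as np

I2 = np.eye(2)
P = np.array([[1., 0., 0., 0.],
              [0., 0., 1., 0.],
              [0., 1., 0., 0.],
              [0., 0., 0., 1.]])        # swap operator on C^2 (x) C^2

def R12(R): return np.kron(R, I2)
def R23(R): return np.kron(I2, R)
def R13(R):
    P23 = np.kron(I2, P)
    return P23 @ np.kron(R, I2) @ P23   # conjugate R12 by the (2,3) swap

def satisfies_ybe(R):
    lhs = R12(R) @ R13(R) @ R23(R)
    rhs = R23(R) @ R13(R) @ R12(R)
    return np.allclose(lhs, rhs)

print(satisfies_ybe(P))                           # True: the swap is a solution
print(satisfies_ybe(np.diag([1., 2., 3., 4.])))   # diagonal weights also pass
```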
Notable vertex models
Six-vertex model
Eight-vertex model
Nineteen-vertex model (Izergin-Korepin model)
References
Statistical mechanics
Lattice models | Vertex model | [
"Physics",
"Materials_science"
] | 1,228 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
22,528,148 | https://en.wikipedia.org/wiki/Ethanol-induced%20non-lamellar%20phases%20in%20phospholipids | The presence of ethanol can lead to the formations of non-lamellar phases also known as non-bilayer phases. Ethanol has been recognized as being an excellent solvent in an aqueous solution for inducing non-lamellar phases in phospholipids. The formation of non-lamellar phases in phospholipids is not completely understood, but it is significant that this amphiphilic molecule is capable of doing so. The formation of non-lamellar phases is significant in biomedical studies which include drug delivery, the transport of polar and non-polar ions using solvents capable of penetrating the biomembrane, increasing the elasticity of the biomembrane when it is being disrupted by unwanted substances (viruses, bacteria, solvents, etc.) and functioning as a channel or transporter of biomaterial.
Biomembranes and phospholipid bilayers
Biological membranes are found in both prokaryotic and eukaryotic cells. They surround cells and organelles with a semi-permeable barrier that prevents free flow of substances. The membrane consists of a phospholipid bilayer structure and often embedded or otherwise associated proteins, along with cholesterol and glycolipids. The phospholipid bilayer is a two-layer structure mainly composed of phospholipids, which are amphiphilic molecules that have hydrophilic and hydrophobic regions. The hydrophilic region contains the polar head group. This region is exposed to aqueous substances located mainly in the exterior portion of the biomembrane. The hydrophobic region consists of the non-polar acyl chains or fatty acids groups facing the interior of the biomembrane. Phospholipids consist of two non-polar hydrocarbon chains with ester or ether bonds to the phosphate group which is also linked by ester or ether bonds to the polar hydrophilic region. The phospholipid carries a negative charge due to the presence of the phosphate group. Its overall polarity depends on the charges of the hydroxyl groups or alcohols such as choline, ethanolamine, inositol, serine, etc. attached to the phosphate group. There are six basic functions that are associated with biomembranes:
Controlling chemical potential and gradient for chemical species and charges across opposite sides of the membrane
Organizing enzymes and protein complexes for signal transduction or signaling
Managing protein and lipid interactions
Functioning as a substrate
Transferring vital information and material across the membrane
Compartmentalization by maintaining physical separation amongst membranes but still allowing proper communication
Factors that affect biomembranes and lipid formations
There are two basic terms used to describe lipid phases: lamellar and non-lamellar phases. Lipids can undergo polymorphic or mesomorphic changes leading to the formation of lamellar or non-lamellar phases.
Various factors can affect the overall function of the biomembrane and decrease its ability to function as a protective barrier and maintain the order of the inner components. The bilayer thickness, surface charge, intermolecular forces, amphiphilic molecules, changes in free energy, alternating or spontaneous curvatures, increases or decreases in temperature, solvents, and the environment are all examples of different conditions that cause changes in biomembranes. For example, the intermolecular forces within the biomembrane are fairly strong, but when lipids are extracted from biomembranes for analytical purposes the constraints these forces impose on the phospholipids are reduced, which may cause the lipid to undergo polymorphism as well as a temporary rearrangement of other lipids or proteins in the biomembrane. The thickness of the biomembrane determines the permeability of the membrane, and ethanol, which can be used as a solvent, is able to reduce the thickness of the biomembrane, which is one way this amphiphilic molecule is able to permeate through it. Free energy changes can also increase or decrease during the phase transitions of the phospholipids during polymorphism or mesomorphism, which can also affect the curvature of lipids. All lipids can experience some sort of positive or negative alternating or spontaneous curvature due to variations in size between the hydrophobic and the hydrophilic region. Temperature changes can also lead to changes in the biomembrane.
Non-lamellar phases vs. lamellar phases
When lipids are extracted or isolated from biomembranes, polymorphism and mesomorphism can occur because they are no longer under the intermolecular constraints that are present within the biomembrane. "Polymorphism" refers to the formation of diverse structures such as three-dimensional tubes, rods, and structures with cubic symmetry. "Mesomorphism" refers to phase transitions when heat is applied. For example, a lipid can be in the lamellar phase at a lower temperature, but as the temperature increases, it transitions into a non-lamellar phase. It is important to consider the size of the hydrophilic region versus the hydrophobic region. For example, if the hydrophilic and hydrophobic regions are similar in size, a cylindrically shaped lipid bilayer is formed; but when the hydrophilic region is smaller than the hydrophobic region, a cone-shaped lipid bilayer is formed. Another example is the formation of micelles, a non-lamellar formation in which the hydrophilic region is significantly larger than the hydrophobic region. Various liquid-crystalline phases can exist in lipids. Liquid-crystalline phases are those in which the hydrophobic chain regions are not motionless but are allowed to move about freely in a fluid-like, melted state. The lamellar phase (Lα) is the most common and dominant phase in lipids, consisting of stacks of bilayers on top of bilayers oriented in a single direction.
Non-lamellar phases are non-bilayer liquid-crystalline phases without lamellar symmetry (Lα). They include the hexagonal (I), hexagonal (II), and three-dimensional cubic phases. Hexagonal (I) phases are non-inverted or oil-in-water phases in which a net convex curvature is present, similar to micelles. Hexagonal (II) phases are inverted water-in-oil phases with net concave curvature of the lipid-water interface. Cubic phases (Pn3m, Im3m, Ia3d, etc.), or bicontinuous cubic phases, are composed of multiple connected bilayers that resemble a three-dimensional cube. The presence of non-lamellar lipids in biomembranes affects the elasticity of the lipid bilayer, especially when it is disrupted, for example during phase transitions, membrane fusion and fission, or interactions with membrane peptides and proteins.
Analytical techniques used for characterizing lipids
There are various analytical instruments and techniques used to characterize and monitor the different properties of lipids: X-ray diffraction, differential scanning calorimetry (DSC), nuclear magnetic resonance (including 2H NMR and 31P NMR), thin layer chromatography (TLC), fluorescence recovery after photobleaching (FRAP), nearest-neighbor recognition (NNR), and atomic molecular dynamics simulations (AMDS).
X-ray diffraction
X-ray scattering techniques are some of the most useful techniques for determining the structural identification and shape of lipids. An X-ray beam is directed at the lipid, which produces a distinct diffraction pattern. This lattice pattern is based on the electron density and localization of electrons dispersed throughout the lipid, and is used to determine atomic positions. A disadvantage is that it can be difficult to determine patterns in lipids that are not well oriented, such as non-lamellar phases. Although this can be a limitation in producing electron density reconstructions of lipids, X-ray diffraction is still a reliable method for obtaining structural information and distinguishing between lamellar and non-lamellar phases.
Differential scanning calorimetry
Differential scanning calorimetry (DSC) is an analytical technique used to examine the thermodynamic properties of molecules. It can study the thermal behavior of materials as they undergo physical and chemical changes during heat treatment. The parameters measured are the glass transition temperature (Tg) and melting temperature (Tm). These values are measured over time and compared between an inert reference sample and the analyte. Changes in the Tm and Tg values reveal phase changes (solid, liquid-gel, liquid, etc.) in which an endothermic or exothermic process occurs. This technique is useful for monitoring the phase changes in phospholipids, providing information such as the amount of heat released or absorbed and the time taken for phase transitions to occur. DSC monitoring occurs at relatively slow rates, which is a disadvantage when monitoring fast phase transitions within phospholipids.
Hydrogen nuclear magnetic resonance
Hydrogen nuclear magnetic resonance (2H NMR) is a technique that uses an external magnetic field and deuterium to replace the ordinary form of hydrogen. The ordinary form of hydrogen is the most common isotope of hydrogen, with a molecular weight of approximately 1 g/mol; it contains only one proton and has no neutrons. Deuterium is an isotope of hydrogen with a heavier mass, containing one proton and one neutron and having a molecular weight of approximately 2 g/mol. This technique can be used to investigate the motions of acyl chains in lipids. It measures carbon-deuterium interactions and the mobility of these interactions within various regions of the lipid, and also determines order parameters. The process involves using quadrupole signaling properties to examine lamellar versus non-lamellar phases as well. An external magnetic field monitors the alignment of paramagnetic compounds and uses changes in the positive or negative magnetic spin values to detect these changes.
Phosphorus nuclear magnetic resonance
Phosphorus nuclear magnetic resonance (31P NMR) is a type of nuclear magnetic resonance technique that utilizes phosphorus-31 instead of deuterium. The 31P signal is sensitive to changes in the mobility and diffusion of a molecule. It also applies an external magnetic field to analyze the alignment of paramagnetic compounds and uses changes in the positive or negative magnetic spin values to detect these changes. It is useful in distinguishing between lamellar and hexagonal phases that contain phosphate groups, based on their distinct patterns and signals. A disadvantage of this technique is that it is limited to phospholipids.
Thin layer chromatography
Thin layer chromatography (TLC) is a chromatography technique used to characterize or separate lipids. The lipids are separated based on the polarity of the head groups (the hydrophilic region), not the hydrophobic region. Certain stains, like iodine, can be used to label the lipids but will sometimes destroy them. This process can also be used to determine whether or not lipids have denatured. For example, an original TLC analysis shows the presence of two lipids; one week later the same sample is reanalyzed but shows the presence of more lipids, which indicates the lipid has denatured.
Fluorescence recovery after photobleaching
Fluorescence recovery after photobleaching (FRAP) is a technique in which fluorophores are photobleached, losing their fluorescent properties, and the return of fluorescence is then monitored over time. It can be used to measure the viscosity and lateral diffusion of a lipid bilayer.
Nearest neighbor recognition
Nearest neighbor recognition (NNR) is a technique used to describe molecular interactions and patterns between lipid formations. Under thermal conditions it is used to recognize the preferences of lipids to closely interact with another lipid that has similar or different properties. It provides a molecular depiction of lipid bilayer formations by detecting and quantifying the tendency of exchangeable monomers to become what is termed as "nearest-neighbors" of one another in similar environments.
Molecular dynamics simulations
Molecular dynamics (MD) simulations are useful for simulating the motions of atoms and molecules according to physical laws. MD simulations are often applied to lipids to study atom-scale properties that may be difficult to observe otherwise. Force field parameters vary based on atom and molecule types. MD simulations may observe interactions between targeted lipids, proteins, hydrocarbons, water, hydrophilic/hydrophobic regions, ions, solvents, and other components that are present near the exterior and interior of a biomembrane.
Current issues
There are various uses of ethanol, which include being an additive to gasoline, a primary ingredient in food preservation and alcoholic beverages, and a vehicle for transdermal drug delivery. For example, it can function as an antiseptic in topical creams, killing bacteria by denaturing proteins. Ethanol is an amphiphilic molecule, meaning that it has chemical and physical properties associated with both hydrophobic and hydrophilic molecules. Studies show, however, that when penetrating through the biomembrane its hydrophobic tendencies appear to be limited, given its preference to bind closely to the hydrophilic region of the phospholipids. There are various open issues regarding ethanol's ability to penetrate through the biomembrane and cause a reorganization of the phospholipids towards non-lamellar phases: 1) how the alteration of the phospholipids' phase occurs; 2) understanding the significance of ethanol's interaction with membrane proteins and membrane phospholipids; 3) understanding the permeability of the biomembrane based on its tolerance of and adaptation to the presence of ethanol, a process that appears to be concentration-dependent; 4) determining the significance of ethanol's amphiphilic character as it relates to its ability to partition throughout the membrane by increasing its fluidity (ethanol's hydrophobic properties are limited and it primarily binds close to the hydrophilic region of the phospholipid, where it creates strong hydrogen bonds that lead to a strong interlocking amongst the acyl chains); 5) why the presence of cholesterol, a sterol compound, inhibits ethanol's ability to disrupt the membrane; and 6) deriving the molecular-level mechanism of the entire process.
Research areas
NNR
Research overview:
This study involves creating a combination of model membranes which contain 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) and 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC), referred to as the "host membranes"; phospholipids labeled 1, 2, and 3, referred to as "exchanging molecules" or "reporting molecules"; and varied cholesterol mole percentages, in the presence of an aqueous solution containing 5% ethanol (v/v). The host membranes were chosen because their phase diagrams are well understood and have been extensively characterized by different analytical techniques. The nearest neighbor recognition technique is applied to the formation of the modeled membranes to observe the association between cholesterol and phospholipids, as well as the effect that the presence of ethanol has on this interaction. Researchers are observing whether ethanol enhances or disrupts the liquid-ordered phase by reorganizing this formation into a liquid-disordered phase. The liquid-ordered phase is similar to a lamellar phase and the liquid-disordered phase represents the non-lamellar phases, but the exact type of each phase (hexagonal, cubic, etc.) is not described. As previously mentioned, several different combinations of the host membranes, exchanging molecules, and cholesterol are created to form the model membranes. It is important to mention that the exchanging molecules selected have similar properties to the host membranes. The exchanging lipids contain disulfide bonds as well as diacylglycerol groups that are not necessarily present in the host membranes. Studies provide evidence, through monolayer measurements, condensing properties, and nearly identical gel to liquid-crystalline phase transition temperatures (Tm) to those of the host membranes, that the presence of these bonds does not play a major role or interfere in the recognition or packing formation of the modeled membranes in the presence of ethanol. The disulfide bonds, diacylglycerol bonds, and similar sterol framework are only present to mimic the physical properties of DSPC, DPPC, and cholesterol, as well as to aid the monomer exchanging processes that form exchangeable dimers. The exchangeable lipids undergo a monomer interchanging process through the disulfide bridges in which they mix ideally, homogeneously, or heterogeneously. Their interactions are measured by the equilibrium constant (K), which is described in further detail under the significance of results section. Overall, the monomer interchanging process is necessary for the nearest neighbor recognition technique to be effective, since it allows changes in the phase composition of the host membranes/phospholipids to be observed. Each model membrane consists of a high concentration of one of the host membranes/phospholipids (95 mol%), low concentrations of two exchanging lipids (2.5 mol% each, for a total of 5 mol%), varied mole percentages of cholesterol (0–30 mol%), plus a constant concentration of ethanol (5% v/v). An aqueous buffer solution contains the desired 5% ethanol (v/v), but due to evaporation the actual value is lowered to approximately 2.9% ethanol.
Significance of research:
All experiments are carried out at 60 °C. Changes in the equilibrium constant (K) are used to determine what type of lipid interactions are occurring within the modeled membrane, as well as to observe liquid-ordered versus liquid-disordered regions. The value of the equilibrium constant determines the following: 1) whether monomers are mixed ideally (K = 4.0); 2) whether the monomers are mixed homogeneously, also referred to as a homo-association (K < 4.0); and 3) whether the monomers have interchanged heterogeneously, referred to as a hetero-association (K > 4.0). A plot of K is then created versus the cholesterol mol%. Each plot has similar trends, in which the value of the equilibrium constant increased as the cholesterol mol% increased, with and without the presence of ethanol, indicating a linear relationship. Initially, all the model membranes were organized in a liquid-ordered phase, but as the addition of cholesterol increased, a liquid-disordered phase was observed. The following was determined regarding the liquid-ordered and liquid-disordered transitions during the addition of cholesterol in the presence of ethanol in each model membrane: 1) at 0–15 mol% cholesterol, a liquid-disordered phase was present; 2) from 15 to 30 mol%, there was a co-existence of both phases; and 3) above 27 mol% of cholesterol, the model membrane completely converted back to the original liquid-ordered phase within a two-hour time frame. The linear relationship leveled off at 30 mol% of cholesterol. It is important to mention that ESR studies were also performed that show a coexistence of the liquid-ordered/liquid-disordered phases from 0 to 8 mol% as well as from 8 to 27 mol%. The model membrane containing DPPC, cholesterol, and exchanging lipids 1 and 2 shows a drastic increase in the linear relationship between K and the mol% of cholesterol. At approximately 8 mol% of cholesterol, the liquid-disordered phase begins. This same relationship is observed with DSPC, cholesterol, and exchanging lipids 2 and 3, but the start of the liquid-disordered phase occurs at approximately 5.2 mol%, with and without the presence of ethanol. There is also a higher equilibrium constant value, which the studies relate to the stronger acyl chain interactions in this system, its longer carbon chains resulting in a higher melting point as well. This study not only proves that in the presence of ethanol a reorganization or induced phase change takes place in the cholesterol-phospholipid interaction, but also that using higher concentrations of sterol compounds like cholesterol can hinder the effects of ethanol. The research also suggests that ethanol enhances the association between cholesterol and phospholipids within the liquid-ordered bilayers. The mechanism by which ethanol induces the liquid-disordered phase and enhances the cholesterol-phospholipid association is still not understood. The researchers have suggested that the liquid-disordered formation may arise partly from ethanol disrupting the hydrophobic region of the phospholipids, binding closely to the hydrophilic region of the phospholipid, and acting as "filler," since ethanol cannot closely align with the neighboring phospholipids. All of these possible mechanisms can be attributed to ethanol's amphiphilic nature.
AMDS
Research overview:
In this study there are several atomic-scale molecular dynamics simulations created to illustrate how ethanol affects biomembranes containing phospholipids. The phospholipid membrane systems are comparable to the model membranes above, but consist of only one phospholipid, either palmitoyl-oleoyl-phosphatidylcholine (POPC) or palmitoyl-oleoyl-phosphatidylethanolamine (POPE). The primary difference between phosphatidylcholine (PC) and phosphatidylethanolamine (PE) is that the three methyl groups attached to the nitrogen atom in the PC structure are replaced by three hydrogen atoms in PE. The overall purpose of this study is similar to the study described above: determining the effects of ethanol on biomembranes and how it is able to increase disorder in the membrane interior region, forming non-lamellar phases in phospholipids. The experimental method and analytical technique are quite different. The previous study emphasized the NNR technique, using a set of host phospholipids, exchanging lipids, ethanol, and cholesterol to create model membranes. An aqueous solution containing 5% ethanol (v/v) was maintained, but the concentration of cholesterol was varied to show how this sterol compound can inhibit the effects of ethanol (the induction of a liquid-disordered phase or non-lamellar phases), as depicted in the plots of the equilibrium constant (K) versus the mol% of cholesterol for each model membrane. In this study, the phospholipid membrane is comparable to the model membrane and consists of POPC, ethanol, water, and in some cases added monovalent ions (Na+, K+, and Cl−) that are transported throughout the membrane in the presence of ethanol. The concentration of ethanol in the aqueous solution varies from 2.5 to 30 mol%, but no sterol compound is added. Atomic-scale molecular dynamics simulations are used to monitor the changes in the phospholipid membrane. All the simulations are carried out using the GROMACS simulation suite along with other methods that are essential to perform the simulations. The temperature and pressure are controlled at 310 K and 1 bar. The simulations are measured on various time scales, including femtoseconds (fs), picoseconds (ps), and nanoseconds (ns). A typical simulation is composed of approximately 128 POPC lipids and 8000 solvent molecules, including water and ethanol. In each simulation, the ethanol molecules, water molecules, head group regions, acyl chains, and monovalent ions are all color-coded, which aids in interpreting the results of the simulations. The concentrations of ethanol are 2.5, 5.0, 15.0 and 30 mol%. The number of ethanol molecules depends on the concentration of ethanol present in the phospholipid membrane. Force field parameters are measured for the POPC lipids and monovalent ions (Na+, K+, and Cl−), which is very important.
A summary of the atomic-scale molecular dynamics simulations is then provided, containing the following information: 1) a system number that corresponds to a particular phospholipid simulation; 2) the concentration of ethanol (mol%) used in that simulation; 3) the concentration of ethanol (v/v%); 4) the ethanol/lipid ratio derived from the simulation; 5) the area (nm²) of the phospholipid membrane, which details the expansion of the membrane as the concentration of ethanol increases; 6) the thickness of the membrane, based on the distance between the average positions of the phosphorus atoms on opposite sides of the phospholipid membrane; and 7) the tilt of the POPC head group, based on changes in its angle towards the interior region of the phospholipid membrane, which was surprisingly not very significant.
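As an illustration of one analysis step mentioned above, a mass density profile along the membrane normal can be computed directly from frame coordinates; the sketch below is generic (the array names and box height are assumptions, and no specific MD package's API is implied):

```python
# Sketch: mass density profile rho(z) from one trajectory frame.
import numpy as np

def mass_density_profile(z_coords, masses, box_z, nbins=100, xy_area=1.0):
    edges = np.linspace(0.0, box_z, nbins + 1)
    hist, _ = np.histogram(z_coords, bins=edges, weights=masses)
    slab_volume = xy_area * (box_z / nbins)     # volume of each z-slab
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / slab_volume

# usage (hypothetical arrays): z, rho = mass_density_profile(ethanol_z, m, 7.0)
```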
Significance of research:
The summary of the POPC simulations described above shows that the system's area per lipid was initially 0.65 ± 0.01 nm² but increases by more than 70%, to 1.09 ± 0.03 nm², at 10 mol% of ethanol, indicating that the membrane swells and expands as ethanol permeates through its exterior region. Due to the expansion of the membrane, the membrane thickness decreases from 3.83 ± 0.06 to 2.92 ± 0.05 nm, measured as the distance between the phosphorus atoms on opposite sides of the membrane. The study also supports the fact that ethanol prefers to bond just below the hydrophilic region of the phospholipids, near the phosphate groups. Ethanol at this location forms strong hydrogen bonds with the water molecules. The results are depicted in the simulations and supported by mass density profiles as well. The mass density profiles show the location of the POPC lipids, water, and ethanol relative to the hydrophobic core of the membrane and the concentration of ethanol. The mass density of ethanol increases as the concentration increases, indicating that ethanol is moving towards the hydrophobic core of the membrane; the membrane becomes partially destroyed. The simulations also show that the interior of the membrane starts to become more hydrophilic, due to the presence of water molecules in the interior region once the membrane is partially destroyed. The presence of ethanol also induced the formation of non-lamellar (non-bilayer) phases within the interior region (hydrophobic core) of the phospholipid membrane. The results are supported by the simulations, which show that at approximately 12 mol% of ethanol the membrane is no longer able to tolerate and adapt to the presence of the ethanol, resulting in non-lamellar phases. The non-lamellar phases formed are described as irreversible inverted micelles. This irreversibility of the inverted micelles is supported by mass density profiles, which display an overlapping of leaflets from opposite membranes that interact, forming a strong interlocking between the acyl chains (hydrophobic regions) with and without the presence of ethanol. Snapshots of the simulations are produced at 100 ns, comparing the phospholipid membrane system in the presence and absence of ethanol, and these continue to support ethanol's preference to bind near the hydrophilic region of the phospholipid. The researchers also added monovalent ions as salt (NaCl) to the phospholipid membrane system, which formed non-lamellar phases (micelles) as well. This phenomenon is important because they predict that in the presence of ethanol the micelles can serve as transporters for hydrophilic structures across the membrane. Overall, this study shows that ethanol is able to penetrate throughout the membrane. A very important point revealed in this study is the fact that ethanol can destroy epithelial tissues (lips, throat, stomach, mouth) in humans. Therefore, one must consider the damaging effects of alcoholic beverages, some of which contain up to 40% ethanol (v/v).
Conclusion and possible further research studies
The following was concluded based on ethanol's ability to induce non-lamellar phases:
Ethanol does induce non-lamellar (non-bilayer) phases, but this process is concentration-dependent; on average, the bilayer is preserved at ethanol concentrations below approximately 10 mol%.
Ethanol prefers to bond in the hydrophilic region near the phosphate groups, which can be attributed to its amphiphilic character.
The effects of ethanol can be reversed or hindered in the presence of cholesterol (sterol compounds).
It may be necessary to perform a future study to compare the maximum amount of cholesterol (30 mol%) obtained in the NNR study to varied concentrations of ethanol as depicted in the AMDS study to see if ethanol is still hindered in the presence of sterol compounds.
Notes
Ethanol
Phospholipids | Ethanol-induced non-lamellar phases in phospholipids | [
"Chemistry"
] | 6,119 | [
"Phospholipids",
"Signal transduction"
] |
27,264,213 | https://en.wikipedia.org/wiki/Organic%20light-emitting%20transistor | An organic light-emitting transistor (OLET) is a form of transistor that emits light. These transistors have potential for digital displays and on-chip optical interconnects.
OLETs are a new light-emission concept, providing planar light sources that can be easily integrated into substrates like silicon, glass, and paper using standard microelectronic techniques.
OLETs differ from OLEDs in that an active matrix can be made entirely of OLETs, whereas OLEDs must be combined with switching elements such as TFTs.
See also
Light-emitting diode (LED)
Light-emitting transistor (LET)
Organic field-effect transistor (OFET)
Organic light-emitting diode (OLED)
References
Molecular electronics
Nonlinear optics
Organic electronics
Photonics | Organic light-emitting transistor | [
"Chemistry",
"Materials_science"
] | 170 | [
"Nanotechnology",
"Molecular physics",
"Molecular electronics"
] |
27,268,344 | https://en.wikipedia.org/wiki/Maximum%20ramp%20weight | The maximum ramp weight (MRW) (also known as the maximum taxi weight (MTW)) is the maximum weight authorised for manoeuvring (taxiing or towing) an aircraft on the ground as limited by aircraft strength and airworthiness requirements. It includes the weight of taxi and run-up fuel for the engines and the auxiliary power unit (APU).
It is greater than the maximum takeoff weight due to the fuel that will be burned during the taxi and run-up operations.
The difference between the maximum taxi/ramp weight and the maximum take-off weight (the maximum taxi fuel allowance) depends on the size of the aircraft, the number of engines, APU operation, and engine/APU fuel consumption, and typically corresponds to a 10-to-15-minute allowance for taxi and run-up operations.
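As a rough worked example of the relationship described above, the taxi fuel allowance can be estimated from a ground-idle fuel flow and a taxi time. The numbers below are illustrative only, not certified values for any aircraft type.

```python
def taxi_fuel_allowance_kg(fuel_flow_kg_per_min: float, taxi_minutes: float) -> float:
    # Fuel expected to burn during taxi and run-up; this approximates
    # the difference between MRW and MTOW described above.
    return fuel_flow_kg_per_min * taxi_minutes

mtow_kg = 351_500                              # hypothetical MTOW
allowance_kg = taxi_fuel_allowance_kg(50, 15)  # 50 kg/min for 15 min = 750 kg
mrw_kg = mtow_kg + allowance_kg                # 352,250 kg
print(allowance_kg, mrw_kg)
```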
See also
Aircraft gross weight
Maximum takeoff weight (MTOW)
Maximum landing weight (MLW)
Maximum zero-fuel weight (MZFW)
Manufacturer's empty weight (MEW)
References
External links
Synthesis of subsonic airplane design
Aircraft weight and balance
Aircraft Design Synthesis and Analysis
Pilot's Handbook of Aeronautical Knowledge (2008)
Aircraft weight measurements | Maximum ramp weight | [
"Physics",
"Engineering"
] | 240 | [
"Aircraft weight measurements",
"Mass",
"Matter",
"Aerospace engineering"
] |
27,268,816 | https://en.wikipedia.org/wiki/Cycle%20decomposition%20%28graph%20theory%29 | In graph theory, a cycle decomposition is a decomposition (a partitioning of a graph's edges) into cycles. Every vertex in a graph that has a cycle decomposition must have even degree.
Cycle decomposition of Kn and Kn − I
Brian Alspach and Heather Gavlas established necessary and sufficient conditions for the existence of a decomposition of a complete graph of even order minus a 1-factor (a perfect matching) into even cycles and a complete graph of odd order into odd cycles. Their proof relies on Cayley graphs, in particular, circulant graphs, and many of their decompositions come from the action of a permutation on a fixed subgraph.
They proved that for positive even integers m and n with 4 ≤ m ≤ n, the graph Kn − I (where I is a 1-factor) can be decomposed into cycles of length m if and only if the number of edges in Kn − I is a multiple of m. Also, for positive odd integers m and n with 3 ≤ m ≤ n, the graph Kn can be decomposed into cycles of length m if and only if the number of edges in Kn is a multiple of m.
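The divisibility condition of the theorem is easy to check directly. The sketch below tests only the arithmetic condition stated above; it does not construct an actual decomposition.

```python
def decomposable_into_m_cycles(n: int, m: int) -> bool:
    # Alspach-Gavlas: K_n (n odd) decomposes into m-cycles for odd
    # 3 <= m <= n, and K_n - I (n even) into m-cycles for even
    # 4 <= m <= n, exactly when m divides the edge count.
    if n % 2 == 1:
        if not (3 <= m <= n and m % 2 == 1):
            return False
        edges = n * (n - 1) // 2          # |E(K_n)|
    else:
        if not (4 <= m <= n and m % 2 == 0):
            return False
        edges = n * (n - 2) // 2          # |E(K_n)| minus the n/2 matching edges
    return edges % m == 0

print(decomposable_into_m_cycles(9, 3))   # True: K_9 has 36 edges and 3 | 36
print(decomposable_into_m_cycles(8, 4))   # True: K_8 - I has 24 edges and 4 | 24
```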
References
Graph theory | Cycle decomposition (graph theory) | [
"Mathematics"
] | 220 | [
"Graph theory stubs",
"Discrete mathematics",
"Graph theory",
"Combinatorics",
"Mathematical relations"
] |
27,274,102 | https://en.wikipedia.org/wiki/Relief%20well | Relief wells are used both in the natural gas and petroleum industry and in flood control.
Fossil fuels
In the natural gas and petroleum industry, a relief well is drilled to intersect an oil or gas well that has experienced a blowout. Specialized liquid, such as heavy (dense) drilling mud followed by cement, can then be pumped down the relief well in order to stop the flow from the reservoir in the damaged well.
The first use of a relief well was in Texas in the mid-1930s when one was drilled to pump water into an oil well that had cratered and caught on fire.
Flood control
In flood control, a different type of relief well is used adjacent to earthen levees to relieve the pressure on the lake or river side of the levee and thus to prevent its collapse. The greater flow of water in the water source, typically during a flood, creates a pressure gradient such that more water infiltrates the soil of the levee. Water may then flow through the soil towards the dry side of the levee, resulting in sand boils, liquefaction of the soil, and ultimately the destruction of the levee. Relief wells act like valves to relieve the water pressure and allow excess water to be diverted safely, for example to a canal. By relieving the water pressure in this way, relief wells can prevent sand boils from occurring.
References and external links
Oil wells
Oilfield terminology | Relief well | [
"Chemistry"
] | 288 | [
"Petroleum",
"Petroleum technology",
"Oil wells",
"Petroleum stubs"
] |
27,274,503 | https://en.wikipedia.org/wiki/Time%20triple%20modular%20redundancy | Time triple modular redundancy, also known as TTMR, is a patented single-event upset mitigation technique that detects and corrects errors in a computer or microprocessor. TTMR allows the use of very long instruction word (VLIW) style microprocessors in space or other applications where external sources, such as radiation, would cause an elevated rate of errors. TTMR permits triple modular redundancy (TMR) protection in a single processor.
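The sketch below illustrates the error detection and correction described above, under the assumption that TTMR follows an execute-compare-vote scheme: the operation is performed twice and the results compared, and on disagreement a third execution breaks the tie by majority vote. This serial Python sketch only illustrates the logic; actual TTMR issues the redundant copies through parallel VLIW instruction slots in hardware.

```python
def ttmr_execute(op, *args):
    # Run the operation twice (separated in time on real hardware).
    r1, r2 = op(*args), op(*args)
    if r1 == r2:
        return r1                  # no upset detected: accept the result
    r3 = op(*args)                 # disagreement: third, tie-breaking run
    if r3 == r1 or r3 == r2:
        return r3                  # majority vote corrects the upset
    raise RuntimeError("uncorrectable error: all three results differ")

print(ttmr_execute(lambda a, b: a + b, 2, 3))  # 5
```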
Space Micro Inc developed and patented TTMR. It has been implemented in Space Micro's space qualified single-board computers, such as the Proton200k.
References
External links
TTMR Patent
Error detection and correction | Time triple modular redundancy | [
"Engineering"
] | 150 | [
"Error detection and correction",
"Reliability engineering"
] |
21,029,001 | https://en.wikipedia.org/wiki/Nikolay%20Fedorov%20%28painter%29 | Nikolay Ivanovich Fedorov (Russian: Николай Иванович Фёдоров; 13 October 1918, Vyatka (now Kirov) – 16 November 1990) was a Soviet painter and textile designer.
Collections of his works were acquired by the Russian Museum in St. Petersburg and by the Museum of Decorative and Applied Arts in Moscow and also are permanently exhibited in the State Darwin Museum in Moscow and in the museums in Tomsk and Krasnoyarsk.
Honors and exhibitions
A member of the USSR's Union of Artists from 1956, he was awarded the title of Honored Artist of Russia in 1978.
A collection of textiles developed with his participation won the Grand Prix, a Diploma of the 1st degree, and a gold medal at the World Exhibition in Brussels in 1958. His tapestries have been exhibited several times at the Leipzig Fair. He is one of the authors of the curtains for the Bolshoi Theatre in Moscow, the curtains for the concert hall in the Hotel Russia (with Kausov), and those for the assembly hall of the Palace of Culture of Moscow's Textile Institute (with Shubnikova), and the author of the curtains for the concert hall at the Palace of Culture of the Ministry of Internal Affairs.
A joint collection of his and Shubnikova's works represented Russian textile art of the 1940s–1950s in a 1993 exposition of the Russian Museum in St. Petersburg. The exhibition then made a long tour through several countries in Europe during the 1990s.
Documenting the search for Tunguska meteorite
In 1939, he participated as an artist in Leonid Kulik's last expedition in search of the Tunguska "meteorite" (what exactly caused the event remains controversial). Later, in 1984 and 1988, he also participated in Tunguska meteorite expeditions under the guidance of Academician Vasiliev. His paintings depicting eyewitness reports and later scientific theories were exhibited in many museums and used in several books.
Textiles
Nikolay's textiles were produced for many years by the Moscow Weaving and Finishing Complex (MTOK) and were widely used. Some samples were purchased by the Moscow Film Studio (Mosfilm) and used as set curtains in many popular films. One textile, based on classical French tapestry, was used in the popular Russian TV series "Twelve Chairs".
References
Roy A. Gallant, The day the sky split apart: investigating a cosmic mystery, Atheneum Books for Young Readers, 1995. , .
Roy A. Gallant, Meteorite Hunter: The Search for Siberian Meteorite Craters, McGraw-Hill, 2002. , .
External links
Nikolay Ivanovich Fedorov Official web site
1918 births
1990 deaths
Soviet painters
Tunguska event | Nikolay Fedorov (painter) | [
"Physics"
] | 559 | [
"Unsolved problems in physics",
"Tunguska event"
] |
21,031,034 | https://en.wikipedia.org/wiki/Particle%20segregation | In particle segregation, particulate solids, and also quasi-solids such as foams, tend to segregate by virtue of differences in the size, and also physical properties such as volume, density, shape and other properties, of the particles of which they are composed. Segregation occurs mainly during powder handling and is pronounced in free-flowing powders. One effective method of controlling granular segregation is to make the mixture's constituents sticky using a coating agent. This is especially useful when a highly active ingredient, like an enzyme, is present in the mixture. Powders that are inherently not free-flowing and exhibit high levels of cohesion/adhesion between the components are sometimes difficult to mix, as they tend to form agglomerates. In such cases the clumps of particles can be broken down by the use of mixers that generate high shear forces or that subject the powder to impact. Once these powders have been mixed, however, they are less susceptible to segregation, because the relatively high inter-particulate forces resist the inter-particulate motion that leads to unmixing.
Granular segregation is also called "demixing" in industrial environment.
Segregation mechanisms
The five major segregation mechanisms are
Percolation segregation
Flotation segregation
Elutriation
Transport segregation
Agglomeration segregation
Percolation
Sifting occurs when there is a significant variation of particle diameter in a mixture. Relative movement of particles causes the finer particles to sift through the coarser ones.
Flotation
Vibration of the mixture brings the smaller particles below the coarser ones, with the effect that the coarse particles end up closer to the surface of the mixture.
Elutriation
In this mechanism, the lighter or fluffier particles form a 'fluidized' layer. Only coarser particles can penetrate the fluidized fines and the finer particles remain in the top layer.
Transport
The finer particles in a mix are susceptible to becoming airborne in the presence of airflow. They move away from the deposition point, whereas the coarser particles tend to remain close to it.
Agglomeration
Some components may form lumps. These lumps create inhomogeneity in the mix, since they locally concentrate a large amount of a single component.
References
Granularity of materials | Particle segregation | [
"Physics",
"Chemistry"
] | 462 | [
"Particle technology",
"Materials",
"Granularity of materials",
"Matter"
] |
8,416,361 | https://en.wikipedia.org/wiki/Sanitary%20engineering | Sanitary engineering, also known as public health engineering or wastewater engineering, is the application of engineering methods to improve sanitation of human communities, primarily by providing the removal and disposal of human waste, and in addition to the supply of safe potable water. Traditionally a branch of civil engineering and now a subset of environmental engineering, in the mid-19th century, the discipline concentrated on the reduction of disease, then thought to be caused by miasma. This was accomplished mainly by the collection and segregation of sewerage flow in London specifically, and Great Britain generally. These and later regulatory improvements were reported in the United States as early as 1865.
It is also concerned with environmental factors that do not have an immediate and clearly understood effect on public health. Areas outside the purview of sanitary engineering include aesthetic concerns such as landscaping, and environmental conservation as it pertains to plants and animals.
Skills within this field are usually employed for the primary goal of disease prevention within human beings by assuring a supply of healthy drinking water, treatment of waste water, and removal of garbage from inhabited areas.
Compared to (for example) electrical engineering or mechanical engineering which are concerned primarily with closed systems, sanitary engineering is a very interdisciplinary field which may involve such elements as plumbing, fire protection, hydraulics, life safety, constructive modelling, information technology, project design, microbiology, pathology and the many divisions within environmental science and environmental technology. In some cases, considerations that fall within the field of social sciences and urban planning must be factored in as well.
Although sanitary engineering may be most associated with the design of sewers, sewage treatment and wastewater treatment facilities, recycling centers, public landfills and other things which are constructed, the term applies equally to a plan of action to reverse the effects of water pollution or soil contamination in a specific area.
History
Irrigation systems were invented five to seven thousand years ago as a means of supplying water to agriculture-based societies. Aqueducts and irrigation systems were among the first forms of wastewater engineering. As population centers became more dense, they were used to remove sewage from settlements. The Romans were among the first to demonstrate the effectiveness of the aqueduct. The Dark Ages marked a period where progress in water management came to a halt.
As populations grew, the management of human waste became a growing concern and a public health threat. By the 1850s in London, more than 400,000 tons of sewage were flushed into the River Thames each day - around 150 million tons per year. Diseases such as smallpox, diphtheria, measles, scarlet fever, typhus, cholera, and typhoid were spread via the contaminated water supply. During the 19th century, major cities started building sewage systems to remove human waste out of cities and into rivers.
Sanitation in the 1900s
During the 1900s, the activated sludge process was invented: a form of water purification that uses bacteria to consume human feces, with chlorine applied later in the process to kill off the bacteria. In the 1950s, public health reports provided plans for supplying clean water to the public, starting with the identification of potential hazards. The reports looked carefully at water contamination as well as at how drinking water was being treated, and prioritized methods that were effective yet not too costly. Sanitation cost remains the main issue in many countries outside the United States: home water and sanitation systems start at around $50 a month, while many citizens do not earn enough to spend on non-necessities.
Over the centuries, much has changed in the field of wastewater engineering. Advancements in microbiology, chemistry, and engineering have drastically changed the field. Today, wastewater engineers also work on the collection of clean water for drinking, chemically treating it, and using UV light to kill off micro-organisms. They also treat water pollution in wastewater (blackwater and greywater) so that this water may be made safe for use without endangering the population and environment around it. Wastewater treatment and water reclamation are areas of concern in this field.
Harm Huizenga
Prior to modern forms of sanitation in neighborhoods and cities, people would simply leave their trash on the street. In 1892, it was such an issue that a man named Harm Huizenga volunteered to clean up the mess by himself; the Dutchman went around the streets of Chicago in his wagon, picking up the city's garbage. Small efforts like this continued through the early 1900s until around 1968, when Huizenga's grandson, Wayne Huizenga, turned his grandfather's idea into a business, Waste Management. By the 1970s, waste management as a whole was seen by the public as a necessary practice.
Sanitation in the United States
California/Counties
In the early 1940s, many counties in the state had problems with waste disposal, especially in the Lake Tahoe area. Citizens of these towns feared that their cities' poor sewage systems would cause outbreaks of illnesses such as poliomyelitis, cholera, and hepatitis. Cholera in particular is the biggest health risk attached to waste management; the illness is caused by bacteria, especially when a person ingests food or water containing them. In poorer areas this is extremely likely due to cross-contamination between waste and drinking water.
Counties
El Dorado County has numerous garbage collection facilities, some operated by private companies. In residential areas, the main source of waste is oil. Since then, many waste management facilities have been built in El Dorado County, reducing the risk of these illnesses. Since the fifties, the county has used its contracts with these companies to provide a low-cost and successful method of keeping its towns clean. Today, seven franchises with different pickup areas are assigned to the county, such as El Dorado Disposal and American River Disposal.
The San Joaquin Valley is strongly recycling-focused. San Joaquin County's waste management website offers many tips on how to recycle all recyclable items, in the hope that the county's residents will comply. One tip is to verify that every item placed in a recycling bin is in fact recyclable, because otherwise the load might not get recycled at all. The website is a helpful public resource for managing waste in residential areas.
Education
Engineering
Wastewater engineering is not usually its own degree course, but a specialization from degrees such as environmental and sanitary engineering, sanitary engineering, civil engineering, environmental engineering, bio-chemical engineering, or chemical engineering. Formal education for wastewater engineers begins in high school with students taking classes such as chemistry, biology, physics, and higher mathematics including calculus. After high school most jobs require certification from a state agency. Those wanting to advance in the industry should pursue a sanitary engineering, environmental and sanitary engineering, civil engineering, mechanical engineering, environmental engineering, or a facilities engineering degree. Gaining experience through internships and working while in college is a common pathway toward advancement.
Education about waste treatment requires course work in systems design, machinery design principles, water chemistry, and similar coursework. Other classes may include Chemistry of Plant Processes, and various plant operations courses.
Wastewater engineers may advance in their careers through additional education and experience. With additional knowledge and experience one can become the manager of an entire plant. The accreditation body certifying the education for the degree and license is the Accreditation Board for Engineering and Technology (ABET). Over time, some companies may require the wastewater engineer to continue their education to keep up with any changes in technology.
Obtaining one's master's degree is encouraged since many companies list it as a preference in selection.
As of 2013, 76 percent of those employed in this field have a bachelor's degree, 17 percent have a master's degree, and three percent have a post-doctoral degree. The average annual salary is approximately $83,360.
Plant Operations
Initial employment in wastewater engineering can be obtained by those with and without advanced formal education. The California State Water Resources Control Board (SWRCB), for example, shows how individuals can advance through a progression of certifications as Waste Water Treatment Operators. The Board uses a five level classification system to classify water treatment facilities into categories I-V according to the population served and the complexity of the treatment system.
The Operator Certification requirements for water treatment operators and waste water treatment operators are described in detail by State law. To meet certification requirements, operators must submit an application to SWRCB, have the necessary work experience, meet the educational requirements, and pass an examination based on the knowledge, skill, and abilities described in the regulations. Operators are required to renew their certificates every three years. To be eligible for renewal, certified operators must complete a specified number of continuing education hours after the previous issuance of a certificate.
Job description and typical tasks
Important job types working in sanitary engineering include sanitation workers, waste collectors and wastewater engineers.
Wastewater engineers use a variety of skills and must have knowledge of mechanical and environmental engineering. They are required to perform tasks and demonstrate knowledge in design, mathematics, English, construction, physics, chemistry, biology, management, and personnel. Wastewater engineers must have skills in complex problem solving, critical thinking, mathematics, active listening, judgement, reading comprehension, speaking, writing, science, and system analysis. Typical work activities include problem solving, communication with management and staff, gathering information, analyzing data, evaluating standards and complying with them, and communicating with others in the field.
Wastewater engineers perform these activities by combining their knowledge and skills to perform tasks. These tasks are to understand computer-aided design programs, and to conduct studies for the construction of facilities, water supply systems and collection systems. They may design systems for wastewater collection machinery, as well as system components. They may perform water flow analysis, then select designs and equipment based on government and industry standards. Some are involved with a specific area of concern such as waste collection or the maintenance of waste water facilities and stormwater drainage systems within an area. Others cover a broader scope of activities that might include maintenance of the public water supply, collection of residential yard waste program, disposal of hazardous waste, recycling strategies and even community programs where individuals or businesses "adopt" an area and either maintain it themselves or donate funds for doing so.
Wastewater engineers may also map out topographical and geographical features of Earth to determine the best means of collection, design pipe and pumped collection systems, and design treatment processes for collected wastewater.
Typical employers
Wastewater engineers work for private companies, state and local governments, and special districts.
Modern challenges
Water scarcity
Water managers confront new challenges and need new technology as water levels decrease due to increasingly frequent and extended droughts. Technologies such as sonar mapping are being used in wells to determine the volume of water they can hold. For example, the United States Geological Survey and the State of New York have worked together since the 1980s to map underground aquifers; today they have thorough maps of these aquifers to assist in water management.
Desalination plants may be required in the future for the regions hardest hit by water scarcity. Desalination is a process of removing salt from water, for example by evaporation: water is evaporated, passed through membranes, and then cooled and condensed, allowing it to flow either back into the main water line or out to sea.
Smart Sanitation
Advances in sensor technology, data analytics, and automation are enabling the development of smart sanitation systems that can monitor water quality, detect leaks, optimize treatment processes, and improve overall efficiency. Sanitary engineers need to leverage these technologies to enhance the performance and reliability of sanitation infrastructure.
Climate change
Wastewater treatment contributes to global warming in several ways. One factor is the emission of greenhouse gases, such as carbon dioxide, methane, and nitrous oxide, from treatment facilities. These gases arise from the decomposition of organic material by the anaerobic bacteria that clean the leftover waste. Even so, the greenhouse gases produced by a plant's other equipment still exceed the contribution of the anaerobic bacteria, and the power usage of that machinery is very high. For this reason, many facilities are being renovated to rely more heavily on anaerobic bacteria and less on such equipment.
Impacts of climate change on sanitary engineering vary based on region and the sanitation solutions employed there. In the Arctic, permafrost melting has caused damage to pipes and other infrastructure. In the Northeastern United States, increased precipitation has overwhelmed aging infrastructure not equipped to handle the massive volume of water from heavy precipitation. In the Western United States, prolonged drought has decreased water availability. This has led some wastewater facilities to expand recycled and reclaimed water programs. Climate change has also affected water distribution pipes. Physical stress from climate change-related conditions such as extreme rainfall or drought increases the rate of pipe corrosion, adding to facility cost.
References
Engineering disciplines
Sanitation
Waste management concepts
Waste treatment technology | Sanitary engineering | [
"Chemistry",
"Engineering"
] | 2,630 | [
"Water treatment",
"Building engineering",
"Chemical engineering",
"Civil engineering",
"nan",
"Environmental engineering",
"Waste treatment technology",
"Architecture"
] |
8,420,153 | https://en.wikipedia.org/wiki/Biohydrometallurgy | Biohydrometallurgy is a technique in the world of metallurgy that utilizes biological agents (bacteria) to recover and treat metals such as copper. Modern advances in biohydrometallurgy began with the more efficient bioleaching of copper in the 1950s.
Important Definitions
Bio: Shortened form of biology; refers to the use of bacteria.
Hydro: Refers to the use of water; the process occurs in aqueous environments.
Metallurgy: The separating and refining of metals from other substances.
Bioleaching: The use of biological agents (bacteria) to extract metals from ores or soils; a general term encompassing all biotechnological forms of extraction (hydrometallurgy, biohydrometallurgy, biomining, etc.).
General Information
Interdisciplinary field involving processes that
make use of microbes, usually bacteria and archaea
mainly take place in aqueous environments
deal with metal production and treatment of metal containing materials and solutions
"Biohydrometallurgy may generally referred to as the branch of biotechnology dealing with the study and application of the economic potential of the interactions between microbes and minerals. It concerns, thus, all those engaged, directly or indirectly, in the exploitation of mineral resources and in environmental protection: geologists, economic geologists, mining engineers, metallurgists, hydrometallurgists, chemists and chemical engineers. In addition to these specialists, there are the microbiologists whose work is indispensable in the design, implementation and running of biohydrometallurgical processes."
Biohydrometallurgy was first used more than 300 years ago to recover copper. Its uses have since expanded to the extraction of gold, uranium, and other metals.
Hydrometallurgy
Hydrometallurgy refers to a specific process that exploits the chemical properties of water, creating an aqueous solution from which metals are extracted through a series of chemical reactions.
Biohydrometallurgy as a Science
Biohydrometallurgy represents the overlap between the world of microorganisms and the process of hydrometallurgy; microorganisms can be used for the recovery and extraction of metals.
Applications
Biohydrometallurgy is used to perform processes involving metals, for example, microbial mining, oil recovery, bioleaching, water-treatment and others. Biohydrometallurgy is mainly used to recover certain metals from sulfide ores. It is usually utilized when conventional mining procedures are too expensive or ineffective in recovering a metal such as copper, cobalt, gold, lead, nickel, uranium and zinc.
See also
Bacterial oxidation
metallurgy
hydrometallurgy
biotechnology
Bacteria
References
External links
BioMineWiki –a wiki on biohydrometallurgy
Metallurgy
Biotechnology | Biohydrometallurgy | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 561 | [
"Metallurgy",
"Materials science",
"nan",
"Biotechnology"
] |
8,421,381 | https://en.wikipedia.org/wiki/Hypersensitive%20site | In genetics, a hypersensitive site is a short region of chromatin detected by its extreme sensitivity to cleavage by DNase I and various other nucleases (DNase II and micrococcal nuclease). In a hypersensitive site, the nucleosomal structure is less compacted, increasing the availability of the DNA for binding by proteins, such as transcription factors and DNase I. These sites account for many inherited tendencies.
Location
Hypersensitive sites are found on every active gene, and many of these genes often have more than one hypersensitive site. Most often, hypersensitive sites are found only in chromatin of cells in which the associated gene is being expressed, and do not occur when the gene is inactive.
In DNA being transcribed, 5' hypersensitive sites appear before transcription begins, and the DNA sequences within the hypersensitive sites are required for gene expression. Note that hypersensitive sites precede active promoters.
Hypersensitive sites are generated as a result of the binding of transcription factors that displace histone octamers.
They can also be located by indirect end labelling. A fragment of DNA is cut once at the hypersensitive site with DNase and at another site with a restriction enzyme. The distance from the known restriction site to the DNase cut is then measured to give the location.
References
Genetics
Molecular biology | Hypersensitive site | [
"Chemistry",
"Biology"
] | 297 | [
"Genetics",
"Biotechnology stubs",
"Biochemistry stubs",
"Molecular biology",
"Biochemistry"
] |
8,421,442 | https://en.wikipedia.org/wiki/Trans-acting | In the field of molecular biology, trans-acting (trans-regulatory, trans-regulation), in general, means "acting from a different molecule" (i.e., intermolecular). It may be considered the opposite of cis-acting (cis-regulatory, cis-regulation), which, in general, means "acting from the same molecule" (i.e., intramolecular).
In the context of transcription regulation, a trans-acting factor is usually a regulatory protein that binds to DNA. The binding of a trans-acting factor to a cis-regulatory element in DNA can cause changes in transcriptional expression levels. microRNAs or other diffusible molecules are also examples of trans-acting factors that can regulate target sequences.
The trans-acting gene may be on a different chromosome to the target gene, but the activity is via the intermediary protein or RNA that it encodes. Cis-acting elements, on the other hand, do not code for protein or RNA. Both the trans-acting gene and the protein/RNA that it encodes are said to "act in trans" on the target gene.
Transcription factors are categorized as trans-acting factors.
See also
Trans-regulatory element
Transactivation
Transrepression
References
Genetics terms
Molecular biology | Trans-acting | [
"Chemistry",
"Biology"
] | 268 | [
"Biotechnology stubs",
"Biochemistry stubs",
"Genetics terms",
"Molecular biology",
"Biochemistry"
] |
8,422,168 | https://en.wikipedia.org/wiki/Nimbus%20Dam | The Nimbus Dam is a base load hydroelectric dam on the American River near Folsom, California. Approximately of water is retained by the dam. It is responsible for the impoundment of water from the American River to create the Lake Natoma reservoir. The dam stands 87 feet tall and spans 1,093 feet. The Nimbus powerplant consists of two generators that together produce about 15,500 kilowatts of electrical power, enough to power roughly 155,000 100-watt light bulbs. Nimbus Dam has 18 radial gates, each with its own gate bay; these are the gates completed in 1955 along with the rest of the dam. Four of the eighteen gates have had their coating system replaced, which protects them from a faster rate of corrosion; the other fourteen gates retain the original coating.
As part of the Central Valley Project (CVP), a federal water project that provides irrigation and municipal water to much of California's Central Valley, it was authorized in 1949 as a regulating reservoir for Folsom Dam, and a diversion pool for the Folsom South Canal. Construction began in 1952, and it opened in 1955.
The Nimbus Powerplant
The Nimbus Powerplant is located on the north side of the American River, on the left side of Nimbus Dam when looking east. The powerplant provides backup to the main powerplant located upstream at Folsom Dam. Each of the two generators produces approximately 7,700 kilowatts of electrical power. The generators are driven by two 9,400-horsepower turbines, which are supplied with water through six penstocks, each about 47 feet long. The Western Area Power Administration markets the power generated by the powerplants at Nimbus Dam and Folsom Dam.
The dam serves as a diversion to direct water into the Folsom South Canal, which carries water to an area approximately 10 miles northeast of the city of Lodi. The canal once provided cooling water for the SMUD nuclear power plant, Rancho Seco. Today, it continues to provide water for irrigation, water supply, and industrial purposes to its surrounding area.
The Nimbus Dam Radial Gates Project
The United States Bureau of Reclamation released a final environmental assessment for the Nimbus Dam Radial Gates Maintenance Project in May 2015. This report laid out the purpose of and need for the project, the details of the construction, and the project's environmental impact with respect to existing federal wildlife protection acts. The report argues that over half the radial gates of Nimbus Dam need a new coating system, along with other repairs that fall outside normal maintenance; no major work had been done on the gates since the construction of the dam in the 1950s. In 2014, the Bureau of Reclamation awarded an $11,141,820 contract for the construction. The project was expected to be completed by the end of 2019 and focuses on replacing the coating on the fourteen gates that still have the original coating. The project also includes the construction of a storage facility.
Along with laying out the plan for the project, managers were to abide by federal and state environmental regulations. In the writing of the final report for the maintenance project, the regulations that the project would abide by include the Fish and Wildlife Coordination Act, Endangered Species Act, Migratory Bird Treaty, executive orders for floodplain management and the protection of wetlands, Clean Air Act, and the Clean Water Act. Complying with these regulations means the project would be completed without significant environmental damage.
Impact on Fish and Wildlife
The water in Lake Natoma, the lake created by Nimbus Dam, is too cold for warmwater fish production, and the lake has never been a natural producer of fish. The rapid water exchange from Nimbus Dam sharply decreases the production of plankton, which inhibits trout growth. The Department of Fish and Game annually stocks Lake Natoma with 20,000 to 30,000 catchable-sized trout. The water exchange in the lake during the summer season increases with the operation of the Auburn–Folsom South Project, which ultimately lowers water temperatures.
Before the construction of Nimbus Dam, 125 miles of habitat for Chinook salmon and steelhead were accessible in the American River watershed. When the dam was constructed in the 1950s, this habitat was dramatically reduced, as the fish were not able to pass the dam. To ensure the salmon had a place to spawn, the Bureau of Reclamation, through the California Department of Fish and Game, opened the Nimbus Fish Hatchery downstream from Nimbus Dam in 1958. The purpose of the hatchery was to provide the salmon with an artificial spawning habitat before the fish are released back into the wild.
Fishing has been a popular recreational use of the waters below Nimbus Dam since the Nimbus Fish Hatchery was built in 1958. However, the amount of fishing severely impacted the spawning locations of fish once past the fish ladder at the hatchery. On March 1, 2018, the Nimbus Basin was permanently closed to all fishing. The closure is part of the Nimbus Hatchery Fish Passage Project, which will reorient the fish ladder and create a system for collecting adult salmon and steelhead for the hatchery. The project also includes changes to minimize the flow fluctuations in the river associated with the hatchery's weir, and to eliminate safety concerns associated with the existing weir.
Hydrology of the Lower American River and Water Quality
Nearly half of the annual precipitation in the Sacramento area occurs during a span of 60 days in the winter months, while during the summer only about one percent of the annual precipitation falls. In the American River Basin, approximately 40% of the annual runoff results from melting snow in the Sierra Nevada. Natural flow rates in the American River system are therefore low in the late summer months, and the dam varies its water output throughout the year to compensate.
Due to the contamination of groundwater caused by environmental degradation in the lower American River, the County of Sacramento created a Water Forum in 1993. Working together with water managers from El Dorado and Placer Counties, the Forum plans to provide a clean and reliable supply of water to the region by 2030 and to protect the wildlife and fish of the Lower American River. Signed in April 2000, the Water Forum Agreement called for increased surface water diversions, habitat management, water conservation, and an improved standard of flow.
See also
List of dams and reservoirs in California
References
External links
United States Bureau of Reclamation.gov: Nimbus Dam fact sheet
Parks.ca.gov: Folsom Lake State Recreation Area
Dams on the American River
Buildings and structures in Sacramento County, California
Central Valley Project
United States Bureau of Reclamation dams
Dams completed in 1955
Dams in California
1955 establishments in California | Nimbus Dam | [
"Engineering"
] | 1,414 | [
"Irrigation projects",
"Central Valley Project"
] |
8,422,399 | https://en.wikipedia.org/wiki/Microbial%20food%20web | The microbial food web refers to the combined trophic interactions among microbes in aquatic environments. These microbes include viruses, bacteria, algae, and heterotrophic protists (such as ciliates and flagellates). In aquatic ecosystems, microbial food webs are essential because they form the basis for the cycling of nutrients and energy. These webs are vital to the stability and production of ecosystems in a variety of aquatic environments, including lakes, rivers, and oceans. By converting dissolved organic carbon (DOC) and other nutrients into biomass that larger organisms may eat, microbial food webs sustain higher trophic levels. These webs are thus crucial for energy flow and nutrient cycling in both freshwater and marine ecosystems.
Role of Different Microbes
In aquatic environments, microbes constitute the base of the food web. Single-celled photosynthetic organisms such as diatoms and cyanobacteria are generally the most important primary producers in the open ocean. Many of these cells, especially cyanobacteria, are too small to be captured and consumed by small crustaceans and planktonic larvae. Instead, these cells are consumed by phagotrophic protists, which are in turn readily consumed by larger organisms.
Viruses
Aquatic ecosystems are full of viruses, which are essential for managing microbial populations. By infecting and lysing bacterial cells (bacteriophages) and, to a lesser extent, planktonic algae or phytoplankton (phycoviruses), viruses release organic matter back into the environment. This mechanism, called the viral shunt, promotes nutrient recycling and helps control microbial populations: when bacterial cells are lysed, viral particles and both particulate and dissolved organic carbon (DOC) are released, which other microorganisms can then use.
Bacteria
In the microbial food web, bacteria play a crucial role in breaking down organic materials and recycling nutrients. They transform DOC into bacterial biomass so that protists and other higher trophic levels can consume it. Additionally, bacteria take part in the nitrogen and carbon cycles, among other biogeochemical cycles.
Algae
In aquatic ecosystems, single-celled photosynthetic organisms like cyanobacteria and diatoms are the main producers. Through the process of photosynthesis, they transform sunlight into chemical energy and create the organic matter that forms the foundation of the food chain. Cyanobacteria are particularly significant in nutrient-poor environments because of their capacity to fix atmospheric nitrogen. Algal cells can also release DOC into the environment. One circumstance under which phytoplankton release DOC, termed "unbalanced growth", occurs when essential nutrients (e.g. nitrogen and phosphorus) are limiting: carbon produced during photosynthesis cannot be used for the synthesis of proteins (and subsequent cell growth) because the nutrients needed for those macromolecules are lacking, so the excess photosynthate, or DOC, is released, or exuded.
Heterotrophic Protists
In the microbial food web, protists, including ciliates and flagellates, are significant consumers. By consuming bacteria, algae, and other tiny particles, they move nutrients and energy up the food chain. Larger creatures like zooplankton feed on these protists in turn.
Microbial Interactions
The food web's microbial interactions are varied and diverse, and include predation, competition, and symbiotic relationships. For instance, certain bacteria and algae form mutualistic relationships in which the bacteria give the algae vital nutrients and the algae give the bacteria organic carbon. Competition for resources such as light and nutrients can shape microbial communities, affecting their composition and function.
Environmental Factors
Environmental factors that have a significant impact on microbial food webs include temperature, light availability, and nutrient concentrations. Temperature influences microbial growth and metabolic rates, and light availability affects photosynthetic organisms. The availability of nutrients, especially phosphorus and nitrogen, can restrict the growth and productivity of microorganisms; for instance, during times of nitrogen limitation, phytoplankton may release DOC, the phenomenon of "unbalanced growth" noted above.
Human Impact
Human activity has a major impact on microbial food webs through eutrophication, pollution, and climate change. Pollutants such as pesticides and heavy metals can disturb the activities of microbial communities. Climate change alters temperature and precipitation, affecting microbial growth and dispersal. Eutrophication, caused by nutrient runoff from cities and farms, can affect the entire aquatic food chain and result in toxic algal blooms and hypoxic conditions.
Technological Advances
Technological developments have completely changed the way that microbial food webs are studied. By analyzing genetic material from environmental samples, researchers can get insights into the diversity and roles of microbial communities using metagenomics. The utilization of remote sensing technology facilitates the large-scale monitoring of environmental variables and microbial activity, consequently augmenting our comprehension of microbial dynamics across various ecosystems.
The Microbial Loop
The microbial loop describes a pathway in the microbial food web where DOC is returned to higher trophic levels via the incorporation into bacterial biomass. This loop makes sure that the DOC created by photosynthetic organisms is used by heterotrophic bacteria and then moves up the food chain, which is crucial for sustaining the flow of nutrients and energy within the ecosystem.
Conclusion
By facilitating the transfer of nutrients and energy, microbial food webs are essential for the health and stability of aquatic ecosystems. It is crucial to comprehend these complex relationships to address environmental issues and advance sustainable management of aquatic resources. Technological developments keep expanding our understanding and illuminating the complex mechanisms that support life in the oceans of our planet.
See also
Microbial cooperation
Microbial intelligence
Microbial population biology
References
Other references
Michaels, A.F. and Silver, M.W. (1988) "Primary production, sinking fluxes and the microbial food web". Deep Sea Research Part A. Oceanographic Research Papers, 35(4): 473–90.
Microbiology | Microbial food web | [
"Chemistry",
"Biology"
] | 1,364 | [
"Microbiology",
"Microscopy"
] |
8,424,033 | https://en.wikipedia.org/wiki/DEAP | DEAP (Dark matter Experiment using Argon Pulse-shape discrimination) is a direct dark matter search experiment which uses liquid argon as a target material. DEAP utilizes background discrimination based on the characteristic scintillation pulse-shape of argon. A first-generation detector (DEAP-1) with a 7 kg target mass was operated at Queen's University to test the performance of pulse-shape discrimination at low recoil energies in liquid argon. DEAP-1 was then moved to SNOLAB, 2 km below Earth's surface, in October 2007 and collected data into 2011.
DEAP-3600 was designed with 3600 kg of active liquid argon mass to achieve sensitivity to WIMP-nucleon scattering cross-sections as low as 10⁻⁴⁶ cm² for a dark matter particle mass of 100 GeV/c². The DEAP-3600 detector finished construction and began data collection in 2016. An incident with the detector forced a short pause in the data collection in 2016. As of 2019, the experiment is collecting data.
To reach even better sensitivity to dark matter, the Global Argon Dark Matter Collaboration was formed with scientists from DEAP, DarkSide, CLEAN and ArDM experiments. A detector with a liquid argon mass above 20 tonnes (DarkSide-20k) is planned for operation at Laboratori Nazionali del Gran Sasso. Research and development efforts are working towards a next generation detector (ARGO) with a multi-hundred tonne liquid argon target mass designed to reach the neutrino floor, planned to operate at SNOLAB due to its extremely low-background radiation environment.
Argon scintillation properties and background rejection
Since liquid argon is a scintillating material, a particle interacting with it produces light in proportion to the energy deposited by the incident particle; this response is linear at low energies, before quenching becomes a major contributing factor. The interaction of a particle with the argon causes ionization and recoiling along the path of interaction. The recoiling argon nuclei undergo recombination or self-trapping, ultimately resulting in the emission of 128 nm vacuum ultraviolet (VUV) photons. Additionally, liquid argon has the unique property of being transparent to its own scintillation light; this allows light yields of tens of thousands of photons for every MeV of energy deposited.
The elastic scattering of a WIMP dark matter particle off an argon nucleus is expected to cause the nucleus to recoil. This is expected to be a very low energy interaction (of order keV), and a low detection threshold is required to be sensitive to it. Because of the necessarily low detection threshold, the number of background events detected is very high, and the faint signature of a dark matter particle such as a WIMP would be masked by the many different types of possible background events. A technique for identifying these non-dark-matter events is pulse shape discrimination (PSD), which characterizes an event based on the timing signature of the scintillation light from liquid argon.
PSD is possible in a liquid argon detector because incident particles such as electrons, high-energy photons, alphas, and neutrons create different proportions of the excited states of argon, known as singlet and triplet states, which decay with characteristic lifetimes of 6 ns and 1300 ns respectively. Interactions of gammas and electrons produce primarily triplet excited states through electronic recoils, while neutron and alpha interactions produce primarily singlet excited states through nuclear recoils. A WIMP-nucleon interaction is also expected to produce a nuclear-recoil-type signal, through the elastic scattering of the dark matter particle off the argon nucleus.
By using the arrival time distribution of light for an event, it is possible to identify its likely source. This is done quantitatively by measuring the ratio of the light measured by the photo-detectors in a "prompt" window (<60 ns) over the light measured in a "late" window (<10,000 ns). In DEAP this parameter is called Fprompt. Nuclear recoil type events have high Fprompt (~0.7) values while electronic recoil events have a low Fprompt value (~0.3). Due to this separation in Fprompt for WIMP-like (Nuclear Recoil) and background-like (Electronic Recoil) events, it is possible to uniquely identify the most dominant sources of background in the detector.
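The Fprompt calculation is simple enough to sketch. The toy example below uses the window lengths and singlet/triplet lifetimes quoted in the text, but the photon counts, singlet fractions, and event generator are illustrative, not DEAP analysis code or data.

```python
import numpy as np

def fprompt(photon_times_ns):
    # Light in the "prompt" window (<60 ns) over light in the full
    # event window (<10,000 ns).
    t = np.asarray(photon_times_ns)
    total = np.sum(t < 10_000.0)
    return np.sum(t < 60.0) / total if total else 0.0

rng = np.random.default_rng(0)

def toy_event(n_photons, singlet_fraction):
    # Draw photon arrival times from the 6 ns singlet and 1300 ns
    # triplet exponential decays.
    n_singlet = rng.binomial(n_photons, singlet_fraction)
    return np.concatenate([rng.exponential(6.0, n_singlet),
                           rng.exponential(1300.0, n_photons - n_singlet)])

print(fprompt(toy_event(200, 0.7)))  # nuclear-recoil-like event, ~0.7
print(fprompt(toy_event(200, 0.3)))  # electronic-recoil-like event, ~0.3
```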
The most abundant background in DEAP comes from the beta decay of argon-39, which has an activity of approximately 1 Bq/kg in atmospheric argon. Discrimination of beta and gamma background events from nuclear recoils in the energy region of interest (near 20 keV of electron energy) is required to be better than 1 in 10⁸ to sufficiently suppress these backgrounds for a dark matter search in liquid atmospheric argon.
DEAP-1
The first stage of the DEAP project, DEAP-1, was designed in order to characterize several properties of liquid argon, demonstrate pulse-shape discrimination, and refine engineering. This detector was too small to perform dark matter searches.
DEAP-1 used 7 kg of liquid argon as a target for WIMP interactions. Two photomultiplier tubes (PMTs) were used to detect the scintillation light produced by a particle interacting with the liquid argon. As the scintillation light produced is of short wavelength (128 nm) a wavelength-shifting film was used to absorb the ultraviolet scintillation light and re-emit in the visible spectrum (440 nm) enabling the light to pass through ordinary windows without any losses and eventually be detected by the PMTs.
DEAP-1 demonstrated good pulse-shape discrimination of backgrounds on the surface and began operation at SNOLAB. The deep underground location reduced unwanted cosmogenic background events. DEAP-1 ran from 2007 to 2011, including two changes in the experimental setup. DEAP-1 characterized background events, determining design improvements needed in DEAP-3600.
DEAP-3600
The DEAP-3600 detector was designed to use 3600 kg of liquid argon, with a 1000 kg fiducial volume; the remaining volume is used for self-shielding and as a background veto. The argon is contained in a ~2 m diameter spherical acrylic vessel, the first of its kind ever created. The acrylic vessel is surrounded by 255 high-quantum-efficiency photomultiplier tubes (PMTs) to detect the argon scintillation light, and is housed in a stainless steel shell submerged in a 7.8 m diameter shield tank filled with ultra-pure water. The outside of the steel shell carries an additional 48 veto PMTs to detect Cherenkov radiation produced by incoming cosmic particles, primarily muons.
The materials used in the DEAP detector were required to adhere to strict radio-purity standards to reduce background event contamination. All materials were assayed to determine the levels of radiation present, and inner detector components had strict requirements on radon emanation, since radon's decay daughters emit alpha radiation. The inner vessel is coated with the wavelength-shifting material TPB, vacuum-evaporated onto its surface. TPB is a common wavelength shifter in liquid argon and liquid xenon experiments due to its fast re-emission and high light yield, with an emission spectrum peaking at 425 nm, in the sensitivity region of most PMTs.
The projected sensitivity of DEAP in terms of the spin-independent WIMP-nucleus cross-section is 10⁻⁴⁶ cm² at 100 GeV/c² after three live-years of data taking.
Collaborating institutions
Collaborating institutions include:
University of Alberta
AstroCeNT
University of California, Riverside
Canadian Nuclear Laboratories
Carleton University
CIEMAT
INFN
Kurchatov Institute
Laurentian University
Johannes Gutenberg University Mainz
National Autonomous University of Mexico
Princeton University
Queen's University
Royal Holloway University of London
Rutherford Appleton Laboratory
SNOLAB
University of Sussex
Technical University of Munich
TRIUMF
This collaboration benefits largely from the experience many of the members and institutions gained on the Sudbury Neutrino Observatory (SNO) project, which studied neutrinos, another weakly interacting particle.
Status of DEAP-3600
After construction was completed, the DEAP-3600 detector started taking commissioning and calibration data in February 2015 with nitrogen gas purge in the detector. The detector fill was completed and data-taking to search for dark matter was started on August 5, 2016.
Shortly after the initial fill of the detector with liquid argon, a butyl O-ring seal failed on August 17, 2016 and contaminated the argon with 100 ppm of N₂. The detector was then vented and re-filled, this time to a level of 3300 kg to avoid a recurrence of the seal failure; this second fill was completed in November 2016. The first dark matter search results, with an exposure of 4.44 live-days from the initial fill, were published in August 2017, giving a cross-section limit of 1.2×10⁻⁴⁴ cm² for a 100 GeV/c² WIMP mass.
Improved sensitivity to dark matter was achieved in February 2019, with an analysis of data collected over 231 live-days from the second fill in 2016–2017, giving a cross-section limit of 3.9×10⁻⁴⁵ cm² for a 100 GeV/c² WIMP mass.
This updated analysis demonstrated the best performance ever achieved in liquid argon at threshold, for the pulse-shape discrimination technique against beta and gamma backgrounds. The collaboration also developed new techniques to reject rare nuclear recoil backgrounds, using the observed distribution of light in space and time after a scintillation event.
In January 2022 the experiment published results setting constraints on dark matter with Planck-scale mass, for masses between 8.3×10⁶ GeV/c² and 1.2×10¹⁹ GeV/c² and cross-sections from 1×10⁻²³ cm² to 2.4×10⁻¹⁸ cm². These were the first results for dark matter on this super-heavy mass scale.
The DEAP-3600 experiment is currently (as of June 2024) undergoing upgrades and the team will operate it for another couple of years with even better sensitivity to dark matter.
References
External links
DEAP-3600 website
DEAP-1 Project website
SNOLAB Website
SNO experiment
Experiments for dark matter search | DEAP | [
"Physics"
] | 2,134 | [
"Dark matter",
"Experiments for dark matter search",
"Unsolved problems in physics"
] |
19,920,701 | https://en.wikipedia.org/wiki/Chilled%20water | Chilled water is a commodity often used to cool a building's air and equipment, especially in situations where many individual rooms must be controlled separately, such as a hotel. The chilled water can be supplied by a vendor, such as a public utility, or created at the location of the building that will use it, which has been the norm.
Use
Chilled water cooling is not very different from typical residential air conditioning: water is pumped from the chiller to the air handler unit, where it cools the air.
Regardless of who provides it, the chilled water (typically between 3 and 6 °C, as described below) is pumped through an air handler, which captures the heat from the air, then disperses the air throughout the area to be cooled.
Site generated
As part of a chilled water system, the condenser water absorbs heat from the refrigerant in the condenser barrel of the water chiller and is then sent via return lines to a cooling tower, which is a heat-exchange device used to transfer waste heat to the atmosphere. The extent to which the cooling tower decreases the temperature depends upon the outside temperature, the relative humidity and the atmospheric pressure. The condenser water is lowered toward the wet-bulb temperature (for evaporative towers) or the dry-bulb temperature (for dry towers) before returning to the water chiller, where the chilled water is cooled to between 3 and 6 °C and pumped to the air handler, and the cycle repeats. The equipment required includes chillers, cooling towers, pumps and electrical control equipment. The initial capital outlay for these is substantial and maintenance costs can fluctuate. Adequate space must be included in building design for the physical plant and access to equipment.
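To make the heat balance concrete, the cooling delivered by a chilled-water loop can be estimated from the sensible-heat relation Q = ṁ·c_p·ΔT. The sketch below is illustrative only; the flow rate and supply/return temperatures are assumed values, not design figures.

```python
# Heat absorbed by a chilled-water loop: Q = m_dot * c_p * dT.
# Flow rate and temperatures below are assumed, illustrative values.

CP_WATER = 4.186  # kJ/(kg*K), specific heat capacity of water

def cooling_capacity_kw(flow_kg_per_s, t_return_c, t_supply_c):
    """Rate of heat pickup by the chilled water, in kW (kJ/s)."""
    return flow_kg_per_s * CP_WATER * (t_return_c - t_supply_c)

# 10 kg/s supplied at 6 degC to the air handler, returning at 12 degC:
print(f"{cooling_capacity_kw(10.0, 12.0, 6.0):.0f} kW")  # about 251 kW
```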
Utility generated
The chilled water, having absorbed heat from the air, is sent via return lines back to the utility facility, where the process described in the previous section occurs. Utility generated chilled water eliminates the need for chillers and cooling towers at the property, reduces capital outlays and eliminates ongoing maintenance costs. The physical space saved can also become rentable, increasing revenue.
Utility supplied chilled water has been used successfully since the 1960s in many cities, and technological advances in the equipment, controls and trenchless installation have increased efficiency and lowered costs.
The advantage of utility-supplied chilled water is based on economy of scale. A utility can operate one large system more economically than a customer can operate the individual system in one building. The utility's system also has back-up capacity to protect against sudden outages. The cost of such "insurance" is also markedly lower than what it would be for an individual structure.
The use of utility-supplied chilled water is most cost-effective when it is designed into the building's infrastructure or when chiller/cooling-tower equipment must be replaced. Commercial customers often lower their air conditioning costs by 10 to 20% by purchasing chilled water.
Chilled water storage
Water can also be chilled at night, when electricity is available at off-peak rates, then stored in a large, insulated tank until needed the next day for cooling.
References
External links
Chilled Water Plant Design and Specification Guide
Cooling technology
Heating, ventilation, and air conditioning
Mechanical engineering | Chilled water | [
"Physics",
"Engineering"
] | 627 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
19,922,557 | https://en.wikipedia.org/wiki/Diffusion-limited%20enzyme | A diffusion-limited enzyme catalyses a reaction so efficiently that the rate limiting step is that of substrate diffusion into the active site, or product diffusion out. This is also known as kinetic perfection or catalytic perfection. Since the rate of catalysis of such enzymes is set by the diffusion-controlled reaction, it therefore represents an intrinsic, physical constraint on evolution (a maximum peak height in the fitness landscape). Diffusion limited perfect enzymes are very rare. Most enzymes catalyse their reactions to a rate that is 1,000-10,000 times slower than this limit. This is due to both the chemical limitations of difficult reactions, and the evolutionary limitations that such high reaction rates do not confer any extra fitness.
History
The theory of diffusion-controlled reactions was originally utilized by R.A. Alberty, Gordon Hammes, and Manfred Eigen to estimate the upper limit of the enzyme–substrate reaction rate. According to their estimation, the upper limit of the enzyme–substrate reaction rate constant was 10⁹ M⁻¹ s⁻¹.
In 1972, it was observed that in the dehydration of H₂CO₃ catalyzed by carbonic anhydrase, the second-order rate constant obtained experimentally was about 1.5 × 10¹⁰ M⁻¹ s⁻¹, which was one order of magnitude higher than the upper limit estimated by Alberty, Hammes, and Eigen based on a simplified model.
To address such a paradox, Kuo-Chen Chou and his co-workers proposed a model by taking into account the spatial factor and force field factor between the enzyme and its substrate and found that the upper limit could reach 10¹⁰ M⁻¹ s⁻¹, and can be used to explain some surprisingly high reaction rates in molecular biology.
The new upper limit found by Chou et al. for enzyme-substrate reaction was further discussed and analyzed by a series of follow-up studies.
A detailed comparison between the simplified Alberty-Hammes-Eigen's model (a) and the Chou's model (b) in calculating the diffusion-controlled reaction rate of enzyme with its substrate, or the upper limit of enzyme-substrate reaction, was elaborated in the paper.
Mechanism
Kinetically perfect enzymes have a specificity constant, k_cat/K_M, on the order of 10⁸ to 10⁹ M⁻¹ s⁻¹. The rate of the enzyme-catalysed reaction is limited by diffusion and so the enzyme 'processes' the substrate well before it encounters another molecule.
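As a back-of-the-envelope illustration of the specificity constant, the snippet below compares a hypothetical enzyme's k_cat/K_M with the diffusion limit; the kinetic values are invented for the example, not measured data.

```python
# Compare a specificity constant k_cat/K_M with the diffusion limit
# (~1e8 to 1e9 M^-1 s^-1). All kinetic values here are hypothetical.

DIFFUSION_LIMIT = 1e9  # M^-1 s^-1, approximate upper bound

def specificity_constant(k_cat_per_s, k_m_molar):
    """Return k_cat / K_M in M^-1 s^-1."""
    return k_cat_per_s / k_m_molar

# Hypothetical enzyme: k_cat = 1e4 s^-1, K_M = 25 micromolar
k = specificity_constant(1e4, 25e-6)
print(f"k_cat/K_M = {k:.1e} M^-1 s^-1")                           # 4.0e+08
print(f"fraction of diffusion limit: {k / DIFFUSION_LIMIT:.0%}")  # 40%
```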
Some enzymes operate with kinetics which are faster than diffusion rates, which would seem to be impossible. Several mechanisms have been invoked to explain this phenomenon. Some proteins are believed to accelerate catalysis by drawing their substrate in and preorienting it using dipolar electric fields. Others invoke a quantum-mechanical tunneling explanation, whereby a proton or an electron can tunnel through activation barriers. Although the proton tunneling theory remains a controversial idea, it has been shown to be the only possible mechanism in the case of soybean lipoxygenase.
Evolution
There are not many kinetically perfect enzymes. This can be explained in terms of natural selection. An increase in catalytic speed may be favoured as it could confer some advantage to the organism. However, when the catalytic speed outstrips diffusion speed (i.e. substrates entering and leaving the active site, and also encountering substrates) there is no more advantage to increase the speed even further. The diffusion limit represents an absolute physical constraint on evolution. Increasing the catalytic speed past the diffusion speed will not aid the organism in any way and so represents a global maximum in a fitness landscape. Therefore, these perfect enzymes must have come about by 'lucky' random mutation which happened to spread, or because the faster speed was once useful as part of a different reaction in the enzyme's ancestry.
Examples
Acetylcholinesterase
β-lactamase
Catalase
Carbonic anhydrase
Carbon monoxide dehydrogenase
Cytochrome c peroxidase
Fumarase
Superoxide dismutase
Triosephosphate isomerase
See also
Diffusion-controlled reaction
Enzyme
Enzyme catalysis
Enzyme kinetics
Enzyme engineering
References
Catalysis
Enzyme kinetics
Chemical reaction engineering | Diffusion-limited enzyme | [
"Chemistry",
"Engineering"
] | 851 | [
"Catalysis",
"Chemical reaction engineering",
"Enzyme kinetics",
"Chemical engineering",
"Chemical kinetics"
] |
3,633,138 | https://en.wikipedia.org/wiki/Source%20transformation | Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Thévenin's theorem and Norton's theorem respectively.
Process
Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance, and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit.
Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain. In general, the concept of source transformation is an application of Thévenin's theorem to a current source, or Norton's theorem to a voltage source. However, this means that source transformation is bound by the same conditions as Thevenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources.
Source transformations are used to exploit the equivalence of a real current source and a real voltage source, such as a battery. Application of Thévenin's theorem and Norton's theorem gives the quantities associated with the equivalence. Specifically, given a real current source, which is an ideal current source I in parallel with an impedance Z, applying a source transformation gives an equivalent real voltage source, which is an ideal voltage source in series with the impedance. The impedance retains its value and the new voltage source has value equal to the ideal current source's value times the impedance, according to Ohm's law: V = I·Z. In the same way, an ideal voltage source V in series with an impedance Z can be transformed into an ideal current source in parallel with the same impedance, where the new ideal current source has value I = V/Z.
Example calculation
Source transformations are easy to compute using Ohm's law. If there is a voltage source in series with an impedance, it is possible to find the value of the equivalent current source in parallel with the impedance by dividing the value of the voltage source by the value of the impedance. The converse also holds: if a current source in parallel with an impedance is present, multiplying the value of the current source with the value of the impedance provides the equivalent voltage source in series with the impedance. A visual example of a source transformation can be seen in Figure 1.
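A minimal numeric sketch of the two conversions just described; the source and impedance values are arbitrary examples, and complex numbers stand in for frequency-domain impedances.

```python
# Source transformation via Ohm's law: the impedance keeps its value,
# only the source value and topology (series <-> parallel) change.

def voltage_to_current(v_source, impedance):
    """Thevenin (V in series with Z) -> Norton (I in parallel with Z)."""
    return v_source / impedance

def current_to_voltage(i_source, impedance):
    """Norton (I in parallel with Z) -> Thevenin (V in series with Z)."""
    return i_source * impedance

print(voltage_to_current(12.0, 4.0))     # 3.0 A in parallel with 4 ohm
print(current_to_voltage(3.0, 4.0))      # back to 12.0 V in series
print(voltage_to_current(10.0, 3 + 4j))  # (1.2-1.6j) A for a complex Z
```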
A brief proof of the theorem
The transformation can be derived from the uniqueness theorem. In the present context, it implies that a black box with two terminals must have a unique, well-defined relation between its voltage and current. It is easy to verify that the above transformation indeed gives the same V–I curve, and therefore the transformation is valid.
See also
Ohm's Law
Thévenin's theorem
Current source
Voltage source
Electrical impedance
References
Electrical engineering
Electronic engineering
Electrical circuits
Electronic circuits
Electronic design
Circuit theorems | Source transformation | [
"Physics",
"Technology",
"Engineering"
] | 618 | [
"Computer engineering",
"Equations of physics",
"Electronic design",
"Electronic circuits",
"Electronic engineering",
"Electrical circuits",
"Circuit theorems",
"Electrical engineering",
"Design",
"Physics theorems"
] |
3,633,309 | https://en.wikipedia.org/wiki/Structural%20load | A structural load or structural action is a mechanical load (more generally a force) applied to structural elements. A load causes stress, deformation, displacement or acceleration in a structure. Structural analysis, a discipline in engineering, analyzes the effects of loads on structures and structural elements. Excess load may cause structural failure, so this should be considered and controlled during the design of a structure. Particular mechanical structures—such as aircraft, satellites, rockets, space stations, ships, and submarines—are subject to their own particular structural loads and actions. Engineers often evaluate structural loads based upon published regulations, contracts, or specifications. Accepted technical standards are used for acceptance testing and inspection.
Types
In civil engineering, specified loads are the best estimate of the actual loads a structure is expected to carry. These loads come in many different forms, such as people, equipment, vehicles, wind, rain, snow, earthquakes, the building materials themselves, etc. Specified loads are also known as characteristic loads in many cases.
Buildings will be subject to loads from various sources. The principal ones can be classified as live loads (loads which are not always present in the structure), dead loads (loads which are permanent and immovable excepting redesign or renovation) and wind load, as described below. In some cases structures may be subject to other loads, such as those due to earthquakes or pressures from retained material. The expected maximum magnitude of each is referred to as the characteristic load.
Dead loads are static forces that are relatively constant for an extended time. They can be in tension or compression. The term can refer to a laboratory test method or to the normal usage of a material or structure.
Live loads are usually variable or moving loads. These can have a significant dynamic element and may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids, etc.
An impact load is one whose time of application on a material is less than one-third of the natural period of vibration of that material.
Cyclic loads on a structure can lead to fatigue damage, cumulative damage, or failure. These loads can be repeated loadings on a structure or can be due to vibration.
Imposed loads are those associated with occupation and use of the building; their magnitude is less clearly defined and is generally related to the use of the building.
Loads on architectural and civil engineering structures
Structural loads are an important consideration in the design of buildings. Building codes require that structures be designed and built to safely resist all actions that they are likely to face during their service life, while remaining fit for use. Minimum loads or actions are specified in these building codes for types of structures, geographic locations, usage and building materials. Structural loads are split into categories by their originating cause. In terms of the actual load on a structure, there is no difference between dead or live loading, but the split occurs for use in safety calculations or ease of analysis on complex models.
To meet the requirement that design strength be higher than maximum loads, building codes prescribe that, for structural design, loads are increased by load factors. These load factors are, roughly, a ratio of the theoretical design strength to the maximum load expected in service. They are developed to help achieve the desired level of reliability of a structure based on probabilistic studies that take into account the load's originating cause, recurrence, distribution, and static or dynamic nature.
Dead load
The dead load includes loads that are relatively constant over time, including the weight of the structure itself and immovable fixtures such as walls, plasterboard or carpet. The roof is also a dead load. Dead loads are also known as permanent or static loads. Building materials are not dead loads until constructed in their permanent position. IS 875 (Part 1)-1987 gives the unit weights of building materials, parts, and components.
Live load
Live loads, or imposed loads, are temporary, of short duration, or a moving load. These dynamic loads may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids and material fatigue.
Live loads, sometimes also referred to as probabilistic loads, include all the forces that are variable within the object's normal operation cycle not including construction or environmental loads.
Roof and floor live loads are produced during maintenance by workers, equipment and materials, and during the life of the structure by movable objects, such as planters and people.
Bridge live loads are produced by vehicles traveling over the deck of the bridge.
Environmental loads
Environmental loads are structural loads caused by natural forces such as wind, rain, snow, earthquake or extreme temperatures.
Wind loads
Snow, rain and ice loads
Seismic loads
Hydrostatic loads
Temperature changes leading to thermal expansion cause thermal loads
Ponding loads
Frost heaving
Lateral pressure of soil, groundwater or bulk materials
Loads from fluids or floods
Permafrost melting
Dust loads
Other loads
Engineers must also be aware of other actions that may affect a structure, such as:
Foundation settlement or displacement
Fire
Corrosion
Explosion
Creep or shrinkage
Impact from vehicles or machinery vibration
Construction loads
Load combinations
A load combination results when more than one load type acts on the structure. Building codes usually specify a variety of load combinations together with load factors (weightings) for each load type in order to ensure the safety of the structure under different maximum expected loading scenarios. For example, in designing a staircase, a dead load factor may be 1.2 times the weight of the structure, and a live load factor may be 1.6 times the maximum expected live load. These two "factored loads" are combined (added) to determine the "required strength" of the staircase.
The size of the load factor is based on the probability of exceeding any specified design load. Dead loads have small load factors, such as 1.2, because weight is mostly known and accounted for, such as structural members, architectural elements and finishes, large pieces of mechanical, electrical and plumbing (MEP) equipment, and for buildings, it's common to include a Super Imposed Dead Load (SIDL) of around 5 pounds per square foot (psf) accounting for miscellaneous weight such as bolts and other fasteners, cabling, and various fixtures or small architectural elements. Live loads, on the other hand, can be furniture, moveable equipment, or the people themselves, and may increase beyond normal or expected amounts in some situations, so a larger factor of 1.6 attempts to quantify this extra variability. Snow will also use a maximum factor of 1.6, while lateral loads (earthquakes and wind) are defined such that a 1.0 load factor is practical. Multiple loads may be added together in different ways, such as 1.2*Dead + 1.0*Live + 1.0*Earthquake + 0.2*Snow, or 1.2*Dead + 1.6(Snow, Live(roof), OR Rain) + (1.0*Live OR 0.5*Wind).
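As a sketch of the staircase example above, the factored combination 1.2·D + 1.6·L can be evaluated directly; the service loads below are assumed values chosen purely for illustration.

```python
# Factored load combination: required strength = sum(factor_i * load_i).
# Service loads (psf) are hypothetical; factors follow the text's example.

def factored_load(service_loads, load_factors):
    """Combine service loads (dict keyed by load type) with load factors."""
    return sum(load_factors[k] * service_loads.get(k, 0.0)
               for k in load_factors)

service = {"dead": 50.0, "live": 100.0}  # psf, assumed for the example
factors = {"dead": 1.2, "live": 1.6}     # 1.2*D + 1.6*L, as in the text
print(f"required strength = {factored_load(service, factors):.0f} psf")
# 1.2*50 + 1.6*100 = 220 psf
```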
Aircraft structural loads
For aircraft, loading is divided into two major categories: limit loads and ultimate loads. Limit loads are the maximum loads a component or structure may carry safely. Ultimate loads are the limit loads times a factor of 1.5 or the point beyond which the component or structure will fail. Gust loads are determined statistically and are provided by an agency such as the Federal Aviation Administration. Crash loads are loosely bounded by the ability of structures to survive the deceleration of a major ground impact. Other loads that may be critical are pressure loads (for pressurized, high-altitude aircraft) and ground loads. Loads on the ground can be from adverse braking or maneuvering during taxiing. Aircraft are constantly subjected to cyclic loading. These cyclic loads can cause metal fatigue.
See also
Hotel New World disaster – caused by omitting the dead load of the building in load calculations
Influence line
Probabilistic design
Mechanical load
Structural testing
Southwell plot
References
External links
Luebkeman, Chris H., and Donald Petting "Lecture 17: Primary Loads". University of Oregon. 1996
Fisette, Paul, and the American Wood Council. "Understanding Loads and Using Span Tables". 1997.
www.govinfo.gov/content/pkg/GOVPUB-C13-03121e193fe7b5a13f0f635aaae922aa/pdf/GOVPUB-C13-03121e193fe7b5a13f0f635aaae922aa.pdf
Civil engineering
Structural engineering
Building engineering
Mechanical engineering
Structural analysis | Structural load | [
"Physics",
"Engineering"
] | 1,722 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Building engineering",
"Structural analysis",
"Construction",
"Civil engineering",
"Mechanical engineering",
"Aerospace engineering",
"Architecture"
] |
3,636,103 | https://en.wikipedia.org/wiki/Atiyah%E2%80%93Bott%20fixed-point%20theorem | In mathematics, the Atiyah–Bott fixed-point theorem, proven by Michael Atiyah and Raoul Bott in the 1960s, is a general form of the Lefschetz fixed-point theorem for smooth manifolds M, which uses an elliptic complex on M. This is a system of elliptic differential operators on vector bundles, generalizing the de Rham complex constructed from smooth differential forms which appears in the original Lefschetz fixed-point theorem.
Formulation
The idea is to find the correct replacement for the Lefschetz number, which in the classical result is an integer counting the correct contribution of a fixed point of a smooth mapping

f : M → M.

Intuitively, the fixed points are the points of intersection of the graph of f with the diagonal (the graph of the identity mapping) in M × M, and the Lefschetz number thereby becomes an intersection number. The Atiyah–Bott theorem is an equation in which the LHS must be the outcome of a global topological (homological) calculation, and the RHS a sum of the local contributions at the fixed points of f.
Counting codimensions in M × M, a transversality assumption for the graph of f and the diagonal should ensure that the fixed-point set is zero-dimensional. Assuming M is a closed manifold should then ensure that the set of intersections is finite, yielding a finite summation as the RHS of the expected formula. Further data needed relates to the elliptic complex of vector bundles E₀, E₁, …, E_n, namely a bundle map

φ_j : f*E_j → E_j

for each j, such that the resulting maps on sections give rise to an endomorphism T of the elliptic complex. Such an endomorphism has Lefschetz number

L(T) = Σ_j (−1)^j tr H^j(T),

which by definition is the alternating sum of its traces on each graded part of the homology of the elliptic complex.

The form of the theorem is then

L(T) = Σ_{f(x)=x} Σ_j (−1)^j tr φ_{j,x} / |det(1 − df_x)|.

Here tr φ_{j,x} means the trace of φ_j at a fixed point x of f, and det(1 − df_x) is the determinant of the endomorphism 1 − df_x at x, with df_x the derivative of f (the non-vanishing of this is a consequence of transversality). The outer summation is over the fixed points x, and the inner summation over the index j in the elliptic complex.
Specializing the Atiyah–Bott theorem to the de Rham complex of smooth differential forms yields the original Lefschetz fixed-point formula. A famous application of the Atiyah–Bott theorem is a simple proof of the Weyl character formula in the theory of Lie groups.
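For readers who want the de Rham specialization spelled out, the following sketch (standard material, written here under the assumption of simple transversal fixed points) shows how the general formula collapses to the classical one:

```latex
% De Rham complex: E_j = \Lambda^j T^*M with \varphi_j = \Lambda^j (df_x)^*.
% Using the identity \sum_j (-1)^j \operatorname{tr} \Lambda^j A = \det(1-A),
% the local contribution at a simple fixed point x simplifies:
L(f) = \sum_{f(x)=x} \frac{\det(1 - df_x)}{\left|\det(1 - df_x)\right|}
     = \sum_{f(x)=x} \operatorname{sign}\det(1 - df_x),
% which is the classical Lefschetz fixed-point formula for a map with
% transversal fixed points.
```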
History
The early history of this result is entangled with that of the Atiyah–Singer index theorem. There was other input, as is suggested by the alternate name Woods Hole fixed-point theorem that was used in the past (referring properly to the case of isolated fixed points). A 1964 meeting at Woods Hole brought together a varied group:
Eichler started the interaction between fixed-point theorems and automorphic forms. Shimura played an important part in this development by explaining this to Bott at the Woods Hole conference in 1964.
As Atiyah puts it:
[at the conference]...Bott and I learnt of a conjecture of Shimura concerning a generalization of the Lefschetz formula for holomorphic maps. After much effort we convinced ourselves that there should be a general formula of this type [...].
and they were led to a version for elliptic complexes.
In the recollection of William Fulton, who was also present at the conference, the first to produce a proof was Jean-Louis Verdier.
Proofs
In the context of algebraic geometry, the statement applies to smooth and proper varieties over an algebraically closed field. This variant of the Atiyah–Bott fixed-point formula was proved by expressing both sides of the formula as appropriately chosen categorical traces.
See also
Bott residue formula
Notes
References
This states a theorem calculating the Lefschetz number of an endomorphism of an elliptic complex.
These give the proofs and some applications of the results announced in the previous paper.
External links
Fixed-point theorems
Theorems in differential topology | Atiyah–Bott fixed-point theorem | [
"Mathematics"
] | 834 | [
"Theorems in mathematical analysis",
"Theorems in differential topology",
"Fixed-point theorems",
"Theorems in topology"
] |
3,636,821 | https://en.wikipedia.org/wiki/T-box%20transcription%20factor%20T | T-box transcription factor T, also known as Brachyury protein, is encoded for in humans and other apes by the TBXT gene. Brachyury functions as a transcription factor within the T-box family of genes. Brachyury homologs have been found in all bilaterian animals that have been screened, as well as the freshwater cnidarian Hydra.
History
The brachyury mutation was first described in mice by Nadezhda Alexandrovna Dobrovolskaya-Zavadskaya in 1927 as a mutation that affected tail length and sacral vertebrae in heterozygous animals. In homozygous animals, the brachyury mutation is lethal at around embryonic day 10 due to defects in mesoderm formation, notochord differentiation and the absence of structures posterior to the forelimb bud (Dobrovolskaïa-Zavadskaïa, 1927). The name brachyury comes from the Greek brakhus meaning short and oura meaning tail.
In 2018, HGNC updated the human gene name from T to TBXT, presumably to overcome difficulties associated with searching for a single letter gene symbol.
Tbxt was cloned by Bernhard Herrmann and colleagues and proved to encode a 436-amino-acid embryonic nuclear transcription factor. Tbxt binds to a specific DNA element, a near-palindromic sequence, TCACACCT, through a region in its N-terminus called the T-box. Tbxt is the founding member of the T-box family, which in mammals currently consists of 18 T-box genes.
The crystal structure of the human brachyury protein was solved in 2017 by Opher Gileadi and colleagues at the Structural Genomics Consortium in Oxford.
Role in development
The gene brachyury appears to have a conserved role in defining the midline of a bilaterian organism, and thus the establishment of the anterior-posterior axis; this function is apparent in chordates and molluscs.
Its ancestral role, or at least the role it plays in the Cnidaria, appears to be the definition of the blastopore. It also defines the mesoderm during gastrulation. Tissue-culture-based techniques have demonstrated that one of its roles may be controlling the velocity of cells as they leave the primitive streak. It affects the transcription of genes required for mesoderm formation and cellular differentiation.
Brachyury has also been shown to help establish the cervical vertebral blueprint during fetal development. The number of cervical vertebrae is highly conserved among all mammals; however, a spontaneous vertebral and spinal dysplasia (VSD) mutation in this gene has been associated with the development of six or fewer cervical vertebrae instead of the usual seven.
Expression
In mice, T is expressed in the inner cell mass of the blastocyst stage embryo (but not in the majority of mouse embryonic stem cells) followed by the primitive streak (see image). In later development, expression is localised to the node and notochord.
In Xenopus laevis, Xbra (the Xenopus T homologue, also recently renamed t) is expressed in the mesodermal marginal zone of the pre-gastrula embryo followed by localisation to the blastopore and notochord at the mid-gastrula stage.
Orthologs
The Danio rerio ortholog is known as ntl (no tail).
Role in hominid evolution
Tail development
TBXT is a transcription factor observed in vertebrate organisms. As such, it is primarily responsible for the genotype that codes for tail formation, owing to its observed role in axial development and the construction of the posterior mesoderm within the lumbar and sacral regions. TBXT transcribes genes that form notochord cells, which are responsible for the flexibility, length, and balance of the spine, including the tail vertebrae. Because of the role the transcription factor plays in spinal development, it is cited as the protein primarily responsible for tail development in mammals. However, because the phenotype is genetically induced, tail-encoding material can be effectively silenced by mutation. This is the mechanism by which the ntl ortholog developed in the Hominidae taxa.
Alu elements
In particular, an Alu element in TBXT is responsible for the taillessness (ntl) ortholog. An Alu element is a mobile genetic element found exclusively in primates; because these elements can move around a genome, they are transposons. The Alu element observed to catalyze taillessness in TBXT is AluY. While Alu elements are not normally individually impactful, another Alu element active in TBXT, AluSx1, is coded such that its nucleotides are the inverse of AluY's. Because of this, the two elements pair with each other during RNA processing, leading to the formation of a stem-loop structure and an alternative splicing event that fundamentally influences transcription. The structure isolates and positions codons held between the two Alu elements in a hairpin-like loop that consequently cannot be paired or transcribed. The trapped material, most notably, includes the 6th exon of TBXT. In a stem-loop structure, genetic material trapped within the loop is recognized by transcription-coupled nucleotide excision repair (TC-NER) proteins as damage, because RNA polymerase is ostensibly stalled at the neck of the loop; the stalled transcription process serves as a beacon for TC-NER proteins to locate the stem-loop. Once TBXT is cleaved, the trapped nucleotides, including exon 6, are excised from the completed transcript by the TC-NER machinery. Because of the resulting excision of exon 6, the information contained within the exon is likewise removed from the transcript. Consequently, it is posited that the material stored in exon 6 is, in part, responsible for full hominid tail growth.
As a result of the effect that AluY, alongside AluSx1, has on TBXT's tail-encoding material, the isoform TBXT-Δexon6 is created. Isoforms are often a result of mutation, polymorphism, or recombination, and often share highly similar functions with the proteins they derive from, though they can have key differences due to either containing added instructions or missing instructions that the original protein possesses. TBXT-Δexon6 falls into the latter category: it is an isoform that lacks the ability to process the code that enables proper tail formation in TBXT-containing organisms, because exon 6's tail-encoding material is excised from the transcribed RNA. That material is thus effectively missing in the isoform, which is the key factor in determining the isoform's name. Other common examples of influential isoforms include those involved in AMP-induced protein kinase that insert phosphate groups into specific sites of the cell depending on the subunit.
Speciation
The first insertion of the AluY element occurred approximately 20-25 million years ago, with the earliest hominid ancestors known to exhibit this mutation being the Hominoidea superfamily of apes. Taillessness has become an overwhelmingly dominant phenotype, such that it contributes to speciation. Over time, the mutation occurred more regularly under the influence of natural selection and fixation, stabilizing and expanding its presence in the ape gene pool prior to the eventual speciation of Homo sapiens. There are several potential reasons why taillessness became the standard phenotype in the Hominidae, offsetting the genetically disadvantageous aspects of tail loss, but little is known with certainty. Some experts hypothesize that taillessness contributes to a stronger, more upright stance; the posture observed in primates with a smaller lumbar region appears effective. Grounded mobility and maintaining balance while climbing are more feasible given the evenly distributed body weight observed in hominids. An additional appendage is also another appendage for predators to grab, and one that consumes energy to move and takes up more space.
Role in disease
Cancer
Brachyury is implicated in the initiation and/or progression of a number of tumor types including chordoma, germ cell tumors, hemangioblastoma, GIST, lung cancer, small cell carcinoma of the lung, breast cancer, colon cancer, hepatocellular carcinoma, prostate cancer, and oral squamous carcinoma.
In breast cancer, brachyury expression is associated with recurrence, metastasis and reduced survival. It is also associated with resistance to tamoxifen and to cytotoxic chemotherapy.
In lung cancer, brachyury expression is associated with recurrence and decreased survival. It is also associated with resistance to cytotoxic chemotherapy, radiation, and EGFR kinase inhibitors.
In prostate cancer, brachyury expression is associated with Gleason score, perineural invasion, and capsular invasion.
In addition to its role in common cancers, brachyury has been identified as a definitive diagnostic marker, key driver and therapeutic target for chordoma, a rare malignant tumor that arises from remnant notochordal cells lodged in the vertebrae. The evidence regarding brachyury's role in chordoma includes:
Brachyury is highly expressed in all chordomas except for the dedifferentiated subtype, which accounts for less than 5% of cases.
Germ line duplication of the brachyury gene is responsible for familial chordoma.
A germline SNP in brachyury is present in 97% of chordoma patients.
Somatic amplifications of brachyury are seen in a subset of sporadic chordomas either by aneuploidy or focal duplication.
Brachyury is the most selectively essential gene in chordoma relative to other cancer types.
Brachyury is associated with a large superenhancer in chordoma tumors and cell lines, and is the most highly expressed superenhancer-associated transcription factor.
Brachyury is an important factor in promoting the epithelial–mesenchymal transition (EMT). Cells that over-express brachyury have down-regulated expression of the adhesion molecule E-cadherin, which allows them to undergo EMT. This process is at least partially mediated by the transcription factors AKT and Snail.
Overexpression of brachyury has been linked to hepatocellular carcinoma (HCC, also called malignant hepatoma), a common type of liver cancer. While brachyury is promoting EMT, it can also induce metastasis of HCC cells. Brachyury expression is a prognostic biomarker for HCC, and the gene may be a target for cancer treatments in the future.
Development
Research posits that there are some downsides that are more likely to occur in the embryonic stage due to the tailless mutation of TBXT-Δexon6. Exon 6's excision fundamentally affects the manner in which TBXT-encoded cells divide, distribute information, and form tissue because of how stem-loop sites create genetic instability. As such, it is seen by experts that tail loss has contributed to the existence and frequency of developmental defects in the neural tube and sacral region. Primarily, spina bifida and sacral agenesis are the most likely suspects due to their direct relation to lumbar development. Spina bifida is an error in the build of the spinal neural tube, causing it to not fully close and leaving nerves exposed within the spinal cord. Sacral agenesis, on the other hand, is a series of physical malformations in the hips that result from the omission of sacral matter during the developmental process. Because both of these developmental disorders result in the displacement of organs and other bodily mechanisms, they are both directly related to outright malfunction of the kidney, bladder, and nervous system. This can lead to higher likelihood of diseases related to their functionality or infrastructure, such as neurogenic bladder dysfunction or hydrocephalus.
Other diseases
Overexpression of brachyury may play a part in EMT associated with benign disease such as renal fibrosis.
Role as a therapeutic target
Because brachyury is expressed in tumors but not in normal adult tissues it has been proposed as a potential drug target with applicability across tumor types. In particular, brachyury-specific peptides are presented on HLA receptors of cells in which it is expressed, representing a tumor specific antigen. Various therapeutic vaccines have been developed which are intended to stimulate an immune response to brachyury expressing cells.
See also
Homeobox protein NANOG
POU5F1
SOX2
MIXL1
GSC
Transcription factors
Gene regulatory network
Bioinformatics
Chordoma
References
Further reading
External links
Protein Atlas entry for Brachyury
Mouse Genome Informatics entry for Brachyury
European Bioinformatics Institute InterPro entry for Brachyury
Information Hyperlinked Over Proteins entry for Brachyury
Xenbase Gene entry for Brachyury
Transcription factors
Embryology | T-box transcription factor T | [
"Chemistry",
"Biology"
] | 2,778 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,067,081 | https://en.wikipedia.org/wiki/Terzaghi%27s%20principle | Terzaghi's Principle states that when stress is applied to a porous material, it is opposed by the fluid pressure filling the pores in the material.
Karl von Terzaghi introduced the idea in a series of papers in the 1920s based on his examination of building consolidation on soil. The principle states that all quantifiable changes in stress to a porous medium are a direct result of a change in effective stress. The effective stress σ′ is related to the total stress σ and the pore pressure u by

σ′ = σ − u·I,

where I is the identity matrix. The negative sign is there because the pore pressure serves to lessen the volume-changing stress; physically, this is because fluid in the pores bears a part of the total stress, partially unloading the solid matrix from normal stresses.
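A small worked example of the relation above: the vertical effective stress at depth under a water table, with assumed unit weights (the numbers are illustrative, not site data).

```python
# Vertical effective stress via Terzaghi's relation: sigma' = sigma - u.
# Unit weights are assumed, typical illustrative values.

GAMMA_SAT = 18.0  # kN/m^3, saturated unit weight of the soil (assumed)
GAMMA_W = 9.81    # kN/m^3, unit weight of water

def effective_stress_kpa(depth_m, water_table_depth_m=0.0):
    """Vertical effective stress (kPa) at a given depth below ground."""
    sigma = GAMMA_SAT * depth_m                            # total stress
    u = GAMMA_W * max(depth_m - water_table_depth_m, 0.0)  # pore pressure
    return sigma - u                                       # effective stress

print(f"{effective_stress_kpa(5.0):.1f} kPa")  # (18 - 9.81) * 5 = 41.0 kPa
```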
Terzaghi's principle applies well to porous materials whose solid constituents are incompressible - soil, for example, is composed of grains of incompressible silica so that the volume change in soil during consolidation is due solely to the rearrangement of these constituents with respect to one another. Generalizing Terzaghi's principle to include compressible solid constituents was accomplished by Maurice Anthony Biot in the 1940s, giving birth to the theory of poroelasticity and poromechanics.
Assumptions of Terzaghi's Principle
The soil is homogenous (uniform in composition throughout) and isotropic (show same physical property in each direction).
The soil is fully saturated (zero air voids, the pores being completely filled with water).
The solid particles are incompressible.
Compression and flow are one-dimensional (vertical axis being the one of interest).
Strains in the soil are relatively small.
Darcy's Law is valid for all hydraulic gradients.
The coefficient of permeability and the coefficient of volume compressibility remain constant throughout the process.
There is a unique relationship, independent of time, between the void ratio and the effective stress.
Validity
Though the first 5 assumptions are either likely to hold, or deviation will have no discernible effect, experimental results contradict the final 3. Darcy's Law does not seem to hold at high hydraulic gradients, and both the coefficients of permeability and volume compressibility decrease during consolidation. This is due to the non-linearity of the relationship between void ratio and effective stress, although for small stress increments assumption 7 is reasonable. Finally, the relationship between void ratio and effective stress is not independent of time, again proven by experimental results.
Over the past century several formulations have been proposed for the effective stress according to several working hypotheses (e.g. compressibility of grains, their brittle or plastic behavior, high confining stress, etc.). By way of example, at high pressures (e.g. in the Earth's crust, at depths of some kilometres, where the lithostatic load can reach values of several hundred MPa), Terzaghi's formulation shows significant deviation from experimental data, and the formulation provided by Alec Skempton should be used in order to achieve more accurate results. In essence, the definition of effective stress is conventional and related to the problem being treated. Among the various effective stress formulations, Terzaghi's seems particularly appropriate for its simplicity, as it describes with excellent approximation a wide variety of real cases.
See also
Karl von Terzaghi
References
External links
Amazon.com link
Richard E. Goodman on Terzaghi
Soil mechanics | Terzaghi's principle | [
"Physics"
] | 720 | [
"Soil mechanics",
"Applied and interdisciplinary physics"
] |
13,070,117 | https://en.wikipedia.org/wiki/Spectral%20density%20estimation | In statistical signal processing, the goal of spectral density estimation (SDE) or simply spectral estimation is to estimate the spectral density (also known as the power spectral density) of a signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
Some SDE techniques assume that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seek to find the location and intensity of the generated frequencies. Others make no assumption on the number of components and seek to estimate the whole generating spectrum.
Overview
Spectrum analysis, also referred to as frequency domain analysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities) versus frequency (or phase) can be called spectrum analysis.
Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes called frames), and spectrum analysis may be applied to these individual segments. Periodic functions (such as sinusoids) are particularly well-suited for this sub-division. General mathematical techniques for analyzing non-periodic functions fall into the category of Fourier analysis.
The Fourier transform of a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by an inverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both the amplitude and phase of each frequency component. These two pieces of information can be represented as a 2-dimensional vector, as a complex number, or as magnitude (amplitude) and phase in polar coordinates (i.e., as a phasor). A common technique in signal processing is to consider the squared amplitude, or power; in this case the resulting plot is referred to as a power spectrum.
Because of reversibility, the Fourier transform is called a representation of the function, in terms of frequency instead of time; thus, it is a frequency domain representation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and non-linear. For instance, only non-linear or time-variant operations can create new frequencies in the frequency spectrum.
In practice, nearly all software and electronic devices that generate frequency spectra utilize a discrete Fourier transform (DFT), which operates on samples of the signal, and which provides a mathematical approximation to the full integral solution. The DFT is almost invariably implemented by an efficient algorithm called fast Fourier transform (FFT). The array of squared-magnitude components of a DFT is a type of power spectrum called periodogram, which is widely used for examining the frequency characteristics of noise-free functions such as filter impulse responses and window functions. But the periodogram does not provide processing-gain when applied to noiselike signals or even sinusoids at low signal-to-noise ratios. In other words, the variance of its spectral estimate at a given frequency does not decrease as the number of samples used in the computation increases. This can be mitigated by averaging over time (Welch's method) or over frequency (smoothing). Welch's method is widely used for spectral density estimation (SDE). However, periodogram-based techniques introduce small biases that are unacceptable in some applications. So other alternatives are presented in the next section.
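The variance-reduction trade-off described above is easy to see numerically. The sketch below (assuming NumPy and SciPy are available) compares the raw periodogram with Welch's averaged estimate for a sinusoid in white noise; all signal parameters are arbitrary choices.

```python
# Raw periodogram vs. Welch's method for a 50 Hz sinusoid in white noise.
# Welch averages overlapping segments, trading resolution for lower variance.
import numpy as np
from scipy import signal

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)                   # 5 seconds of data
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(scale=2.0, size=t.size)

f_p, p_p = signal.periodogram(x, fs)          # single, high-variance estimate
f_w, p_w = signal.welch(x, fs, nperseg=1024)  # averaged, smoother estimate

print(f"periodogram peak: {f_p[p_p.argmax()]:.1f} Hz")  # ~50 Hz
print(f"Welch peak:       {f_w[p_w.argmax()]:.1f} Hz")  # ~50 Hz
```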
Techniques
Many other techniques for spectral estimation have been developed to mitigate the disadvantages of the basic periodogram. These techniques can generally be divided into non-parametric, parametric, and more recently semi-parametric (also called sparse) methods. The non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Some of the most common estimators in use for basic applications (e.g. Welch's method) are non-parametric estimators closely related to the periodogram. By contrast, the parametric approaches assume that the underlying stationary stochastic process has a certain structure that can be described using a small number of parameters (for example, using an auto-regressive or moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. When using the semi-parametric methods, the underlying process is modeled using a non-parametric framework, with the additional assumption that the number of non-zero components of the model is small (i.e., the model is sparse). Similar approaches may also be used for missing data recovery as well as signal reconstruction.
Following is a partial list of spectral density estimation techniques:
Non-parametric methods for which the signal samples can be unevenly spaced in time (records can be incomplete)
Least-squares spectral analysis, based on least squares fitting to known frequencies
Lomb–Scargle periodogram, an approximation of the Least-squares spectral analysis
Non-uniform discrete Fourier transform
Non-parametric methods for which the signal samples must be evenly spaced in time (records must be complete):
Periodogram, the modulus squared of the discrete Fourier transform
Bartlett's method is the average of the periodograms taken of multiple segments of the signal to reduce variance of the spectral density estimate
Welch's method a windowed version of Bartlett's method that uses overlapping segments
Multitaper is a periodogram-based method that uses multiple tapers, or windows, to form independent estimates of the spectral density to reduce variance of the spectral density estimate
Singular spectrum analysis is a nonparametric method that uses a singular value decomposition of the covariance matrix to estimate the spectral density
Short-time Fourier transform
Critical filter is a nonparametric method based on information field theory that can deal with noise, incomplete data, and instrumental response functions
Parametric techniques (an incomplete list):
Autoregressive model (AR) estimation, which assumes that the nth sample is correlated with the previous p samples.
Moving-average model (MA) estimation, which assumes that the nth sample is correlated with noise terms in the previous p samples.
Autoregressive moving-average (ARMA) estimation, which generalizes the AR and MA models.
MUltiple SIgnal Classification (MUSIC) is a popular superresolution method.
Estimation of signal parameters via rotational invariance techniques (ESPRIT) is another superresolution method.
Maximum entropy spectral estimation is an all-poles method useful for SDE when singular spectral features, such as sharp peaks, are expected.
Semi-parametric techniques (an incomplete list):
SParse Iterative Covariance-based Estimation (SPICE) estimation, and the more generalized -SPICE.
Iterative Adaptive Approach (IAA) estimation.
Lasso, similar to least-squares spectral analysis but with a sparsity enforcing penalty.
Parametric estimation
In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF) that is a function of the frequency f and the model parameters. The estimation problem then becomes one of estimating these parameters.

The most common form of parametric SDF estimate uses as a model an autoregressive model AR(p) of order p. A signal sequence {Y_t} obeying a zero-mean AR(p) process satisfies the equation

Y_t = φ₁Y_{t−1} + φ₂Y_{t−2} + … + φ_pY_{t−p} + ε_t,

where the φ₁, …, φ_p are fixed coefficients and ε_t is a white noise process with zero mean and innovation variance σ_p². The SDF for this process is

S(f) = σ_p²Δt / |1 − Σ_{k=1}^{p} φ_k e^{−2πifkΔt}|²  for |f| < f_N,

with Δt the sampling time interval and f_N = 1/(2Δt) the Nyquist frequency.
There are a number of approaches to estimating the parameters of the process and thus the spectral density:
The Yule–Walker estimators are found by recursively solving the Yule–Walker equations for an AR(p) process
The Burg estimators are found by treating the Yule–Walker equations as a form of ordinary least squares problem. The Burg estimators are generally considered superior to the Yule–Walker estimators. Burg associated these with maximum entropy spectral estimation.
The forward-backward least-squares estimators treat the process as a regression problem and solves that problem using forward-backward method. They are competitive with the Burg estimators.
The maximum likelihood estimators estimate the parameters using a maximum likelihood approach. This involves a nonlinear optimization and is more complex than the first three.
Alternative parametric methods include fitting to a moving-average model (MA) and to a full autoregressive moving-average model (ARMA).
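A minimal sketch of the first approach, solving the Yule–Walker equations directly with NumPy; the AR(2) coefficients used for the simulated data are arbitrary, and this is a teaching sketch rather than a production estimator.

```python
# Yule-Walker estimation of AR(p) coefficients from the sample
# autocorrelation sequence, then a check against a simulated AR(2).
import numpy as np

def yule_walker(x, p):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:])  # AR coefficients
    sigma2 = r[0] - phi @ r[1:]      # innovation variance estimate
    return phi, sigma2

rng = np.random.default_rng(1)
true_phi = np.array([0.6, -0.3])     # arbitrary stationary AR(2)
x = np.zeros(10_000)
for t in range(2, x.size):
    x[t] = true_phi @ x[t - 2:t][::-1] + rng.normal()

print(yule_walker(x, 2))  # approximately (array([0.6, -0.3]), 1.0)
```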
Frequency estimation
Frequency estimation is the process of estimating the frequency, amplitude, and phase-shift of a signal in the presence of noise given assumptions about the number of the components. This contrasts with the general methods above, which do not make prior assumptions about the components.
Single tone
If one only wants to estimate the frequency of the single loudest pure-tone signal, one can use a pitch detection algorithm.
If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner–Ville distribution and higher order ambiguity functions.
If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a multiple-tone approach.
Multiple tones
A typical model for a signal x(n) consists of a sum of p complex exponentials in the presence of white noise w(n):

x(n) = Σ_{i=1}^{p} A_i e^{jnω_i} + w(n).

The power spectral density of x(n) is composed of p impulse functions in addition to the spectral density function due to noise.
The most common methods for frequency estimation involve identifying the noise subspace to extract these components (a minimal sketch follows the list below). These methods are based on an eigendecomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise-subspace-based frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method.
Pisarenko's method
MUSIC
Eigenvector method
Minimum norm method
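To make the subspace idea concrete, here is a minimal MUSIC pseudospectrum, as mentioned before the list; it assumes the number of tones p is known, and the snapshot length, grid size, and signal parameters are arbitrary choices.

```python
# Minimal MUSIC pseudospectrum for complex sinusoids in white noise;
# a sketch assuming the number of tones p is known in advance.
import numpy as np

def music_spectrum(x, p, m=32, grid=2048):
    """Return (normalized frequencies, pseudospectrum) from length-m snapshots."""
    n = x.size - m + 1
    X = np.array([x[i:i + m] for i in range(n)]).T      # m x n snapshot matrix
    R = X @ X.conj().T / n                              # sample autocorrelation
    w, v = np.linalg.eigh(R)                            # eigenvalues ascending
    En = v[:, :m - p]                                   # noise subspace
    f = np.linspace(-0.5, 0.5, grid, endpoint=False)
    a = np.exp(2j * np.pi * np.outer(np.arange(m), f))  # steering vectors
    return f, 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

rng = np.random.default_rng(0)
n = np.arange(512)
x = (np.exp(2j * np.pi * 0.10 * n) + np.exp(2j * np.pi * 0.13 * n)
     + 0.5 * (rng.normal(size=512) + 1j * rng.normal(size=512)))
f, P = music_spectrum(x, p=2)
peaks = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
print(sorted(f[i] for i in sorted(peaks, key=lambda i: P[i])[-2:]))
# -> approximately [0.10, 0.13]
```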
Example calculation
Suppose x_n, from n = 0 to N − 1, is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

x_n = Σ_k A_k sin(2πν_k n + φ_k).

The variance of x_n is, for a zero-mean function as above, given by

(1/N) Σ_{n=0}^{N−1} x_n².

If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).

Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as N → ∞. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data:

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x_n².

Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become

x(t) = Σ_k A_k sin(2πν_k t + φ_k)

and

lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t)² dt.

The root mean square of sin is 1/√2, so the variance of A_k sin(2πν_k t + φ_k) is A_k²/2. Hence, the contribution to the average power of x(t) coming from the component with frequency ν_k is A_k²/2. All these contributions add up to the average power of x(t).

Then the power as a function of frequency is A_k²/2, and its statistical cumulative distribution function S(ν) will be

S(ν) = Σ_{k : ν_k < ν} A_k²/2.

S is a step function, monotonically non-decreasing. Its jumps occur at the frequencies of the periodic components of x, and the value of each jump is the power or variance of that component.

The variance is the covariance of the data with itself. If we now consider the same data but with a lag of τ, we can take the covariance of x(t) with x(t + τ), and define this to be the autocorrelation function c of the signal (or data) x:

c(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t + τ) dt.

If it exists, it is an even function of τ. If the average power is bounded, then c exists everywhere, is finite, and is bounded by c(0), which is the average power or variance of the data.

It can be shown that c can be decomposed into periodic components with the same periods as x:

c(τ) = Σ_k (A_k²/2) cos(2πν_k τ).

This is in fact the spectral decomposition of c over the different frequencies, and is related to the distribution of power of x over the frequencies: the amplitude of a frequency component of c is its contribution to the average power of the signal.
The power spectrum of this example is not continuous, and therefore does not have a derivative, and therefore this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.
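The decomposition of the autocorrelation can be checked numerically. The snippet below builds a two-tone signal and verifies that c(0) equals the average power Σ A_k²/2 and that c(τ) matches the cosine sum; all amplitudes and frequencies are arbitrary.

```python
# Numeric check: for x(t) = sum_k A_k sin(2 pi nu_k t + phi_k), the
# autocorrelation is c(tau) = sum_k (A_k^2 / 2) cos(2 pi nu_k tau).
import numpy as np

dt = 0.001
t = np.arange(0, 200.0, dt)   # long window approximates T -> infinity
x = 2.0 * np.sin(2 * np.pi * 5 * t) + 1.0 * np.sin(2 * np.pi * 12 * t)

def autocorr(x, lag):
    n = x.size - lag
    return x[:n] @ x[lag:] / n

print(autocorr(x, 0))         # ~ 2^2/2 + 1^2/2 = 2.5 (average power)
tau = 0.05
predicted = (2.0**2 / 2) * np.cos(2 * np.pi * 5 * tau) \
          + (1.0**2 / 2) * np.cos(2 * np.pi * 12 * tau)
print(autocorr(x, int(tau / dt)), predicted)   # approximately equal
```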
See also
Multidimensional spectral estimation
Periodogram
SigSpec
Spectrogram
Time–frequency analysis
Time–frequency representation
Whittle likelihood
Spectral power distribution
References
Further reading
Statistical signal processing
Signal estimation
Frequency-domain analysis
Spectrum (physical sciences) | Spectral density estimation | [
"Physics",
"Engineering"
] | 2,720 | [
"Physical phenomena",
"Statistical signal processing",
"Spectrum (physical sciences)",
"Frequency-domain analysis",
"Waves",
"Engineering statistics"
] |
13,070,930 | https://en.wikipedia.org/wiki/NS102 | NS102 is a kainate receptor antagonist.
References
Nitroarenes
Kainate receptor antagonists | NS102 | [
"Chemistry",
"Biology"
] | 22 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
13,070,931 | https://en.wikipedia.org/wiki/Pumiliotoxin%20251D | Pumiliotoxin 251D is a toxic organic compound. It is found in the skin of poison frogs from the genera Dendrobates, Epipedobates, Minyobates, and Phyllobates and toads from the genus Melanophryniscus. Its name comes from the pumiliotoxin family (PTXs) and its molecular mass of 251 daltons. When the toxin enters the bloodstream through cuts in the skin or by ingestion, it can cause hyperactivity, convulsions, cardiac arrest and ultimately death. It is especially toxic to arthropods (e.g. mosquitoes), even at low (naturally occurring) concentrations.
Chemical properties
Structure
The chiral centers in pumiliotoxin 251D can give several stereoisomers of the compound. Only one form of the toxin is present in nature and has toxic properties.
Figure: the two enantiomers of pumiliotoxin 251D; the (+)-enantiomer (left) is toxic, while the (−)-enantiomer (right) is not.
The side chain conformation of substituents at the C-2’ position plays an important role in the toxicity of the compound.
Synthesis
The synthesis of pumiliotoxin 251D is quite complex and contains multiple steps.
One of the starting materials of the synthesis include the N-Boc derivative of L-proline methyl ester (1). Then, a Wittig type of reaction followed by dehydration with thionyl chloride and pyridine results in alkene 2. When alkene 2 undergoes epoxidation with m-chloroperbenzoic acid (MCPBA), epoxide 3 is formed. This then reacts with the lithium salt of dibromoalkene (6) to afford compound 7. Deprotection of compound 7 followed by cyclization and iodination results in vinyl iodide 8. After purification, this yields the hydrochloride of pumiliotoxin (+)-251D (9).
Pumiliotoxin (−)-251D can be synthesized in a similar way with minor alterations to the overall synthesis.
Accumulation
Like many other frog poisons, pumiliotoxin 251D originates from arthropods. The frogs' diet of insects can contain the toxin, which is then accumulated in the secretory granular glands of the frog's skin. Some frog species of the genus Dendrobates can convert pumiliotoxin 251D into allopumiliotoxin 267A, which is five times more toxic than pumiliotoxin 251D. Only one of the enantiomers can be hydroxylated to this more potent form of the toxin.
The absence of pumiliotoxin 251D in eggs and tadpoles confirms that the toxin is not passed on from adult frogs to their offspring. The tadpoles are therefore not readily protected from predators.
Toxicity
Mechanism of action
In general, pumiliotoxins are known as positive modulators of voltage-gated sodium channels (VGSCs), which are membrane proteins. Pumiliotoxin 251D is an exception: rather than enhancing channel activity, it blocks the influx of Na+ ions through mammalian VGSCs.
Pumiliotoxin 251D is able to shift the V1/2, the potential at which the sodium channel open probability is half-maximal. Both the steady-state activation and inactivation curves of the mammalian VGSCs are shifted to a more negative potential.
PTX 251D shifts the V1/2 of insect VGSCs even further than that of the mammalian VGSCs, which explains why it is especially toxic to insects such as mosquitoes. Furthermore, the presence of PTX 251D results in a sixfold higher permeability of the VGSCs to K+ ions. This severely disturbs the delicate sodium-potassium equilibrium in the nervous system.
The effect of pumiliotoxin 251D on voltage-gated potassium channel (VGPC) currents is quite small. The toxin affects the deactivation kinetics of the potassium channel, inhibiting its inactivation. This effect is still under investigation.
PTX 251D also completely inhibits the activity of Ca2+-stimulated ATPase. This results in a decreased reuptake of Ca2+ and thus a high concentration of free Ca2+ in the organism. This may be related to the potentiation and prolongation of muscle twitch caused by the inhibition.
The mechanism of biotransformation of PTX 251D is still unknown.
Effects
Pumiliotoxin is a toxin found in poison dart frogs (genera Dendrobates and Phyllobates). It affects calcium channels, interfering with muscle contraction in the heart and skeletal muscle.
PTX 251D has several effects. It rapidly induces convulsions and death in mice and insects (LD50 of 10 mg/kg and 150 ng/larva, respectively). These convulsions are the result of the uncontrollable distortion of the sodium-potassium equilibrium in the neurons, caused by the inhibition of the VGSCs.
It also acts as a cardiac depressor, causing cardiac arrest. This can be explained by its negative effect on the cardiac VGSC hNav1.5/β1.
Although nothing is known about how well PTX 251D penetrates into the brain, where convulsions originate, the observation of convulsions can be explained through inhibition of VGPCs.
Treatment
Symptomatic treatment of PTX 251D poisoning includes reducing the convulsions using carbamazepine, a drug that targets the affected VGSCs. Phenobarbital also shows positive effects by interacting with the affected Ca2+ channels. Ineffective drugs include diazepam and dizocilpine.
References
Amphibian toxins
Alkaloids
Ion channel toxins
Indolizidines
Tertiary alcohols | Pumiliotoxin 251D | [
"Chemistry"
] | 1,267 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Natural products",
"Alkaloids"
] |
25,412,108 | https://en.wikipedia.org/wiki/Differential%20invariant | In mathematics, a differential invariant is an invariant for the action of a Lie group on a space that involves the derivatives of graphs of functions in the space. Differential invariants are fundamental in projective differential geometry, and the curvature is often studied from this point of view. Differential invariants were introduced in special cases by Sophus Lie in the early 1880s and studied by Georges Henri Halphen at the same time. was the first general work on differential invariants, and established the relationship between differential invariants, invariant differential equations, and invariant differential operators.
Differential invariants are contrasted with geometric invariants. Whereas differential invariants can involve a distinguished choice of independent variables (or a parameterization), geometric invariants do not. Élie Cartan's method of moving frames is a refinement that, while less general than Lie's methods of differential invariants, always yields invariants of the geometrical kind.
Definition
The simplest case is for differential invariants for one independent variable x and one dependent variable y. Let G be a Lie group acting on R2. Then G also acts, locally, on the space of all graphs of the form y = ƒ(x). Roughly speaking, a k-th order differential invariant is a function

$$I\left(x, y, \frac{dy}{dx}, \ldots, \frac{d^k y}{dx^k}\right),$$
depending on y and its first k derivatives with respect to x, that is invariant under the action of the group.
The group can act on the higher-order derivatives in a nontrivial manner that requires computing the prolongation of the group action. The action of G on the first derivative, for instance, is such that the chain rule continues to hold: if

$$\bar{x} = \varphi(x, y), \qquad \bar{y} = \psi(x, y),$$

then

$$\bar{y}' = \frac{d\bar{y}}{d\bar{x}} = \frac{\psi_x + \psi_y\, y'}{\varphi_x + \varphi_y\, y'}.$$
Similar considerations apply for the computation of higher prolongations. This method of computing the prolongation is impractical, however, and it is much simpler to work infinitesimally at the level of Lie algebras and the Lie derivative along the G action.
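As an illustration of the prolongation formula above, here is a minimal SymPy sketch; treating the first derivative y' as an independent coordinate and writing the transformation with generic functions phi and psi are illustrative conventions, not notation from the source.

```python
import sympy as sp

x, y = sp.symbols('x y')
yp = sp.Symbol("y'")  # the first derivative, treated as an independent coordinate

phi = sp.Function('phi')(x, y)  # transformed x: xbar = phi(x, y)
psi = sp.Function('psi')(x, y)  # transformed y: ybar = psi(x, y)

def total_derivative(f):
    """Total x-derivative along a graph y = f(x): D_x f = f_x + y' * f_y."""
    return sp.diff(f, x) + yp * sp.diff(f, y)

# First prolongation: ybar' = D_x(psi) / D_x(phi), i.e. the chain rule above.
yp_bar = total_derivative(psi) / total_derivative(phi)
print(sp.simplify(yp_bar))
```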
More generally, differential invariants can be considered for mappings from any smooth manifold X into another smooth manifold Y for a Lie group acting on the Cartesian product X×Y. The graph of a mapping X → Y is a submanifold of X×Y that is everywhere transverse to the fibers over X. The group G acts, locally, on the space of such graphs, and induces an action on the k-th prolongation Y(k) consisting of graphs passing through each point modulo the relation of k-th order contact. A differential invariant is a function on Y(k) that is invariant under the prolongation of the group action.
Applications
Solving equivalence problems
Differential invariants can be applied to the study of systems of partial differential equations: seeking similarity solutions that are invariant under the action of a particular group can reduce the dimension of the problem (i.e. yield a "reduced system").
Noether's theorem implies the existence of differential invariants corresponding to every differentiable symmetry of a variational problem.
Flow characteristics using computer vision
Geometric integration
See also
Cartan's equivalence method
Notes
References
External links
Invariant Variation Problems
Differential geometry
Invariant theory
Projective geometry | Differential invariant | [
"Physics"
] | 633 | [
"Invariant theory",
"Group actions",
"Symmetry"
] |
25,412,598 | https://en.wikipedia.org/wiki/Carey%20Foster%20bridge | In electronics, the Carey Foster bridge is a bridge circuit used to measure medium resistances, or to measure small differences between two large resistances. It was invented by Carey Foster as a variant on the Wheatstone bridge. He first described it in his 1872 paper "On a Modified Form of Wheatstone's Bridge, and Methods of Measuring Small Resistances" (Telegraph Engineer's Journal, 1872–1873, 1, 196).
Use
In the adjacent diagram, X and Y are resistances to be compared. P and Q are nearly equal resistances, forming the other half of the bridge. The bridge wire EF has a sliding jockey contact D, which is moved along the wire until the galvanometer G reads zero. The thick-bordered areas are thick copper busbars of very low resistance, used to limit their influence on the measurement.
Place a known resistance in position Y.
Place the unknown resistance in position X.
Adjust the contact D along the bridge wire EF so as to null the galvanometer. This position (as a percentage of the distance from E to F) is $\ell_1$.
Swap X and Y. Adjust D to the new null point. This position is $\ell_2$.
If the resistance of the wire per percentage is $\sigma$, then the resistance difference is the resistance of the length of bridge wire between $\ell_1$ and $\ell_2$:

$$X - Y = \sigma\,(\ell_2 - \ell_1).$$
To measure a low unknown resistance X, replace Y with a copper busbar that can be assumed to be of zero resistance.
In practical use, when the bridge is unbalanced, the galvanometer is shunted with a low resistance to avoid burning it out. It is only used at full sensitivity when the anticipated measurement is close to the null point.
To measure σ
To measure the unit resistance $\sigma$ of the bridge wire EF, put a known resistance (e.g., a standard 1 ohm resistor) that is less than that of the wire in position X, and a copper busbar of assumed zero resistance in position Y; the two null readings then give $\sigma = X / (\ell_2 - \ell_1)$.
Theory
Two resistances to be compared, X and Y, are connected in series with the bridge wire. Thus, considered as a Wheatstone bridge, the two resistances are X plus a length of bridge wire, and Y plus the remaining bridge wire. The two remaining arms are the nearly equal resistances P and Q, connected in the inner gaps of the bridge.
Let $\ell_1$ be the null point D on the bridge wire EF in percent, let $\alpha$ be the unknown left-side extra resistance EX, let $\beta$ be the unknown right-side extra resistance FY, and let $\sigma$ be the resistance per percent length of the bridge wire. The balance condition gives

$$\frac{P}{Q} = \frac{X + \sigma(\ell_1 + \alpha)}{Y + \sigma(100 - \ell_1 + \beta)},$$

and adding 1 to each side:

$$\frac{P + Q}{Q} = \frac{X + Y + \sigma(100 + \alpha + \beta)}{Y + \sigma(100 - \ell_1 + \beta)}. \qquad \text{(equation 1)}$$

Now swap X and Y. Let $\ell_2$ be the new null point reading in percent:

$$\frac{P}{Q} = \frac{Y + \sigma(\ell_2 + \alpha)}{X + \sigma(100 - \ell_2 + \beta)},$$

and adding 1 to each side:

$$\frac{P + Q}{Q} = \frac{X + Y + \sigma(100 + \alpha + \beta)}{X + \sigma(100 - \ell_2 + \beta)}. \qquad \text{(equation 2)}$$

Equations 1 and 2 have the same left-hand side and the same numerator on the right-hand side, meaning the denominators on the right-hand side must also be equal:

$$Y + \sigma(100 - \ell_1 + \beta) = X + \sigma(100 - \ell_2 + \beta),$$

so that

$$X - Y = \sigma\,(\ell_2 - \ell_1).$$

Thus: the difference between X and Y is the resistance of the bridge wire between $\ell_1$ and $\ell_2$.
The bridge is most sensitive when P, Q, X and Y are all of comparable magnitude.
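As a worked example of the result above, here is a short Python sketch; the null-point readings and the standard resistance are hypothetical values chosen for illustration. It first calibrates sigma from a standard resistor against a busbar of assumed zero resistance, then computes X - Y from two null readings.

```python
def resistance_difference(l1, l2, sigma):
    """X - Y given null readings l1, l2 (percent of wire length)
    and the wire resistance per percent of length, sigma (ohm)."""
    return sigma * (l2 - l1)

def calibrate_sigma(r_std, l1, l2):
    """Sigma from a standard resistor r_std in position X and a
    zero-resistance copper busbar as Y: r_std - 0 = sigma * (l2 - l1)."""
    return r_std / (l2 - l1)

# Hypothetical calibration: 1 ohm standard, nulls at 20.0% and 70.0%.
sigma = calibrate_sigma(1.0, 20.0, 70.0)          # 0.02 ohm per percent
# Hypothetical measurement: nulls at 45.0% and 55.0% after swapping.
print(resistance_difference(45.0, 55.0, sigma))   # X - Y = 0.2 ohm
```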
References
Analog circuits
Bridge circuits
English inventions
Impedance measurements | Carey Foster bridge | [
"Physics",
"Engineering"
] | 638 | [
"Physical quantities",
"Analog circuits",
"Electronic engineering",
"Impedance measurements",
"Electrical resistance and conductance"
] |
25,416,659 | https://en.wikipedia.org/wiki/Mitochondrial%20replacement%20therapy | Mitochondrial replacement therapy (MRT), sometimes called mitochondrial donation, is the replacement of mitochondria in one or more cells to prevent or ameliorate disease. MRT originated as a special form of in vitro fertilisation in which some or all of the future baby's mitochondrial DNA (mtDNA) comes from a third party. This technique is used in cases when mothers carry genes for mitochondrial diseases. The therapy is approved for use in the United Kingdom. A second application is to use autologous mitochondria to replace mitochondria in damaged tissue to restore the tissue to a functional state. This has been used in clinical research in the United States to treat cardiac-compromised newborns.
Medical uses
In vitro fertilisation
Mitochondrial replacement therapy has been used to prevent the transmission of mitochondrial diseases from mother to child; it can only be performed in clinics licensed by the UK's Human Fertilisation and Embryology Authority (HFEA), only for people individually approved by the HFEA, for whom preimplantation genetic diagnosis is unlikely to be helpful, and only with informed consent that the risks and benefits are not well understood.
Relevant mutations are found in about 0.5% of the population, and disease affects around one in 5000 individuals (0.02%). The percentage of people affected is much smaller because cells contain many mitochondria, only some of which carry mutations; the number of mutated mitochondria needs to reach a threshold in order to affect the entire cell, and many cells need to be affected for the person to show disease.
The average number of births per year among women at risk for transmitting mtDNA disease is estimated at approximately 150 in the United Kingdom and 800 in the United States.
Prior to the development of MRT, and in places where it is not legal or feasible, the reproductive options for women who are at risk for transmitting mtDNA disease and who want to prevent transmission were using an egg from another woman, adoption, or childlessness.
Tissue function
Autologous mitochondria extracted from healthy tissue and supplied to damaged tissue has been used to treat cardiac-compromised newborns. Alternatives to the approach include use of an extracorporeal membrane oxygenator (ECMO) or tissue or organ transplantation.
Techniques
In vitro fertilization involves removing eggs from a woman, collecting sperm from a man, fertilizing the egg with the sperm, allowing the fertilized egg to form a blastocyst, and then transferring the blastocyst into the uterus. MRT involves an additional egg from a third person, and manipulation of both the recipient egg and the donor egg.
As of 2016 there were three MRT techniques in use: maternal spindle transfer (MST); pronuclear transfer (PNT); and the newest technique, polar body transfer (PBT). The original technique, in which mitochondria-containing cytoplasm taken from a donor egg is simply injected into the recipient egg, is no longer used.
In maternal spindle transfer, an oocyte is removed from the recipient, and when it is in the metaphase II stage of cell division, the spindle-chromosome complex is removed; some of the cytoplasm comes with it, so some mitochondria are likely to be included. The spindle-chromosome complex is inserted into a donor oocyte from which the nucleus has already been removed. This egg is fertilized with sperm and allowed to form a blastocyst, which can then be investigated with preimplantation genetic diagnosis to check for mitochondrial mutations, prior to being implanted in the recipient's uterus.
In pronuclear transfer, an oocyte is removed from the recipient and fertilized with sperm. The donor oocyte is fertilized with sperm from the same person. The male and female pronuclei are removed from each fertilized egg prior to their fusing, and the pronuclei from the recipient's fertilized egg are inserted into the fertilized egg from the donor. As with MST, a small amount of cytoplasm from the recipient egg may be transferred, and as with MST, the fertilized egg is allowed to form a blastocyst, which can then be investigated with preimplantation genetic diagnosis to check for mitochondrial mutations before being implanted in the recipient's uterus.
In polar body transfer, a polar body (a small cell with very little cytoplasm that is created when an egg cell divides) from the recipient is used in its entirety, instead of using nuclear material extracted from the recipient's normal egg; this can be used in either MST or PNT. This technique was first published in 2014 and as of 2015 it had not been consistently replicated, but is considered promising as there is a greatly reduced chance for transmitting mitochondria from the recipient because polar bodies contain very few mitochondria, and it does not involve extracting material from the recipient's egg.
Cytoplasmic transfer
Cytoplasmic transfer was originally developed in the 1980s in the course of basic research conducted with mice to study the role that parts of the cell outside of the nucleus played in embryonic development. In this technique, cytoplasm, including proteins, messenger RNA (mRNA), mitochondria and other organelles, is taken from a donor egg and injected into the recipient egg, resulting in a mixture of mitochondrial genetic material. This technique started to be used in the late 1990s to "boost" the eggs of older women who were having problems conceiving, and led to the birth of about 30 babies. Concerns were raised that the mixture of genetic material and proteins could cause problems with respect to epigenetic clashes, or differences in the ability of the recipient and donor materials to affect the development process, or due to the injection of the donor material. After three children born through the technique were found to have developmental disorders (two cases of Turner's syndrome and one case of pervasive developmental disorder, an autism spectrum disorder), the FDA banned the procedure until a clinical trial could prove its safety. As of 2015 that study had not been conducted, but the procedure was in use in other countries.
A related approach uses autologous mitochondria taken from healthy tissue to replace the mitochondria in damaged tissue. Transfer techniques include direct injection into damaged tissue and injection into vessels that supply blood to the tissue.
Risks
Assisted reproduction via MRT involves preimplantation genetic screening of the mother, preimplantation genetic diagnosis after the egg is fertilized, and in vitro fertilization. It has all the risks of those procedures.
In addition, both procedures used in MRT entail their own risks. On one level, the procedures physically disrupt two oocytes, removing nuclear genetic material from the recipient egg or fertilized egg and inserting the nuclear genetic material into the donor unfertilized or fertilized egg; the manipulations for both procedures may cause various forms of damage that were not well understood as of 2016.
Maternal mitochondria will be carried over to the donor egg; as of 2016 it was estimated that using techniques current in the UK, maternal mitochondria will comprise only around 2% or less of mitochondria in the resulting egg, a level that was considered safe by the HFEA and within the limits of mitochondrial variation that most people have.
Because MRT procedures involve actions at precise times during egg development and fertilization, and involve manipulating eggs, there is a risk that eggs may mature abnormally or that fertilization may happen abnormally; as of 2016 the HFEA judged that laboratory techniques in the UK were well enough developed to manage these risks and to proceed cautiously with making MRT available.
Because mitochondria in the final egg will come from a third party, different from the two parties whose DNA is in the nucleus, and because nuclear DNA encodes genes that make some of the proteins and mRNA used by mitochondria, there is a theoretical risk of adverse "mito–nuclear" interactions. While this theoretical risk could possibly be managed by attempting to match the haplotype of the donor and the recipient, as of 2016 there was no evidence that this is an actual risk.
Because MRT is a relatively new technology, there are concerns that it is not yet safe for public use as there have been limited studies that used MRT in large animal models.
Finally, there is a risk of epigenetic modification to DNA in the nucleus and mitochondria, caused by the procedure itself or by mito–nuclear interactions. As of 2016 these risks appeared to be minimal but were being monitored by long-term study of children born from the procedure.
History
In the United States in 1996, embryologist Jacques Cohen and others at the Institute for Reproductive Medicine and Science, Saint Barnabas Medical Center in Livingston, New Jersey, first used cytoplasmic transfer in a human assisted reproduction procedure. In 1997 the first baby was born using this procedure. In 2001, Cohen and others reported that ten singletons, twins, and a quadruplet at his New Jersey clinic, and a further six children in Israel, had been born using his technique. Using modifications of his procedure, a baby had been born at Eastern Virginia Medical School, five children at the Lee Women's Hospital Infertility Clinic in Taichung, Taiwan, twins in Naples, Italy, and twins in India. In total, as of 2016, 30–50 children worldwide had been reported to have been born using cytoplasmic transfer.
In 2002, the US Food and Drug Administration (FDA) asked a Biological Response Modifiers Advisory Committee Meeting to advise on the technique of cytoplasmic transfer to treat infertility. This committee felt that, at the time, there were risks of inadvertent transfer of chromosomes and of enhanced survival of abnormal embryos. The FDA informed clinics that they considered the cytoplasmic transfer technique a new treatment and, as such, it would require an Investigational New Drug (IND) application. Cohen's clinic started the pre-IND application, but the clinic then went private, funding for the application dried up, the application was abandoned, the research team disbanded, and the cytoplasmic transfer procedure fell out of favor. In 2016, 12 (out of the 13) parents of children born using cytoplasmic transfer at the Saint Barnabas Center participated in a limited follow-up inquiry via online questionnaire. The children, then aged 13–18, reported no major problems.
In 2009, a team in Japan published studies of mitochondrial donation. In the same year, a team led by scientists at Oregon Health & Science University published results of mitochondrial donation in monkeys; that team published an update reporting on the health of the monkeys born with the technique, as well as further work it had done on human embryos.
Human trials in 2010 by a team at Newcastle University and Newcastle Fertility Centre were successful in reducing transmission of mtDNA. The study found that the mtDNA carryover averaged under 2% in the experimental embryos, for both the MI-SCC and PN transfer methods of MRT. This research did not extend past the blastocyst stage because of ethical concerns, and there are still concerns about whether results obtained at the blastocyst stage are viable representations of whole embryos. Because of these open questions, and to establish MRT as a safe and effective technique, further research and clinical trials would be needed to test the long-term efficacy of MRT in human patients.
Research in the United Kingdom
In the United Kingdom, following animal experiments and the recommendations of a government commissioned expert committee, the Human Fertilisation and Embryology (Research Purposes) Regulations were passed in 2001 regulating and allowing research into human embryos. In 2004, Newcastle University applied for a license to develop pronuclear transfer to avoid the transmission of mitochondrial diseases, and was granted the license in 2005. Following further research by Newcastle and the Wellcome Trust, scientific review, public consultations, and debate, the UK government recommended that mitochondrial donation be legalized in 2013. In 2015 parliament passed the Human Fertilisation and Embryology (Mitochondrial Donation) Regulations, which came into force on 29 October 2015, making human mitochondrial donation legal in the UK. The Human Fertilisation and Embryology Authority (HFEA) was authorized to license and regulate medical centers which wanted to use human mitochondrial donation. In February 2016, the US National Academy of Sciences issued a report describing technologies then current and the surrounding ethical issues.
The HFEA Safety Committee issued its fourth report in November 2016 recommending procedures under which HFEA should authorize MRT, the HFEA issued their regulations in December 2016 and granted their first license (to Newcastle Fertility Centre; Newcastle upon Tyne Hospital NHS Foundation Trust led by Dr Jane Stewart as Person Responsible to the HFEA) in March 2017. Between August 2017 and January 2019, the HFEA received 15 requests from women to undergo MRT, of which 14 were granted. As of 2020, if children have been born from these procedures, the details have not been published because of the wishes of the parents.
Douglass Turnbull, the driving force behind mitochondrial research at Newcastle University, was awarded a knighthood in 2016.
John Zhang team
In 2016, John Zhang and a mixed team of scientists from Mexico and New York used the spindle transfer technique to help a Jordanian woman give birth to a baby boy. The mother had Leigh disease and had already had four miscarriages and two children who had died of the disease. Valery Zukin, director of the Nadiya clinic in Kyiv, Ukraine, reported in June 2018 that doctors there had used the pronuclear transfer method of MRT to help four women give birth (three boys and a girl) and three women to become pregnant (one from Sweden); the team had had 14 failed attempts. In January 2019 it was reported that seven babies had been born using MRT. The doctors had first obtained approval from an ethical committee and a review board of the Ukrainian Association of Reproductive Medicine and the Ukrainian Postgraduate Medical Academy, under the auspices of the Ukrainian Ministry of Healthcare; there was no law in Ukraine against MRT. One of the first children, a boy, was born to a 34-year-old woman in January 2017, and genetic test results were reported as normal. In August and October 2017 the British HFEA authorized MRT for two women who had a genetic mutation in their mitochondria that causes myoclonic epilepsy with ragged red fibers. In January 2019, Embryotools of Barcelona, Spain, announced that a 32-year-old Greek woman had become pregnant using the spindle transfer technique. MRT was not legal in Spain, so they had performed the trial in Greece, where there was no law against MRT. They were helped by the Institute of Life in Athens, Greece, and had obtained approval from the Greek National Authority of Assisted Reproduction. The pregnant Greek woman had already had four failed IVF cycles and had twice had surgery for endometriosis.
In August 2017, in a letter to two clinics, including Zhang's, the FDA warned that the technique should not be marketed in the U.S.
2018–present
In June 2018 the Australian Senate's Community Affairs References Committee recommended a move towards legalising MRT, and in July 2018 the Australian Senate endorsed it. Research and clinical applications of MRT were overseen by laws made by federal and state governments. State laws were, for the most part, consistent with federal law. In all states, legislation prohibited the use of MRT techniques in the clinic, and except for Western Australia, research on a limited range of MRT was permissible up to day 14 of embryo development, subject to a license being granted. In 2010, the Hon. Mark Butler MP, then Federal Minister for Mental Health and Ageing, had appointed an independent committee to review the two relevant acts: the Prohibition of Human Cloning for Reproduction Act 2002 and the Research Involving Human Embryos Act 2002. The committee's report, released in July 2011, recommended the existing legislation remain unchanged. The Australian National Health and Medical Research Council issued two reports on legalising MRT in June 2020. In 2022, Maeve's Law was passed by the Australian Parliament, legalising MRT under a specified mitochondrial donation licence for research and training, and in clinical settings.
Singapore was also considering whether to permit the MRT in 2018.
In 2018, researchers announced the use of MRT to restore function to heart tissue in cardiac-compromised newborns. The damaged heart cells absorbed mitochondria extracted from healthy tissue and returned to useful activity.
Society and culture
Regulation
As of February 2016, the United States had no regulations governing mitochondrial donation, and Congress barred the FDA from evaluating any applications that involve implanting modified embryos into a woman.
The United Kingdom became the first country to legalize the procedure: the UK's chief medical officer recommended it be legalized in 2013; parliament passed The Human Fertilisation and Embryology (Mitochondrial Donation) Regulations in 2015, and the regulatory authority published regulations in 2016.
Ethics
Despite the promising outcomes of the two techniques, pronuclear transfer and spindle transfer, mitochondrial gene replacement raises ethical and social concerns.
Mitochondrial donation involves modification of the germline, and hence such modifications would be passed on to subsequent generations. Using human embryos for in vitro research is also controversial, as embryos are created specifically for research and egg donors are induced to undergo the procedure by financial compensation.
Mitochondrial donation also has the potential for psychological and emotional impacts on an offspring through an effect on the person's sense of identity. Ethicists question whether the genetic make-up of children born as a result of mitochondrial replacement might affect their emotional well-being when they become aware that they are different from other healthy children conceived from two parents.
Opponents argue that scientists are "playing God" and that children with three genetic parents may suffer both psychological and physical damage.
On the other hand, New York University researcher James Grifo, a critic of the American ban, has argued that society "would never have made the advances in treating infertility that we have if these bans had been imposed 10 years" earlier.
On February 3, 2016, the Institute of Medicine of the National Academies of Sciences, Engineering, and Medicine issued a report, commissioned by the U.S. Food and Drug Administration, addressing whether it is ethically permissible for clinical research into mitochondrial replacement techniques (MRT) to continue. The report, titled Mitochondrial Replacement Techniques: Ethical, Social, and Policy Considerations, analyzes multiple facets of the arguments surrounding MRT and concludes that it is 'ethically permissible' to continue clinical investigations of MRT, so long as certain conditions are met. It recommended that initially the technique should only be used for male embryos to ensure that DNA with potential mitochondrial disease would not be passed on.
In 2018 Carl Zimmer compared the reaction to He Jiankui's human gene editing experiment to the debate over MRT.
References
Assisted reproductive technology
Obstetrics
Human pregnancy
Human reproduction
Human genetics
Molecular biology
Mitochondrial diseases
Gene therapy
Transhumanism
1996 introductions | Mitochondrial replacement therapy | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 3,972 | [
"Genetic engineering",
"Transhumanism",
"Gene therapy",
"Ethics of science and technology",
"Biochemistry",
"Medical technology",
"Assisted reproductive technology",
"Molecular biology"
] |
25,420,967 | https://en.wikipedia.org/wiki/Walsh%20diagram | Walsh diagrams, often called angular coordinate diagrams or correlation diagrams, are representations of calculated orbital binding energies of a molecule versus a distortion coordinate (bond angles), used for making quick predictions about the geometries of small molecules. By plotting the change in molecular orbital levels of a molecule as a function of geometrical change, Walsh diagrams explain why molecules are more stable in certain spatial configurations (e.g. why water adopts a bent conformation).
A major application of Walsh diagrams is to explain the regularity in structure observed for related molecules having identical numbers of valence electrons (e.g. why H2O and H2S look similar), and to account for how molecules alter their geometries as their number of electrons or spin state changes. Additionally, Walsh diagrams can be used to predict distortions of molecular geometry from knowledge of how the LUMO (Lowest Unoccupied Molecular Orbital) affects the HOMO (Highest Occupied Molecular Orbital) when the molecule experiences geometrical perturbation.
Walsh's rule for predicting the shapes of molecules states that a molecule will adopt the structure that provides the most stability for its HOMO. If a particular structural change does not perturb the HOMO, the closest occupied molecular orbital governs the preference for geometrical orientation.
History
Walsh diagrams were first introduced by A.D. Walsh, a British chemistry professor at the University of Dundee, in a series of ten papers in one issue of the Journal of the Chemical Society. Here, he aimed to rationalize the shapes adopted by polyatomic molecules in the ground state as well as in excited states, by applying theoretical contributions made by Mulliken. Specifically, Walsh calculated and explained the effect of changes in the shape of a molecule on the energy of molecular orbitals. Walsh diagrams are an illustration of such dependency, and his conclusions are what are referred to as the "rules of Walsh."
In his publications, Walsh showed through multiple examples that the geometry adopted by a molecule in its ground state primarily depends on the number of its valence electrons. He himself acknowledged that this general concept was not novel, but explained that the new data available to him allowed the previous generalizations to be expanded upon and honed. He also noted that Mulliken had previously attempted to construct a correlation diagram for the possible orbitals of a polyatomic molecule in two different nuclear configurations, and had even tried to use this diagram to explain shapes and spectra of molecules in their ground and excited states. However, Mulliken was unable to explain the reasons for the rises and falls of certain curves with increases in angle, thus Walsh claimed "his diagram was either empirical or based upon unpublished computations."
Overview
Walsh originally constructed his diagrams by plotting what he described as "orbital binding energies" versus bond angles. What Walsh was actually describing by this term is unclear; some believe he was in fact referring to ionization potentials, but this remains a topic of debate. At any rate, the general concept he put forth was that the total energy of a molecule is equal to the sum of all of the "orbital binding energies" in that molecule. Hence, from knowledge of the stabilization or destabilization of each of the orbitals by an alteration of the molecular bond angle, the equilibrium bond angle for a particular state of the molecule can be predicted. Orbitals which interact to stabilize one configuration (e.g., linear) may or may not overlap in another configuration (e.g., bent), thus one geometry will be calculably more stable than the other.
Typically, core orbitals (1s for B, C, N, O, F, and Ne) are excluded from Walsh diagrams because they are so low in energy that they do not experience a significant change by variations in bond angle. Only valence orbitals are considered. However, one should keep in mind that some of the valence orbitals are often unoccupied.
Generating Walsh diagrams
In preparing a Walsh diagram, the geometry of a molecule must first be optimized, for example using the Hartree–Fock (HF) method for approximating the ground-state wave function and ground-state energy of a quantum many-body system. Next, single-point energies are computed for a series of geometries displaced from the equilibrium geometry determined above. Single-point energies (SPEs) are calculations of a point on the potential energy surface of a molecule, for one specific arrangement of the atoms in that molecule. In these calculations, bond lengths remain constant (at their equilibrium values) and only the bond angle is altered from its equilibrium value. The single-point computation for each geometry can then be plotted versus bond angle to produce the representative Walsh diagram.
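A minimal sketch of this procedure, assuming the PySCF package is available; the STO-3G basis, the fixed 0.96 Å O-H bond length, and the 90-180° scan range are illustrative choices, not prescriptions from the text. The loop scans the H-O-H angle at fixed bond length and records the RHF orbital energies needed for a Walsh-style plot.

```python
import numpy as np
from pyscf import gto, scf

angles = np.linspace(90.0, 180.0, 10)  # H-O-H angle in degrees
r = 0.96                               # fixed O-H bond length in angstroms
curves = []
for a in angles:
    half = np.radians(a) / 2.0
    mol = gto.M(
        atom=[('O', (0.0, 0.0, 0.0)),
              ('H', ( r * np.sin(half), 0.0, r * np.cos(half))),
              ('H', (-r * np.sin(half), 0.0, r * np.cos(half)))],
        basis='sto-3g', verbose=0)
    mf = scf.RHF(mol).run()
    curves.append(mf.mo_energy[1:5])   # valence MOs; index 0 is the O 1s core
curves = np.array(curves)              # shape (n_angles, 4): data for the plot
```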
Structure of a Walsh diagram
AH2 Molecules
For the simplest AH2 molecular system, Walsh produced the first angular correlation diagram by plotting the ab initio orbital energy curves for the canonical molecular orbitals while changing the bond angle from 90° to 180°. As the bond angle is distorted, the energy for each of the orbitals can be followed along the lines, allowing a quick approximation of molecular energy as a function of conformation. It is still unclear whether or not the Walsh ordinate considers nuclear repulsion, and this remains a topic of debate. A typical prediction result for water is a bond angle of 90°, which is not even close to the experimentally derived value of 104°. At best the method is able to differentiate between a bent and a linear molecule.
This same concept can be applied to other species, including non-hydride AB2 and BAC molecules, HAB and HAAH molecules, tetraatomic hydride molecules (AH3), tetraatomic nonhydride molecules (AB3), H2AB molecules, acetaldehyde, pentaatomic molecules (CH3I), hexatomic molecules (ethylene), and benzene.
Reactivity
Walsh diagrams in conjunction with molecular orbital theory can also be used as a tool to predict reactivity. By generating a Walsh Diagram and then determining the HOMO/LUMO of that molecule, it can be determined how the molecule is likely to react. In the following example, the Lewis acidity of AH3 molecules such as BH3 and CH3+ is predicted.
Six electron AH3 molecules should have a planar conformation. It can be seen that the HOMO, 1e’, of planar AH3 is destabilized upon bending of the A-H bonds to form a pyramid shape, due to disruption of bonding. The LUMO, which is concentrated on one atomic center, is a good electron acceptor and explains the Lewis acid character of BH3 and CH3+.
Walsh correlation diagrams can also be used to predict relative molecular orbital energy levels. The distortion of the hydrogen atoms from the planar CH3+ to the tetrahedral CH3-Nu causes a stabilization of the C-Nu bonding orbital, σ.
Other correlation diagrams
Other correlation diagrams are Tanabe-Sugano diagrams and Orgel diagrams.
See also
Ab initio quantum chemistry methods
Molecular Orbital Theory
Mulliken symbols
References
External links
The Rules of Walsh
Walsh Diagrams
Constructing Walsh Diagrams
Chemical bonding
Physical organic chemistry
Eponymous diagrams of chemistry | Walsh diagram | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,465 | [
"Chemical bonding",
"Condensed matter physics",
"nan",
"Physical organic chemistry"
] |
290,529 | https://en.wikipedia.org/wiki/Serum%20protein%20electrophoresis | Serum protein electrophoresis (SPEP or SPE) is a laboratory test that examines specific proteins in the blood called globulins. The most common indications for a serum protein electrophoresis test are to diagnose or monitor multiple myeloma, a monoclonal gammopathy of uncertain significance (MGUS), or further investigate a discrepancy between a low albumin and a relatively high total protein. Unexplained bone pain, anemia, proteinuria, chronic kidney disease, and hypercalcemia are also signs of multiple myeloma, and indications for SPE. Blood must first be collected, usually into an airtight vial or syringe. Electrophoresis is a laboratory technique in which the blood serum (the fluid portion of the blood after the blood has clotted) is applied to either an acetate membrane soaked in a liquid buffer, or to a buffered agarose gel matrix, or into liquid in a capillary tube, and exposed to an electric current to separate the serum protein components into five major fractions by size and electrical charge: serum albumin, alpha-1 globulins, alpha-2 globulins, beta 1 and 2 globulins, and gamma globulins.
Acetate or gel electrophoresis
Proteins are separated by both electrical forces and electroendosmotic forces. The net charge on a protein is based on the sum charge of its amino acids and the pH of the buffer. Proteins are applied to a solid matrix such as an agarose gel, or a cellulose acetate membrane in a liquid buffer, and electric current is applied. Proteins with a negative charge migrate towards the positively charged anode. Albumin has the most negative charge and migrates furthest towards the anode. Endosmotic flow is the movement of liquid towards the cathode, which causes proteins with a weaker charge to move backwards from the application site. Gamma proteins are primarily separated by endosmotic forces. The drawing of the electrophoretic bands provided by the laboratory may be difficult to remember, and medical students, residents, nurses, and non-specialized medical practitioners may find visual mnemonics useful to recall the five main bands and the shape of normal serum electrophoresis.
Capillary electrophoresis
In capillary electrophoresis, there is no solid matrix. Proteins are separated primarily by strong electroendosmotic forces. The sample is injected into a capillary with a negative surface charge. A high current is applied, and negatively charged proteins such as albumin try to move towards the anode. Liquid buffer flows towards the cathode, and drags proteins with a weaker charge.
Serum protein fractions
Albumin
Albumin is the major fraction in a normal SPEP. A fall of 30% is necessary before the decrease shows on electrophoresis. Usually a single band is seen. Heterozygous individuals may produce bisalbuminemia – two equally staining bands, the product of two genes. Some variants give rise to a wide band or two bands of unequal intensity but none of these variants is associated with disease. Increased anodic mobility results from the binding of bilirubin, nonesterified fatty acids, penicillin and acetylsalicylic acid, and occasionally from tryptic digestion in acute pancreatitis.
Absence of albumin, known as analbuminaemia, is rare. A decreased level of albumin, however, is common in many diseases, including liver disease, malnutrition, malabsorption, protein-losing nephropathy and enteropathy.
Albumin – alpha-1 interzone
Even staining in this zone is due to alpha-1 lipoprotein (high density lipoprotein, HDL). A decrease occurs in severe inflammation, acute hepatitis, and cirrhosis. Nephrotic syndrome can also lead to a decreased albumin level, due to its loss in the urine through a damaged, leaky glomerulus. An increase appears in severe alcoholics and in women during pregnancy and in puberty.
The high levels of AFP that may occur in hepatocellular carcinoma may result in a sharp band between the albumin and the alpha-1 zone.
Alpha-1 zone
Orosomucoid and antitrypsin migrate together, but orosomucoid stains poorly, so alpha 1-antitrypsin (AAT) constitutes most of the alpha-1 band. Alpha-1 antitrypsin has an SG group, and thiol compounds may be bound to the protein, altering its mobility. A decreased band is seen in the deficiency state; it is also decreased in the nephrotic syndrome, and its absence could indicate alpha 1-antitrypsin deficiency. Such a deficiency eventually leads to emphysema from unregulated neutrophil elastase activity in the lung tissue. The alpha-1 fraction does not disappear in alpha 1-antitrypsin deficiency, however, because other proteins, including alpha-lipoprotein and orosomucoid, also migrate there. As a positive acute phase reactant, AAT is increased in acute inflammation.
Bence Jones protein may bind to and retard the alpha-1 band.
Alpha-1 – alpha-2 interzone
Two faint bands may be seen representing alpha 1-antichymotrypsin and vitamin D binding protein. These bands fuse and intensify in early inflammation due to an increase in alpha 1-antichymotrypsin, an acute phase protein.
Alpha-2 zone
This zone consists principally of alpha-2 macroglobulin (AMG or A2M) and haptoglobin. There are typically low levels in haemolytic anaemia (haptoglobin is a suicide molecule which binds with free haemoglobin released from red blood cells and these complexes are rapidly removed by phagocytes). Haptoglobin is raised as part of the acute phase response, resulting in a typical elevation in the alpha-2 zone during inflammation. A normal alpha-2 and an elevated alpha-1 zone is a typical pattern in hepatic metastasis and cirrhosis.
Haptoglobin/haemoglobin complexes migrate more cathodally than haptoglobin as seen in the alpha-2 – beta interzone. This is typically seen as a broadening of the alpha-2 zone.
Alpha-2 macroglobulin may be elevated in children and the elderly. This is seen as a sharp front to the alpha-2 band. AMG is markedly raised (a 10-fold increase or greater) in association with glomerular protein loss, as in nephrotic syndrome. Due to its large size, AMG cannot pass through glomeruli, while other lower-molecular-weight proteins are lost. Enhanced synthesis of AMG accounts for its absolute increase in nephrotic syndrome. Increased AMG is also noted in rats lacking albumin, indicating that this is a response to low albumin rather than to nephrotic syndrome itself.
AMG is mildly elevated early in the course of diabetic nephropathy.
Alpha-2 - beta interzone
Cold insoluble globulin forms a band here which is not seen in plasma because it is precipitated by heparin. There are low levels in inflammation and high levels in pregnancy.
Beta lipoprotein forms an irregular crenated band in this zone. High levels are seen in type II hypercholesterolaemia, hypertriglyceridemia, and in the nephrotic syndrome.
Beta zone
Transferrin and beta-lipoprotein (LDL) comprise the beta-1 fraction. Increased beta-1 protein due to an increased level of free transferrin is typical of iron deficiency anemia, pregnancy, and oestrogen therapy. Increased beta-1 protein due to LDL elevation occurs in hypercholesterolemia. Decreased beta-1 protein occurs in acute or chronic inflammation.
Beta-2 comprises C3 (complement protein 3). It is raised in the acute phase response. Depression of C3 occurs in autoimmune disorders as the complement system is activated and the C3 becomes bound to immune complexes and removed from serum.
Beta-gamma interzone
C-reactive protein is found in between the beta and gamma zones producing beta/gamma fusion. IgA has the most anodal mobility and typically migrates in the region between the beta and gamma zones also causing a beta/gamma fusion in patients with cirrhosis, respiratory infection, skin disease, or rheumatoid arthritis (increased IgA). Fibrinogen from plasma samples will be seen in the beta gamma region. Fibrinogen, a beta-2 protein, is found in normal plasma but absent in normal serum. Occasionally, blood drawn from heparinized patients does not fully clot, resulting in a visible fibrinogen band between the beta and gamma globulins.
Gamma zone
The immunoglobulins or antibodies are generally the only proteins present in the normal gamma region. Of note, any protein migrating in the gamma region will be stained and appear on the gel, which may include protein contaminants, artifacts, or certain medications. Depending on whether an agarose or capillary method is used, interferences vary. Immunoglobulins consist of heavy chains (μ, δ, γ, α, and ε) and light chains (κ and λ). A normal gamma zone should appear as a smooth 'blush', or smear, with no asymmetry or sharp peaks. The gamma globulins may be elevated (hypergammaglobulinemia), decreased (hypogammaglobulinaemia), or have an abnormal peak or peaks. Note that immunoglobulins may also be found in other zones; IgA typically migrates in the beta-gamma zone, and in particular, pathogenic immunoglobulins may migrate anywhere, including the alpha regions.
Hypogammaglobulinaemia is easily identifiable as a "slump" or decrease in the gamma zone. It is normal in infants. It is found in patients with X-linked agammaglobulinemia. IgA deficiency occurs in 1:500 of the population and may be suggested by a pallor in the gamma zone. Of note, hypogammaglobulinaemia may be seen in the context of MGUS or multiple myeloma.
If the gamma zone shows an increase, the first step in interpretation is to establish whether the region is narrow or wide. A broad, "swell-like" (wide) elevation indicates polyclonal immunoglobulin production. If the zone is elevated in an asymmetric manner, or with one or more peaks or narrow "spikes", it could indicate clonal production of one or more immunoglobulins.
Polyclonal gammopathy is indicated by a "swell-like" elevation in the gamma zone, which typically indicates a non-neoplastic condition (although is not exclusive to non-neoplastic conditions). The most common causes of polyclonal hypergammaglobulinaemia detected by electrophoresis are severe infection, chronic liver disease, rheumatoid arthritis, systemic lupus erythematosus and other connective tissue diseases.
A narrow spike is suggestive of a monoclonal gammopathy, also known as a restricted band or "M-spike". To confirm that the restricted band is an immunoglobulin, follow-up testing with immunofixation, or immunodisplacement/immunosubtraction (capillary methods), is performed. Therapeutic monoclonal antibodies (mAb) also migrate in this region and may be misinterpreted as a monoclonal gammopathy; they too may be identified by immunofixation or immunodisplacement/immunosubtraction, as they are structurally comparable to human immunoglobulins. The most common cause of a restricted band is an MGUS (monoclonal gammopathy of uncertain significance), which, although a necessary precursor, only rarely progresses to multiple myeloma (on average, 1% per year). Typically, a monoclonal gammopathy is malignant or clonal in origin, with myeloma being the most common cause of IgA and IgG spikes. Chronic lymphatic leukaemia and lymphosarcoma are not uncommon and usually give rise to IgM paraproteins. Note that up to 8% of healthy geriatric patients may have a monoclonal spike. Waldenström's macroglobulinaemia (IgM), monoclonal gammopathy of undetermined significance (MGUS), amyloidosis, plasma cell leukemia and solitary plasmacytomas also produce an M-spike.
Oligoclonal gammopathy is indicated by one or more discrete clones.
Lysozyme may be seen as a band cathodal to gamma in myelomonocytic leukaemia in which it is released from tumour cells.
References
External links
Protein electrophoresis at Lab Tests Online
Visual mnemonics for serum protein electrophoresis
Serology
Electrophoresis | Serum protein electrophoresis | [
"Chemistry",
"Biology"
] | 2,860 | [
"Instrumental analysis",
"Molecular biology techniques",
"Electrophoresis",
"Biochemical separation processes"
] |
291,111 | https://en.wikipedia.org/wiki/Caspase | Caspases (cysteine-aspartic proteases, cysteine aspartases or cysteine-dependent aspartate-directed proteases) are a family of protease enzymes playing essential roles in programmed cell death. They are named caspases due to their specific cysteine protease activity – a cysteine in its active site nucleophilically attacks and cleaves a target protein only after an aspartic acid residue. As of 2009, there are 12 confirmed caspases in humans and 10 in mice, carrying out a variety of cellular functions.
The role of these enzymes in programmed cell death was first identified in 1993, with their functions in apoptosis well characterised. This is a form of programmed cell death, occurring widely during development, and throughout life to maintain cell homeostasis. Activation of caspases ensures that the cellular components are degraded in a controlled manner, carrying out cell death with minimal effect on surrounding tissues.
Caspases have other identified roles in programmed cell death such as pyroptosis, necroptosis and PANoptosis. These forms of cell death are important for protecting an organism from stress signals and pathogenic attack. Caspases also have a role in inflammation, whereby they directly process pro-inflammatory cytokines such as pro-IL1β. These are signalling molecules that allow recruitment of immune cells to an infected cell or tissue. There are other identified roles of caspases such as cell proliferation, tumor suppression, cell differentiation, neural development and axon guidance and ageing.
Caspase deficiency has been identified as a cause of tumor development. Tumor growth can occur by a combination of factors, including a mutation in a cell cycle gene which removes the restraints on cell growth, combined with mutations in apoptotic proteins such as caspases that would respond by inducing cell death in abnormally growing cells. Conversely, over-activation of some caspases such as caspase-3 can lead to excessive programmed cell death. This is seen in several neurodegenerative diseases where neural cells are lost, such as Alzheimer's disease. Caspases involved with processing inflammatory signals are also implicated in disease. Insufficient activation of these caspases can increase an organism's susceptibility to infection, as an appropriate immune response may not be activated. The integral role caspases play in cell death and disease has led to research on using caspases as a drug target. For example, inflammatory caspase-1 has been implicated in causing autoimmune diseases; drugs blocking the activation of Caspase-1 have been used to improve the health of patients. Additionally, scientists have used caspases as cancer therapy to kill unwanted cells in tumors.
Functional classification of caspases
Most caspases play a role in programmed cell death. These are summarized in the table below. The enzymes are subclassified into three types: Initiator, Executioner and Inflammatory.
Note that in addition to apoptosis, caspase-8 is also required for the inhibition of another form of programmed cell death called necroptosis. Caspase-14 plays a role in epithelial cell keratinocyte differentiation and can form an epidermal barrier that protects against dehydration and UVB radiation.
Activation of caspases
Caspases are synthesised as inactive zymogens (pro-caspases) that are only activated following an appropriate stimulus. This post-translational level of control allows rapid and tight regulation of the enzyme.
Activation involves dimerization and often oligomerisation of pro-caspases, followed by cleavage into a small subunit and large subunit. The large and small subunit associate with each other to form an active heterodimer caspase. The active enzyme often exists as a heterotetramer in the biological environment, where a pro-caspase dimer is cleaved together to form a heterotetramer.
Dimerisation
The activation of initiator caspases and inflammatory caspases is initiated by dimerisation, which is facilitated by binding to adaptor proteins via protein–protein interaction motifs that are collectively referred to as death folds. The death folds are located in a structural domain of the caspases known as the pro-domain, which is larger in those caspases that contain death folds than in those that do not. The pro-domain of the intrinsic initiator caspases and the inflammatory caspases contains a single death fold known as caspase recruitment domain (CARD), while the pro-domain of the extrinsic initiator caspases contains two death folds known as death effector domains (DED).
Multiprotein complexes often form during caspase activation. Some activating multiprotein complexes includes:
The death-inducing signaling complex (DISC) during extrinsic apoptosis
The apoptosome during intrinsic apoptosis
The inflammasome during pyroptosis
Cleavage
Once appropriately dimerised, the caspases cleave at inter-domain linker regions, forming a large and a small subunit. This cleavage allows the active-site loops to take up a conformation favourable for enzymatic activity.
Cleavage of Initiator and Executioner caspases occur by different methods outlined in the table below.
Initiator caspases auto-proteolytically cleave, whereas executioner caspases are cleaved by initiator caspases. This hierarchy allows an amplifying chain reaction, or cascade, for degrading cellular components during controlled cell death.
Some roles of caspases
Apoptosis
Apoptosis is a form of programmed cell death where the cell undergoes morphological changes to minimize its effect on surrounding cells and avoid inducing an immune response. The cell shrinks and condenses: the cytoskeleton collapses, the nuclear envelope disassembles, and the DNA fragments. This results in the cell forming self-enclosed bodies called 'blebs', to avoid release of cellular components into the extracellular medium. Additionally, the cell membrane phospholipid content is altered, which makes the dying cell more susceptible to phagocytic attack and removal.
Apoptotic caspases are subcategorised as:
Initiator Caspases (Caspase 2, Caspase 8, Caspase 9, Caspase 10)
Executioner Caspases (Caspase 3, Caspase 6 and Caspase 7)
Once initiator caspases are activated, they produce a chain reaction, activating several other executioner caspases. Executioner caspases degrade over 600 cellular components in order to induce the morphological changes for apoptosis.
Examples of caspase cascade during apoptosis:
Intrinsic apoptotic pathway: During times of cellular stress, mitochondrial cytochrome c is released into the cytosol. This molecule binds an adaptor protein (APAF-1), which recruits initiator Caspase-9 (via CARD-CARD interactions). This leads to the formation of a caspase-activating multiprotein complex called the Apoptosome. Once activated, initiator caspases such as Caspase 9 will cleave and activate other executioner caspases. This leads to degradation of cellular components for apoptosis.
Extrinsic apoptotic pathway: The caspase cascade can also be activated by extracellular ligands, via cell-surface death receptors, through the formation of a multiprotein death-inducing signalling complex (DISC) that recruits and activates a pro-caspase. For example, the Fas ligand binds the FasR receptor at the receptor's extracellular surface; this activates the death domains at the cytoplasmic tail of the receptor. The adaptor protein FADD is recruited to the receptor by a death domain–death domain interaction and in turn recruits pro-Caspase-8 via its death effector domain (DED). FasR, FADD and pro-Caspase-8 together form the death-inducing signalling complex (DISC), where Caspase-8 is activated. This can lead either to downstream activation of the intrinsic pathway by inducing mitochondrial stress, or to direct activation of executioner caspases (Caspase-3, Caspase-6 and Caspase-7) to degrade cellular components.
Pyroptosis
Pyroptosis is a form of programmed cell death that inherently induces an immune response. It is morphologically distinct from other types of cell death – cells swell up, rupture and release pro-inflammatory cellular contents. This is done in response to a range of stimuli including microbial infections as well as heart attacks (myocardial infarctions). Caspase-1, Caspase-4 and Caspase-5 in humans, and Caspase-1 and Caspase-11 in mice play important roles in inducing cell death by pyroptosis. This limits the life and proliferation time of intracellular and extracellular pathogens.
Pyroptosis by caspase-1
Caspase-1 activation is mediated by a repertoire of proteins, allowing detection of a range of pathogenic ligands. Some mediators of Caspase-1 activation are: NOD-like Leucine Rich Repeats (NLRs), AIM2-Like Receptors (ALRs), Pyrin and IFI16.
These proteins allow caspase-1 activation by forming a multiprotein activating complex called an inflammasome. For example, the NOD-like leucine-rich-repeat protein NLRP3 senses an efflux of potassium ions from the cell. This cellular ion imbalance leads to oligomerisation of NLRP3 molecules into a multiprotein complex called the NLRP3 inflammasome. Pro-caspase-1 is brought into close proximity with other pro-caspase-1 molecules, allowing it to dimerise and undergo auto-proteolytic cleavage.
Some pathogenic signals that lead to Pyroptosis by Caspase-1 are listed below:
DNA in the host cytosol binds to AIM2-like receptors, inducing pyroptosis
Type III secretion system apparatus from bacteria binds NOD-like leucine-rich-repeat receptors called NAIPs (one in humans and four in mice)
Pyroptosis by Caspase-4 and Caspase-5 in humans and Caspase-11 in mice
These caspases can induce pyroptosis directly when lipopolysaccharide (LPS) molecules (found in the cell wall of gram-negative bacteria) are present in the cytoplasm of the host cell. For example, Caspase-4 acts as a receptor for LPS and is proteolytically activated without the need for an inflammasome complex or Caspase-1 activation.
A crucial downstream substrate for pyroptotic caspases is Gasdermin D (GSDMD): its cleaved N-terminal fragment forms pores in the plasma membrane, driving the swelling and lysis characteristic of pyroptosis.
Role in inflammation
Inflammation is a protective attempt by an organism to restore a homeostatic state, following disruption from harmful stimulus, such as tissue damage or bacterial infection.
Caspase-1, Caspase-4, Caspase-5 and Caspase-11 are considered 'Inflammatory Caspases'.
Caspase-1 is key in activating pro-inflammatory cytokines; these act as signals to immune cells and make the environment favourable for immune cell recruitment to the site of damage. Caspase-1 therefore plays a fundamental role in the innate immune system. The enzyme is responsible for processing cytokines such as pro-IL-1β and pro-IL-18, as well as for their secretion.
Caspase-4 and -5 in humans, and Caspase-11 in mice, have a unique role as receptors, whereby they bind LPS, a molecule abundant in gram-negative bacteria. This can lead to the processing and secretion of IL-1β and IL-18 cytokines by activating Caspase-1; this downstream effect is the same as described above. It also leads to the secretion of another inflammatory cytokine, pro-IL-1α, which is not processed. There is also evidence of the inflammatory caspase Caspase-11 aiding cytokine secretion by inactivating a membrane channel that blocks IL-1β secretion.
Caspases can also induce an inflammatory response at the transcriptional level. There is evidence that they promote activation of nuclear factor-κB (NF-κB), a transcription factor that assists in transcribing inflammatory cytokines such as IFNs, TNF, IL-6 and IL-8. For example, Caspase-1 activates Caspase-7, which in turn cleaves poly(ADP-ribose) polymerase 1 (PARP-1); this activates transcription of NF-κB-controlled genes.
Discovery of caspases
H. Robert Horvitz initially established the importance of caspases in apoptosis, finding that the ced-3 gene is required for the cell death that takes place during the development of the nematode C. elegans. Horvitz and his colleague Junying Yuan found in 1993 that the protein encoded by the ced-3 gene is a cysteine protease with properties similar to those of the mammalian interleukin-1-beta converting enzyme (ICE) (now known as caspase 1). At the time, ICE was the only known caspase. Other mammalian caspases were subsequently identified, in addition to caspases in organisms such as the fruit fly Drosophila melanogaster.
Researchers agreed on the nomenclature for caspases in 1996. In many instances, a particular caspase had been identified simultaneously by more than one laboratory, each of which would give the protein a different name. For example, caspase 3 was variously known as CPP32, apopain and Yama. Caspases, therefore, were numbered in the order in which they were identified, and ICE was renamed caspase 1. ICE was the first mammalian caspase to be characterised because of its similarity to the nematode death gene ced-3, but it appears that the principal role of this enzyme is to mediate inflammation rather than cell death.
Evolution
In animals, apoptosis is induced by caspases; in fungi and plants, it is induced by arginine- and lysine-specific caspase-like proteases called metacaspases. Homology searches revealed a close homology between caspases and the caspase-like proteins of Reticulomyxa (a unicellular organism). Phylogenetic study indicates that the divergence of caspase and metacaspase sequences occurred before the divergence of eukaryotes.
See also
Apoptosis
Apoptosome
Bcl-2
Emricasan
Metacaspase
Paracaspase
Pyroptosis
The Proteolysis Map
Programmed cell death
Cellular anastasis
Notes
References
External links
Apoptosis Video Demonstrates a model of a caspase cascade as it occurs in vivo.
The Mechanisms of Apoptosis Kimball's Biology Pages. Simple explanation of the mechanisms of apoptosis triggered by internal signals (bcl-2), along the caspase-9, caspase-3 and caspase-7 pathway; and by external signals (FAS and TNF), along the caspase 8 pathway. Accessed 25 March 2007.
Apoptosis & Caspase 7, PMAP-animation
Tumors Beware (from Beaker Blog)
EC 3.4.22
Programmed cell death
Apoptosis
Proteases
Caspases | Caspase | [
"Chemistry",
"Biology"
] | 3,300 | [
"Senescence",
"Programmed cell death",
"Apoptosis",
"Signal transduction"
] |
291,453 | https://en.wikipedia.org/wiki/Renormalization | Renormalization is a collection of techniques in quantum field theory, statistical field theory, and the theory of self-similar geometric structures, that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions. But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that it would be necessary to renormalize the mass and fields appearing in the original Lagrangian.
For example, an electron theory may begin by postulating an electron with an initial mass and charge. In quantum field theory a cloud of virtual particles, such as photons, positrons, and others surrounds and interacts with the initial electron. Accounting for the interactions of the surrounding particles (e.g. collisions at different energies) shows that the electron-system behaves as if it had a different mass and charge than initially postulated. Renormalization, in this example, mathematically replaces the initially postulated mass and charge of an electron with the experimentally observed mass and charge. Mathematics and experiments prove that positrons and more massive particles such as protons exhibit precisely the same observed charge as the electron – even in the presence of much stronger interactions and more intense clouds of virtual particles.
Renormalization specifies relationships between parameters in the theory when parameters describing large distance scales differ from parameters describing small distance scales. Physically, the pileup of contributions from an infinity of scales involved in a problem may then result in further infinities. When describing spacetime as a continuum, certain statistical and quantum mechanical constructions are not well-defined. To define them, or make them unambiguous, a continuum limit must carefully remove "construction scaffolding" of lattices at various scales. Renormalization procedures are based on the requirement that certain physical quantities (such as the mass and charge of an electron) equal observed (experimental) values. That is, the experimental value of the physical quantity yields practical applications, but due to their empirical nature the observed measurement represents areas of quantum field theory that require deeper derivation from theoretical bases.
Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory. Initially viewed as a suspect provisional procedure even by some of its originators, renormalization eventually was embraced as an important and self-consistent actual mechanism of scale physics in several fields of physics and mathematics. Despite his later skepticism, it was Paul Dirac who pioneered renormalization.
Today, the point of view has shifted: on the basis of the breakthrough renormalization group insights of Nikolay Bogolyubov and Kenneth Wilson, the focus is on variation of physical quantities across contiguous scales, while distant scales are related to each other through "effective" descriptions. All scales are linked in a broadly systematic way, and the actual physics pertinent to each is extracted with the suitable specific computational techniques appropriate for each. Wilson clarified which variables of a system are crucial and which are redundant.
Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales.
Self-interactions in classical physics
The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century.
The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius r_e. The mass–energy in the field is

$$m_\text{em} = \frac{q^2}{8 \pi \varepsilon_0 r_e c^2},$$

which becomes infinite as r_e → 0. This implies that the point particle would have infinite inertia and thus cannot be accelerated. Incidentally, the value of r_e that makes m_em equal to the electron mass is called the classical electron radius, which (setting q = e and restoring factors of c and ε₀) turns out to be

$$r_e = \frac{e^2}{4 \pi \varepsilon_0 m_e c^2} = \alpha \, \frac{\hbar}{m_e c} \approx 2.8 \times 10^{-15}~\text{m},$$

where α is the fine-structure constant, and ħ/(m_e c) is the reduced Compton wavelength of the electron.
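As a quick numerical cross-check of the last identity (a sketch using CODATA values from scipy.constants; not part of the original text):

```python
# Verify that e^2/(4*pi*eps0*m_e*c^2) equals alpha times the reduced
# Compton wavelength hbar/(m_e*c).
from math import pi
from scipy.constants import e, epsilon_0, m_e, c, hbar, fine_structure

r_e_coulomb = e**2 / (4 * pi * epsilon_0 * m_e * c**2)
r_e_compton = fine_structure * hbar / (m_e * c)

print(f"e^2/(4 pi eps0 m_e c^2) = {r_e_coulomb:.4e} m")  # ~2.818e-15 m
print(f"alpha * hbar/(m_e c)    = {r_e_compton:.4e} m")  # same value
```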
Renormalization: The total effective mass of a spherical charged particle includes the actual bare mass of the spherical shell (in addition to the mass mentioned above associated with its electric field). If the shell's bare mass is allowed to be negative, it might be possible to take a consistent point limit. This was called renormalization, and Lorentz and Abraham attempted to develop a classical theory of the electron this way. This early work was the inspiration for later attempts at regularization and renormalization in quantum field theory.
(See also regularization (physics) for an alternative way to remove infinities from this classical problem, assuming new physics exists at small scales.)
When calculating the electromagnetic interactions of charged particles, it is tempting to ignore the back-reaction of a particle's own field on itself. (Analogous to the back-EMF of circuit analysis.) But this back-reaction is necessary to explain the friction on charged particles when they emit radiation. If the electron is assumed to be a point, the value of the back-reaction diverges, for the same reason that the mass diverges, because the field is inverse-square.
The Abraham–Lorentz theory had a noncausal "pre-acceleration". Sometimes an electron would start moving before the force is applied. This is a sign that the point limit is inconsistent.
The trouble was worse in classical field theory than in quantum field theory, because in quantum field theory a charged particle experiences Zitterbewegung due to interference with virtual particle–antiparticle pairs, thus effectively smearing out the charge over a region comparable to the Compton wavelength. In quantum electrodynamics at small coupling, the electromagnetic mass only diverges as the logarithm of the radius of the particle.
Divergences in quantum electrodynamics
When developing quantum electrodynamics in the 1930s, Max Born, Werner Heisenberg, Pascual Jordan, and Paul Dirac discovered that in perturbative corrections many integrals were divergent (see The problem of infinities).
One way of describing the perturbation theory corrections' divergences was discovered in 1947–49 by Hans Kramers, Hans Bethe,
Julian Schwinger, Richard Feynman, and Shin'ichiro Tomonaga, and systematized by Freeman Dyson in 1949. The divergences appear in radiative corrections involving Feynman diagrams with closed loops of virtual particles in them.
While virtual particles obey conservation of energy and momentum, they can have any energy and momentum, even one that is not allowed by the relativistic energy–momentum relation for the observed mass of that particle (that is, E² − p² is not necessarily the squared mass of the particle in that process, e.g. for a photon it could be nonzero). Such a particle is called off-shell. When there is a loop, the momentum of the particles involved in the loop is not uniquely determined by the energies and momenta of incoming and outgoing particles. A variation in the energy of one particle in the loop can be balanced by an equal and opposite change in the energy of another particle in the loop, without affecting the incoming and outgoing particles. Thus many variations are possible. So to find the amplitude for the loop process, one must integrate over all possible combinations of energy and momentum that could travel around the loop.
These integrals are often divergent, that is, they give infinite answers. The divergences that are significant are the "ultraviolet" (UV) ones. An ultraviolet divergence can be described as one that comes from
the region in the integral where all particles in the loop have large energies and momenta,
very short wavelengths and high-frequency fluctuations of the fields, in the path integral for the field,
very short proper-time between particle emission and absorption, if the loop is thought of as a sum over particle paths.
So these divergences are short-distance, short-time phenomena.
There are exactly three one-loop divergent loop diagrams in quantum electrodynamics:
(a) A photon creates a virtual electron–positron pair, which then annihilates. This is a vacuum polarization diagram.
(b) An electron quickly emits and reabsorbs a virtual photon, called a self-energy.
(c) An electron emits a photon, emits a second photon, and reabsorbs the first. This process is shown in the section below in figure 2, and it is called a vertex renormalization. The Feynman diagram for this is also called a “penguin diagram” due to its shape remotely resembling a penguin.
The three divergences correspond to the three parameters in the theory under consideration:
The field normalization Z.
The mass of the electron.
The charge of the electron.
The second class of divergence called an infrared divergence, is due to massless particles, like the photon. Every process involving charged particles emits infinitely many coherent photons of infinite wavelength, and the amplitude for emitting any finite number of photons is zero. For photons, these divergences are well understood. For example, at the 1-loop order, the vertex function has both ultraviolet and infrared divergences. In contrast to the ultraviolet divergence, the infrared divergence does not require the renormalization of a parameter in the theory involved. The infrared divergence of the vertex diagram is removed by including a diagram similar to the vertex diagram with the following important difference: the photon connecting the two legs of the electron is cut and replaced by two on-shell (i.e. real) photons whose wavelengths tend to infinity; this diagram is equivalent to the bremsstrahlung process. This additional diagram must be included because there is no physical way to distinguish a zero-energy photon flowing through a loop as in the vertex diagram and zero-energy photons emitted through bremsstrahlung. From a mathematical point of view, the IR divergences can be regularized by assuming fractional differentiation w.r.t. a parameter, for example:
(p² − a²)^(1/2) is well defined at p = a but is UV divergent; if we take the 3⁄2-th fractional derivative with respect to −a², we obtain the IR divergence 1/(p² − a²),
so we can cure IR divergences by turning them into UV divergences.
A loop divergence
The diagram in Figure 2 shows one of the several one-loop contributions to electron–electron scattering in QED. The electron on the left side of the diagram, represented by the solid line, starts out with 4-momentum p and ends up with 4-momentum r. It emits a virtual photon carrying r − p to transfer energy and momentum to the other electron. But in this diagram, before that happens, it emits another virtual photon carrying 4-momentum q, and it reabsorbs this one after emitting the other virtual photon. Energy and momentum conservation do not determine the 4-momentum q uniquely, so all possibilities contribute equally and we must integrate.
This diagram's amplitude ends up with, among other things, a factor from the loop of

$$-i e^3 \int \frac{d^4 q}{(2\pi)^4} \, \gamma^\mu \, \frac{i \left( \gamma^\alpha (r - q)_\alpha + m \right)}{(r - q)^2 - m^2 + i\epsilon} \, \gamma^\rho \, \frac{i \left( \gamma^\beta (p - q)_\beta + m \right)}{(p - q)^2 - m^2 + i\epsilon} \, \gamma^\nu \, \frac{-i g_{\mu\nu}}{q^2 + i\epsilon}.$$

The various factors in this expression are gamma matrices as in the covariant formulation of the Dirac equation; they have to do with the spin of the electron. The factors of e are the electric coupling constant, while the iε provide a heuristic definition of the contour of integration around the poles in the space of momenta. The important part for our purposes is the dependency on q of the three big factors in the integrand, which are from the propagators of the two electron lines and the photon line in the loop.
This has a piece with two powers of q on top that dominates at large values of q (Pokorski 1987, p. 122):

$$e^3 \gamma^\mu \gamma^\alpha \gamma^\rho \gamma^\beta \gamma_\mu \int \frac{d^4 q}{(2\pi)^4} \, \frac{q_\alpha q_\beta}{(r - q)^2 \, (p - q)^2 \, q^2}.$$

This integral is divergent and infinite, unless we cut it off at finite energy and momentum in some way.
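The logarithmic growth with the cutoff can be seen in a stripped-down numerical toy (an illustration of the large-q power counting only, not the actual QED integral): after the angular integrations, the large-q behaviour above goes like the integral of dq/q, which diverges as the log of the cutoff.

```python
# Toy demonstration: the integral of 1/q from an IR scale m up to a UV cutoff
# grows like log(cutoff/m), so no finite answer exists without a cutoff.
import numpy as np

m = 1.0  # infrared reference scale, arbitrary units (an assumption of this toy)
for cutoff in [1e2, 1e4, 1e8, 1e16]:
    q = np.logspace(np.log10(m), np.log10(cutoff), 400001)
    f = 1.0 / q
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q))  # trapezoid rule
    print(f"cutoff = {cutoff:.0e}: integral = {integral:7.3f}, "
          f"log(cutoff/m) = {np.log(cutoff / m):7.3f}")
```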
Similar loop divergences occur in other quantum field theories.
Renormalized and bare quantities
The solution was to realize that the quantities initially appearing in the theory's formulae (such as the formula for the Lagrangian), representing such things as the electron's electric charge and mass, as well as the normalizations of the quantum fields themselves, did not actually correspond to the physical constants measured in the laboratory. As written, they were bare quantities that did not take into account the contribution of virtual-particle loop effects to the physical constants themselves. Among other things, these effects would include the quantum counterpart of the electromagnetic back-reaction that so vexed classical theorists of electromagnetism. In general, these effects would be just as divergent as the amplitudes under consideration in the first place; so finite measured quantities would, in general, imply divergent bare quantities.
To make contact with reality, then, the formulae would have to be rewritten in terms of measurable, renormalized quantities. The charge of the electron, say, would be defined in terms of a quantity measured at a specific kinematic renormalization point or subtraction point (which will generally have a characteristic energy, called the renormalization scale or simply the energy scale). The parts of the Lagrangian left over, involving the remaining portions of the bare quantities, could then be reinterpreted as counterterms, involved in divergent diagrams exactly canceling out the troublesome divergences for other diagrams.
Renormalization in QED
For example, in the Lagrangian of QED

$$\mathcal{L} = \bar\psi_B \left[ i \gamma_\mu \left( \partial^\mu + i e_B A_B^\mu \right) - m_B \right] \psi_B - \frac{1}{4} F_{B\,\mu\nu} F_B^{\mu\nu},$$

the fields and coupling constant are really bare quantities, hence the subscript B above. Conventionally the bare quantities are written so that the corresponding Lagrangian terms are multiples of the renormalized ones:

$$\left( \bar\psi m \psi \right)_B = Z_0 \, \bar\psi m \psi, \qquad \left( \bar\psi \left( \partial^\mu + i e A^\mu \right) \psi \right)_B = Z_2 \, \bar\psi \left( \partial^\mu + i e A^\mu \right) \psi, \qquad \left( F_{\mu\nu} F^{\mu\nu} \right)_B = Z_3 \, F_{\mu\nu} F^{\mu\nu}.$$

Gauge invariance, via a Ward–Takahashi identity, turns out to imply that we can renormalize the two terms of the covariant derivative piece

$$\bar\psi \left( \partial + i e A \right) \psi$$

together (Pokorski 1987, p. 115), which is what happened to Z_2; it is the same as Z_1.
A term in this Lagrangian, for example, the electron–photon interaction pictured in Figure 1, can then be written

$$\mathcal{L}_I = -e \, \bar\psi \gamma_\mu A^\mu \psi \,-\, (Z_1 - 1) \, e \, \bar\psi \gamma_\mu A^\mu \psi.$$

The physical constant e, the electron's charge, can then be defined in terms of some specific experiment: we set the renormalization scale equal to the energy characteristic of this experiment, and the first term gives the interaction we see in the laboratory (up to small, finite corrections from loop diagrams, providing such exotica as the high-order corrections to the magnetic moment). The rest is the counterterm. If the theory is renormalizable (see below for more on this), as it is in QED, the divergent parts of loop diagrams can all be decomposed into pieces with three or fewer legs, with an algebraic form that can be canceled out by the second term (or by the similar counterterms that come from Z_0 and Z_3).
The diagram with the counterterm's interaction vertex placed as in Figure 3 cancels out the divergence from the loop in Figure 2.
Historically, the splitting of the "bare terms" into the original terms and counterterms came before the renormalization group insight due to Kenneth Wilson. According to such renormalization group insights, detailed in the next section, this splitting is unnatural and actually unphysical, as all scales of the problem enter in continuous systematic ways.
Running couplings
To minimize the contribution of loop diagrams to a given calculation (and therefore make it easier to extract results), one chooses a renormalization point close to the energies and momenta exchanged in the interaction. However, the renormalization point is not itself a physical quantity: the physical predictions of the theory, calculated to all orders, should in principle be independent of the choice of renormalization point, as long as it is within the domain of application of the theory. Changes in renormalization scale will simply affect how much of a result comes from Feynman diagrams without loops, and how much comes from the remaining finite parts of loop diagrams. One can exploit this fact to calculate the effective variation of physical constants with changes in scale. This variation is encoded by beta-functions, and the general theory of this kind of scale-dependence is known as the renormalization group.
Colloquially, particle physicists often speak of certain physical "constants" as varying with the energy of interaction, though in fact, it is the renormalization scale that is the independent quantity. This running does, however, provide a convenient means of describing changes in the behavior of a field theory under changes in the energies involved in an interaction. For example, since the coupling in quantum chromodynamics becomes small at large energy scales, the theory behaves more like a free theory as the energy exchanged in an interaction becomes large – a phenomenon known as asymptotic freedom. Choosing an increasing energy scale and using the renormalization group makes this clear from simple Feynman diagrams; were this not done, the prediction would be the same, but would arise from complicated high-order cancellations.
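A concrete sketch of a running coupling (the standard one-loop QCD formula with illustrative inputs; the reference value α_s(91.2 GeV) ≈ 0.118 and the fixed flavour number n_f = 5 are assumptions, not results from this article):

```python
# One-loop QCD running:
#   alpha_s(Q) = alpha_s(mu) / (1 + alpha_s(mu) * b0/(2*pi) * ln(Q/mu)),
# with b0 = 11 - 2*nf/3 > 0, so the coupling shrinks as Q grows
# (asymptotic freedom).
from math import log, pi

def alpha_s(Q, alpha_ref=0.118, mu_ref=91.2, nf=5):
    b0 = 11 - 2 * nf / 3
    return alpha_ref / (1 + alpha_ref * b0 / (2 * pi) * log(Q / mu_ref))

for Q in [10.0, 91.2, 1e3, 1e5]:  # GeV
    print(f"alpha_s({Q:>8.1f} GeV) = {alpha_s(Q):.4f}")
```

At 10 GeV the coupling comes out near 0.17, at 10^5 GeV near 0.06, illustrating why choosing the renormalization scale near the process energy keeps the expansion parameter small.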
For example,

$$I = \int_0^\infty \frac{dz}{z} - \int_0^\infty \frac{dz}{z}$$

is ill-defined.

To eliminate the divergence, simply change the lower limits of the integrals into ε_a and ε_b:

$$I = \int_{\varepsilon_a}^\infty \frac{dz}{z} - \int_{\varepsilon_b}^\infty \frac{dz}{z} = \ln \frac{\varepsilon_b}{\varepsilon_a}.$$

Making sure ε_b/ε_a → 1, then I = 0.
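A numeric sanity check of this cancellation (a toy sketch; the grid sizes, upper cutoff and epsilon values are arbitrary choices): each regulated integral is huge on its own, but their difference depends only on the ratio ε_b/ε_a.

```python
import numpy as np

def regulated(eps, upper=1e6, n=400001):
    """Trapezoid-rule value of the integral of 1/z from eps to a large cutoff."""
    z = np.logspace(np.log10(eps), np.log10(upper), n)
    f = 1.0 / z
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

for eps_a, eps_b in [(1e-3, 2e-3), (1e-6, 2e-6), (1e-9, 1.001e-9)]:
    I = regulated(eps_a) - regulated(eps_b)
    print(f"I = {I:+.6f}   log(eps_b/eps_a) = {np.log(eps_b / eps_a):+.6f}")
```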
Regularization
Since the quantity ∞ − ∞ is ill-defined, in order to make this notion of canceling divergences precise, the divergences first have to be tamed mathematically using the theory of limits, in a process known as regularization (Weinberg, 1995).
An essentially arbitrary modification to the loop integrands, or regulator, can make them drop off faster at high energies and momenta, in such a manner that the integrals converge. A regulator has a characteristic energy scale known as the cutoff; taking this cutoff to infinity (or, equivalently, the corresponding length/time scale to zero) recovers the original integrals.
With the regulator in place, and a finite value for the cutoff, divergent terms in the integrals then turn into finite but cutoff-dependent terms. After canceling out these terms with the contributions from cutoff-dependent counterterms, the cutoff is taken to infinity and finite physical results recovered. If physics on scales we can measure is independent of what happens at the very shortest distance and time scales, then it should be possible to get cutoff-independent results for calculations.
Many different types of regulator are used in quantum field theory calculations, each with its advantages and disadvantages. One of the most popular in modern use is dimensional regularization, invented by Gerardus 't Hooft and Martinus J. G. Veltman, which tames the integrals by carrying them into a space with a fictitious fractional number of dimensions. Another is Pauli–Villars regularization, which adds fictitious particles to the theory with very large masses, such that loop integrands involving the massive particles cancel out the existing loops at large momenta.
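The Pauli–Villars mechanism is easy to exhibit in one dimension (a toy analogue, not the QED computation; the masses and units are arbitrary assumptions): the integral of q/(q² + m²) diverges logarithmically on its own, but subtracting the same propagator with a heavy regulator mass L renders it finite, leaving the characteristic log(L/m) cutoff dependence.

```python
from math import log
from scipy.integrate import quad

m = 1.0                           # physical mass (assumed units)
for L in [10.0, 100.0, 1000.0]:   # heavy Pauli-Villars regulator mass
    integrand = lambda q: q * (1.0 / (q*q + m*m) - 1.0 / (q*q + L*L))
    val, err = quad(integrand, 0.0, float("inf"))
    print(f"L = {L:6.0f}: integral = {val:.4f}, log(L/m) = {log(L / m):.4f}")
```

The exact answer is log(L/m): the fictitious heavy particle cancels the large-q growth, and taking L to infinity recovers the original divergence.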
Yet another regularization scheme is lattice regularization, introduced by Kenneth Wilson, which pretends that a hyper-cubical lattice constructs our spacetime with fixed grid size. This size is a natural cutoff for the maximal momentum that a particle could possess when propagating on the lattice. After doing a calculation on several lattices with different grid sizes, the physical result is extrapolated to grid size 0, or our natural universe. This presupposes the existence of a scaling limit.
A rigorous mathematical approach to renormalization theory is the so-called causal perturbation theory, where ultraviolet divergences are avoided from the start in calculations by performing well-defined mathematical operations only within the framework of distribution theory. In this approach, divergences are replaced by ambiguity: corresponding to a divergent diagram is a term which now has a finite, but undetermined, coefficient. Other principles, such as gauge symmetry, must then be used to reduce or eliminate the ambiguity.
Attitudes and interpretation
The early formulators of QED and other quantum field theories were, as a rule, dissatisfied with this state of affairs. It seemed illegitimate to do something tantamount to subtracting infinities from infinities to get finite answers.
Freeman Dyson argued that these infinities are of a basic nature and cannot be eliminated by any formal mathematical procedures, such as the renormalization method.
Dirac's criticism was the most persistent. As late as 1975, he was saying:
Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation because this so-called 'good theory' does involve neglecting infinities which appear in its equations, ignoring them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves disregarding a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!
Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote the following in 1985:
The shell game that we play to find n and j is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.
Feynman was concerned that all field theories known in the 1960s had the property that the interactions become infinitely strong at short enough distance scales. This property called a Landau pole, made it plausible that quantum field theories were all inconsistent. In 1974, Gross, Politzer and Wilczek showed that another quantum field theory, quantum chromodynamics, does not have a Landau pole. Feynman, along with most others, accepted that QCD was a fully consistent theory.
The general unease was almost universal in texts up to the 1970s and 1980s. Beginning in the 1970s, however, inspired by work on the renormalization group and effective field theory, and despite the fact that Dirac and various others—all of whom belonged to the older generation—never withdrew their criticisms, attitudes began to change, especially among younger theorists. Kenneth G. Wilson and others demonstrated that the renormalization group is useful in statistical field theory applied to condensed matter physics, where it provides important insights into the behavior of phase transitions. In condensed matter physics, a physical short-distance regulator exists: matter ceases to be continuous on the scale of atoms. Short-distance divergences in condensed matter physics do not present a philosophical problem since the field theory is only an effective, smoothed-out representation of the behavior of matter anyway; there are no infinities since the cutoff is always finite, and it makes perfect sense that the bare quantities are cutoff-dependent.
If QFT holds all the way down past the Planck length (where it might yield to string theory, causal set theory or something different), then there may be no real problem with short-distance divergences in particle physics either; all field theories could simply be effective field theories. In a sense, this approach echoes the older attitude that the divergences in QFT speak of human ignorance about the workings of nature, but also acknowledges that this ignorance can be quantified and that the resulting effective theories remain useful.
Be that as it may, Salam's remark in 1972 still seems relevant:
Field-theoretic infinities – first encountered in Lorentz's computation of electron self-mass – have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may, after all, be circumvented — and finite values for the renormalization constants computed – is considered irrational. Compare Russell's postscript to the third volume of his autobiography The Final Years, 1944–1969 (George Allen and Unwin, Ltd., London 1969), p. 221:
In the modern world, if communities are unhappy, it is often because they have ignorances, habits, beliefs, and passions, which are dearer to them than happiness or even life. I find many men in our dangerous age who seem to be in love with misery and death, and who grow angry when hopes are suggested to them. They think hope is irrational and that, in sitting down to lazy despair, they are merely facing facts.
In QFT, the value of a physical constant, in general, depends on the scale that one chooses as the renormalization point, and it becomes very interesting to examine the renormalization group running of physical constants under changes in the energy scale. The coupling constants in the Standard Model of particle physics vary in different ways with increasing energy scale: the coupling of quantum chromodynamics and the weak isospin coupling of the electroweak force tend to decrease, and the weak hypercharge coupling of the electroweak force tends to increase. At the colossal energy scale of 10^15 GeV (far beyond the reach of our current particle accelerators), they all become approximately the same size (Grotz and Klapdor 1990, p. 254), a major motivation for speculations about grand unified theory. Instead of being only a worrisome problem, renormalization has become an important theoretical tool for studying the behavior of field theories in different regimes.
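The approximate meeting of the couplings can be reproduced with the textbook one-loop formulas (a sketch; the beta coefficients and M_Z input values below are standard one-loop Standard Model numbers adopted here as assumptions, not data from this article):

```python
# One-loop running of the three SM gauge couplings:
#   1/alpha_i(mu) = 1/alpha_i(MZ) - b_i/(2*pi) * ln(mu/MZ)
from math import log, pi

MZ = 91.2  # GeV
alpha_MZ = {"U(1)_Y": 0.0169, "SU(2)_L": 0.0338, "SU(3)_c": 0.118}  # GUT-normalized U(1)
b = {"U(1)_Y": 41 / 10, "SU(2)_L": -19 / 6, "SU(3)_c": -7.0}

def inv_alpha(name, mu):
    return 1.0 / alpha_MZ[name] - b[name] / (2 * pi) * log(mu / MZ)

for mu in [MZ, 1e6, 1e12, 1e15]:
    vals = "  ".join(f"1/alpha_{n} = {inv_alpha(n, mu):5.1f}" for n in alpha_MZ)
    print(f"mu = {mu:8.1e} GeV:  {vals}")
```

Near 10^15 GeV the three inverse couplings come out at roughly 40 to 45, close to each other but not exactly equal, which is one of the quantitative hints behind grand unified (and supersymmetric) model building.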
If a theory featuring renormalization (e.g. QED) can only be sensibly interpreted as an effective field theory, i.e. as an approximation reflecting human ignorance about the workings of nature, then the problem remains of discovering a more accurate theory that does not have these renormalization problems. As Lewis Ryder has put it, "In the Quantum Theory, these [classical] divergences do not disappear; on the contrary, they appear to get worse. And despite the comparative success of renormalisation theory, the feeling remains that there ought to be a more satisfactory way of doing things."
Renormalizability
From this philosophical reassessment, a new concept follows naturally: the notion of renormalizability. Not all theories lend themselves to renormalization in the manner described above, with a finite supply of counterterms and all quantities becoming cutoff-independent at the end of the calculation. If the Lagrangian contains combinations of field operators of high enough dimension in energy units, the counterterms required to cancel all divergences proliferate to infinite number, and, at first glance, the theory would seem to gain an infinite number of free parameters and therefore lose all predictive power, becoming scientifically worthless. Such theories are called nonrenormalizable.
The Standard Model of particle physics contains only renormalizable operators, but the interactions of general relativity become nonrenormalizable operators if one attempts to construct a field theory of quantum gravity in the most straightforward manner (treating the metric in the Einstein–Hilbert Lagrangian as a perturbation about the Minkowski metric), suggesting that perturbation theory is not satisfactory in application to quantum gravity.
However, in an effective field theory, "renormalizability" is, strictly speaking, a misnomer. In nonrenormalizable effective field theory, terms in the Lagrangian do multiply to infinity, but have coefficients suppressed by ever-more-extreme inverse powers of the energy cutoff. If the cutoff is a real, physical quantity—that is, if the theory is only an effective description of physics up to some maximum energy or minimum distance scale—then these additional terms could represent real physical interactions. Assuming that the dimensionless constants in the theory do not get too large, one can group calculations by inverse powers of the cutoff, and extract approximate predictions to finite order in the cutoff that still have a finite number of free parameters. It can even be useful to renormalize these "nonrenormalizable" interactions.
Nonrenormalizable interactions in effective field theories rapidly become weaker as the energy scale becomes much smaller than the cutoff. The classic example is the Fermi theory of the weak nuclear force, a nonrenormalizable effective theory whose cutoff is comparable to the mass of the W particle. This fact may also provide a possible explanation for why almost all of the particle interactions we see are describable by renormalizable theories. It may be that any others that may exist at the GUT or Planck scale simply become too weak to detect in the realm we can observe, with one exception: gravity, whose exceedingly weak interaction is magnified by the presence of the enormous masses of stars and planets.
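A standard worked example of this suppression (textbook electroweak matching, stated here for illustration): integrating out the W boson in muon decay relates Fermi's nonrenormalizable four-fermion coupling to renormalizable parameters of the electroweak theory,

$$\frac{G_F}{\sqrt{2}} = \frac{g^2}{8 M_W^2}, \qquad G_F \approx 1.17 \times 10^{-5}~\text{GeV}^{-2},$$

so the dimension-six contact interaction carries two inverse powers of the matching scale M_W ≈ 80 GeV, and its effects fall off rapidly at energies far below that cutoff.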
Renormalization schemes
In actual calculations, the counterterms introduced to cancel the divergences in Feynman diagram calculations beyond tree level must be fixed using a set of renormalisation conditions. The common renormalization schemes in use include:
Minimal subtraction (MS) scheme and the related modified minimal subtraction (MS-bar) scheme
On-shell scheme
Besides, there exists a "natural" definition of the renormalized coupling (combined with the photon propagator) as a propagator of dual free bosons, which does not explicitly require introducing the counterterms.
In statistical physics
History
A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilatation group of conventional renormalizable theories, came from condensed matter physics. Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. The blocking idea is a way to define the components of the theory at large distances as aggregates of components at shorter distances.
This approach covered the conceptual point and was given full computational substance in the extensive important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1975, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.
Principles
In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables {s_i} and a certain set of coupling constants {J_k}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.

Now we consider a certain blocking transformation of the state variables {s_i} → {s̃_i}; the number of s̃_i must be lower than the number of s_i. Now let us try to rewrite the Z function only in terms of the s̃_i. If this is achievable by a certain change in the parameters, {J_k} → {J̃_k}, then the theory is said to be renormalizable.

Iterating the blocking drives the couplings toward fixed points of the transformation. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points.
Renormalization group fixed points
The most important information in the RG flow is its fixed points. A fixed point is defined by the vanishing of the beta function associated with the flow. Fixed points of the renormalization group are therefore by definition scale invariant. In many cases of physical interest, scale invariance enlarges to conformal invariance. One then has a conformal field theory at the fixed point.
The ability of several theories to flow to the same fixed point leads to universality.
If these fixed points correspond to a free field theory, the theory is said to exhibit quantum triviality. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
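A toy beta function makes the fixed-point structure tangible (an illustrative sketch, not tied to any specific theory in this article; the epsilon value and initial couplings are arbitrary):

```python
# Wilson-Fisher-like one-coupling flow: beta(g) = -eps*g + g**2 vanishes at
# g* = 0 (free, Gaussian) and g* = eps (interacting); couplings between the
# two flow toward g* = eps in the infrared.
eps = 0.5

def beta(g):
    return -eps * g + g**2

def flow_to_ir(g, steps=40000, dt=1e-3):
    for _ in range(steps):
        g -= beta(g) * dt  # dg/d(-ln mu): step toward lower energies
    return g

for g0 in [0.05, 0.20, 0.45]:
    print(f"g(UV) = {g0:.2f}  ->  g(IR) = {flow_to_ir(g0):.4f}   (g* = {eps})")
```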
See also
History of quantum field theory
Quantum triviality
Zeno's paradoxes
Nonoblique correction
References
Further reading
General introduction
DeDeo, Simon; Introduction to Renormalization (2017). Santa Fe Institute Complexity Explorer MOOC. Renormalization from a complex systems point of view, including Markov Chains, Cellular Automata, the real space Ising model, the Krohn-Rhodes Theorem, QED, and rate distortion theory.
Baez, John; Renormalization Made Easy, (2005). A qualitative introduction to the subject.
Blechman, Andrew E.; Renormalization: Our Greatly Misunderstood Friend, (2002). Summary of a lecture; has more information about specific regularization and divergence-subtraction schemes.
Shirkov, Dmitry; Fifty Years of the Renormalization Group, CERN Courier 41(7) (2001). Full text available at: IOP Magazines.
E. Elizalde; Zeta regularization techniques with Applications.
Mainly: quantum field theory
N. N. Bogoliubov, D. V. Shirkov (1959): The Theory of Quantized Fields. New York, Interscience. The first text-book on the renormalization group theory.
Ryder, Lewis H.; Quantum Field Theory (Cambridge University Press, 1985), Highly readable textbook, certainly the best introduction to relativistic Q.F.T. for particle physics.
Zee, Anthony; Quantum Field Theory in a Nutshell, Princeton University Press (2003) . Another excellent textbook on Q.F.T.
Weinberg, Steven; The Quantum Theory of Fields (3 volumes) Cambridge University Press (1995). A monumental treatise on Q.F.T. written by a leading expert, Nobel laureate 1979.
Pokorski, Stefan; Gauge Field Theories, Cambridge University Press (1987) .
't Hooft, Gerard; The Glorious Days of Physics – Renormalization of Gauge theories, lecture given at Erice (August/September 1998) by the Nobel laureate 1999 . Full text available at: hep-th/9812203.
Rivasseau, Vincent; An introduction to renormalization, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) . Full text available in PostScript.
Rivasseau, Vincent; From perturbative to constructive renormalization, Princeton University Press (1991) . Full text available in PostScript and in PDF (draft version).
Iagolnitzer, Daniel & Magnen, J.; Renormalization group analysis, Encyclopaedia of Mathematics, Kluwer Academic Publisher (1996). Full text available in PostScript and pdf here.
Scharf, Günter; Finite quantum electrodynamics: The causal approach, Springer Verlag Berlin Heidelberg New York (1995) .
A. S. Švarc (Albert Schwarz), Математические основы квантовой теории поля, (Mathematical aspects of quantum field theory), Atomizdat, Moscow, 1975. 368 pp.
Mainly: statistical physics
A. N. Vasil'ev; The Field Theoretic Renormalization Group in Critical Behavior Theory and Stochastic Dynamics (Routledge Chapman & Hall 2004);
Nigel Goldenfeld; Lectures on Phase Transitions and the Renormalization Group, Frontiers in Physics 85, Westview Press (June, 1992) . Covering the elementary aspects of the physics of phases transitions and the renormalization group, this popular book emphasizes understanding and clarity rather than technical manipulations.
Zinn-Justin, Jean; Quantum Field Theory and Critical Phenomena, Oxford University Press (4th edition – 2002) . A masterpiece on applications of renormalization methods to the calculation of critical exponents in statistical mechanics, following Wilson's ideas (Kenneth Wilson was Nobel laureate 1982).
Zinn-Justin, Jean; Phase Transitions & Renormalization Group: from Theory to Numbers, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) . Full text available in PostScript .
Domb, Cyril; The Critical Point: A Historical Introduction to the Modern Theory of Critical Phenomena, CRC Press (March, 1996) .
Brown, Laurie M. (Ed.); Renormalization: From Lorentz to Landau (and Beyond), Springer-Verlag (New York-1993) .
Cardy, John; Scaling and Renormalization in Statistical Physics, Cambridge University Press (1996) .
Miscellaneous
Shirkov, Dmitry; The Bogoliubov Renormalization Group, JINR Communication E2-96-15 (1996). Full text available at: hep-th/9602024
Zinn-Justin, Jean; Renormalization and renormalization group: From the discovery of UV divergences to the concept of effective field theories, in: de Witt-Morette C., Zuber J.-B. (eds), Proceedings of the NATO ASI on Quantum Field Theory: Perspective and Prospective, June 15–26, 1998, Les Houches, France, Kluwer Academic Publishers, NATO ASI Series C 530, 375–388 (1999). Full text available in PostScript.
Connes, Alain; Symétries Galoisiennes & Renormalisation, Poincaré Seminar (Paris, Oct. 12, 2002), published in: Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003). French mathematician Alain Connes (Fields Medallist 1982) describes the underlying mathematical structure (the Hopf algebra) of renormalization, and its link to the Riemann–Hilbert problem. Full text (in French) available at .
External links
Quantum field theory
Renormalization group
Mathematical physics | Renormalization | [
"Physics",
"Mathematics"
] | 7,876 | [
"Quantum field theory",
"Physical phenomena",
"Applied mathematics",
"Theoretical physics",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Statistical mechanics",
"Mathematical physics"
] |
291,462 | https://en.wikipedia.org/wiki/Renormalization%20group | In theoretical physics, the renormalization group (RG) is a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales. In particle physics, it reflects the changes in the underlying force laws (codified in a quantum field theory) as the energy scale at which physical processes occur varies, energy/momentum and resolution distance scales being effectively conjugate under the uncertainty principle.
A change in scale is called a scale transformation. The renormalization group is intimately related to scale invariance and conformal invariance, symmetries in which a system appears the same at all scales (self-similarity).
As the scale varies, it is as if one is changing the magnifying power of a notional microscope viewing the system. In so-called renormalizable theories, the system at one scale will generally consist of self-similar copies of itself when viewed at a smaller scale, with different parameters describing the components of the system. The components, or fundamental variables, may relate to atoms, elementary particles, atomic spins, etc. The parameters of the theory typically describe the interactions of the components. These may be variable couplings which measure the strength of various forces, or mass parameters themselves. The components themselves may appear to be composed of more of the self-same components as one goes to shorter distances.
For example, in quantum electrodynamics (QED), an electron appears to be composed of electron and positron pairs and photons, as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than does the dressed electron seen at large distances, and this change, or running, in the value of the electric charge is determined by the renormalization group equation.
History
The idea of scale transformations and scale invariance is old in physics: Scaling arguments were commonplace for the Pythagorean school, Euclid, and up to Galileo. They became popular again at the end of the 19th century, perhaps the first example being the idea of enhanced viscosity of Osborne Reynolds, as a way to explain turbulence.
The renormalization group was initially devised in particle physics, but nowadays its applications extend to solid-state physics, fluid mechanics, physical cosmology, and even nanotechnology. An early article by Ernst Stueckelberg and André Petermann in 1953 anticipates the idea in quantum field theory. Stueckelberg and Petermann opened the field conceptually. They noted that renormalization exhibits a group of transformations which transfer quantities from the bare terms to the counter terms. They introduced a function h(e) in quantum electrodynamics (QED), which is now called the beta function (see below).
Beginnings
Murray Gell-Mann and Francis E. Low restricted the idea to scale transformations in QED in 1954, which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies. They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory. They thus discovered that the coupling parameter g(μ) at the energy scale μ is effectively given by the (one-dimensional translation) group equation

$$g(\mu) = G^{-1}\!\left( \left( \frac{\mu}{M} \right)^{d} G\big(g(M)\big) \right),$$

or equivalently, G(g(μ)) = (μ/M)^d G(g(M)), for some function G (unspecified; nowadays called Wegner's scaling function) and a constant d, in terms of the coupling g(M) at a reference scale M.
Gell-Mann and Low realized in these results that the effective scale can be arbitrarily taken as μ, and can vary to define the theory at any other scale:
The gist of the RG is this group property: as the scale μ varies, the theory presents a self-similar replica of itself, and any scale can be accessed similarly from any other scale, by group action, a formal transitive conjugacy of couplings in the mathematical sense (Schröder's equation).
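The transitive conjugacy can be checked in one line from the finite group equation above (a standard manipulation, added here for clarity): writing the equation at two scales μ₁ and μ₂ and eliminating the common reference value G(g(M)) gives

$$G\big(g(\mu_2)\big) = \left( \frac{\mu_2}{\mu_1} \right)^{d} G\big(g(\mu_1)\big),$$

so the reference scale M drops out entirely and any scale can serve as the reference for any other.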
On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function of the coupling parameter g, which they introduced. Like the function h(e) of Stueckelberg and Petermann, their function determines the differential change of the coupling g(μ) with respect to a small change in energy scale μ through a differential equation, the renormalization group equation:

$$\frac{\partial g}{\partial \ln \mu} = \psi(g) = \beta(g).$$

The modern name is also indicated, the beta function, introduced by C. Callan and K. Symanzik in 1970. Since it is a mere function of g, integration in g of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function G in this perturbative approximation. The renormalization group prediction (cf. Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the LEP accelerator experiments: the fine structure "constant" of QED was measured to be about 1/127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1/137.
Deeper understanding
The renormalization group emerges from the renormalization of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory. This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman, Julian Schwinger and Shin'ichirō Tomonaga, who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator, Λ.
The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured, and, as a result, all observable quantities end up being finite instead, even for an infinite Λ. Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in g is provided by the above RG equation given ψ(g), the self-similarity is expressed by the fact that ψ(g) depends explicitly only upon the parameter(s) of the theory, and not upon the scale μ. Consequently, the above renormalization group equation may be solved for (G and thus) g(μ).
A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional renormalizable theories, considers methods where widely different scales of lengths appear simultaneously. It came from condensed matter physics: Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. The "blocking idea" is a way to define the components of the theory at large distances as aggregates of components at shorter distances.
This approach covered the conceptual point and was given full computational substance in the extensive important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1975, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.
Reformulation
Meanwhile, the RG in particle physics had been reformulated in more practical terms by Callan and Symanzik in 1970. The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilation) symmetry in a field theory. Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the Standard Model.
In 1973, it was discovered that a theory of interacting colored quarks, called quantum chromodynamics, had a negative beta function. This means that an initial high-energy value of the coupling will eventuate a special value of μ at which the coupling blows up (diverges). This special value is the scale of the strong interactions, μ = Λ_QCD, and occurs at about 200 MeV. Conversely, the coupling becomes weak at very high energies (asymptotic freedom), and the quarks become observable as point-like particles, in deep inelastic scattering, as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.
Momentum space RG also became a highly developed tool in solid state physics, but was hindered by the extensive use of perturbation theory, which prevented the theory from succeeding in strongly correlated systems.
Conformal symmetry
Conformal symmetry is associated with the vanishing of the beta function. This can occur naturally if a coupling constant is attracted, by running, toward a fixed point at which β(g) = 0. In QCD, the fixed point occurs at short distances where g → 0 and is called a (trivial) ultraviolet fixed point. For heavy quarks, such as the top quark, the coupling to the mass-giving Higgs boson runs toward a fixed non-zero (non-trivial) infrared fixed point, first predicted by Pendleton and Ross (1981), and C. T. Hill.
The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons.
In string theory, conformal invariance of the string world-sheet is a fundamental symmetry: β = 0 is a requirement. Here, β is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of general relativity on the geometry. The RG is of fundamental importance to string theory and theories of grand unification.
It is also the modern key idea underlying critical phenomena in condensed matter physics. Indeed, the RG has become one of the most important tools of modern physics. It is often used in combination with the Monte Carlo method.
Block spin
This section introduces pedagogically a picture of RG which may be easiest to grasp: the block spin RG, devised by Leo P. Kadanoff in 1966.
Consider a 2D solid, a set of atoms in a perfect square array.
Assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature T. The strength of their interaction is quantified by a certain coupling J. The physics of the system will be described by a certain formula, say the Hamiltonian H(T, J).
Now proceed to divide the solid into blocks of 2×2 squares; we attempt to describe the system in terms of block variables, i.e., variables which describe the average behavior of the block. Further assume that, by some lucky coincidence, the physics of block variables is described by a formula of the same kind, but with different values for T and J: H(T′, J′). (This isn't exactly true, in general, but it is often a good first approximation.)
Perhaps the initial problem was too hard to solve, since there were too many atoms. Now, in the renormalized problem we have only one fourth of them. But why stop now? Another iteration of the same kind leads to H(T″, J″), and only one sixteenth of the atoms. We are increasing the observation scale with each RG step.
Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the long-range behaviour of the RG transformation which took (T, J) → (T′, J′) and (T′, J′) → (T″, J″). Often, when iterated many times, this RG transformation leads to a certain number of fixed points.
To be more concrete, consider a magnetic system (e.g., the Ising model), in which the coupling J denotes the trend of neighbour spins to be aligned. The configuration of the system is the result of the tradeoff between the ordering term J and the disordering effect of temperature.
For many models of this kind there are three fixed points:
T = 0 and J → ∞. This means that, at the largest size, temperature becomes unimportant, i.e., the disordering factor vanishes. Thus, in large scales, the system appears to be ordered. We are in a ferromagnetic phase.
T → ∞ and J → 0. Exactly the opposite; here, temperature dominates, and the system is disordered at large scales.
A nontrivial point between them, T = Tc and J = Jc. At this point, changing the scale does not change the physics, because the system is in a fractal state. It corresponds to the Curie phase transition, and is also called a critical point.
So, if we are given a certain material with given values of T and J, all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair (T, J) until we find the corresponding fixed point.
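The one-dimensional Ising chain provides the simplest concrete instance of such an iteration: summing out every other spin yields the exact recursion tanh K′ = tanh²K for the reduced coupling K = J/kT. The sketch below iterates it; the starting value is arbitrary. (In one dimension only the trivial fixed points K = 0 and K → ∞ exist, so for any finite temperature the flow ends at the disordered one.)

```python
import math

def decimate(K):
    """One exact RG step for the 1D Ising chain: summing out every
    other spin gives tanh(K') = tanh(K)**2, with K = J/kT."""
    return math.atanh(math.tanh(K) ** 2)

K = 1.5   # an arbitrary starting coupling
for step in range(8):
    print(f"step {step}: K = {K:.6f}")
    K = decimate(K)
# K flows toward the trivial fixed point K = 0: at large scales the
# 1D chain is disordered at any finite temperature.
```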
Elementary theory
In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables {s_i} and a certain set of coupling constants {J_k}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.
Now we consider a certain blocking transformation of the state variables {s_i} → {S_i}; the number of S_i must be lower than the number of s_i. Now let us try to rewrite the function Z only in terms of the S_i. If this is achievable by a certain change in the parameters, {J_k} → {J′_k}, then the theory is said to be renormalizable.
Most fundamental theories of physics such as quantum electrodynamics, quantum chromodynamics and electro-weak interaction, but not gravity, are exactly renormalizable. Also, most theories in condensed matter physics are approximately renormalizable, from superconductivity to fluid turbulence.
The change in the parameters is implemented by a certain beta function, {J′_k} = β({J_k}), which is said to induce a renormalization group flow (or RG flow) on the J-space. The values of J under the flow are called running couplings.
As was stated in the previous section, the most important information in the RG flow are its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to exhibit quantum triviality, possessing what is called a Landau pole, as in quantum electrodynamics. For a φ⁴ interaction, Michael Aizenman proved that this theory is indeed trivial for space-time dimension D ≥ 5. For D = 4, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for this. This fact is important as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass in asymptotic safety scenarios. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
Since the RG transformations in such systems are lossy (i.e., the number of variables decreases; see, as an example in a different context, lossy data compression), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a semigroup, as lossiness implies that there is no unique inverse for each element.
Relevant and irrelevant operators and universality classes
Consider a certain observable of a physical system undergoing an RG transformation. The magnitude of the observable as the length scale of the system goes from small to large determines the importance of the observable(s) for the scaling law:
A relevant observable is needed to describe the macroscopic behaviour of the system; irrelevant observables are not needed. Marginal observables may or may not need to be taken into account. A remarkably broad fact is that most observables are irrelevant, i.e., the macroscopic physics is dominated by only a few observables in most systems.
As an example, in microscopic physics, to describe a system consisting of a mole of carbon-12 atoms we need of the order of 10²³ (the Avogadro number) variables, while to describe it as a macroscopic system (12 grams of carbon-12) we only need a few.
Before Wilson's RG approach, there was an astonishing empirical fact to explain: The coincidence of the critical exponents (i.e., the exponents of the reduced-temperature dependence of several quantities near a second order phase transition) in very disparate phenomena, such as magnetic systems, superfluid transition (Lambda transition), alloy physics, etc. So in general, thermodynamic features of a system near a phase transition depend only on a small number of variables, such as the dimensionality and symmetry, but are insensitive to details of the underlying microscopic properties of the system.
This coincidence of critical exponents for ostensibly quite different physical systems, called universality, is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by irrelevant observables, while the relevant observables are shared in common. Hence many macroscopic phenomena may be grouped into a small set of universality classes, specified by the shared sets of relevant observables.
Momentum space
Renormalization groups, in practice, come in two main "flavors". The Kadanoff picture explained above refers mainly to the so-called real-space RG.
Momentum-space RG, on the other hand, has a longer history despite its relative subtlety. It can be used for systems where the degrees of freedom can be cast in terms of the Fourier modes of a given field. The RG transformation proceeds by integrating out a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short-length scales, the momentum-space RG results in an essentially analogous coarse-graining effect to that of real-space RG.
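As a toy illustration of the mode-elimination step, the snippet below separates a one-dimensional field into its low- and high-wavenumber Fourier components and keeps only the former. A genuine momentum-space RG step would integrate the discarded shell out of the functional integral, generating corrections to the couplings; here only the coarse-graining of the field itself is shown, with arbitrary cutoff and field choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
phi = rng.normal(size=N)          # a random 1-D field configuration

k = np.fft.fftfreq(N)             # mode frequencies, in cycles/sample
phi_k = np.fft.fft(phi)

cutoff = 0.125                    # keep only the long-wavelength modes
phi_k_low = np.where(np.abs(k) < cutoff, phi_k, 0.0)
phi_coarse = np.fft.ifft(phi_k_low).real

# phi_coarse retains the structure of phi on large scales only; the
# short-distance (high-wavenumber) modes have been projected out.
print(phi.std(), phi_coarse.std())
```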
Momentum-space RG is usually performed on a perturbation expansion. The validity of such an expansion is predicated upon the actual physics of a system being close to that of a free field system. In this case, one may calculate observables by summing the leading terms in the expansion.
This approach has proved successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.
As an example of the physical meaning of RG in particle physics, consider an overview of charge renormalization in quantum electrodynamics (QED). Suppose we have a point positive charge of a certain true (or bare) magnitude. The electromagnetic field around it has a certain energy, and thus may produce some virtual electron-positron pairs (for example). Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled. Since this happens uniformly everywhere near the point charge, where its electric field is sufficiently strong, these pairs effectively create a screen around the charge when viewed from far away. The measured strength of the charge will depend on how close our measuring probe can approach the point charge, bypassing more of the screen of virtual particles the closer it gets. Hence the strength of a certain coupling constant (here, the electric charge) acquires a dependence on the distance scale.
Momentum and length scales are related inversely, according to the de Broglie relation: The higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve. Therefore, the momentum-space RG practitioners sometimes claim to integrate out high momenta or high energy from their theories.
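The inverse relation between energy (or momentum) and length can be made concrete with the conversion constant ħc ≈ 197.3 MeV·fm; a small sketch:

```python
hbar_c = 197.327  # hbar * c in MeV*fm (approximately)

def length_scale_fm(energy_MeV):
    """Length scale (in femtometres) probed at a given energy scale."""
    return hbar_c / energy_MeV

# 200 MeV, the QCD scale mentioned earlier, corresponds to about 1 fm,
# roughly the size of a hadron; the Z mass probes far shorter distances.
for E in (1.0, 200.0, 91_200.0):
    print(f"E = {E:9.1f} MeV -> ~{length_scale_fm(E):.4g} fm")
```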
Exact renormalization group equations
An exact renormalization group equation (ERGE) is one that takes irrelevant couplings into account. There are several formulations.
The Wilson ERGE is the simplest conceptually, but is practically impossible to implement. Fourier transform into momentum space after Wick rotating into Euclidean space. Insist upon a hard momentum cutoff, so that the only degrees of freedom are those with momenta less than Λ. The partition function is Z = ∫ Dφ exp(−SΛ[φ]), where the functional integral runs over field configurations with momentum support p ≤ Λ.
For any positive Λ′ less than Λ, define SΛ′ (a functional over field configurations whose Fourier transform has momentum support within p ≤ Λ′) as exp(−SΛ′[φ]) = ∫ [Dφ]_{Λ′ ≤ p ≤ Λ} exp(−SΛ[φ]), where the integration is carried out only over the Fourier modes of the field with momenta between Λ′ and Λ.
If SΛ depends only on φ and not on derivatives of φ, this may be rewritten as exp(−SΛ′[φ]) = ∫ Dφ_H exp(−SΛ[φ + φ_H]), with φ_H running over field configurations whose momentum support lies between Λ′ and Λ,
in which it becomes clear that, since only functions with support between Λ′ and Λ are integrated over, the left hand side may still depend on φ with support outside that range. Obviously, Z = ∫ Dφ exp(−SΛ′[φ]), with the integral now over modes with p ≤ Λ′.
In fact, this transformation is transitive. If you compute SΛ′ from SΛ and then compute SΛ″ from SΛ′, this gives you the same Wilsonian action as computing SΛ″ directly from SΛ.
The Polchinski ERGE involves a smooth UV regulator cutoff. Basically, the idea is an improvement over the Wilson ERGE: instead of a sharp momentum cutoff, it uses a smooth cutoff, heavily suppressing contributions from momenta greater than Λ. The smoothness of the cutoff allows us to derive a functional differential equation in the cutoff scale Λ. As in Wilson's approach, we have a different action functional for each cutoff energy scale Λ. Each of these actions is supposed to describe exactly the same model, which means that their partition functionals have to match exactly.
In other words (for a real scalar field; generalizations to other fields are obvious), ZΛ[J] = ∫ Dφ exp(J⋅φ − ½ φ⋅RΛ⋅φ − Sint Λ[φ]),
and ZΛ is really independent of Λ! We have used the condensed deWitt notation here. We have also split the bare action SΛ into a quadratic kinetic part and an interacting part Sint Λ. This split most certainly isn't clean. The "interacting" part can very well also contain quadratic kinetic terms. In fact, if there is any wave function renormalization, it most certainly will. This can be somewhat reduced by introducing field rescalings. RΛ is a function of the momentum p, and the second term in the exponent is ½ ∫ dᵈp/(2π)ᵈ φ̃*(p) RΛ(p) p² φ̃(p) when expanded, where φ̃ denotes the Fourier transform of φ.
When p ≪ Λ, RΛ(p) is essentially 1. When p ≫ Λ, RΛ(p) becomes very large and approaches infinity. RΛ(p) is always greater than or equal to 1 and is smooth. Basically, this leaves the fluctuations with momenta less than the cutoff unaffected but heavily suppresses contributions from fluctuations with momenta greater than the cutoff. This is obviously a huge improvement over Wilson's sharp cutoff.
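Many profiles satisfy these requirements; for concreteness, here is one hypothetical smooth choice (an exponential form, assumed purely for illustration) together with a check of its limiting behaviour:

```python
import math

def R(p, Lam):
    """One smooth regulator profile with the stated properties:
    R ~ 1 for p << Lam, R -> infinity for p >> Lam, R >= 1 everywhere.
    The exponential form is an illustrative choice, not the unique one."""
    return math.exp(p ** 2 / Lam ** 2)

Lam = 1.0
for p in (0.01, 0.1, 1.0, 3.0, 5.0):
    print(f"p = {p:5.2f}: R = {R(p, Lam):.4g}")
```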
The condition that ZΛ be independent of Λ, d ZΛ/dΛ = 0, can be satisfied by (but not only by) Polchinski's equation, which schematically reads dSint Λ/dΛ = ½ (δSint Λ/δφ) ⋅ (dRΛ⁻¹/dΛ) ⋅ (δSint Λ/δφ) − ½ Tr[(dRΛ⁻¹/dΛ) ⋅ δ²Sint Λ/δφδφ].
Jacques Distler claimed without proof that this ERGE is not correct nonperturbatively.
The effective average action ERGE involves a smooth IR regulator cutoff.
The idea is to take all fluctuations right up to an IR scale k into account. The effective average action will be accurate for fluctuations with momenta larger than k. As the parameter k is lowered, the effective average action approaches the effective action which includes all quantum and classical fluctuations. In contrast, for large k the effective average action is close to the "bare action". So, the effective average action interpolates between the "bare action" and the effective action.
For a real scalar field, one adds an IR cutoff term ½ φ⋅Rk⋅φ to the action S, where Rk is a function of both p and k such that for p ≫ k, Rk(p) is very tiny and approaches 0, and for p ≪ k, Rk(p) is large. Rk is both smooth and nonnegative. Its large value for small momenta leads to a suppression of their contribution to the partition function, which is effectively the same thing as neglecting large-scale fluctuations.
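A standard concrete choice with these properties is the exponential regulator Rk(p) = p² / (exp(p²/k²) − 1); the sketch below evaluates it. The specific profile is one common option in the literature, not mandated by the formalism.

```python
import math

def R_k(p, k):
    """Exponential IR regulator, one common choice: about k**2 for
    p << k, exponentially small for p >> k, smooth and nonnegative."""
    x = p * p / (k * k)
    if x < 1e-12:   # limit p -> 0 of p**2 / (exp(p**2/k**2) - 1) is k**2
        return k * k
    return p * p / math.expm1(x)

k = 1.0
for p in (0.0, 0.3, 1.0, 3.0, 6.0):
    print(f"p = {p:3.1f}: R_k = {R_k(p, k):.4g}")
```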
One can use the condensed deWitt notation ½ φ⋅Rk⋅φ for this IR regulator.
So, exp(Wk[J]) = ∫ Dφ exp(−S[φ] − ½ φ⋅Rk⋅φ + J⋅φ), where J is the source field. The Legendre transform of Wk ordinarily gives the effective action. However, the action that we started off with is really S[φ] + 1/2 φ⋅Rk⋅φ and so, to get the effective average action, we subtract off 1/2 φ⋅Rk⋅φ. In other words,
the relation φ[J; k] = δWk/δJ can be inverted to give Jk[φ], and we define the effective average action Γk as Γk[φ] ≡ −Wk[Jk[φ]] + Jk[φ]⋅φ − ½ φ⋅Rk⋅φ.
Hence, differentiating Γk with respect to k and using the definitions above, one arrives at the flow equation ∂Γk/∂k = ½ Tr[(Γk^(2) + Rk)⁻¹ ∂Rk/∂k], where Γk^(2) denotes the second functional derivative of Γk with respect to φ; this is the ERGE, which is also known as the Wetterich equation. As shown by Morris, the effective action Γk is in fact simply related to Polchinski's effective action Sint via a Legendre transform relation.
As there are infinitely many choices of the regulator Rk, there are also infinitely many different interpolating ERGEs.
Generalization to other fields like spinorial fields is straightforward.
Although the Polchinski ERGE and the effective average action ERGE look similar, they are based upon very different philosophies. In the effective average action ERGE, the bare action is left unchanged (and the UV cutoff scale—if there is one—is also left unchanged) but the IR contributions to the effective action are suppressed whereas in the Polchinski ERGE, the QFT is fixed once and for all but the "bare action" is varied at different energy scales to reproduce the prespecified model. Polchinski's version is certainly much closer to Wilson's idea in spirit. Note that one uses "bare actions" whereas the other uses effective (average) actions.
Renormalization group improvement of the effective potential
The renormalization group can also be used to compute effective potentials at orders higher than 1-loop. This kind of approach is particularly interesting to compute corrections to the Coleman–Weinberg mechanism. To do so, one must write the renormalization group equation in terms of the effective potential. For the case of the φ⁴ model, it takes the form (μ ∂/∂μ + β(λ) ∂/∂λ − γ φ ∂/∂φ) Veff(φ) = 0, where β is the beta function of the quartic coupling λ and γ is the anomalous dimension of the field.
In order to determine the effective potential, it is useful to write Veff as Veff = φ⁴ S(λ, L), where S is a power series in L = ln(φ²/μ²): S(λ, L) = A(λ) + B(λ)L + C(λ)L² + ⋯
Using the above ansatz, it is possible to solve the renormalization group equation perturbatively and find the effective potential up to the desired order. A pedagogical explanation of this technique is given in the references.
See also
Quantum triviality
Scale invariance
Schröder's equation
Regularization (physics)
Density matrix renormalization group
Functional renormalization group
Critical phenomena
Universality (dynamical systems)
C-theorem
History of quantum field theory
Top quark
Asymptotic safety
Remarks
Citations
References
Historical references
Pedagogical and historical reviews
The most successful variational RG method.
A mathematical introduction and historical overview with a stress on group theory and the application in high-energy physics.
A pedestrian introduction to renormalization and the renormalization group.
A pedestrian introduction to the renormalization group as applied in condensed matter physics.
Books
T. D. Lee; Particle physics and introduction to field theory, Harwood Academic Publishers, 1981. Contains a concise, simple, and trenchant summary of the group structure, in whose discovery he was also involved, as acknowledged in Gell-Mann and Low's paper.
L. Ts. Adzhemyan, N. V. Antonov and A. N. Vasiliev; The Field Theoretic Renormalization Group in Fully Developed Turbulence; Gordon and Breach, 1999. .
Vasil'ev, A. N.; The field theoretic renormalization group in critical behavior theory and stochastic dynamics; Chapman & Hall/CRC, 2004. (Self-contained treatment of renormalization group applications with complete computations);
Zinn-Justin, Jean (2002). Quantum field theory and critical phenomena, Oxford, Clarendon Press (2002), (an exceptionally solid and thorough treatise on both topics);
Zinn-Justin, Jean: Renormalization and renormalization group: From the discovery of UV divergences to the concept of effective field theories, in: de Witt-Morette C., Zuber J.-B. (eds), Proceedings of the NATO ASI on Quantum Field Theory: Perspective and Prospective, June 15–26, 1998, Les Houches, France, Kluwer Academic Publishers, NATO ASI Series C 530, 375-388 (1999) [ISBN ]. Full text available in PostScript.
Kleinert, H. and Schulte-Frohlinde, V.; Critical Properties of φ⁴-Theories, World Scientific (Singapore, 2001); Paperback. Full text available in PDF.
Quantum field theory
Statistical mechanics
Scaling symmetries
Mathematical physics | Renormalization group | [
"Physics",
"Mathematics"
] | 5,904 | [
"Scaling symmetries",
"Quantum field theory",
"Physical phenomena",
"Applied mathematics",
"Theoretical physics",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Statistical mechanics",
"Mathematical physics",
"Symmetry"
] |
291,499 | https://en.wikipedia.org/wiki/Single-photon%20emission%20computed%20tomography | Single-photon emission computed tomography (SPECT, or less commonly, SPET) is a nuclear medicine tomographic imaging technique using gamma rays. It is very similar to conventional nuclear medicine planar imaging using a gamma camera (that is, scintigraphy), but is able to provide true 3D information. This information is typically presented as cross-sectional slices through the patient, but can be freely reformatted or manipulated as required.
The technique needs delivery of a gamma-emitting radioisotope (a radionuclide) into the patient, normally through injection into the bloodstream. On occasion, the radioisotope is a simple soluble dissolved ion, such as an isotope of gallium(III). Usually, though, a marker radioisotope is attached to a specific ligand to create a radioligand, whose properties bind it to certain types of tissues. This marriage allows the combination of ligand and radioisotope (the radiopharmaceutical) to be carried and bound to a place of interest in the body, where the ligand concentration can then be seen by a gamma camera.
Principles
Instead of just "taking a picture of anatomical structures", a SPECT scan monitors the level of biological activity at each place in the 3-D region analyzed. Emissions from the radionuclide indicate amounts of blood flow in the capillaries of the imaged regions. In the same way that a plain X-ray is a 2-dimensional (2-D) view of a 3-dimensional structure, the image obtained by a gamma camera is a 2-D view of the 3-D distribution of a radionuclide.
SPECT imaging is performed by using a gamma camera to acquire multiple 2-D images (also called projections), from multiple angles. A computer is then used to apply a tomographic reconstruction algorithm to the multiple projections, yielding a 3-D data set. This data set may then be manipulated to show thin slices along any chosen axis of the body, similar to those obtained from other tomographic techniques, such as magnetic resonance imaging (MRI), X-ray computed tomography (X-ray CT), and positron emission tomography (PET).
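The projection-and-reconstruction step can be demonstrated with generic tomography tools. The sketch below uses scikit-image's parallel-beam radon/iradon transforms on a standard test phantom; it illustrates filtered back projection only and is not a clinical SPECT pipeline, which must additionally model attenuation, scatter, and collimator response.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Simulate the acquisition: 1-D projections of a 2-D slice taken at
# many angles (parallel-beam geometry; a SPECT camera gathers the
# analogous projections as it rotates around the patient).
image = rescale(shepp_logan_phantom(), 0.5)
theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(image, theta=theta)

# Filtered back projection turns the stack of projections back into
# a cross-sectional slice.
reconstruction = iradon(sinogram, theta=theta)
print(f"RMS error: {np.sqrt(np.mean((reconstruction - image) ** 2)):.4f}")
```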
SPECT is similar to PET in its use of radioactive tracer material and detection of gamma rays. In contrast with PET, the tracers used in SPECT emit gamma radiation that is measured directly, whereas PET tracers emit positrons that annihilate with electrons up to a few millimeters away, causing two gamma photons to be emitted in opposite directions. A PET scanner detects these emissions "coincident" in time, which provides more radiation event localization information and, thus, higher spatial resolution images than SPECT (which has about 1 cm resolution). SPECT scans are significantly less expensive than PET scans, in part because they are able to use longer-lived and more easily obtained radioisotopes than PET.
Because SPECT acquisition is very similar to planar gamma camera imaging, the same radiopharmaceuticals may be used. If a patient is examined in another type of nuclear medicine scan, but the images are non-diagnostic, it may be possible to proceed straight to SPECT by moving the patient to a SPECT instrument, or even by simply reconfiguring the camera for SPECT image acquisition while the patient remains on the table.
To acquire SPECT images, the gamma camera is rotated around the patient. Projections are acquired at defined points during the rotation, typically every 3–6 degrees. In most cases, a full 360-degree rotation is used to obtain an optimal reconstruction. The time taken to obtain each projection is also variable, but 15–20 seconds is typical. This gives a total scan time of 15–20 minutes.
Multi-headed gamma cameras can accelerate acquisition. For example, a dual-headed camera can be used with heads spaced 180 degrees apart, allowing two projections to be acquired simultaneously, with each head requiring 180 degrees of rotation. Triple-head cameras with 120-degree spacing are also used.
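The acquisition-time figures above follow from simple arithmetic, which the hypothetical helper below makes explicit (idealized: real scanners add rotation and setup overhead):

```python
def scan_time_minutes(step_deg, seconds_per_projection, arc_deg=360, heads=1):
    """Rough acquisition time: number of projections times dwell time,
    shared evenly across detector heads."""
    n_projections = arc_deg / step_deg
    return n_projections * seconds_per_projection / heads / 60.0

print(scan_time_minutes(6, 20))           # single head: 20.0 minutes
print(scan_time_minutes(6, 20, heads=2))  # dual head:   10.0 minutes
print(scan_time_minutes(3, 15, heads=3))  # triple head: 10.0 minutes
```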
Cardiac gated acquisitions are possible with SPECT, just as with planar imaging techniques such as multi gated acquisition scan (MUGA). Triggered by electrocardiogram (EKG) to obtain differential information about the heart in various parts of its cycle, gated myocardial SPECT can be used to obtain quantitative information about myocardial perfusion, thickness, and contractility of the myocardium during various parts of the cardiac cycle, and also to allow calculation of left ventricular ejection fraction, stroke volume, and cardiac output.
Application
SPECT can be used to complement any gamma imaging study, where a true 3D representation can be helpful, such as tumor imaging, infection (leukocyte) imaging, thyroid imaging or bone scintigraphy.
Because SPECT permits accurate localisation in 3D space, it can be used to provide information about localised function in internal organs, such as functional cardiac or brain imaging.
Myocardial perfusion imaging
Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis of ischemic heart disease. The underlying principle is that under conditions of stress, diseased myocardium receives less blood flow than normal myocardium. MPI is one of several types of cardiac stress test.
A cardiac specific radiopharmaceutical is administered, e.g., 99mTc-tetrofosmin (Myoview, GE healthcare), 99mTc-sestamibi (Cardiolite, Bristol-Myers Squibb) or Thallium-201 chloride. Following this, the heart rate is raised to induce myocardial stress, either by exercise on a treadmill or pharmacologically with adenosine, dobutamine, or dipyridamole (aminophylline can be used to reverse the effects of dipyridamole).
SPECT imaging performed after stress reveals the distribution of the radiopharmaceutical, and therefore the relative blood flow to the different regions of the myocardium. Diagnosis is made by comparing stress images to a further set of images obtained at rest which are normally acquired prior to the stress images.
MPI has been demonstrated to have an overall accuracy of about 83% (sensitivity: 85%; specificity: 72%) (in a review, not exclusively of SPECT MPI), and is comparable with (or better than) other non-invasive tests for ischemic heart disease.
Functional brain imaging
Usually, the gamma-emitting tracer used in functional brain imaging is Technetium (99mTc) exametazime. 99mTc is a metastable nuclear isomer that emits gamma rays detectable by a gamma camera. Attaching it to exametazime allows it to be taken up by brain tissue in a manner proportional to brain blood flow, in turn allowing cerebral blood flow to be assessed with the nuclear gamma camera.
Because blood flow in the brain is tightly coupled to local brain metabolism and energy use, the 99mTc-exametazime tracer (as well as the similar 99mTc-EC tracer) is used to assess brain metabolism regionally, in an attempt to diagnose and differentiate the different causal pathologies of dementia. Meta-analysis of many reported studies suggests that SPECT with this tracer is about 74% sensitive at diagnosing Alzheimer's disease vs. 81% sensitivity for clinical exam (cognitive testing, etc.). More recent studies have shown the accuracy of SPECT in Alzheimer's diagnosis may be as high as 88%. In meta analysis, SPECT was superior to clinical exam and clinical criteria (91% vs. 70%) in being able to differentiate Alzheimer's disease from vascular dementias. This latter ability relates to SPECT's imaging of local metabolism of the brain, in which the patchy loss of cortical metabolism seen in multiple strokes differs clearly from the more even or "smooth" loss of non-occipital cortical brain function typical of Alzheimer's disease. Another recent review article showed that multi-headed SPECT cameras with quantitative analysis result in an overall sensitivity of 84-89% and an overall specificity of 83-89% in cross sectional studies and sensitivity of 82-96% and specificity of 83-89% for longitudinal studies of dementia.
99mTc-exametazime SPECT scanning competes with fludeoxyglucose (FDG) PET scanning of the brain, which works to assess regional brain glucose metabolism, to provide very similar information about local brain damage from many processes. SPECT is more widely available, because the radioisotope used is longer-lasting and far less expensive in SPECT, and the gamma scanning equipment is less expensive as well. While 99mTc is extracted from relatively simple technetium-99m generators, which are delivered to hospitals and scanning centers weekly to supply fresh radioisotope, FDG PET relies on FDG, which is made in an expensive medical cyclotron and "hot-lab" (automated chemistry lab for radiopharmaceutical manufacture), and then delivered immediately to scanning sites because of the short 110-minute half-life of fluorine-18.
Applications in nuclear technology
In the nuclear power sector, the SPECT technique can be applied to image radioisotope distributions in irradiated nuclear fuels. Due to the irradiation of nuclear fuel (e.g. uranium) with neutrons in a nuclear reactor, a wide array of gamma-emitting radionuclides are naturally produced in the fuel, such as fission products (cesium-137, barium-140 and europium-154) and activation products (chromium-51 and cobalt-58). These may be imaged using SPECT in order to verify the presence of fuel rods in a stored fuel assembly for IAEA safeguards purposes, to validate predictions of core simulation codes, or to study the behavior of the nuclear fuel in normal operation, or in accident scenarios.
Reconstruction
Reconstructed images typically have resolutions of 64×64 or 128×128 pixels, with the pixel sizes ranging from 3–6 mm. The number of projections acquired is chosen to be approximately equal to the width of the resulting images. In general, the reconstructed images will have lower resolution and increased noise compared to planar images, and will be susceptible to artifacts.
Scanning is time-consuming, and it is essential that there is no patient movement during the scan time. Movement can cause significant degradation of the reconstructed images, although movement compensation reconstruction techniques can help with this. A highly uneven distribution of radiopharmaceutical also has the potential to cause artifacts. A very intense area of activity (e.g., the bladder) can cause extensive streaking of the images and obscure neighboring areas of activity. This is a limitation of the filtered back projection reconstruction algorithm. Iterative reconstruction is an alternative algorithm that is growing in importance, as it is less sensitive to artifacts and can also correct for attenuation and depth dependent blurring. Furthermore, iterative algorithms can be made more efficacious using the Superiorization methodology.
Attenuation of the gamma rays within the patient can lead to significant underestimation of activity in deep tissues, compared to superficial tissues. Approximate correction is possible, based on relative position of the activity, and optimal correction is obtained with measured attenuation values. Modern SPECT equipment is available with an integrated X-ray CT scanner. As X-ray CT images are an attenuation map of the tissues, this data can be incorporated into the SPECT reconstruction to correct for attenuation. It also provides a precisely registered CT image, which can provide additional anatomical information.
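The size of the effect can be estimated with the exponential attenuation law I = I₀ exp(−μd). The coefficient below is an approximate textbook value for soft tissue at the 140 keV photon energy of 99mTc, used here only for illustration:

```python
import math

MU_SOFT_TISSUE = 0.15  # approx. linear attenuation coefficient (1/cm)
                       # of soft tissue for 140 keV gamma photons

def surviving_fraction(depth_cm, mu=MU_SOFT_TISSUE):
    """Narrow-beam estimate of the fraction of photons emitted at a
    given depth that escape the body unattenuated: exp(-mu * depth)."""
    return math.exp(-mu * depth_cm)

for d in (1, 5, 10, 15):
    print(f"source depth {d:2d} cm: {surviving_fraction(d):.2f} escapes")
```

At 10 cm depth only about a fifth of the photons escape, which is why uncorrected deep activity is so strongly underestimated.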
Scatter of the gamma rays as well as the random nature of gamma rays can also lead to the degradation of quality of SPECT images and cause loss of resolution. Scatter correction and resolution recovery are also applied to improve resolution of SPECT images.
Typical SPECT acquisition protocols
SPECT/CT
In some cases a SPECT gamma scanner may be built to operate with a conventional CT scanner, with coregistration of images. As in PET/CT, this allows location of tumors or tissues which may be seen on SPECT scintigraphy, but are difficult to locate precisely with regard to other anatomical structures. Such scans are most useful for tissues outside the brain, where location of tissues may be far more variable. For example, SPECT/CT may be used in sestamibi parathyroid scan applications, where the technique is useful in locating ectopic parathyroid adenomas which may not be in their usual locations in the thyroid gland.
Quality control
The overall performance of SPECT systems can be assessed with quality control tools such as the Jaszczak phantom.
See also
Daniel Amen, psychiatrist who uses SPECT for diagnoses
Functional neuroimaging
Gamma camera
Magnetic resonance imaging
Neuroimaging
Positron emission tomography
ISAS (Ictal-Interictal SPECT Analysis by SPM)
References
Further reading
Bruyant, P. P. (2002). "Analytic and iterative reconstruction algorithms in SPECT". Journal of Nuclear Medicine 43(10):1343-1358.
Elhendy et al., "Dobutamine Stress Myocardial Perfusion Imaging in Coronary Artery Disease", J Nucl Med 2002 43: 1634–1646.
Jones / Hogg / Seeram (2013). Practical SPECT/CT in Nuclear Medicine.
Willowson K, Bailey DL, Baldock C, 2008. "Quantitative SPECT reconstruction using CT-derived corrections". Phys. Med. Biol. 53 3099–3112.
External links
Human Health Campus, The official website of the International Atomic Energy Agency dedicated to Professionals in Radiation Medicine. This site is managed by the Division of Human Health, Department of Nuclear Sciences and Applications
National Isotope Development Center Reference information on radioisotopes including those for SPECT; coordination and management of isotope production, availability, and distribution
Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program for isotope production and production research and development
3D nuclear medical imaging
Radiobiology
Neuroimaging
Medical physics
Articles containing video clips | Single-photon emission computed tomography | [
"Physics",
"Chemistry",
"Biology"
] | 2,958 | [
"Radiobiology",
"Radioactivity",
"Applied and interdisciplinary physics",
"Medical physics"
] |
291,912 | https://en.wikipedia.org/wiki/Introduction%20to%20gauge%20theory | A gauge theory is a type of theory in physics. The word gauge means a measurement, a thickness, an in-between distance (as in railroad tracks), or a resulting number of units per certain parameter (a number of loops in an inch of fabric or a number of lead balls in a pound of ammunition). Modern theories describe physical forces in terms of fields, e.g., the electromagnetic field, the gravitational field, and fields that describe forces between the elementary particles. A general feature of these field theories is that the fundamental fields cannot be directly measured; however, some associated quantities can be measured, such as charges, energies, and velocities. For example, say you cannot measure the diameter of a lead ball, but you can determine how many lead balls, which are equal in every way, are required to make a pound. Using the number of balls, the density of lead, and the formula for calculating the volume of a sphere from its diameter, one could indirectly determine the diameter of a single lead ball.
In field theories, different configurations of the unobservable fields can result in identical observable quantities. A transformation from one such field configuration to another is called a gauge transformation; the lack of change in the measurable quantities, despite the field being transformed, is a property called gauge invariance. For example, if you could measure the color of lead balls and discover that when you change the color, you still fit the same number of balls in a pound, the property of "color" would show gauge invariance. Since any kind of invariance under a field transformation is considered a symmetry, gauge invariance is sometimes called gauge symmetry. Generally, any theory that has the property of gauge invariance is considered a gauge theory.
For example, in electromagnetism the electric field E and the magnetic field B are observable, while the potentials V ("voltage") and A (the vector potential) are not. Under a gauge transformation in which a constant is added to V, no observable change occurs in E or B.
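This invariance is easy to verify numerically in one dimension, where E = −dV/dx; adding any constant to V leaves E untouched (the potential profile below is an arbitrary illustrative choice):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
V = np.sin(2 * np.pi * x)            # an arbitrary potential profile

E_before = -np.gradient(V, x)        # E = -dV/dx in one dimension
E_after = -np.gradient(V + 5.0, x)   # gauge transformation: V -> V + const

print(np.allclose(E_before, E_after))  # True: E is unchanged
```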
With the advent of quantum mechanics in the 1920s, and with successive advances in quantum field theory, the importance of gauge transformations has steadily grown. Gauge theories constrain the laws of physics, because all the changes induced by a gauge transformation have to cancel each other out when written in terms of observable quantities. Over the course of the 20th century, physicists gradually realized that all forces (fundamental interactions) arise from the constraints imposed by local gauge symmetries, in which case the transformations vary from point to point in space and time. Perturbative quantum field theory (usually employed for scattering theory) describes forces in terms of force-mediating particles called gauge bosons. The nature of these particles is determined by the nature of the gauge transformations. The culmination of these efforts is the Standard Model, a quantum field theory that accurately predicts all of the fundamental interactions except gravity.
History and importance
The earliest field theory having a gauge symmetry was James Clerk Maxwell's formulation, in 1864–65, of electrodynamics in "A Dynamical Theory of the Electromagnetic Field". The importance of this symmetry remained unnoticed in the earliest formulations. Similarly unnoticed, David Hilbert had derived Einstein's equations of general relativity by postulating a symmetry under any change of coordinates, just as Einstein was completing his work. Later Hermann Weyl, inspired by success in Einstein's general relativity, conjectured (incorrectly, as it turned out) in 1919 that invariance under the change of scale or "gauge" (a term inspired by the various track gauges of railroads) might also be a local symmetry of electromagnetism. Although Weyl's choice of the gauge was incorrect, the name "gauge" stuck to the approach. After the development of quantum mechanics, Weyl, Vladimir Fock and Fritz London modified their gauge choice by replacing the scale factor with a change of wave phase, and applying it successfully to electromagnetism. Gauge symmetry was generalized mathematically in 1954 by Chen Ning Yang and Robert Mills in an attempt to describe the strong nuclear forces. This idea, dubbed Yang–Mills theory, later found application in the quantum field theory of the weak force, and its unification with electromagnetism in the electroweak theory.
The importance of gauge theories for physics stems from their tremendous success in providing a unified framework to describe the quantum-mechanical behavior of electromagnetism, the weak force and the strong force. This gauge theory, known as the Standard Model, accurately describes experimental predictions regarding three of the four fundamental forces of nature.
In classical physics
Electromagnetism
Historically, the first example of gauge symmetry to be discovered was classical electromagnetism. A static electric field can be described in terms of an electric potential (voltage, ) that is defined at every point in space, and in practical work it is conventional to take the Earth as a physical reference that defines the zero level of the potential, or ground. But only differences in potential are physically measurable, which is the reason that a voltmeter must have two probes, and can only report the voltage difference between them. Thus one could choose to define all voltage differences relative to some other standard, rather than the Earth, resulting in the addition of a constant offset. If the potential is a solution to Maxwell's equations then, after this gauge transformation, the new potential is also a solution to Maxwell's equations and no experiment can distinguish between these two solutions. In other words, the laws of physics governing electricity and magnetism (that is, Maxwell equations) are invariant under gauge transformation. Maxwell's equations have a gauge symmetry.
Generalizing from static electricity to electromagnetism, we have a second potential, the magnetic vector potential A, which can also undergo gauge transformations. These transformations may be local. That is, rather than adding a constant onto V, one can add a function that takes on different values at different points in space and time. If A is also changed in certain corresponding ways, then the same E (electric) and B (magnetic) fields result. The detailed mathematical relationship between the fields E and B and the potentials V and A is given in the article Gauge fixing, along with the precise statement of the nature of the gauge transformation. The relevant point here is that the fields remain the same under the gauge transformation, and therefore Maxwell's equations are still satisfied.
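The statement that E and B survive a local gauge transformation can be checked symbolically. This sketch uses SymPy with the standard prescriptions V → V − ∂χ/∂t and A → A + ∇χ, in units where E = −∇V − ∂A/∂t and B = ∇×A:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
chi = sp.Function('chi')(x, y, z, t)   # arbitrary gauge function

V = sp.Function('V')(x, y, z, t)
A = sp.Matrix([sp.Function(n)(x, y, z, t) for n in ('Ax', 'Ay', 'Az')])

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# Gauge-transformed potentials
V2 = V - sp.diff(chi, t)
A2 = A + grad(chi)

E1 = -grad(V) - sp.diff(A, t)
E2 = -grad(V2) - sp.diff(A2, t)

print(sp.simplify(E1 - E2))             # zero vector: E unchanged
print(sp.simplify(curl(A2) - curl(A)))  # zero vector: B unchanged
```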
Gauge symmetry is closely related to charge conservation. Suppose that there existed some process by which one could briefly violate conservation of charge by creating a charge q at a certain point in space, 1, moving it to some other point 2, and then destroying it. We might imagine that this process was consistent with conservation of energy. We could posit a rule stating that creating the charge required an input of energy E1=qV1 and destroying it released E2=qV2, which would seem natural since qV measures the extra energy stored in the electric field because of the existence of a charge at a certain point. Outside of the interval during which the particle exists, conservation of energy would be satisfied, because the net energy released by creation and destruction of the particle, qV2-qV1, would be equal to the work done in moving the particle from 1 to 2, qV2-qV1. But although this scenario salvages conservation of energy, it violates gauge symmetry. Gauge symmetry requires that the laws of physics be invariant under the transformation V → V + C, which implies that no experiment should be able to measure the absolute potential, without reference to some external standard such as an electrical ground. But the proposed rules E1=qV1 and E2=qV2 for the energies of creation and destruction would allow an experimenter to determine the absolute potential, simply by comparing the energy input required to create the charge q at a particular point in space in the case where the potential is V and V + C, respectively. The conclusion is that if gauge symmetry holds, and energy is conserved, then charge must be conserved.
General relativity
As discussed above, the gauge transformations for classical (i.e., non-quantum mechanical) general relativity are arbitrary coordinate transformations. Technically, the transformations must be invertible, and both the transformation and its inverse must be smooth, in the sense of being differentiable an arbitrary number of times.
An example of a symmetry in a physical theory: translation invariance
Some global symmetries under changes of coordinate predate both general relativity and the concept of a gauge. For example, Galileo and Newton introduced the notion of translation invariance, an advancement from the Aristotelian concept that different places in space, such as the earth versus the heavens, obeyed different physical rules.
Suppose, for example, that one observer examines the properties of a hydrogen atom on Earth while another examines one on the Moon (or any other place in the universe); the observers will find that their hydrogen atoms exhibit completely identical properties. Again, if one observer had examined a hydrogen atom today and another had done so 100 years ago (or at any other time in the past or in the future), the two experiments would again produce completely identical results. The invariance of the properties of a hydrogen atom with respect to the time and place where these properties were investigated is called translation invariance.
Recalling our two observers from different ages: the time in their experiments is shifted by 100 years. If the time when the older observer did the experiment was t, the time of the modern experiment is t+100 years. Both observers discover the same laws of physics. Because light from hydrogen atoms in distant galaxies may reach the earth after having traveled across space for billions of years, in effect one can do such observations covering periods of time almost all the way back to the Big Bang, and they show that the laws of physics have always been the same.
In other words, if in the theory we change the time t to t+100 years (or indeed any other time shift) the theoretical predictions do not change.
Another example of a symmetry: the invariance of Einstein's field equation under arbitrary coordinate transformations
In Einstein's general relativity, coordinates like x, y, z, and t are not only "relative" in the global sense of translations like x → x + C, rotations, etc., but become completely arbitrary, so that, for example, one can define an entirely new time-like coordinate according to some arbitrary rule such as t → t + t₀ sin(t/t₀), where t₀ has dimensions of time, and yet Einstein's equations will have the same form.
Invariance of the form of an equation under an arbitrary coordinate transformation is customarily referred to as general covariance, and equations with this property are referred to as written in the covariant form. General covariance is a special case of gauge invariance.
Maxwell's equations can also be expressed in a generally covariant form, which is as invariant under general coordinate transformation as Einstein's field equation.
In quantum mechanics
Quantum electrodynamics
Until the advent of quantum mechanics, the only well known example of gauge symmetry was in electromagnetism, and the general significance of the concept was not fully understood. For example, it was not clear whether it was the fields E and B or the potentials V and A that were the fundamental quantities; if the former, then the gauge transformations could be considered as nothing more than a mathematical trick.
Aharonov–Bohm experiment
In quantum mechanics, a particle such as an electron is also described as a wave. For example, if the double-slit experiment is performed with electrons, then a wave-like interference pattern is observed. The electron has the highest probability of being detected at locations where the parts of the wave passing through the two slits are in phase with one another, resulting in constructive interference. The frequency, f, of the electron wave is related to the kinetic energy of an individual electron particle via the quantum-mechanical relation E = hf. If there are no electric or magnetic fields present in this experiment, then the electron's energy is constant, and, for example, there will be a high probability of detecting the electron along the central axis of the experiment, where by symmetry the two parts of the wave are in phase.
But now suppose that the electrons in the experiment are subject to electric or magnetic fields. For example, if an electric field were imposed on one side of the axis but not on the other, the results of the experiment would be affected. The part of the electron wave passing through that side oscillates at a different rate, since its energy has had −eV added to it, where −e is the charge of the electron and V the electrical potential. The results of the experiment will be different, because phase relationships between the two parts of the electron wave have changed, and therefore the locations of constructive and destructive interference will be shifted to one side or the other. It is the electric potential that occurs here, not the electric field, and this is a manifestation of the fact that it is the potentials and not the fields that are of fundamental significance in quantum mechanics.
Explanation with potentials
It is even possible to have cases in which an experiment's results differ when the potentials are changed, even if no charged particle is ever exposed to a different field. One such example is the Aharonov–Bohm effect. In this example, turning on the solenoid only causes a magnetic field B to exist within the solenoid. But the solenoid has been positioned so that the electron cannot possibly pass through its interior. If one believed that the fields were the fundamental quantities, then one would expect that the results of the experiment would be unchanged. In reality, the results are different, because turning on the solenoid changed the vector potential A in the region that the electrons do pass through. Now that it has been established that it is the potentials V and A that are fundamental, and not the fields E and B, we can see that the gauge transformations, which change V and A, have real physical significance, rather than being merely mathematical artifacts.
Gauge invariance: the results of the experiments are independent of the choice of the gauge for the potentials
Note that in these experiments, the only quantity that affects the result is the difference in phase between the two parts of the electron wave. Suppose we imagine the two parts of the electron wave as tiny clocks, each with a single hand that sweeps around in a circle, keeping track of its own phase. Although this cartoon ignores some technical details, it retains the physical phenomena that are important here. If both clocks are sped up by the same amount, the phase relationship between them is unchanged, and the results of experiments are the same. Not only that, but it is not even necessary to change the speed of each clock by a fixed amount. We could change the angle of the hand on each clock by a varying amount θ, where θ could depend on both the position in space and on time. This would have no effect on the result of the experiment, since the final observation of the location of the electron occurs at a single place and time, so that the phase shift in each electron's "clock" would be the same, and the two effects would cancel out. This is another example of a gauge transformation: it is local, and it does not change the results of experiments.
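The clock picture condenses into a few lines: the interference intensity depends only on the phase difference, so a common shift θ of both waves changes nothing. The phases below are random illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
phi1, phi2 = rng.uniform(0, 2 * np.pi, size=2)  # phases of the two paths

def intensity(p1, p2):
    """Detection probability ~ |exp(i*p1) + exp(i*p2)|**2; it depends
    only on the phase difference p1 - p2."""
    return abs(np.exp(1j * p1) + np.exp(1j * p2)) ** 2

theta = 1.234  # a common shift applied to both "clocks"
print(np.isclose(intensity(phi1, phi2),
                 intensity(phi1 + theta, phi2 + theta)))  # True
```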
Summary
In summary, gauge symmetry attains its full importance in the context of quantum mechanics. In the application of quantum mechanics to electromagnetism, i.e., quantum electrodynamics, gauge symmetry applies to both electromagnetic waves and electron waves. These two gauge symmetries are in fact intimately related. If a gauge transformation θ is applied to the electron waves, for example, then one must also apply a corresponding transformation to the potentials that describe the electromagnetic waves. Gauge symmetry is required in order to make quantum electrodynamics a renormalizable theory, i.e., one in which the calculated predictions of all physically measurable quantities are finite.
Types of gauge symmetries
The description of the electrons in the subsection above as little clocks is in effect a statement of the mathematical rules according to which the phases of electrons are to be added and subtracted: they are to be treated as ordinary numbers, except that in the case where the result of the calculation falls outside the range of 0≤θ<360°, we force it to "wrap around" into the allowed range, which covers a circle. Another way of putting this is that a phase angle of, say, 5° is considered to be completely equivalent to an angle of 365°. Experiments have verified this testable statement about the interference patterns formed by electron waves. Except for the "wrap-around" property, the algebraic properties of this mathematical structure are exactly the same as those of the ordinary real numbers.
In mathematical terminology, electron phases form an Abelian group under addition, called the circle group or U(1). "Abelian" means that addition commutes, so that θ + φ = φ + θ. Group means that addition associates and has an identity element, namely "0". Also, for every phase there exists an inverse such that the sum of a phase and its inverse is 0. Other examples of abelian groups are the integers under addition, 0, and negation, and the nonzero fractions under product, 1, and reciprocal.
As a way of visualizing the choice of a gauge, consider whether it is possible to tell if a cylinder has been twisted. If the cylinder has no bumps, marks, or scratches on it, we cannot tell. We could, however, draw an arbitrary curve along the cylinder, defined by some function θ(x), where x measures distance along the axis of the cylinder. Once this arbitrary choice (the choice of gauge) has been made, it becomes possible to detect it if someone later twists the cylinder.
In 1954, Chen Ning Yang and Robert Mills proposed to generalize these ideas to noncommutative groups. A noncommutative gauge group can describe a field that, unlike the electromagnetic field, interacts with itself. For example, general relativity states that gravitational fields have energy, and special relativity concludes that energy is equivalent to mass. Hence a gravitational field induces a further gravitational field. The nuclear forces also have this self-interacting property.
Gauge bosons
Surprisingly, gauge symmetry can give a deeper explanation for the existence of interactions, such as the electric and nuclear interactions. This arises from a type of gauge symmetry relating to the fact that all particles of a given type are experimentally indistinguishable from one another. Imagine that Alice and Betty are identical twins, labeled at birth by bracelets reading A and B. Because the girls are identical, nobody would be able to tell if they had been switched at birth; the labels A and B are arbitrary, and can be interchanged. Such a permanent interchanging of their identities is like a global gauge symmetry. There is also a corresponding local gauge symmetry, which describes the fact that from one moment to the next, Alice and Betty could swap roles while nobody was looking, and nobody would be able to tell. If we observe that Mom's favorite vase is broken, we can only infer that the blame belongs to one twin or the other, but we cannot tell whether the blame is 100% Alice's and 0% Betty's, or vice versa. If Alice and Betty are in fact quantum-mechanical particles rather than people, then they also have wave properties, including the property of superposition, which allows waves to be added, subtracted, and mixed arbitrarily. It follows that we are not even restricted to complete swaps of identity. For example, if we observe that a certain amount of energy exists in a certain location in space, there is no experiment that can tell us whether that energy is 100% A's and 0% B's, 0% A's and 100% B's, or 20% A's and 80% B's, or some other mixture. The fact that the symmetry is local means that we cannot even count on these proportions to remain fixed as the particles propagate through space. The details of how this is represented mathematically depend on technical issues relating to the spins of the particles, but for our present purposes we consider a spinless particle, for which it turns out that the mixing can be specified by some arbitrary choice of gauge θ(x), where an angle θ = 0° represents 100% A and 0% B, θ = 90° means 0% A and 100% B, and intermediate angles represent mixtures.
According to the principles of quantum mechanics, particles do not actually have trajectories through space. Motion can only be described in terms of waves, and the momentum p of an individual particle is related to its wavelength λ by p = h/λ. In terms of empirical measurements, the wavelength can only be determined by observing a change in the wave between one point in space and another nearby point (mathematically, by differentiation). A wave with a shorter wavelength oscillates more rapidly, and therefore changes more rapidly between nearby points. Now suppose that we arbitrarily fix a gauge at one point in space, by saying that the energy at that location is 20% A's and 80% B's. We then measure the two waves at some other, nearby point, in order to determine their wavelengths. But there are two entirely different reasons that the waves could have changed. They could have changed because they were oscillating with a certain wavelength, or they could have changed because the gauge function changed from a 20–80 mixture to, say, 21–79. If we ignore the second possibility, the resulting theory does not work; strange discrepancies in momentum will show up, violating the principle of conservation of momentum. Something in the theory must be changed.
Again there are technical issues relating to spin, but in several important cases, including electrically charged particles and particles interacting via nuclear forces, the solution to the problem is to impute physical reality to the gauge function θ(x). We say that if the function θ oscillates, it represents a new type of quantum-mechanical wave, and this new wave has its own momentum p = h/λ, which turns out to patch up the discrepancies that otherwise would have broken conservation of momentum. In the context of electromagnetism, the particles A and B would be charged particles such as electrons, and the quantum mechanical wave represented by θ would be the electromagnetic field. (Here we ignore the technical issues raised by the fact that electrons actually have spin 1/2, not spin zero. This oversimplification is the reason that the gauge field θ comes out to be a scalar, whereas the electromagnetic field is actually represented by a vector consisting of V and A.) The result is that we have an explanation for the presence of electromagnetic interactions: if we try to construct a gauge-symmetric theory of identical, non-interacting particles, the result is not self-consistent, and can only be repaired by adding electric and magnetic fields that cause the particles to interact.
Although the function θ(x) describes a wave, the laws of quantum mechanics require that it also have particle properties. In the case of electromagnetism, the particle corresponding to electromagnetic waves is the photon. In general, such particles are called gauge bosons, where the term "boson" refers to a particle with integer spin. In the simplest versions of the theory gauge bosons are massless, but it is also possible to construct versions in which they have mass. This is the case for the gauge bosons that carry the weak interaction: the force responsible for nuclear decay.
References
Further reading
These books are intended for general readers and employ the barest minimum of mathematics.
't Hooft, Gerard: "Gauge Theories of the Force between Elementary Particles," Scientific American, 242(6):104–138 (June 1980).
"Press Release: The 1999 Nobel Prize in Physics". Nobelprize.org. Nobel Media AB 2013. 20 Aug 2013.
Schumm, Bruce (2004) Deep Down Things. Johns Hopkins University Press. A serious attempt by a physicist to explain gauge theory and the Standard Model.
Feynman, Richard (2006) QED: The Strange Theory of Light and Matter. Princeton University Press. A nontechnical description of quantum field theory (not specifically about gauge theory).
Quantum chromodynamics
Differential topology
Symmetry | Introduction to gauge theory | [
"Physics",
"Mathematics"
] | 5,042 | [
"Topology",
"Differential topology",
"Geometry",
"Symmetry"
] |
292,052 | https://en.wikipedia.org/wiki/Lyman-alpha | Lyman-alpha, typically denoted by Ly-α, is a spectral line of hydrogen (or, more generally, of any one-electron atom) in the Lyman series. It is emitted when the atomic electron transitions from an n = 2 orbital to the ground state (n = 1), where n is the principal quantum number. In hydrogen, its wavelength of 1215.67 angstroms ( or ), corresponding to a frequency of about , places Lyman-alpha in the ultraviolet (UV) part of the electromagnetic spectrum. More specifically, Ly-α lies in vacuum UV (VUV), characterized by a strong absorption in the air.
Fine structure
Because of the spin–orbit interaction, the Lyman-alpha line splits into a fine-structure doublet with the wavelengths of 1215.668 and 1215.674 angstroms. These components are called Ly-α3/2 and Ly-α1/2, respectively.
The eigenstates of the perturbed Hamiltonian are labeled by the total angular momentum j of the electron, not just the orbital angular momentum l. In the n = 2, l = 1 orbital, there are two possible states, with j = 1/2 and j = 3/2, resulting in a spectral doublet. The j = 3/2 state has a higher energy and so is energetically farther from the n = 1 state to which it is transitioning. Thus, the j = 3/2 state is associated with the more energetic (having a shorter wavelength) spectral line in the doublet.
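The energy separation of the doublet can be estimated from the quoted wavelengths: from E = hc/λ, a small wavelength difference corresponds to |ΔE| ≈ hc Δλ/λ². With rounded constants this gives a splitting of a few tens of microelectronvolts:

```python
h_c = 1.986e-25    # Planck constant times speed of light, J*m
eV = 1.602e-19     # joules per electronvolt

lam = 1215.67e-10  # mean doublet wavelength, m
d_lam = 0.006e-10  # doublet separation quoted above, m

# |dE| = (hc / lambda**2) * |d lambda|, from E = hc / lambda
dE = h_c * d_lam / lam ** 2 / eV
print(f"fine-structure splitting ~ {dE:.1e} eV")  # a few times 1e-5 eV
```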
Observation
Since hydrogen Lyman-alpha radiation is strongly absorbed by air, observing it in the laboratory requires evacuated spectroscopic systems. For the same reason, Lyman-alpha astronomy is ordinarily carried out by satellite-borne instruments, except for observations of extremely distant sources whose redshifts allow the line to penetrate the Earth's atmosphere.
The line was also observed in antihydrogen. Within the experimental uncertainties, the measured frequency is equal to that of hydrogen, in agreement with predictions of quantum electrodynamics.
See also
References
Emission spectroscopy
Atomic physics
Hydrogen physics
Astronomical spectroscopy | Lyman-alpha | [
"Physics",
"Chemistry"
] | 433 | [
"Spectrum (physical sciences)",
"Emission spectroscopy",
"Quantum mechanics",
"Astrophysics",
"Atomic physics",
"Astronomical spectroscopy",
" molecular",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
292,196 | https://en.wikipedia.org/wiki/Inharmonicity | In music, inharmonicity is the degree to which the frequencies of overtones (also known as partials or partial tones) depart from whole multiples of the fundamental frequency (harmonic series).
Acoustically, a note perceived to have a single distinct pitch in fact contains a variety of additional overtones. Many percussion instruments, such as cymbals, tam-tams, and chimes, create complex and inharmonic sounds.
Music harmony and intonation depend strongly on the harmonicity of tones. An ideal, homogeneous, infinitesimally thin or infinitely flexible string or column of air has exactly harmonic modes of vibration. In any real musical instrument, the resonant body that produces the musical tone—typically a string, wire, or column of air—deviates from this ideal and has some small or large amount of inharmonicity. For instance, a very thick string behaves less as an ideal string and more like a cylinder (a tube of mass), which has natural resonances that are not whole-number multiples of the fundamental frequency.
However, in stringed instruments such as the violin and guitar, or in some Indian drums such as the tabla, the overtones are close to—or in some cases, quite exactly—whole-number multiples of the fundamental frequency. Any departure from this ideal harmonic series is known as inharmonicity. The less elastic the strings are (that is, the shorter, thicker, stiffer, or lower in tension they are), the more inharmonicity they exhibit.
When a string is bowed or a tone in a wind instrument is initiated by vibrating the reed or lips, a phenomenon called mode-locking counteracts the natural inharmonicity of the string or air column and causes the overtones to lock precisely onto integer multiples of the fundamental pitch, even though these are slightly different from the natural resonance points of the instrument. For this reason, a single tone played by a bowed string instrument, brass instrument, or reed instrument does not necessarily exhibit inharmonicity.
However, when a string is struck or plucked, as with a piano string that is struck by its hammer, a violin string played pizzicato, or a guitar string that is plucked by a finger or plectrum, the string will exhibit inharmonicity. The inharmonicity of a string depends on its physical characteristics, such as tension, stiffness, and length. For instance, a stiff string under low tension (such as those found in the bass notes of small upright pianos) exhibits a high degree of inharmonicity, while a thinner string under higher tension (such as a treble string in a piano) or a more flexible string (such as a gut or nylon string used on a guitar or harp) will exhibit less inharmonicity. A wound string generally exhibits less inharmonicity than the equivalent solid string, and for that reason wound strings are often preferred.
The physical origin of this inharmonicity is the dispersion of waves in a stiff string. In an ideal flexible string, the wave speed is constant as a function of frequency. Looking at the resonant frequency of a string with two fixed ends, this means that the frequency of the harmonics increases linearly with the mode number. The added dispersion due to the stiffness, which is most prevalent in the thick bass strings, means that as the frequency increases, so too does the wave speed in the string. The result is that modes of the stiff string are no longer perfectly harmonic.
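This dispersion is commonly summarized by the stiff-string formula $f_n = n f_1 \sqrt{1 + B n^2}$, where $B$ is a dimensionless inharmonicity coefficient ($B = 0$ recovers the exact harmonic series). A short Python sketch; the $B$ value below is an illustrative assumption, not a measurement:

```python
import math

def stiff_string_partials(f1, B, n_max=8):
    """Partial frequencies of a stiff string, f_n = n*f1*sqrt(1 + B*n^2).

    B is the dimensionless inharmonicity coefficient; B = 0 gives an
    ideal harmonic series. The B used below is illustrative only.
    """
    return [n * f1 * math.sqrt(1 + B * n * n) for n in range(1, n_max + 1)]

# Compare an ideal string with a stiff, piano-bass-like string.
for n, (ideal, stiff) in enumerate(zip(stiff_string_partials(55.0, 0.0),
                                       stiff_string_partials(55.0, 5e-4)),
                                   start=1):
    print(f"partial {n}: ideal {ideal:7.2f} Hz, stiff {stiff:7.2f} Hz "
          f"(+{stiff - ideal:.2f} Hz)")
```

As the formula shows, the sharpening grows with the square of the partial number, which is exactly the "progressively sharp" pattern Schuck and Young measured.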
Pianos
Sound quality of inharmonicity
In 1943, Schuck and Young were the first scientists to measure the spectral inharmonicity in piano tones. They found that the spectral partials in piano tones run progressively sharp—that is to say, the lowest partials are sharpened the least and higher partials are progressively sharpened further.
Inharmonicity is not necessarily unpleasant. In 1962, research by Harvey Fletcher and his collaborators indicated that spectral inharmonicity is important for tones to sound piano-like. They proposed that inharmonicity is responsible for the "warmth" property common to real piano tones. According to their research, synthesized piano tones sounded more natural when some inharmonicity was introduced. In general, electronic instruments that duplicate acoustic instruments must duplicate both the inharmonicity and the resulting stretched tuning of the original instruments.
Inharmonicity leads to stretched tuning
When pianos are tuned by piano tuners, the technician sometimes listens for the sound of "beating" when two notes are played together, and tunes to the point that minimizes roughness between tones. Piano tuners must deal with the inharmonicity of piano strings, which is present in different amounts in all of the ranges of the instrument, but especially in the bass and high treble registers. The result is that octaves are tuned slightly wider than the harmonic 2:1 ratio. The exact amount octaves are stretched in a piano tuning varies from piano to piano and even from register to register within a single piano—depending on the exact inharmonicity of the strings involved.
Because of the problem of inharmonicity, electronic piano tuning devices used by piano technicians are not designed to tune according to a simple harmonic series. Rather, the devices use various means to duplicate the stretched octaves and other adjustments a technician makes by ear. The most sophisticated devices allow a technician to make custom inharmonicity measurements—simultaneously considering all partials for pitch and volume to determine the most appropriate stretch to employ for a given instrument. Some include an option to simply record a tuning that a technician has completed by ear; the technician can then duplicate that tuning on the same piano (or others of similar make and model) more easily and quickly.
The issues surrounding setting the stretch by ear versus by machine have not been settled; machines are better at deriving the absolute placement of semitones within a given chromatic scale, whereas aural tuners prefer to adjust these locations to make individual intervals more sonorous. The result is that pianos tuned by ear and immediately checked with a machine tend to deviate to some degree from the purely theoretical semitone (mathematically the 12th root of two), due to human error and perception. (If pleasing the ear is the goal of an aural tuning, then pleasing the math is the goal of a machine tuning.) This is thought to be because strings can vary somewhat from note to note, and even between neighbors within a unison. This non-linearity is different from true falseness, in which a string creates false harmonics; it is more akin to minor variations in string thickness, string sounding length, or minor bridge inconsistencies.
Piano tuning is a compromise—both in terms of choosing a temperament to minimize out-of-tuneness in the intervals and chords that will be played, and in terms of dealing with inharmonicity. For more information, see Piano acoustics and Piano tuning.
Another factor that can cause problems is the presence of rust on the strings or dirt in the windings. These factors can slightly raise the frequency of the higher modes, resulting in more inharmonicity.
Guitar
While piano tuning is normally done by trained technicians, guitars such as acoustic guitars, electric guitars, and electric bass guitars are usually tuned by the guitarists themselves. When a guitarist tunes a guitar by ear, they have to take both temperament and string inharmonicity into account. The inharmonicity in guitar strings can "cause stopped notes to stop sharp, meaning they will sound sharper, both in terms of pitch and beating, than they 'should'. This is distinct from any temperament issue." Even if a guitar is built so that there are no "fret or neck angle errors, inharmonicity can make the simple approach of tuning open strings to notes stopped on the fifth or fourth frets" unreliable. Inharmonicity also demands that some of the "octaves may need to be compromised minutely."
When strobe tuners became available in the 1970s, and then inexpensive electronic tuners in the 1980s reached the mass market, it did not spell the end of tuning problems for guitarists. Even if an electronic tuner indicates that the guitar is "perfectly" in tune, some chords may not sound in tune when they are strummed, either due to string inharmonicity from worn or dirty strings, a misplaced fret, a mis-adjusted bridge, or other problems. Due to the range of factors in play, getting a guitar to sound in tune is an exercise in compromise. "Worn or dirty strings are also inharmonic and harder to tune", a problem that can be partially resolved by cleaning strings.
Some performers choose to focus the tuning towards the key of the piece, so that the tonic and dominant chords will have a clear, resonant sound. However, since this compromise may lead to muddy-sounding chords in sections of a piece that stray from the main key (e.g., a bridge section that modulates a semitone down), some performers choose to make a broader compromise, and "split the difference" so that all chords will sound acceptable.
Mode-locking
Other stringed instruments such as the violin, viola, cello, and double bass also exhibit inharmonicity when notes are plucked using the pizzicato technique. However, this inharmonicity disappears when the strings are bowed, because the bow's stick-slip action is periodic, driving all of the resonances of the string at exactly harmonic ratios even if it has to drive them slightly off their natural frequency. As a result, the operating mode of a bowed string playing a steady note is a compromise among the tunings of all of the (slightly inharmonic) string resonances, which is due to the strong non-linearity of the stick-slip action. Mode locking also occurs in the human voice and in reed instruments such as the clarinet.
List of instruments
Perfectly harmonic
Bowed string instruments (violin, cello, erhu, ...)
Brass instruments (trumpet, horn, trombone, ...)
Reed aerophones (oboe, clarinet, ...)
Nearly harmonic
Plucked string instruments (guitar, harpsichord, harp...)
Approximately harmonic
Tuned percussion
Not harmonic
Untuned percussion
See also
Anharmonicity
Pseudo-octave
Subharmonic
References
Further reading
B. C. J. Moore, R.W. Peters, and B. C. Glasberg, “Thresholds for the detection of inharmonicity in complex tones,” Journal of the Acoust. Soc. Am., vol. 77, no. 5, pp. 1861–1867, 1985.
F. Scalcon, D. Rocchesso, and G. Borin, “Subjective evaluation of the inharmonicity of synthetic piano tones,” in Proc. Int. Comp. Music Conf. ICMC’98, pp. 53–56, 1998.
A. Galembo and L. Cuddy, “String inharmonicity and the timbral quality of piano bass tones: Fletcher, Blackham, and Stratton (1962) revisited.” Proceedings of the Society for Music Perception and Cognition, MIT, Cambridge, Massachusetts, July - August 1997.
Acoustics
Musical tuning | Inharmonicity | [
"Physics"
] | 2,355 | [
"Classical mechanics",
"Acoustics"
] |
292,200 | https://en.wikipedia.org/wiki/Missing%20fundamental | The pitch being perceived with the first harmonic being absent in the waveform is called the missing fundamental phenomenon.
It is established in psychoacoustics that the auditory system, with its natural tendency to distinguish one tone from another, will persistently assign a pitch to a complex tone, provided that a sufficient set of harmonics is present in the spectrum.
For example, when a note (that is not a pure tone) has a pitch of 100 Hz, it will consist of frequency components that are integer multiples of that value (e.g. 100, 200, 300, 400, 500.... Hz). However, smaller loudspeakers may not produce low frequencies, so in our example, the 100 Hz component may be missing. Nevertheless, a pitch corresponding to the fundamental may still be heard.
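The stimulus is easy to construct. A minimal sketch (assuming NumPy and SciPy are available) that synthesizes harmonics 2–5 of 100 Hz with the fundamental deliberately omitted; on playback, most listeners report a 100 Hz pitch:

```python
import numpy as np
from scipy.io import wavfile

fs = 44100                       # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)   # one second of audio

# Harmonics 2..5 of a 100 Hz fundamental; the 100 Hz component is absent.
tone = sum(np.sin(2 * np.pi * 100 * k * t) for k in range(2, 6))
tone /= np.max(np.abs(tone))    # normalize to avoid clipping

# Write 16-bit PCM WAV for a listening test.
wavfile.write("missing_fundamental.wav", fs, (tone * 32767).astype(np.int16))
```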
Explanation
A low pitch (also known as the pitch of the missing fundamental or virtual pitch) can sometimes be heard when there is no apparent source or component of that frequency. This perception is due to the brain interpreting repetition patterns that are present.
It was once thought that this effect was because the missing fundamental was replaced by distortions introduced by the physics of the ear. However, experiments subsequently showed that when a noise was added that would have masked these distortions had they been present, listeners still heard a pitch corresponding to the missing fundamental, as reported by J. C. R. Licklider in 1954. It is now widely accepted that the brain processes the information present in the overtones to calculate the fundamental frequency. The precise way in which it does so is still a matter of debate, but the processing seems to be based on an autocorrelation involving the timing of neural impulses in the auditory nerve. However, it has long been noted that any neural mechanisms which may accomplish a delay (a necessary operation of a true autocorrelation) have not been found. At least one model shows a temporal delay to be unnecessary to produce an autocorrelation model of pitch perception, appealing to phase shifts between cochlear filters; however, earlier work has shown that certain sounds with a prominent peak in their autocorrelation function do not elicit a corresponding pitch percept, and that certain sounds without a peak in their autocorrelation function nevertheless elicit a pitch. Autocorrelation can thus be considered, at best, an incomplete model.
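As a toy illustration of the autocorrelation idea (not a model of actual auditory-nerve processing), the lag of the strongest autocorrelation peak of such a tone recovers the missing fundamental's period:

```python
import numpy as np

def autocorrelation_pitch(x, fs, fmin=50.0, fmax=500.0):
    """Estimate pitch as the lag that maximizes the autocorrelation,
    searched over periods between 1/fmax and 1/fmin seconds."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..len(x)-1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 44100
t = np.arange(0, 0.1, 1 / fs)
# Harmonics 2..5 of 100 Hz, fundamental absent:
x = sum(np.sin(2 * np.pi * 100 * k * t) for k in range(2, 6))
print(autocorrelation_pitch(x, fs))   # close to 100.0 Hz
```

The waveform repeats every 10 ms even though no 100 Hz component is present, so the strongest peak falls at a lag of one full period.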
The pitch of the missing fundamental, usually at the greatest common divisor of the frequencies present, is not, however, always perceived. Research conducted at Heidelberg University shows that, under narrow stimulus conditions with a small number of harmonics, the general population can be divided into those who perceive missing fundamentals, and those who primarily hear the overtones instead. This was done by asking subjects to judge the direction of motion (up or down) of two complexes in succession. The authors used structural MRI and MEG to show that the preference for missing fundamental hearing correlated with left-hemisphere lateralization of pitch perception, where the preference for spectral hearing correlated with right-hemisphere lateralization, and those who exhibited the latter preference tended to be musicians.
In Parsing the Spectral Envelope: Toward a General Theory of Vocal Tone Color (2016), Ian Howell wrote that although not everyone can hear missing fundamentals, noticing them can be taught and learned. In a related study, D. Robert Ladd et al. claim that most people can switch from listening for the pitch implied by the evident harmonics to finding those pitches spectrally.
Examples
Timpani produce inharmonic overtones, but are constructed and tuned to produce near-harmonic overtones to an implied missing fundamental. Hit in the usual way (half to three-quarters the distance from the center to the rim), the fundamental note of a timpani is very weak in relation to its second through fifth "harmonic" overtones. A timpani might be tuned to produce sound most strongly at 200, 302, 398, and 488 Hz, for instance, implying a missing fundamental at 100 Hz (though the actual dampened fundamental is 170 Hz).
A violin's lowest air and body resonances generally fall between 250 Hz and 300 Hz. The fundamental frequency of the open G3 string is below 200 Hz in modern tunings as well as most historical tunings, so the lowest notes of a violin have an attenuated fundamental, although listeners seldom notice this.
Most common telephones cannot reproduce sounds lower than 300 Hz, but a male voice has a fundamental frequency of approximately 150 Hz. Because of the missing fundamental effect, the fundamental frequencies of male voices are still perceived as their pitches over the telephone.
The missing fundamental phenomenon is used electronically by some pro audio manufacturers to allow sound systems to seem to produce notes that are lower in pitch than they are capable of reproducing. In a hardware effects unit or a software plugin, a crossover filter is set at a low frequency above which the sound system is capable of safely reproducing tones. Musical signal content above the high-pass part of the crossover filter is sent to the main output which is amplified by the sound system. Low frequency content below the low-pass part of the crossover filter is sent to a circuit where harmonics are synthesized above the low notes. The newly created harmonics are mixed back into the main output to create a perception of the filtered-out low notes. Using a device with this synthetic process can reduce complaints from low frequency noise carrying through walls and it can be employed to reduce low frequency content in loud music that might otherwise vibrate and damage breakable valuables.
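The signal flow described above can be sketched as follows. This is only an illustrative reconstruction of the general idea (a crossover filter plus a harmonic-generating nonlinearity), not any vendor's actual algorithm; the Butterworth filters and the tanh waveshaper are assumptions of the sketch:

```python
import numpy as np
from scipy import signal

def psychoacoustic_bass(x, fs, fc=120.0):
    """Rough sketch of missing-fundamental bass enhancement.

    Split the signal at a crossover fc, generate harmonics from the
    low band with a simple waveshaper, and mix them into the high band.
    """
    b_lo, a_lo = signal.butter(4, fc, btype="low", fs=fs)
    b_hi, a_hi = signal.butter(4, fc, btype="high", fs=fs)
    low = signal.lfilter(b_lo, a_lo, x)
    high = signal.lfilter(b_hi, a_hi, x)
    # The nonlinearity creates energy at 2f, 3f, ... of the low band.
    harmonics = np.tanh(3.0 * low)
    # Keep only the newly generated content above the crossover.
    harmonics = signal.lfilter(b_hi, a_hi, harmonics)
    return high + harmonics
```

The filtered-out low band never reaches the output, yet its harmonics imply the original low notes to the listener, which is exactly the missing fundamental effect put to practical use.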
Some pipe organs make use of this phenomenon as a resultant tone, which allows relatively smaller bass pipes to produce very low-pitched sounds.
Audio processing applications
This very concept of "missing fundamental" being reproduced based on the overtones in the tone has been used to create the illusion of bass in sound systems that are not capable of such bass. In mid-1999, Meir Shashoua of Tel Aviv, co-founder of Waves Audio, patented an algorithm to create the sense of the missing fundamental by synthesizing higher harmonics. Waves Audio released the MaxxBass plug-in to allow computer users to apply the synthesized harmonics to their audio files. Later, Waves Audio produced small subwoofers that relied on the missing fundamental concept to give the illusion of low bass. Both products processed certain overtones selectively to help small loudspeakers, ones which could not reproduce low-frequency components, to sound as if they were capable of low bass. Both products included a high-pass filter which greatly attenuated all the low frequency tones that were expected to be beyond the capabilities of the target sound system. One example of a popular song that was recorded with MaxxBass processing is "Lady Marmalade", the 2001 Grammy award-winning version sung by Christina Aguilera, Lil' Kim, Mýa, and Pink, produced by Missy Elliott.
Other software and hardware companies have developed their own versions of missing fundamental-based bass augmentation products. The poor bass reproduction of earbuds has been identified as a possible target for such processing. Many computer sound systems are not capable of low bass, and songs offered to consumers via computer have been identified as ones that may benefit from augmented bass harmonics processing.
See also
Psychoacoustics
Subharmonic
References
External links
Pitch Paradoxical
Structural and functional asymmetry of lateral Heschl's gyrus reflects pitch perception preference – abstract of the Heidelberg research, as published in Nature Neuroscience 8, 1241–1247 (2005); downloading the full article requires payment
How do you hear tones? – discussion forum thread about the Heidelberg research, with a link to a sound file used in the research so that readers can determine whether they are fundamental or overtone hearers
Psychoacoustics
Waves | Missing fundamental | [
"Physics"
] | 1,600 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
292,744 | https://en.wikipedia.org/wiki/Ising%20model | The Ising model (or Lenz–Ising model), named after the physicists Ernst Ising and Wilhelm Lenz, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic "spins" that can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a lattice (where the local structure repeats periodically in all directions), allowing each spin to interact with its neighbors. Neighboring spins that agree have a lower energy than those that disagree; the system tends to the lowest energy but heat disturbs this tendency, thus creating the possibility of different structural phases. The model allows the identification of phase transitions as a simplified model of reality. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition.
The Ising model was invented by the physicist Wilhelm Lenz, who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model was solved by Ising alone in his 1924 thesis; it has no phase transition. The two-dimensional square-lattice Ising model is much harder and was only given an analytic description much later, by Lars Onsager (1944). It is usually solved by a transfer-matrix method, although there exists a very simple approach relating the model to a non-interacting fermionic quantum field theory.
In dimensions greater than four, the phase transition of the Ising model is described by mean-field theory. The Ising model for greater dimensions was also explored with respect to various tree topologies in the late 1970s, culminating in an exact solution of the zero-field, time-independent model for closed Cayley trees of arbitrary branching ratio, and thereby, arbitrarily large dimensionality within tree branches. The solution to this model exhibited a new, unusual phase transition behavior, along with non-vanishing long-range and nearest-neighbor spin-spin correlations, deemed relevant to large neural networks as one of its possible applications.
The Ising problem without an external field can be equivalently formulated as a graph maximum cut (Max-Cut) problem that can be solved via combinatorial optimization.
Definition
Consider a set $\Lambda$ of lattice sites, each with a set of adjacent sites (e.g. a graph) forming a $d$-dimensional lattice. For each lattice site $k \in \Lambda$ there is a discrete variable $\sigma_k$ such that $\sigma_k \in \{-1, +1\}$, representing the site's spin. A spin configuration, $\sigma = (\sigma_k)_{k \in \Lambda}$, is an assignment of a spin value to each lattice site.
For any two adjacent sites $i, j \in \Lambda$ there is an interaction $J_{ij}$. Also a site $j \in \Lambda$ has an external magnetic field $h_j$ interacting with it. The energy of a configuration $\sigma$ is given by the Hamiltonian function
$$H(\sigma) = -\sum_{\langle i\,j \rangle} J_{ij}\, \sigma_i \sigma_j - \mu \sum_j h_j \sigma_j,$$
where the first sum is over pairs of adjacent spins (every pair is counted once). The notation $\langle i\,j \rangle$ indicates that sites $i$ and $j$ are nearest neighbors. The magnetic moment is given by $\mu$. Note that the sign in the second term of the Hamiltonian above should actually be positive because the electron's magnetic moment is antiparallel to its spin, but the negative term is used conventionally. The configuration probability is given by the Boltzmann distribution with inverse temperature $\beta \geq 0$:
$$P_\beta(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z_\beta},$$
where $\beta = 1/(k_B T)$, and the normalization constant
$$Z_\beta = \sum_\sigma e^{-\beta H(\sigma)}$$
is the partition function. For a function $f$ of the spins ("observable"), one denotes by
$$\langle f \rangle_\beta = \sum_\sigma f(\sigma)\, P_\beta(\sigma)$$
the expectation (mean) value of $f$.
The configuration probabilities $P_\beta(\sigma)$ represent the probability that (in equilibrium) the system is in a state with configuration $\sigma$.
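As a concrete illustration, here is a minimal sketch of the energy computation in the common special case of uniform coupling $J$, uniform field $h$ (absorbing $\mu$), and periodic boundary conditions on a square lattice; these restrictions are assumptions of the sketch, not part of the general definition:

```python
import numpy as np

def ising_energy(spins, J=1.0, h=0.0):
    """Energy H(sigma) of a 2D spin array with uniform coupling J,
    uniform field h, and periodic boundaries; each nearest-neighbour
    pair is counted once (right and down neighbours only)."""
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    interaction = -J * np.sum(spins * right + spins * down)
    field = -h * np.sum(spins)
    return interaction + field

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(8, 8))   # a random configuration
print(ising_energy(spins))
```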
Discussion
The minus sign on each term of the Hamiltonian function is conventional. Using this sign convention, Ising models can be classified according to the sign of the interaction: if, for a pair $i$, $j$:
$J_{ij} > 0$, the interaction is called ferromagnetic,
$J_{ij} < 0$, the interaction is called antiferromagnetic,
$J_{ij} = 0$, the spins are noninteracting.
The system is called ferromagnetic or antiferromagnetic if all interactions are ferromagnetic or all are antiferromagnetic. The original Ising models were ferromagnetic, and it is still often assumed that "Ising model" means a ferromagnetic Ising model.
In a ferromagnetic Ising model, spins desire to be aligned: the configurations in which adjacent spins are of the same sign have higher probability. In an antiferromagnetic model, adjacent spins tend to have opposite signs.
The sign convention of H(σ) also explains how a spin site j interacts with the external field. Namely, the spin site wants to line up with the external field. If:
$h_j > 0$, the spin site j desires to line up in the positive direction,
$h_j < 0$, the spin site j desires to line up in the negative direction,
$h_j = 0$, there is no external influence on the spin site.
Simplifications
Ising models are often examined without an external field interacting with the lattice, that is, h = 0 for all j in the lattice Λ. Using this simplification, the Hamiltonian becomes
$$H(\sigma) = -\sum_{\langle i\,j \rangle} J_{ij}\, \sigma_i \sigma_j .$$
When the external field is zero everywhere, h = 0, the Ising model is symmetric under switching the value of the spin in all the lattice sites; a nonzero field breaks this symmetry.
Another common simplification is to assume that all of the nearest neighbors ⟨ij⟩ have the same interaction strength. Then we can set Jij = J for all pairs i, j in Λ. In this case the Hamiltonian is further simplified to
$$H(\sigma) = -J \sum_{\langle i\,j \rangle} \sigma_i \sigma_j .$$
Connection to graph maximum cut
A subset S of the vertex set V(G) of a weighted undirected graph G determines a cut of the graph G into S and its complementary subset G\S. The size of the cut is the sum of the weights of the edges between S and G\S. A maximum cut is one whose size is at least as large as that of any other cut, varying S.
For the Ising model without an external field on a graph G, the Hamiltonian becomes the following sum over the graph edges E(G):
$$H(\sigma) = -\sum_{ij \in E(G)} J_{ij}\, \sigma_i \sigma_j .$$
Here each vertex i of the graph is a spin site that takes a spin value $\sigma_i \in \{-1, +1\}$. A given spin configuration $\sigma$ partitions the set of vertices into two $\sigma$-dependent subsets, those with spin up $V^+$ and those with spin down $V^-$. We denote by $\delta(V^+)$ the $\sigma$-dependent set of edges that connects the two complementary vertex subsets $V^+$ and $V^-$. The size $\left|\delta(V^+)\right|$ of the cut to bipartite the weighted undirected graph G can be defined as
$$\left|\delta(V^+)\right| = \frac{1}{2} \sum_{ij \in \delta(V^+)} W_{ij},$$
where $W_{ij}$ denotes a weight of the edge $ij$ and the scaling 1/2 is introduced to compensate for double counting the same weights $W_{ij} = W_{ji}$.
The identities
$$H(\sigma) = -\sum_{ij \in E(G)} J_{ij} + 2 \sum_{ij \in \delta(V^+)} J_{ij},$$
where the total sum in the first term does not depend on $\sigma$, imply that minimizing $H(\sigma)$ in $\sigma$ is equivalent to minimizing $\sum_{ij \in \delta(V^+)} J_{ij}$. Defining the edge weight $W_{ij} = -J_{ij}$ thus turns the Ising problem without an external field into a graph Max-Cut problem
maximizing the cut size $\left|\delta(V^+)\right|$, which is related to the Ising Hamiltonian as follows,
$$H(\sigma) = -\sum_{ij \in E(G)} J_{ij} - 2 \left|\delta(V^+)\right| .$$
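The correspondence can be verified by brute force on a small instance. The sketch below uses an antiferromagnetic triangle (an illustrative choice) and checks the affine relation between the Hamiltonian and the cut size for every configuration:

```python
import itertools

# Tiny antiferromagnetic triangle: J_ij = -1 on every edge.
edges = {(0, 1): -1.0, (0, 2): -1.0, (1, 2): -1.0}

def H(sigma):                      # zero-field Ising energy
    return -sum(J * sigma[i] * sigma[j] for (i, j), J in edges.items())

def cut_size(sigma):               # cut weight with W_ij = -J_ij
    return sum(-J for (i, j), J in edges.items() if sigma[i] != sigma[j])

best = min(itertools.product([-1, 1], repeat=3), key=H)
print("ground state:", best, " H =", H(best), " cut =", cut_size(best))

# Minimizing H is the same as maximizing the cut: H = -sum(J) - 2*cut.
for s in itertools.product([-1, 1], repeat=3):
    assert abs(H(s) - (-sum(edges.values()) - 2 * cut_size(s))) < 1e-12
```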
Questions
A significant number of statistical questions to ask about this model are in the limit of large numbers of spins:
In a typical configuration, are most of the spins +1 or −1, or are they split equally?
If a spin at any given position i is 1, what is the probability that the spin at position j is also 1?
If β is changed, is there a phase transition?
On a lattice Λ, what is the fractal dimension of the shape of a large cluster of +1 spins?
Basic properties and history
The most studied case of the Ising model is the translation-invariant ferromagnetic zero-field model on a d-dimensional lattice, namely, Λ = Z^d, Jij = 1, h = 0.
No phase transition in one dimension
In his 1924 PhD thesis, Ising solved the model for the d = 1 case, which can be thought of as a linear horizontal lattice where each site only interacts with its left and right neighbor. In one dimension, the solution admits no phase transition. Namely, for any positive β, the correlations ⟨σiσj⟩ decay exponentially in |i − j|:
$$\langle \sigma_i \sigma_j \rangle_\beta \leq C\, e^{-c(\beta)\, |i - j|},$$
and the system is disordered. On the basis of this result, he incorrectly concluded that this model does not exhibit phase behaviour in any dimension.
Phase transition and exact solution in two dimensions
The Ising model undergoes a phase transition between an ordered and a disordered phase in 2 dimensions or more. Namely, the system is disordered for small β, whereas for large β the system exhibits ferromagnetic order:
$$\langle \sigma_i \sigma_j \rangle_\beta \geq c(\beta) > 0.$$
This was first proven by Rudolf Peierls in 1936, using what is now called a Peierls argument.
The Ising model on a two-dimensional square lattice with no magnetic field was analytically solved by Lars Onsager (1944). Onsager obtained the correlation functions and free energy of the Ising model and announced the formula for the spontaneous magnetization for the 2-dimensional model in 1949 but did not give a derivation. C. N. Yang (1952) gave the first published proof of this formula, using a limit formula for Fredholm determinants, proved in 1951 by Szegő in direct response to Onsager's work.
Correlation inequalities
A number of correlation inequalities have been derived rigorously for the Ising spin correlations (for general lattice structures), which have enabled mathematicians to study the Ising model both on and off criticality.
Griffiths inequality
Given any subset of spins $\sigma_A$ and $\sigma_B$ on the lattice, the following inequality holds,
$$\langle \sigma_A \sigma_B \rangle \geq \langle \sigma_A \rangle \langle \sigma_B \rangle,$$
where $\langle \sigma_A \rangle = \langle \prod_{j \in A} \sigma_j \rangle$.
With $B = \emptyset$, the special case $\langle \sigma_A \rangle \geq 0$ results.
This means that spins are positively correlated on the Ising ferromagnet. An immediate application of this is that the magnetization of any set of spins is increasing with respect to any set of coupling constants $J_B$.
Simon-Lieb inequality
The Simon-Lieb inequality states that for any set $S$ disconnecting $x$ from $y$ (e.g. the boundary of a box with $x$ inside the box and $y$ outside),
$$\langle \sigma_x \sigma_y \rangle \leq \sum_{z \in S} \langle \sigma_x \sigma_z \rangle \langle \sigma_z \sigma_y \rangle .$$
This inequality can be used to establish the sharpness of phase transition for the Ising model.
FKG inequality
This inequality was first proven for a type of positively-correlated percolation model, which includes a representation of the Ising model. It is used to determine the critical temperatures of the planar Potts model using percolation arguments (the Potts model includes the Ising model as a special case).
Historical significance
One of Democritus' arguments in support of atomism was that atoms naturally explain the sharp phase boundaries observed in materials, as when ice melts to water or water turns to steam. His idea was that small changes in atomic-scale properties would lead to big changes in the aggregate behavior. Others believed that matter is inherently continuous, not atomic, and that the large-scale properties of matter are not reducible to basic atomic properties.
While the laws of chemical bonding made it clear to nineteenth century chemists that atoms were real, among physicists the debate continued well into the early twentieth century. Atomists, notably James Clerk Maxwell and Ludwig Boltzmann, applied Hamilton's formulation of Newton's laws to large systems, and found that the statistical behavior of the atoms correctly describes room temperature gases. But classical statistical mechanics did not account for all of the properties of liquids and solids, nor of gases at low temperature.
Once modern quantum mechanics was formulated, atomism was no longer in conflict with experiment, but this did not lead to a universal acceptance of statistical mechanics, which went beyond atomism. Josiah Willard Gibbs had given a complete formalism to reproduce the laws of thermodynamics from the laws of mechanics. But many faulty arguments survived from the 19th century, when statistical mechanics was considered dubious. The lapses in intuition mostly stemmed from the fact that the limit of an infinite statistical system has many zero-one laws which are absent in finite systems: an infinitesimal change in a parameter can lead to big differences in the overall, aggregate behavior, as Democritus expected.
No phase transitions in finite volume
In the early part of the twentieth century, some believed that the partition function could never describe a phase transition, based on the following argument:
The partition function is a sum of e−βE over all configurations.
The exponential function is everywhere analytic as a function of β.
The sum of analytic functions is an analytic function.
This argument works for a finite sum of exponentials, and correctly establishes that there are no singularities in the free energy of a system of a finite size. For systems which are in the thermodynamic limit (that is, for infinite systems) the infinite sum can lead to singularities. The convergence to the thermodynamic limit is fast, so that the phase behavior is apparent already on a relatively small lattice, even though the singularities are smoothed out by the system's finite size.
This was first established by Rudolf Peierls in the Ising model.
Peierls droplets
Shortly after Lenz and Ising constructed the Ising model, Peierls was able to explicitly show that a phase transition occurs in two dimensions.
To do this, he compared the high-temperature and low-temperature limits. At infinite temperature (β = 0) all configurations have equal probability. Each spin is completely independent of any other, and if typical configurations at infinite temperature are plotted so that plus/minus are represented by black and white, they look like television snow. For high, but not infinite temperature, there are small correlations between neighboring positions, the snow tends to clump a little bit, but the screen stays random-looking, and there is no net excess of black or white.
A quantitative measure of the excess is the magnetization, which is the average value of the spin:
$$M = \frac{1}{N} \sum_{i=1}^{N} \sigma_i .$$
A bogus argument analogous to the argument in the last section now establishes that the magnetization in the Ising model is always zero.
Every configuration of spins has equal energy to the configuration with all spins flipped.
So for every configuration with magnetization M there is a configuration with magnetization −M with equal probability.
The system should therefore spend equal amounts of time in the configuration with magnetization M as with magnetization −M.
So the average magnetization (over all time) is zero.
As before, this only proves that the average magnetization is zero at any finite volume. For an infinite system, fluctuations might not be able to push the system from a mostly plus state to a mostly minus with a nonzero probability.
For very high temperatures, the magnetization is zero, as it is at infinite temperature. To see this, note that if spin A has only a small correlation ε with spin B, and B is only weakly correlated with C, but C is otherwise independent of A, the amount of correlation of A and C goes like ε². For two spins separated by distance L, the amount of correlation goes as ε^L, but if there is more than one path by which the correlations can travel, this amount is enhanced by the number of paths.
The number of paths of length L on a square lattice in d dimensions is
$$N(L) = (2d)^L,$$
since there are 2d choices for where to go at each step.
A bound on the total correlation is given by the contribution to the correlation from summing over all paths linking two points, which is bounded above by the sum over all paths of length L,
$$\sum_L (2d)^L \varepsilon^L,$$
which goes to zero when ε is small.
At low temperatures (β ≫ 1) the configurations are near the lowest-energy configuration, the one where all the spins are plus or all the spins are minus. Peierls asked whether it is statistically possible at low temperature, starting with all the spins minus, to fluctuate to a state where most of the spins are plus. For this to happen, droplets of plus spin must be able to congeal to make the plus state.
The energy of a droplet of plus spins in a minus background is proportional to the perimeter of the droplet L, where plus spins and minus spins neighbor each other. For a droplet with perimeter L, the area is somewhere between (L − 2)/2 (the straight line) and (L/4)² (the square box). The probability cost for introducing a droplet has the factor e^{−βL}, but this contributes to the partition function multiplied by the total number of droplets with perimeter L, which is less than the total number of paths of length L:
$$N(L) < 4^L .$$
So that the total spin contribution from droplets, even overcounting by allowing each site to have a separate droplet, is bounded above by
$$\sum_L L^2\, 4^L\, e^{-\beta L},$$
which goes to zero at large β. For β sufficiently large, this exponentially suppresses long loops, so that they cannot occur, and the magnetization never fluctuates too far from −1.
So Peierls established that the magnetization in the Ising model eventually defines superselection sectors, separated domains not linked by finite fluctuations.
Kramers–Wannier duality
Kramers and Wannier were able to show that the high-temperature expansion and the low-temperature expansion of the model are equal up to an overall rescaling of the free energy. This allowed the phase-transition point in the two-dimensional model to be determined exactly (under the assumption that there is a unique critical point).
Yang–Lee zeros
After Onsager's solution, Yang and Lee investigated the way in which the partition function becomes singular as the temperature approaches the critical temperature.
Applications
Magnetism
The original motivation for the model was the phenomenon of ferromagnetism. Iron is magnetic; once it is magnetized it stays magnetized for a long time compared to any atomic time.
In the 19th century, it was thought that magnetic fields are due to currents in matter, and Ampère postulated that permanent magnets are caused by permanent atomic currents. The motion of classical charged particles could not explain permanent currents though, as shown by Larmor. In order to have ferromagnetism, the atoms must have permanent magnetic moments which are not due to the motion of classical charges.
Once the electron's spin was discovered, it was clear that the magnetism should be due to a large number of electron spins all oriented in the same direction. It was natural to ask how the electrons' spins all know which direction to point in, because the electrons on one side of a magnet don't directly interact with the electrons on the other side. They can only influence their neighbors. The Ising model was designed to investigate whether a large fraction of the electron spins could be oriented in the same direction using only local forces.
Lattice gas
The Ising model can be reinterpreted as a statistical model for the motion of atoms. Since the kinetic energy depends only on momentum and not on position, while the statistics of the positions only depends on the potential energy, the thermodynamics of the gas only depends on the potential energy for each configuration of atoms.
A coarse model is to make space-time a lattice and imagine that each position either contains an atom or it doesn't. The space of configuration is that of independent bits Bi, where each bit is either 0 or 1 depending on whether the position is occupied or not. An attractive interaction reduces the energy of two nearby atoms. If the attraction is only between nearest neighbors, the energy is reduced by −4JBiBj for each occupied neighboring pair.
The density of the atoms can be controlled by adding a chemical potential, which is a multiplicative probability cost for adding one more atom. A multiplicative factor in probability can be reinterpreted as an additive term in the logarithm – the energy. The extra energy of a configuration with N atoms is changed by μN. The probability cost of one more atom is a factor of exp(−βμ).
So the energy of the lattice gas is:
$$E = -4J \sum_{\langle i\,j \rangle} B_i B_j + \mu \sum_i B_i .$$
Rewriting the bits in terms of spins, $B_i = (S_i + 1)/2$, the interaction term becomes proportional to $\sum_{\langle i\,j \rangle} S_i S_j$, and the remaining occupation terms become a uniform field acting on the spins.
For lattices where every site has an equal number of neighbors, this is the Ising model with a magnetic field h = (zJ − μ)/2, where z is the number of neighbors.
In biological systems, modified versions of the lattice gas model have been used to understand a range of binding behaviors. These include the binding of ligands to receptors in the cell surface, the binding of chemotaxis proteins to the flagellar motor, and the condensation of DNA.
Neuroscience
The activity of neurons in the brain can be modelled statistically. Each neuron at any time is either active + or inactive −. The active neurons are those that send an action potential down the axon in any given time window, and the inactive ones are those that do not.
Following the general approach of Jaynes, a later interpretation of Schneidman, Berry, Segev and Bialek,
is that the Ising model is useful for any model of neural function, because a statistical model for neural activity should be chosen using the principle of maximum entropy. Given a collection of neurons, a statistical model which can reproduce the average firing rate for each neuron introduces a Lagrange multiplier for each neuron:
$$E = -\sum_i h_i \sigma_i .$$
But the activity of each neuron in this model is statistically independent. To allow for pair correlations, when one neuron tends to fire (or not to fire) along with another, introduce pair-wise Lagrange multipliers:
$$E = -\tfrac{1}{2} \sum_{i j} J_{ij}\, \sigma_i \sigma_j - \sum_i h_i \sigma_i,$$
where $J_{ij}$ are not restricted to neighbors. Note that this generalization of the Ising model is sometimes called the quadratic exponential binary distribution in statistics.
This energy function only introduces probability biases for a spin having a value and for a pair of spins having the same value. Higher order correlations are unconstrained by the multipliers. An activity pattern sampled from this distribution requires the largest number of bits to store in a computer, in the most efficient coding scheme imaginable, as compared with any other distribution with the same average activity and pairwise correlations. This means that Ising models are relevant to any system which is described by bits which are as random as possible, with constraints on the pairwise correlations and the average number of 1s, which frequently occurs in both the physical and social sciences.
Spin glasses
With the Ising model the so-called spin glasses can also be described, by the usual Hamiltonian where the S-variables describe the Ising spins, while the Ji,k are taken from a random distribution. For spin glasses a typical distribution chooses antiferromagnetic bonds with probability p and ferromagnetic bonds with probability 1 − p (also known as the random-bond Ising model). These bonds stay fixed or "quenched" even in the presence of thermal fluctuations. When p = 0 we have the original Ising model. This system deserves interest in its own right; in particular, it has "non-ergodic" properties leading to strange relaxation behaviour. Much attention has also been attracted by the related bond and site dilute Ising model, especially in two dimensions, leading to intriguing critical behavior.
Artificial neural network
The Ising model was instrumental in the development of the Hopfield network. The original Ising model is a model for equilibrium. Roy J. Glauber in 1963 studied the Ising model evolving in time, as a process towards thermal equilibrium (Glauber dynamics), adding in the component of time. Kaoru Nakano (1971) and Shun'ichi Amari (1972) proposed modifying the weights of an Ising model by the Hebbian learning rule as a model of associative memory. The same idea was published by William A. Little (1974), who was cited by Hopfield in his 1982 paper.
The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics.
Sea ice
The melt pond can be modelled by the Ising model; sea ice topography data bears rather heavily on the results. The state variable is binary for a simple 2D approximation, being either water or ice.
Cayley tree topologies and large neural networks
In order to investigate an Ising model with potential relevance for large (e.g. with or interactions per node) neural nets, at the suggestion of Krizan in 1979, obtained the exact analytical expression for the free energy of the Ising model on the closed Cayley tree (with an arbitrarily large branching ratio) for a zero-external magnetic field (in the thermodynamic limit) by applying the methodologies of and
where is an arbitrary branching ratio (greater than or equal to 2), , , (with representing the nearest-neighbor interaction energy) and there are k (→ ∞ in the thermodynamic limit) generations in each of the tree branches (forming the closed tree architecture as shown in the given closed Cayley tree diagram.) The sum in the last term can be shown to converge uniformly and rapidly (i.e. for z → ∞, it remains finite) yielding a continuous and monotonous function, establishing that, for greater than or equal to 2, the free energy is a continuous function of temperature T. Further analysis of the free energy indicates that it exhibits an unusual discontinuous first derivative at the critical temperature (, .)
The spin-spin correlation between sites (in general, m and n) on the tree was found to have a transition point when considered at the vertices (e.g. A and Ā, its reflection), their respective neighboring sites (such as B and its reflection), and between sites adjacent to the top and bottom extreme vertices of the two trees (e.g. A and B), as may be determined from
where is equal to the number of bonds, is the number of graphs counted for odd vertices with even intermediate sites (see cited methodologies and references for detailed calculations), is the multiplicity resulting from two-valued spin possibilities and the partition function is derived from . (Note: is consistent with the referenced literature in this section and is equivalent to or utilized above and in earlier sections; it is valued at .) The critical temperature is given by
The critical temperature for this model is only determined by the branching ratio and the site-to-site interaction energy , a fact which may have direct implications associated with neural structure vs. its function (in that it relates the energies of interaction and branching ratio to its transitional behavior.) For example, a relationship between the transition behavior of activities of neural networks between sleeping and wakeful states (which may correlate with a spin-spin type of phase transition) in terms of changes in neural interconnectivity () and/or neighbor-to-neighbor interactions (), over time, is just one possible avenue suggested for further experimental investigation into such a phenomenon. In any case, for this Ising model it was established, that “the stability of the long-range correlation increases with increasing or increasing .”
For this topology, the spin-spin correlation was found to be zero between the extreme vertices and the central sites at which the two trees (or branches) are joined (i.e. between A and individually C, D, or E.) This behavior is explained to be due to the fact that, as k increases, the number of links increases exponentially (between the extreme vertices) and so even though the contribution to spin correlations decrease exponentially, the correlation between sites such as the extreme vertex (A) in one tree and the extreme vertex in the joined tree (Ā) remains finite (above the critical temperature.) In addition, A and B also exhibit a non-vanishing correlation (as do their reflections) thus lending itself to, for B level sites (with A level), being considered “clusters” which tend to exhibit synchronization of firing.
Based upon a review of other classical network models as a comparison, the Ising model on a closed Cayley tree was determined to be the first classical statistical mechanical model to demonstrate both local and long-range sites with non-vanishing spin-spin correlations, while at the same time exhibiting intermediate sites with zero correlation, which indeed was a relevant matter for large neural networks at the time of its consideration. The model's behavior is also of relevance for any other divergent-convergent tree physical (or biological) system exhibiting a closed Cayley tree topology with an Ising-type of interaction. This topology should not be ignored since its behavior for Ising models has been solved exactly, and presumably nature will have found a way of taking advantage of such simple symmetries at many levels of its designs.
early on noted the possibility of interrelationships between (1) the classical large neural network model (with similar coupled divergent-convergent topologies) with (2) an underlying statistical quantum mechanical model (independent of topology and with persistence in fundamental quantum states):
It was a natural and common belief among early neurophysicists (e.g. Umezawa, Krizan, Barth, etc.) that classical neural models (including those with statistical mechanical aspects) will one day have to be integrated with quantum physics (with quantum statistical aspects), similar perhaps to how the domain of chemistry has historically integrated itself into quantum physics via quantum chemistry.
Several additional statistical mechanical problems of interest remain to be solved for the closed Cayley tree, including the time-dependent case and the external field situation, as well as theoretical efforts aimed at understanding interrelationships with underlying quantum constituents and their physics.
Numerical simulation
The Ising model can often be difficult to evaluate numerically if there are many states in the system. Consider an Ising model with
L = |Λ|: the total number of sites on the lattice,
σj ∈ {−1, +1}: an individual spin site on the lattice, j = 1, ..., L,
S ∈ {−1, +1}L: state of the system.
Since every spin site has ±1 spin, there are $2^L$ different states that are possible. This motivates simulating the Ising model using Monte Carlo methods.
The Hamiltonian that is commonly used to represent the energy of the model when using Monte Carlo methods is:
$$H(\sigma) = -J \sum_{\langle i\,j \rangle} \sigma_i \sigma_j - h \sum_j \sigma_j .$$
Furthermore, the Hamiltonian is further simplified by assuming zero external field h, since many questions that are posed to be solved using the model can be answered in the absence of an external field. This leads us to the following energy equation for state σ:
$$H(\sigma) = -J \sum_{\langle i\,j \rangle} \sigma_i \sigma_j .$$
Given this Hamiltonian, quantities of interest such as the specific heat or the magnetization of the magnet at a given temperature can be calculated.
Metropolis algorithm
The Metropolis–Hastings algorithm is the most commonly used Monte Carlo algorithm to calculate Ising model estimations. The algorithm first chooses selection probabilities g(μ, ν), which represent the probability that state ν is selected by the algorithm out of all states, given that one is in state μ. It then uses acceptance probabilities A(μ, ν) so that detailed balance is satisfied. If the new state ν is accepted, then we move to that state and repeat with selecting a new state and deciding to accept it. If ν is not accepted then we stay in μ. This process is repeated until some stopping criterion is met, which for the Ising model is often when the lattice becomes ferromagnetic, meaning all of the sites point in the same direction.
When implementing the algorithm, one must ensure that g(μ, ν) is selected such that ergodicity is met. In thermal equilibrium a system's energy only fluctuates within a small range. This is the motivation behind the concept of single-spin-flip dynamics, which states that in each transition, we will only change one of the spin sites on the lattice. Furthermore, by using single-spin-flip dynamics, one can get from any state to any other state by flipping each site that differs between the two states one at a time. The maximum amount of change between the energy of the present state, Hμ and any possible new state's energy Hν (using single-spin-flip dynamics) is 2J between the spin we choose to "flip" to move to the new state and that spin's neighbor. Thus, in a 1D Ising model, where each site has two neighbors (left and right), the maximum difference in energy would be 4J. Let c represent the lattice coordination number; the number of nearest neighbors that any lattice site has. We assume that all sites have the same number of neighbors due to periodic boundary conditions. It is important to note that the Metropolis–Hastings algorithm does not perform well around the critical point due to critical slowing down. Other techniques such as multigrid methods, Niedermayer's algorithm, the Swendsen–Wang algorithm, or the Wolff algorithm are required in order to resolve the model near the critical point; a requirement for determining the critical exponents of the system.
Specifically for the Ising model and using single-spin-flip dynamics, one can establish the following. Since there are L total sites on the lattice, using single-spin-flip as the only way we transition to another state, we can see that there are a total of L new states ν reachable from our present state μ. The algorithm assumes that the selection probabilities are equal over the L states: g(μ, ν) = 1/L. Detailed balance tells us that the following equation must hold:
$$P(\mu)\, P(\mu \to \nu) = P(\nu)\, P(\nu \to \mu).$$
Thus, we want to select the acceptance probability for our algorithm to satisfy
$$\frac{A(\mu, \nu)}{A(\nu, \mu)} = \frac{P_\beta(\nu)}{P_\beta(\mu)} = e^{-\beta (H_\nu - H_\mu)} .$$
If Hν > Hμ, then A(ν, μ) > A(μ, ν). Metropolis sets the larger of A(μ, ν) or A(ν, μ) to be 1. By this reasoning the acceptance algorithm is:
$$A(\mu, \nu) = \begin{cases} e^{-\beta (H_\nu - H_\mu)}, & \text{if } H_\nu - H_\mu > 0, \\ 1, & \text{otherwise}. \end{cases}$$
The basic form of the algorithm is as follows:
Pick a spin site using selection probability g(μ, ν) and calculate the contribution to the energy involving this spin.
Flip the value of the spin and calculate the new contribution.
If the new energy is less, keep the flipped value.
If the new energy is more, only keep with probability
$$e^{-\beta (H_\nu - H_\mu)} .$$
Repeat.
The change in energy Hν − Hμ only depends on the value of the spin and its nearest graph neighbors. So if the graph is not too connected, the algorithm is fast. This process will eventually produce a sample from the Boltzmann distribution.
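A minimal sketch of the full procedure for the two-dimensional zero-field model with J = 1 and periodic boundary conditions (a plain-Python illustration, not an optimized implementation; near the critical point the cluster algorithms named above are preferable):

```python
import numpy as np

def metropolis_ising(L=32, beta=0.5, n_sweeps=500, seed=0):
    """Single-spin-flip Metropolis simulation of the 2D Ising model
    (J = 1, h = 0) with periodic boundary conditions."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for _ in range(L * L):          # one sweep = L*L attempted flips
            i, j = rng.integers(0, L, size=2)
            # Energy change of flipping spin (i, j): dE = 2*s_ij*(sum of neighbours)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nn
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1       # accept the flip
    return spins

spins = metropolis_ising(beta=0.5)      # beta above the critical ~0.4407
print("magnetization per site:", spins.mean())   # near +1 or -1 when ordered
```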
As a Markov chain
It is possible to view the Ising model as a Markov chain, as the immediate probability Pβ(ν) of transitioning to a future state ν only depends on the present state μ. The Metropolis algorithm is actually a version of a Markov chain Monte Carlo simulation, and since we use single-spin-flip dynamics in the Metropolis algorithm, every state can be viewed as having links to exactly L other states, where each transition corresponds to flipping a single spin site to the opposite value. Furthermore, since the change in the energy Hσ only depends on the nearest-neighbor interaction strength J, the Ising model and its variants, such as the Sznajd model, can be seen as a form of a voter model for opinion dynamics.
Solutions
One dimension
The thermodynamic limit exists as long as the interaction decays as $J_{ij} \sim |i - j|^{-\alpha}$ with α > 1.
In the case of ferromagnetic interaction with 1 < α < 2, Dyson proved, by comparison with the hierarchical case, that there is a phase transition at small enough temperature.
In the case of ferromagnetic interaction $J_{ij} \sim |i - j|^{-2}$ (i.e. α = 2), Fröhlich and Spencer proved that there is a phase transition at small enough temperature (in contrast with the hierarchical case).
In the case of interaction $J_{ij} \sim |i - j|^{-\alpha}$ with α > 2 (which includes the case of finite-range interactions), there is no phase transition at any positive temperature (i.e. finite β), since the free energy is analytic in the thermodynamic parameters.
In the case of nearest neighbor interactions, E. Ising provided an exact solution of the model. At any positive temperature (i.e. finite β) the free energy is analytic in the thermodynamics parameters, and the truncated two-point spin correlation decays exponentially fast. At zero temperature (i.e. infinite β), there is a second-order phase transition: the free energy is infinite, and the truncated two-point spin correlation does not decay (remains constant). Therefore, T = 0 is the critical temperature of this case. Scaling formulas are satisfied.
Ising's exact solution
In the nearest neighbor case (with periodic or free boundary conditions) an exact solution is available. The Hamiltonian of the one-dimensional Ising model on a lattice of L sites with free boundary conditions is
$$H(\sigma) = -J \sum_{i=1}^{L-1} \sigma_i \sigma_{i+1} - h \sum_{i=1}^{L} \sigma_i,$$
where J and h can be any number, since in this simplified case J is a constant representing the interaction strength between the nearest neighbors and h is the constant external magnetic field applied to lattice sites. Then the
free energy is
$$f(\beta, h) = -\lim_{L \to \infty} \frac{1}{\beta L} \ln Z(\beta) = -\frac{1}{\beta} \ln\left( e^{\beta J} \cosh \beta h + \sqrt{e^{2 \beta J} \sinh^2 \beta h + e^{-2 \beta J}} \right),$$
and the spin-spin correlation (i.e. the covariance) is
$$\langle \sigma_i \sigma_j \rangle - \langle \sigma_i \rangle \langle \sigma_j \rangle = C(\beta)\, e^{-c(\beta)\, |i - j|},$$
where C(β) and c(β) are positive functions for T > 0. For T → 0, though, the inverse correlation length c(β) vanishes.
Proof
The proof of this result is a simple computation.
If h = 0, it is very easy to obtain the free energy in the case of free boundary condition, i.e. when
$$H(\sigma) = -J \sum_{i=1}^{L-1} \sigma_i \sigma_{i+1} .$$
Then the model factorizes under the change of variables
$$\sigma'_j = \sigma_j \sigma_{j-1}, \qquad j \geq 2 .$$
This gives
$$Z(\beta) = 2 \prod_{j=2}^{L} \sum_{\sigma'_j = \pm 1} e^{\beta J \sigma'_j} = 2 \left( 2 \cosh \beta J \right)^{L-1} .$$
Therefore, the free energy is
$$f(\beta, 0) = -\frac{1}{\beta} \ln\left( 2 \cosh \beta J \right)$$
in the thermodynamic limit. With the same change of variables
$$\langle \sigma_i \sigma_j \rangle = \left( \tanh \beta J \right)^{|i - j|},$$
hence it decays exponentially as soon as T ≠ 0; but for T = 0, i.e. in the limit β → ∞, there is no decay.
If h ≠ 0 we need the transfer matrix method. For periodic boundary conditions the reasoning is the following. The partition function is
$$Z(\beta) = \sum_{\sigma_1, \ldots, \sigma_L} V_{\sigma_1 \sigma_2} V_{\sigma_2 \sigma_3} \cdots V_{\sigma_L \sigma_1} .$$
The coefficients $V_{\sigma \sigma'}$ can be seen as the entries of a matrix. There are different possible choices: a convenient one (because the matrix is symmetric) is
$$V_{\sigma \sigma'} = e^{\beta J \sigma \sigma' + \frac{\beta h}{2} (\sigma + \sigma')}$$
or
$$V = \begin{pmatrix} e^{\beta (J + h)} & e^{-\beta J} \\ e^{-\beta J} & e^{\beta (J - h)} \end{pmatrix} .$$
In matrix formalism
$$Z(\beta) = \operatorname{Tr} V^L = \lambda_1^L + \lambda_2^L = \lambda_1^L \left[ 1 + \left( \frac{\lambda_2}{\lambda_1} \right)^{L} \right],$$
where λ1 is the highest eigenvalue of V, while $\lambda_2$ is the other eigenvalue:
$$\lambda_1 = e^{\beta J} \cosh \beta h + \sqrt{e^{2 \beta J} \sinh^2 \beta h + e^{-2 \beta J}},$$
and $|\lambda_2| < \lambda_1$. This gives the formula of the free energy above. In the thermodynamic limit for the non-interaction case (J = 0), we get
$$f(\beta, h) = -\frac{1}{\beta} \ln\left( 2 \cosh \beta h \right)$$
as the answer for the open-boundary Ising model.
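The eigenvalue computation is easy to check numerically; a minimal sketch (parameter values illustrative), using the symmetric convention for V given above:

```python
import numpy as np

def free_energy_density(beta, J, h):
    """Free energy per site of the 1D chain from the transfer matrix;
    in the thermodynamic limit only the largest eigenvalue survives,
    so f = -ln(lambda_1) / beta."""
    s = np.array([1.0, -1.0])
    V = np.exp(beta * J * np.outer(s, s) + beta * h * (s[:, None] + s[None, :]) / 2)
    return -np.log(np.linalg.eigvalsh(V).max()) / beta

beta, J, h = 1.0, 1.0, 0.3
lam1 = (np.exp(beta * J) * np.cosh(beta * h)
        + np.sqrt(np.exp(2 * beta * J) * np.sinh(beta * h) ** 2 + np.exp(-2 * beta * J)))
print(free_energy_density(beta, J, h))   # numerical diagonalization
print(-np.log(lam1) / beta)              # closed form above; the two agree
```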
Comments
The energy of the lowest state is −JL, when all the spins are the same. For any other configuration, the extra energy is equal to 2J times the number of sign changes that are encountered when scanning the configuration from left to right.
If we designate the number of sign changes in a configuration as k, the difference in energy from the lowest energy state is 2kJ. Since the energy is additive in the number of flips, the probability p of having a spin-flip at each position is independent. The ratio of the probability of finding a flip to the probability of not finding one is the Boltzmann factor:
$$\frac{p}{1-p} = e^{-2\beta J}.$$
The problem is reduced to independent biased coin tosses. This essentially completes the mathematical description.
From the description in terms of independent tosses, the statistics of the model for long lines can be understood. The line splits into domains. Each domain is of average length exp(2βJ). The length of a domain is distributed exponentially, since there is a constant probability at any step of encountering a flip. The domains never become infinite, so a long system is never magnetized. Each step reduces the correlation between a spin and its neighbor by an amount proportional to p, so the correlations fall off exponentially.
The partition function is the volume of configurations, each configuration weighted by its Boltzmann weight. Since each configuration is described by the sign-changes, the partition function factorizes:
$$Z(\beta) = 2\, e^{\beta J(L-1)}\sum_{k=0}^{L-1}\binom{L-1}{k}\, e^{-2\beta J k} = 2\, e^{\beta J(L-1)}\left(1 + e^{-2\beta J}\right)^{L-1}.$$
The logarithm divided by L is the free energy density:
$$\beta f(\beta) = -\beta J - \ln\left(1 + e^{-2\beta J}\right),$$
which is analytic away from β = ∞. A sign of a phase transition is a non-analytic free energy, so the one-dimensional model does not have a phase transition.
One-dimensional solution with transverse field
To express the Ising Hamiltonian using a quantum mechanical description of spins, we replace the spin variables with their respective Pauli matrices. However, depending on the direction of the magnetic field, we can create a transverse-field or longitudinal-field Hamiltonian. The transverse-field Hamiltonian is given by
$$H(\sigma) = -J\sum_{i}\sigma^z_i\sigma^z_{i+1} - h\sum_{i}\sigma^x_i.$$
The transverse-field model experiences a phase transition between an ordered and disordered regime at J ~ h. This can be shown by a mapping of Pauli matrices
$$\tau^z_i = \prod_{j\le i}\sigma^x_j, \qquad \tau^x_i = \sigma^z_i\,\sigma^z_{i+1}.$$
Upon rewriting the Hamiltonian in terms of these change-of-basis matrices, we obtain
$$H(\tau) = -h\sum_{i}\tau^z_i\tau^z_{i+1} - J\sum_{i}\tau^x_i.$$
Since the roles of h and J are switched, the Hamiltonian undergoes a transition at J = h.
Renormalization
When there is no external field, we can derive a functional equation that $f(\beta)$ satisfies using renormalization. Specifically, let $Z_N(K)$ be the partition function with $N$ sites and coupling $K = \beta J$. Summing over every other spin, each decimated spin $\sigma$ sitting between neighbors $\sigma_L$ and $\sigma_R$ contributes
$$\sum_{\sigma=\pm1} e^{K\sigma(\sigma_L + \sigma_R)} = 2\cosh\left(K(\sigma_L + \sigma_R)\right).$$
Now, since the cosh function is even, we can rewrite this as $A(K)\, e^{K'\sigma_L\sigma_R}$ with
$$K' = \tfrac{1}{2}\ln\cosh 2K, \qquad A(K) = 2\sqrt{\cosh 2K}.$$
Now we have a self-similarity relation:
$$Z_N(K) = A(K)^{N/2}\, Z_{N/2}(K').$$
Taking the limit, we obtain
$$f(K) = \tfrac{1}{2}\ln A(K) + \tfrac{1}{2} f(K'),$$
where $f(K) = \lim_{N\to\infty}\frac{1}{N}\ln Z_N(K)$.
When $K$ is small, we have $f(K) \approx \ln 2$, so we can numerically evaluate $f$ by iterating the functional equation until $K$ is small.
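A minimal sketch of this iteration (the coupling value and tolerance are illustrative), compared against the exact answer f(K) = ln(2 cosh K) obtained earlier:

```python
import numpy as np

def log_Z_per_site(K, tol=1e-12):
    """Evaluate f(K) by iterating the decimation relation derived above:
    f(K) = (1/2) ln A(K) + (1/2) f(K'), with K' = (1/2) ln cosh 2K and
    A(K) = 2 sqrt(cosh 2K); when K is small, f(K) ~ ln 2 terminates it."""
    acc, weight = 0.0, 1.0
    while K > tol:
        acc += weight * 0.5 * np.log(2.0 * np.sqrt(np.cosh(2.0 * K)))
        weight *= 0.5
        K = 0.5 * np.log(np.cosh(2.0 * K))
    return acc + weight * np.log(2.0)

K = 1.3
print(log_Z_per_site(K))            # iterated functional equation
print(np.log(2.0 * np.cosh(K)))     # exact result for comparison
```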
Two dimensions
In the ferromagnetic case there is a phase transition. At low temperature, the Peierls argument proves positive magnetization for the nearest neighbor case and then, by the Griffiths inequality, also when longer range interactions are added. Meanwhile, at high temperature, the cluster expansion gives analyticity of the thermodynamic functions.
In the nearest-neighbor case, the free energy was exactly computed by Onsager. The spin-spin correlation functions were computed by McCoy and Wu.
Onsager's exact solution
Onsager obtained the following analytical expression for the free energy of the Ising model on the anisotropic square lattice when the magnetic field $h = 0$ in the thermodynamic limit, as a function of temperature and the horizontal and vertical interaction energies $J_1$ and $J_2$, respectively:
$$-\beta f = \ln 2 + \frac{1}{8\pi^2}\int_0^{2\pi}\!\!\int_0^{2\pi} \ln\left[\cosh 2\beta J_1\,\cosh 2\beta J_2 - \sinh 2\beta J_1\cos\theta_1 - \sinh 2\beta J_2\cos\theta_2\right]d\theta_1\, d\theta_2.$$
From this expression for the free energy, all thermodynamic functions of the model can be calculated by using an appropriate derivative. The 2D Ising model was the first model to exhibit a continuous phase transition at a positive temperature. It occurs at the temperature $T_c$ which solves the equation
$$\sinh\!\left(\frac{2J_1}{k T_c}\right)\sinh\!\left(\frac{2J_2}{k T_c}\right) = 1.$$
In the isotropic case when the horizontal and vertical interaction energies are equal, $J_1 = J_2 = J$, the critical temperature occurs at the following point:
$$T_c = \frac{2J}{k\,\ln\left(1+\sqrt{2}\right)} \approx 2.269185\,\frac{J}{k}.$$
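A quick numeric check of the critical condition (the root-finding approach is incidental):

```python
import numpy as np
from scipy.optimize import brentq

# Isotropic critical point: sinh(2 beta_c J)^2 = 1, so kTc/J = 2/ln(1 + sqrt(2)).
J = 1.0
beta_c = brentq(lambda b: np.sinh(2 * b * J) ** 2 - 1.0, 0.1, 1.0)
print(1.0 / beta_c)                       # ~2.269185
print(2.0 / np.log(1.0 + np.sqrt(2.0)))   # closed form, same value
```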
When the interaction energies $J_1$, $J_2$ are both negative, the Ising model becomes an antiferromagnet. Since the square lattice is bipartite, it is invariant under this change when the magnetic field $h = 0$, so the free energy and critical temperature are the same for the antiferromagnetic case. For the triangular lattice, which is not bipartite, the ferromagnetic and antiferromagnetic Ising model behave notably differently. Specifically, around a triangle, it is impossible to make all 3 spin-pairs antiparallel, so the antiferromagnetic Ising model cannot reach the minimal energy state. This is an example of geometric frustration.
Transfer matrix
Start with an analogy with quantum mechanics. The Ising model on a long periodic lattice has a partition function
Think of the i direction as space, and the j direction as time. This is an independent sum over all the values that the spins can take at each time slice. This is a type of path integral; it is the sum over all spin histories.
A path integral can be rewritten as a Hamiltonian evolution. The Hamiltonian steps through time by performing a unitary rotation between time t and time t + Δt:
$$U = e^{-i H\,\Delta t}.$$
The product of the U matrices, one after the other, is the total time evolution operator, which is the path integral we started with:
$$U_{\mathrm{total}} = U^{N},$$
where N is the number of time slices. The sum over all paths is given by a product of matrices; each matrix element is the transition probability from one slice to the next.
Similarly, one can divide the sum over all partition function configurations into slices, where each slice is the one-dimensional configuration at time 1. This defines the transfer matrix:
The configuration in each slice is a one-dimensional collection of spins. At each time slice, T has matrix elements between two configurations of spins, one in the immediate future and one in the immediate past. These two configurations are C1 and C2, and they are all one-dimensional spin configurations. We can think of the vector space that T acts on as all complex linear combinations of these. Using quantum mechanical notation:
where each basis vector is a spin configuration of a one-dimensional Ising model.
Like the Hamiltonian, the transfer matrix acts on all linear combinations of states. The partition function is a matrix function of T, which is defined by the sum over all histories which come back to the original configuration after N steps:
$$Z = \operatorname{Tr}\, T^{N}.$$
Since this is a matrix equation, it can be evaluated in any basis. So if we can diagonalize the matrix T, we can find Z.
T in terms of Pauli matrices
The contribution to the partition function for each past/future pair of configurations on a slice is the sum of two terms. There is the number of spin flips in the past slice and there is the number of spin flips between the past and future slice. Define an operator on configurations which flips the spin at site i:
$$\sigma^x_i.$$
In the usual Ising basis, acting on any linear combination of past configurations, it produces the same linear combination but with the spin at position i of each basis vector flipped.
Define a second operator which multiplies the basis vector by +1 and −1 according to the spin at position i:
$$\sigma^z_i.$$
T can be written in terms of these:
where A and B are constants which are to be determined so as to reproduce the partition function. The interpretation is that the statistical configuration at this slice contributes according to both the number of spin flips in the slice, and whether or not the spin at position i has flipped.
Spin flip creation and annihilation operators
Just as in the one-dimensional case, we will shift attention from the spins to the spin-flips. The σz term in T counts the number of spin flips, which we can write in terms of spin-flip creation and annihilation operators:
The first term flips a spin, so depending on the basis state it either:
moves a spin-flip one unit to the right
moves a spin-flip one unit to the left
produces two spin-flips on neighboring sites
destroys two spin-flips on neighboring sites.
Writing this out in terms of creation and annihilation operators:
Ignore the constant coefficients, and focus attention on the form. They are all quadratic. Since the coefficients are constant, this means that the T matrix can be diagonalized by Fourier transforms.
Carrying out the diagonalization produces the Onsager free energy.
Onsager's formula for spontaneous magnetization
Onsager famously announced the following expression for the spontaneous magnetization M of a two-dimensional Ising ferromagnet on the square lattice at two different conferences in 1948, though without proof
where and are horizontal and vertical interaction energies.
A complete derivation was only given in 1951 by C. N. Yang, using a limiting process of transfer matrix eigenvalues. The proof was subsequently greatly simplified in 1963 by Montroll, Potts, and Ward using Szegő's limit formula for Toeplitz determinants, by treating the magnetization as the limit of correlation functions.
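A short sketch evaluating the isotropic specialization M = (1 − sinh⁻⁴(2βJ))^{1/8} of this formula (temperature values illustrative):

```python
import numpy as np

def magnetization(T, J=1.0):
    """Spontaneous magnetization of the isotropic square lattice:
    M = (1 - sinh(2 beta J)^(-4))^(1/8) below Tc, and 0 above."""
    s = np.sinh(2.0 * J / T)
    arg = 1.0 - s ** (-4)
    return np.abs(arg) ** 0.125 if arg > 0 else 0.0

Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))
for T in (0.5 * Tc, 0.9 * Tc, 0.99 * Tc, 1.01 * Tc):
    print(T, magnetization(T))    # M drops to zero as T crosses Tc
```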
Minimal model
At the critical point, the two-dimensional Ising model is a two-dimensional conformal field theory. The spin and energy correlation functions are described by a minimal model, which has been exactly solved.
Three dimensions
In three as in two dimensions, the most studied case of the Ising model is the translation-invariant model on a cubic lattice with nearest-neighbor coupling in zero magnetic field. For many decades theoreticians searched for an analytical three-dimensional solution analogous to Onsager's solution in the two-dimensional case. No such solution has been found, although there is no proof that one cannot exist. In three dimensions, the Ising model was shown to have a representation in terms of non-interacting fermionic strings by Alexander Polyakov and Vladimir Dotsenko. This construction has been carried out on the lattice, but the continuum limit, conjecturally describing the critical point, is unknown.
In three as in two dimensions, Peierls' argument shows that there is a phase transition. This phase transition is rigorously known to be continuous (in the sense that correlation length diverges and the magnetization goes to zero), and is called the critical point. It is believed that the critical point can be described by a renormalization group fixed point of the Wilson-Kadanoff renormalization group transformation. It is also believed that the phase transition can be described by a three-dimensional unitary conformal field theory, as evidenced by Monte Carlo simulations, exact diagonalization results in quantum models, and quantum field theoretical arguments. Although it is an open problem to establish rigorously the renormalization group picture or the conformal field theory picture, theoretical physicists have used these two methods to compute the critical exponents of the phase transition, which agree with the experiments and with the Monte Carlo simulations. This conformal field theory describing the three-dimensional Ising critical point is under active investigation using the method of the conformal bootstrap. This method currently yields the most precise information about the structure of the critical theory (see Ising critical exponents).
In 2000, Sorin Istrail of Sandia National Laboratories proved that the spin glass Ising model on a nonplanar lattice is NP-complete. That is, assuming P ≠ NP, the general spin glass Ising model is exactly solvable only in planar cases, so solutions for dimensions higher than two are also intractable. Istrail's result only concerns the spin glass model with spatially varying couplings, and tells nothing about Ising's original ferromagnetic model with equal couplings.
Four dimensions and above
In any dimension, the Ising model can be productively described by a locally varying mean field. The field is defined as the average spin value over a large region, but not so large so as to include the entire system. The field still has slow variations from point to point, as the averaging volume moves. These fluctuations in the field are described by a continuum field theory in the infinite system limit.
Local field
The field H is defined as the long wavelength Fourier components of the spin variable, in the limit that the wavelengths are long. There are many ways to take the long wavelength average, depending on the details of how high wavelengths are cut off. The details are not too important, since the goal is to find the statistics of H and not the spins. Once the correlations in H are known, the long-distance correlations between the spins will be proportional to the long-distance correlations in H.
For any value of the slowly varying field H, the free energy (log-probability) is a local analytic function of H and its gradients. The free energy F(H) is defined to be the sum over all Ising configurations which are consistent with the long wavelength field. Since H is a coarse description, there are many Ising configurations consistent with each value of H, so long as not too much exactness is required for the match.
Since the allowed range of values of the spin in any region only depends on the values of H within one averaging volume from that region, the free energy contribution from each region only depends on the value of H there and in the neighboring regions. So F is a sum over all regions of a local contribution, which only depends on H and its derivatives.
By symmetry in H, only even powers contribute. By reflection symmetry on a square lattice, only even powers of gradients contribute. Writing out the first few terms in the free energy:
On a square lattice, symmetries guarantee that the coefficients Zi of the derivative terms are all equal. But even for an anisotropic Ising model, where the Zi in different directions are different, the fluctuations in H are isotropic in a coordinate system where the different directions of space are rescaled.
On any lattice, the derivative term
is a positive definite quadratic form, and can be used to define the metric for space. So any translationally invariant Ising model is rotationally invariant at long distances, in coordinates that make Zij = δij. Rotational symmetry emerges spontaneously at large distances just because there aren't very many low order terms. At higher order multicritical points, this accidental symmetry is lost.
Since βF is a function of a slowly spatially varying field, the probability of any field configuration is (omitting higher-order terms):
The statistical average of any product of H terms is equal to:
The denominator in this expression is called the partition function:
$$Z = \int e^{\beta F[H]}\, DH,$$
and the integral over all possible values of H is a statistical path integral. It integrates exp(βF) over all values of H, over all the long wavelength Fourier components of the spins. F is a "Euclidean" Lagrangian for the field H. It is similar to the Lagrangian of a scalar field in quantum field theory, the difference being that all the derivative terms enter with a positive sign, and there is no overall factor of i (thus "Euclidean").
Dimensional analysis
The form of F can be used to predict which terms are most important by dimensional analysis. Dimensional analysis is not completely straightforward, because the scaling of H needs to be determined.
In the generic case, choosing the scaling law for H is easy, since the only term that contributes is the first one,
This term is the most significant, but it gives trivial behavior. This form of the free energy is ultralocal, meaning that it is a sum of an independent contribution from each point. This is like the spin-flips in the one-dimensional Ising model. Every value of H at any point fluctuates completely independently of the value at any other point.
The scale of the field can be redefined to absorb the coefficient A, and then it is clear that A only determines the overall scale of fluctuations. The ultralocal model describes the long wavelength high temperature behavior of the Ising model, since in this limit the fluctuation averages are independent from point to point.
To find the critical point, lower the temperature. As the temperature goes down, the fluctuations in H go up because the fluctuations are more correlated. This means that the average of a large number of spins does not become small as quickly as if they were uncorrelated, because they tend to be the same. This corresponds to decreasing A in the system of units where H does not absorb A. The phase transition can only happen when the subleading terms in F can contribute, but since the first term dominates at long distances, the coefficient A must be tuned to zero. This is the location of the critical point:
where t is a parameter which goes through zero at the transition.
Since t is vanishing, fixing the scale of the field using this term makes the other terms blow up. Once t is small, the scale of the field can either be set to fix the coefficient of the H4 term or the (∇H)2 term to 1.
Magnetization
To find the magnetization, fix the scaling of H so that λ is one. Now the field H has dimension −d/4, so that $\int H^4\, d^dx$ is dimensionless, and Z has dimension 2 − d/2. In this scaling, the gradient term is only important at long distances for d ≤ 4. Above four dimensions, at long wavelengths, the overall magnetization is only affected by the ultralocal terms.
There is one subtle point. The field H is fluctuating statistically, and the fluctuations can shift the zero point of t. To see how, consider $H^4$ split in the following way:
$$H^4 = \left\langle H^2\right\rangle^2 + 2\left\langle H^2\right\rangle\left(H^2 - \left\langle H^2\right\rangle\right) + \left(H^2 - \left\langle H^2\right\rangle\right)^2.$$
The first term is a constant contribution to the free energy, and can be ignored. The second term is a finite shift in t. The third term is a quantity that scales to zero at long distances. This means that when analyzing the scaling of t by dimensional analysis, it is the shifted t that is important. This was historically very confusing, because the shift in t at any finite λ is finite, but near the transition t is very small. The fractional change in t is very large, and in units where t is fixed the shift looks infinite.
The magnetization is at the minimum of the free energy, and this is an analytic equation. In terms of the shifted t,
$$t\, H + \frac{\lambda}{6}\, H^3 = 0.$$
For t < 0, the minima are at H proportional to the square root of t. So Landau's catastrophe argument is correct in dimensions larger than 5. The magnetization exponent in dimensions higher than 5 is equal to the mean-field value.
When t is negative, the fluctuations about the new minimum are described by a new positive quadratic coefficient. Since this term always dominates, at temperatures below the transition the fluctuations again become ultralocal at long distances.
Fluctuations
To find the behavior of fluctuations, rescale the field to fix the gradient term. Then the length scaling dimension of the field is 1 − d/2. Now the field has constant quadratic spatial fluctuations at all temperatures. The scale dimension of the H2 term is 2, while the scale dimension of the H4 term is 4 − d. For d < 4, the H4 term has positive scale dimension. In dimensions higher than 4 it has negative scale dimensions.
This is an essential difference. In dimensions higher than 4, fixing the scale of the gradient term means that the coefficient of the H4 term is less and less important at longer and longer wavelengths. The dimension at which nonquadratic contributions begin to contribute is known as the critical dimension. In the Ising model, the critical dimension is 4.
In dimensions above 4, the critical fluctuations are described by a purely quadratic free energy at long wavelengths. This means that the correlation functions are all computable as Gaussian averages:
$$\left\langle H(x)\, H(y)\right\rangle \propto G(x - y),$$
valid when x − y is large. The function G(x − y) is the analytic continuation to imaginary time of the Feynman propagator, since the free energy is the analytic continuation of the quantum field action for a free scalar field. For dimensions 5 and higher, all the other correlation functions at long distances are then determined by Wick's theorem. All the odd moments are zero, by ± symmetry. The even moments are the sum over all partition into pairs of the product of G(x − y) for each pair.
where C is the proportionality constant. So knowing G is enough. It determines all the multipoint correlations of the field.
The critical two-point function
To determine the form of G, consider that the fields in a path integral obey the classical equations of motion derived by varying the free energy:
$$\nabla^2 H = t\, H.$$
This is valid at noncoincident points only, since the correlations of H are singular when points collide. H obeys classical equations of motion for the same reason that quantum mechanical operators obey them—its fluctuations are defined by a path integral.
At the critical point t = 0, this is Laplace's equation, which can be solved by Gauss's method from electrostatics. Define an electric field analog by
$$\vec E = \nabla G.$$
Away from the origin:
$$\nabla\cdot\vec E = 0,$$
since G is spherically symmetric in d dimensions, and E is the radial gradient of G. Integrating over a large d − 1 dimensional sphere,
$$E_r\; r^{d-1} = \mathrm{constant}.$$
This gives:
$$E_r = \frac{C}{r^{d-1}},$$
and G can be found by integrating with respect to r.
The constant C fixes the overall normalization of the field.
G(r) away from the critical point
When t does not equal zero, so that H is fluctuating at a temperature slightly away from critical, the two point function decays at long distances. The equation it obeys is altered:
$$\nabla^2 G = t\, G,$$
valid away from coincident points. For r small compared with $1/\sqrt{t}$, the solution diverges exactly the same way as in the critical case, but the long distance behavior is modified.
To see how, it is convenient to represent the two point function as an integral, introduced by Schwinger in the quantum field theory context:
$$G(x) = \int_0^{\infty} d\tau\; e^{-t\tau}\;\frac{1}{(4\pi\tau)^{d/2}}\; e^{-\frac{x^2}{4\tau}}.$$
This is G, since the Fourier transform of this integral is easy. Each fixed τ contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k.
This is the inverse of the operator ∇2 − t in k-space, acting on the unit function in k-space, which is the Fourier transform of a delta function source localized at the origin. So it satisfies the same equation as G with the same boundary conditions that determine the strength of the divergence at 0.
The interpretation of the integral representation over the proper time τ is that the two point function is the sum over all random walk paths that link position 0 to position x over time τ. The density of these paths at time τ at position x is Gaussian, but the random walkers disappear at a steady rate proportional to t so that the Gaussian at time τ is diminished in height by a factor that decreases steadily exponentially. In the quantum field theory context, these are the paths of relativistically localized quanta in a formalism that follows the paths of individual particles. In the pure statistical context, these paths still appear by the mathematical correspondence with quantum fields, but their interpretation is less directly physical.
The integral representation immediately shows that G(r) is positive, since it is represented as a weighted sum of positive Gaussians. It also gives the rate of decay at large r: a walk reaching position r at proper time τ is suppressed both by the Gaussian spreading factor $e^{-r^2/4\tau}$ and by the disappearance factor $e^{-t\tau}$, and optimizing over τ shows that the decay factor appropriate for position r is $e^{-\sqrt{t}\, r}$.
A heuristic approximation for G(r) is:
$$G(r) \approx \frac{e^{-\sqrt{t}\, r}}{r^{d-2}}.$$
This is not an exact form, except in three dimensions, where interactions between paths become important. The exact forms in high dimensions are variants of Bessel functions.
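A quick numeric check, not from the source, that the proper-time representation above reproduces the exact three-dimensional form e^{−√t r}/(4πr):

```python
import numpy as np
from scipy.integrate import quad

def G(r, t, d=3):
    """Two-point function from the Schwinger proper-time representation:
    G(r) = integral over tau of (4 pi tau)^(-d/2) exp(-r^2/(4 tau) - t*tau)."""
    f = lambda tau: (4 * np.pi * tau) ** (-d / 2) * np.exp(-r**2 / (4 * tau) - t * tau)
    val, _ = quad(f, 0, np.inf)
    return val

r, t = 2.0, 0.25
print(G(r, t))                                    # proper-time integral
print(np.exp(-np.sqrt(t) * r) / (4 * np.pi * r))  # exact d = 3 (Yukawa) form
```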
Symanzik polymer interpretation
The interpretation of the correlations as fixed size quanta travelling along random walks gives a way of understanding why the critical dimension of the H4 interaction is 4. The term H4 can be thought of as the square of the density of the random walkers at any point. In order for such a term to alter the finite order correlation functions, which only introduce a few new random walks into the fluctuating environment, the new paths must intersect. Otherwise, the square of the density is just proportional to the density and only shifts the H2 coefficient by a constant. But the intersection probability of random walks depends on the dimension, and random walks in dimension higher than 4 do not intersect.
The fractal dimension of an ordinary random walk is 2. The number of balls of size ε required to cover the path increase as ε−2. Two objects of fractal dimension 2 will intersect with reasonable probability only in a space of dimension 4 or less, the same condition as for a generic pair of planes. Kurt Symanzik argued that this implies that the critical Ising fluctuations in dimensions higher than 4 should be described by a free field. This argument eventually became a mathematical proof.
4 − ε dimensions – renormalization group
The Ising model in four dimensions is described by a fluctuating field, but now the fluctuations are interacting. In the polymer representation, intersections of random walks are marginally possible. In the quantum field continuation, the quanta interact.
The negative logarithm of the probability of any field configuration H is the free energy function
$$\beta F = \int d^4x \left[\frac{Z}{2}\,|\nabla H|^2 + \frac{t}{2}\, H^2 + \frac{\lambda}{4!}\, H^4\right].$$
The numerical factors are there to simplify the equations of motion. The goal is to understand the statistical fluctuations. Like any other non-quadratic path integral, the correlation functions have a Feynman expansion as particles travelling along random walks, splitting and rejoining at vertices. The interaction strength is parametrized by the classically dimensionless quantity λ.
Although dimensional analysis shows that both λ and Z are dimensionless, this is misleading. The long wavelength statistical fluctuations are not exactly scale invariant, and only become scale invariant when the interaction strength vanishes.
The reason is that there is a cutoff used to define H, and the cutoff defines the shortest wavelength. Fluctuations of H at wavelengths near the cutoff can affect the longer-wavelength fluctuations. If the system is scaled along with the cutoff, the parameters will scale by dimensional analysis, but then comparing parameters doesn't compare behavior because the rescaled system has more modes. If the system is rescaled in such a way that the short wavelength cutoff remains fixed, the long-wavelength fluctuations are modified.
Wilson renormalization
A quick heuristic way of studying the scaling is to cut off the H wavenumbers at a point Λ. Fourier modes of H with wavenumbers larger than Λ are not allowed to fluctuate. A rescaling of length that makes the whole system smaller increases all wavenumbers, and moves some fluctuations above the cutoff.
To restore the old cutoff, perform a partial integration over all the wavenumbers which used to be forbidden, but are now fluctuating. In Feynman diagrams, integrating over a fluctuating mode at wavenumber k links up lines carrying momentum k in a correlation function in pairs, with a factor of the inverse propagator.
Under rescaling, when the system is shrunk by a factor of (1 + b), the t coefficient scales up by a factor (1 + b)² by dimensional analysis. The change in t for infinitesimal b is 2bt. The other two coefficients are dimensionless and do not change at all.
The lowest order effect of integrating out can be calculated from the equations of motion:
$$\nabla^2 H = t\, H + \frac{\lambda}{6}\, H^3.$$
This equation is an identity inside any correlation function away from other insertions. After integrating out the modes with Λ < k < (1+b)Λ, it will be a slightly different identity.
Since the form of the equation will be preserved, to find the change in coefficients it is sufficient to analyze the change in the H3 term. In a Feynman diagram expansion, the H3 term in a correlation function inside a correlation has three dangling lines. Joining two of them at large wavenumber k gives a change H3 with one dangling line, so proportional to H:
The factor of 3 comes from the fact that the loop can be closed in three different ways.
The integral should be split into two parts:
$$\int \frac{d^4k}{(2\pi)^4}\,\frac{1}{k^2 + t} = \int \frac{d^4k}{(2\pi)^4}\,\frac{1}{k^2} \;-\; t\int \frac{d^4k}{(2\pi)^4}\,\frac{1}{k^2\left(k^2 + t\right)}.$$
The first part is not proportional to t, and in the equation of motion it can be absorbed by a constant shift in t. It is caused by the fact that the H3 term has a linear part. Only the second term, which is proportional to t, contributes to the critical scaling.
This new linear term adds to the first term on the left hand side, changing t by an amount proportional to t. The total change in t is the sum of the term from dimensional analysis and this second term from operator products:
$$\frac{dt}{db} = 2t - \lambda B\, t.$$
So t is rescaled, but its dimension is anomalous, it is changed by an amount proportional to the value of λ.
But λ also changes. The change in λ requires considering the lines splitting and then quickly rejoining. The lowest order process is one where one of the three lines from H3 splits into three, which quickly joins with one of the other lines from the same vertex. The correction to the vertex is
The numerical factor is three times bigger because there is an extra factor of three in choosing which of the three new lines to contract. So
$$\frac{d\lambda}{db} = -3B\,\lambda^2.$$
These two equations together define the renormalization group equations in four dimensions:
$$\frac{dt}{db} = 2t - \lambda B\, t, \qquad \frac{d\lambda}{db} = -3B\,\lambda^2.$$
The coefficient B is determined by the one-loop integral over the momentum shell, and is proportional to the area of a three-dimensional sphere of radius Λ, times the width of the integration region bΛ, divided by Λ⁴:
$$B\, b \;\propto\; \frac{\left(2\pi^2\Lambda^3\right)\left(b\Lambda\right)}{(2\pi)^4\,\Lambda^4}.$$
In other dimensions, the constant B changes, but the same constant appears both in the t flow and in the coupling flow. The reason is that the derivative with respect to t of the closed loop with a single vertex is a closed loop with two vertices. This means that the only difference between the scaling of the coupling and the t is the combinatorial factors from joining and splitting.
Wilson–Fisher fixed point
To investigate three dimensions starting from the four-dimensional theory should be possible, because the intersection probabilities of random walks depend continuously on the dimensionality of the space. In the language of Feynman graphs, the coupling does not change very much when the dimension is changed.
The process of continuing away from dimension 4 is not completely well defined without a prescription for how to do it. The prescription is only well defined on diagrams. It replaces the Schwinger representation in dimension 4 with the Schwinger representation in dimension 4 − ε defined by:
$$G(x - y) = \int_0^{\infty} d\tau\; e^{-t\tau}\;\frac{1}{(4\pi\tau)^{(4-\varepsilon)/2}}\; e^{-\frac{(x-y)^2}{4\tau}}.$$
In dimension 4 − ε, the coupling λ has positive scale dimension ε, and this must be added to the flow.
The coefficient B is dimension dependent, but it will cancel. The fixed point for λ is no longer zero, but at:
$$\lambda = \frac{\varepsilon}{3B},$$
where the scale dimension of t is altered by an amount λB = ε/3.
The magnetization exponent is altered proportionately to:
$$\frac{1}{2} - \frac{\varepsilon}{6},$$
which is .333 in 3 dimensions (ε = 1) and .166 in 2 dimensions (ε = 2). This is not so far off from the measured exponent .308 and the Onsager two dimensional exponent .125.
Infinite dimensions – mean field
The behavior of an Ising model on a fully connected graph may be completely understood by mean-field theory. This type of description is appropriate to very-high-dimensional square lattices, because then each site has a very large number of neighbors.
The idea is that if each spin is connected to a large number of spins, only the average ratio of + spins to − spins is important, since the fluctuations about this mean will be small. The mean field H is the average fraction of spins which are + minus the average fraction of spins which are −. The energy cost of flipping a single spin in the mean field H is ±2JNH. It is convenient to redefine J to absorb the factor N, so that the limit N → ∞ is smooth. In terms of the new J, the energy cost for flipping a spin is ±2JH.
This energy cost gives the ratio of probability p that the spin is + to the probability 1−p that the spin is −. This ratio is the Boltzmann factor:
$$\frac{p}{1-p} = e^{2\beta J H},$$
so that
$$p = \frac{1}{1 + e^{-2\beta J H}}.$$
The mean value of the spin is given by averaging 1 and −1 with the weights p and 1 − p, so the mean value is 2p − 1. But this average is the same for all spins, and is therefore equal to H itself:
$$H = 2p - 1 = \tanh\left(\beta J H\right).$$
The solutions to this equation are the possible consistent mean fields. For βJ < 1 there is only the one solution at H = 0. For bigger values of β there are three solutions, and the solution at H = 0 is unstable.
The instability means that increasing the mean field above zero a little bit produces a statistical fraction of spins which are + which is bigger than the value of the mean field. So a mean field which fluctuates above zero will produce an even greater mean field, and will eventually settle at the stable solution. This means that for temperatures below the critical value βJ = 1 the mean-field Ising model undergoes a phase transition in the limit of large N.
Above the critical temperature, fluctuations in H are damped because the mean field restores the fluctuation to zero field. Below the critical temperature, the mean field is driven to a new equilibrium value, which is either the positive H or negative H solution to the equation.
For βJ = 1 + ε, just below the critical temperature, the value of H can be calculated from the Taylor expansion of the hyperbolic tangent:
$$H = \tanh\left(\beta J H\right) \approx (1+\varepsilon)\, H - \frac{\left[(1+\varepsilon)\, H\right]^3}{3}.$$
Dividing by H to discard the unstable solution at H = 0, the stable solutions are:
$$H = \pm\sqrt{3\varepsilon}$$
to leading order in ε.
The spontaneous magnetization H grows near the critical point as the square root of the change in temperature. This is true whenever H can be calculated from the solution of an analytic equation which is symmetric between positive and negative values, which led Landau to suspect that all Ising type phase transitions in all dimensions should follow this law.
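A minimal sketch solving the self-consistency condition H = tanh(βJH) by fixed-point iteration (the starting field and iteration count are illustrative); just below the transition the result approaches √(3ε):

```python
import numpy as np

def mean_field(beta_J, H0=0.5, iters=200):
    """Iterate H -> tanh(beta*J*H); starting from a small positive field,
    the flow settles on the stable nonzero root when beta*J > 1."""
    H = H0
    for _ in range(iters):
        H = np.tanh(beta_J * H)
    return H

for beta_J in (0.8, 1.0, 1.05, 1.2):
    print(beta_J, mean_field(beta_J))   # zero above Tc, nonzero below

eps = 0.01
print(mean_field(1 + eps), np.sqrt(3 * eps))   # approximately equal near Tc
```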
The mean-field exponent is universal because changes in the character of solutions of analytic equations are always described by catastrophes in the Taylor series, which is a polynomial equation. By symmetry, the equation for H must only have odd powers of H on the right hand side. Changing β should only smoothly change the coefficients. The transition happens when the coefficient of H on the right hand side is 1. Near the transition:
$$H = \left(1 + A\varepsilon\right) H - B\, H^3 + \cdots.$$
Whatever A and B are, so long as neither of them is tuned to zero, the spontaneous magnetization will grow as the square root of ε. This argument can only fail if the free energy βF is either non-analytic or non-generic at the exact β where the transition occurs.
But the spontaneous magnetization in magnetic systems and the density in gases near the critical point are measured very accurately. The density and the magnetization in three dimensions have the same power-law dependence on the temperature near the critical point, but the behavior from experiments is:
$$H \propto \varepsilon^{0.308}.$$
The exponent is also universal, since it is the same in the Ising model as in the experimental magnet and gas, but it is not equal to the mean-field value. This was a great surprise.
This is also true in two dimensions, where
$$H \propto \varepsilon^{0.125}.$$
But there it was not a surprise, because it was predicted by Onsager.
Low dimensions – block spins
In three dimensions, the perturbative series from the field theory is an expansion in a coupling constant λ which is not particularly small. The effective size of the coupling at the fixed point is one over the branching factor of the particle paths, so the expansion parameter is about 1/3. In two dimensions, the perturbative expansion parameter is 2/3.
But renormalization can also be productively applied to the spins directly, without passing to an average field. Historically, this approach is due to Leo Kadanoff and predated the perturbative ε expansion.
The idea is to integrate out lattice spins iteratively, generating a flow in couplings. But now the couplings are lattice energy coefficients. The fact that a continuum description exists guarantees that this iteration will converge to a fixed point when the temperature is tuned to criticality.
Migdal–Kadanoff renormalization
Write the two-dimensional Ising model with an infinite number of possible higher order interactions. To keep spin reflection symmetry, only even powers contribute:
$$E = -\sum_{ij} J_{ij}\,\sigma_i\sigma_j - \sum_{ijkl} J_{ijkl}\,\sigma_i\sigma_j\sigma_k\sigma_l - \cdots.$$
By translation invariance, Jij is only a function of i-j. By the accidental rotational symmetry, at large i and j its size only depends on the magnitude of the two-dimensional vector i − j. The higher order coefficients are also similarly restricted.
The renormalization iteration divides the lattice into two parts – even spins and odd spins. The odd spins live on the odd-checkerboard lattice positions, and the even ones on the even-checkerboard. When the spins are indexed by the position (i,j), the odd sites are those with i + j odd and the even sites those with i + j even, and even sites are only connected to odd sites.
The two possible values of the odd spins will be integrated out, by summing over both possible values. This will produce a new free energy function for the remaining even spins, with new adjusted couplings. The even spins are again in a lattice, with axes tilted at 45 degrees to the old ones. Unrotating the system restores the old configuration, but with new parameters. These parameters describe the interaction between spins at distances larger.
Starting from the Ising model and repeating this iteration eventually changes all the couplings. When the temperature is higher than the critical temperature, the couplings will converge to zero, since the spins at large distances are uncorrelated. But when the temperature is critical, there will be nonzero coefficients linking spins at all orders. The flow can be approximated by only considering the first few terms. This truncated flow will produce better and better approximations to the critical exponents when more terms are included.
The simplest approximation is to keep only the usual J term, and discard everything else. This will generate a flow in J, analogous to the flow in t at the fixed point of λ in the ε expansion.
To find the change in J, consider the four neighbors of an odd site. These are the only spins which interact with it. The multiplicative contribution to the partition function from the sum over the two values of the spin at the odd site is:
$$e^{\beta J\left(N_+ - N_-\right)} + e^{-\beta J\left(N_+ - N_-\right)} = 2\cosh\left[\beta J\left(N_+ - N_-\right)\right],$$
where N± is the number of neighbors which are ±. Ignoring the factor of 2, the free energy contribution from this odd site is:
$$F\left(N_+ - N_-\right) = \ln\cosh\left[\beta J\left(N_+ - N_-\right)\right].$$
This includes nearest neighbor and next-nearest neighbor interactions, as expected, but also a four-spin interaction which is to be discarded. To truncate to nearest neighbor interactions, consider that the difference in energy between all spins the same and equal numbers + and − is:
$$\ln\cosh\left(4\beta J\right).$$
From nearest neighbor couplings, the difference in energy between all spins equal and staggered spins is 8J. The difference in energy between all spins equal and nonstaggered but net zero spin is 4J. Ignoring four-spin interactions, a reasonable truncation is the average of these two energies or 6J. Since each link will contribute to two odd spins, the right value to compare with the previous one is half that:
For small J, this quickly flows to zero coupling. Large J flows to large couplings. The magnetization exponent is determined from the slope of the equation at the fixed point.
Variants of this method produce good numerical approximations for the critical exponents when many terms are included, in both two and three dimensions.
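As a concrete illustration (an assumption, not the text's own truncation), the standard Migdal–Kadanoff prescription for the square lattice moves bonds and then decimates as in one dimension, giving the one-coupling recursion K′ = ½ ln cosh 4K; its unstable fixed point plays the role of the critical coupling:

```python
import numpy as np
from scipy.optimize import brentq

def step(K):
    """One assumed Migdal-Kadanoff step for the coupling K = beta*J:
    bond moving doubles K, then 1D decimation gives (1/2) ln cosh(4K)."""
    return 0.5 * np.log(np.cosh(4.0 * K))

# Small K flows to zero (disordered), large K grows (ordered); the unstable
# fixed point separating the two phases approximates the critical coupling.
K_star = brentq(lambda K: step(K) - K, 0.1, 1.0)
print("MK fixed point:", K_star)                        # ~0.3047
print("exact K_c    :", 0.5 * np.log(1 + np.sqrt(2)))   # ~0.4407 for comparison
```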
See also
ANNNI model
Binder parameter
Boltzmann machine
Conformal bootstrap
Construction of an irreducible Markov chain in the Ising model
Geometrically frustrated magnet
Classical Heisenberg model
Quantum Heisenberg model
Hopfield net
Ising critical exponents
J. C. Ward
Kuramoto model
Maximal evenness
Order operator
Potts model (common with Ashkin–Teller model)
Spin models
Square-lattice Ising model
Swendsen–Wang algorithm
t-J model
Two-dimensional critical Ising model
Wolff algorithm
XY model
Z N model
Footnotes
References
Ross Kindermann and J. Laurie Snell (1980), Markov Random Fields and Their Applications. American Mathematical Society. .
Kleinert, H (1989), Gauge Fields in Condensed Matter, Vol. I, "Superflow and Vortex Lines", pp. 1–742, Vol. II, "Stresses and Defects", pp. 743–1456, World Scientific (Singapore); Paperback (also available online: Vol. I and Vol. II)Kleinert, H and Schulte-Frohlinde, V (2001), Critical Properties of φ4-Theories, World Scientific (Singapore); Paperback (also available online)
Barry M. McCoy and Tai Tsun Wu (1973), The Two-Dimensional Ising Model. Harvard University Press, Cambridge Massachusetts,
John Palmer (2007), Planar Ising Correlations''. Birkhäuser, Boston, .
External links
Ising model at The Net Advance of Physics
Barry Arthur Cipra, "The Ising model is NP-complete", SIAM News, Vol. 33, No. 6; online edition (.pdf)
Science World article on the Ising Model
A dynamical 2D Ising java applet by UCSC
A dynamical 2D Ising java applet
A larger/more complicated 2D Ising java applet
“I sing well-tempered” The Ising Model: A simple model for critical behavior in a system of spins by Dirk Brockman, is an interactive simulation that allows users to export the working code to a presentation slide
Ising Model simulation by Enrique Zeleny, the Wolfram Demonstrations Project
Phase transitions on lattices
Three-dimensional proof for Ising Model impossible, Sandia researcher claims
Interactive Monte Carlo simulation of the Ising, XY and Heisenberg models with 3D graphics(requires WebGL compatible browser)
Ising Model code , image denoising example with Ising Model
David Tong's Lecture Notes provide a good introduction
The Cartoon Picture of Magnets That Has Transformed Science - Quanta Magazine article about Ising model
Simulation of the 2-dimensional Ising model in Julia: https://github.com/cossio/SquareIsingModel.jl
Spin models
Exactly solvable models
Statistical mechanics
Lattice models
NP-complete problems | Ising model | [
"Physics",
"Materials_science",
"Mathematics"
] | 16,432 | [
"Spin models",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Computational problems",
"Condensed matter physics",
"Statistical mechanics",
"Mathematical problems",
"NP-complete problems"
] |
292,800 | https://en.wikipedia.org/wiki/Dirac%20spinor | In quantum field theory, the Dirac spinor is the spinor that describes all known fundamental particles that are fermions, with the possible exception of neutrinos. It appears in the plane-wave solution to the Dirac equation, and is a certain combination of two Weyl spinors, specifically, a bispinor that transforms "spinorially" under the action of the Lorentz group.
Dirac spinors are important and interesting in numerous ways. Foremost, they are important as they do describe all of the known fundamental particle fermions in nature; this includes the electron and the quarks. Algebraically they behave, in a certain sense, as the "square root" of a vector. This is not readily apparent from direct examination, but it has slowly become clear over the last 60 years that spinorial representations are fundamental to geometry. For example, effectively all Riemannian manifolds can have spinors and spin connections built upon them, via the Clifford algebra. The Dirac spinor is specific to that of Minkowski spacetime and Lorentz transformations; the general case is quite similar.
This article is devoted to the Dirac spinor in the Dirac representation. This corresponds to a specific representation of the gamma matrices, and is best suited for demonstrating the positive and negative energy solutions of the Dirac equation. There are other representations, most notably the chiral representation, which is better suited for demonstrating the chiral symmetry of the solutions to the Dirac equation. The chiral spinors may be written as linear combinations of the Dirac spinors presented below; thus, nothing is lost or gained, other than a change in perspective with regards to the discrete symmetries of the solutions.
The remainder of this article is laid out in a pedagogical fashion, using notations and conventions specific to the standard presentation of the Dirac spinor in textbooks on quantum field theory. It focuses primarily on the algebra of the plane-wave solutions. The manner in which the Dirac spinor transforms under the action of the Lorentz group is discussed in the article on bispinors.
Definition
The Dirac spinor is the bispinor $u(\vec p)$ in the plane-wave ansatz
$$\psi(x) = u(\vec p)\, e^{-i p\cdot x}$$
of the free Dirac equation for a spinor with mass $m$,
$$\left(i\hbar\,\gamma^\mu\partial_\mu - m c\right)\psi = 0,$$
which, in natural units becomes
$$\left(i\,\gamma^\mu\partial_\mu - m\right)\psi = 0,$$
and with Feynman slash notation may be written
$$\left(i\,\partial\!\!\!/ - m\right)\psi = 0.$$
An explanation of terms appearing in the ansatz is given below.
The Dirac field is $\psi(x)$, a relativistic spin-1/2 field, or concretely a function on Minkowski space valued in $\mathbb{C}^4$, a four-component complex vector function.
The Dirac spinor related to a plane-wave with wave-vector $\vec p$ is $u(\vec p)$, a $\mathbb{C}^4$ vector which is constant with respect to position in spacetime but dependent on momentum $\vec p$.
The inner product on Minkowski space for vectors $x$ and $p$ is $p \cdot x = p_\mu x^\mu = E t - \vec p\cdot\vec x$.
The four-momentum of a plane wave is $p = (E_{\vec p}, \vec p)$ where $\vec p$ is arbitrary.
In a given inertial frame of reference, the coordinates are $x^\mu = (t, \vec x)$. These coordinates parametrize Minkowski space. In this article, when $x^\mu$ appears in an argument, the index is sometimes omitted.
The Dirac spinor for the positive-frequency solution can be written as
$$u(\vec p) = \begin{pmatrix} \phi \\ \dfrac{\vec\sigma\cdot\vec p}{E_{\vec p} + m}\,\phi \end{pmatrix},$$
where
$\phi$ is an arbitrary two-spinor, concretely a $\mathbb{C}^2$ vector.
$\vec\sigma$ is the Pauli vector,
$E_{\vec p}$ is the positive square root $E_{\vec p} = +\sqrt{m^2 + \vec p^{\,2}}$. For this article, the subscript $\vec p$ is sometimes omitted and the energy simply written $E$.
In natural units, when m² is added to p² or when m is added to $\partial\!\!\!/$, m means $mc$ in ordinary units; when m is added to E, m means $mc^2$ in ordinary units. When m is added to $x$ or to $t$ it means $\frac{mc}{\hbar}$ (which is called the inverse reduced Compton wavelength) in ordinary units.
Derivation from Dirac equation
The Dirac equation has the form
$$\left(-i\,\vec\alpha\cdot\vec\nabla + \beta m\right)\psi = i\,\frac{\partial\psi}{\partial t}.$$
In order to derive an expression for the four-spinor $\omega$, the matrices $\vec\alpha$ and $\beta$ must be given in concrete form. The precise form that they take is representation-dependent. For the entirety of this article, the Dirac representation is used. In this representation, the matrices are
$$\vec\alpha = \begin{pmatrix} \mathbf{0} & \vec\sigma \\ \vec\sigma & \mathbf{0} \end{pmatrix}, \qquad \beta = \begin{pmatrix} I_2 & \mathbf{0} \\ \mathbf{0} & -I_2 \end{pmatrix}.$$
These two 4×4 matrices are related to the Dirac gamma matrices by $\gamma^0 = \beta$ and $\gamma^k = \beta\,\alpha^k$. Note that $\mathbf{0}$ and $I_2$ are 2×2 matrices here.
The next step is to look for solutions of the form
$$\psi = \omega\, e^{-i p\cdot x} = \omega\, e^{-i\left(E t - \vec p\cdot\vec x\right)},$$
while at the same time splitting ω into two two-spinors:
$$\omega = \begin{pmatrix} \phi \\ \chi \end{pmatrix}.$$
Results
Using all of the above information to plug into the Dirac equation results in
$$E\begin{pmatrix} \phi \\ \chi \end{pmatrix} = \begin{pmatrix} m\, I_2 & \vec\sigma\cdot\vec p \\ \vec\sigma\cdot\vec p & -m\, I_2 \end{pmatrix}\begin{pmatrix} \phi \\ \chi \end{pmatrix}.$$
This matrix equation is really two coupled equations:
$$\left(E - m\right)\phi = \left(\vec\sigma\cdot\vec p\right)\chi, \qquad \left(E + m\right)\chi = \left(\vec\sigma\cdot\vec p\right)\phi.$$
Solve the 2nd equation for χ and one obtains
$$\omega = \begin{pmatrix} \phi \\ \dfrac{\vec\sigma\cdot\vec p}{E + m}\,\phi \end{pmatrix}.$$
Note that this solution needs to have $E = +\sqrt{p^2 + m^2}$ in order for it to be valid in a frame where the particle has $\vec p = 0$.
Derivation of the sign of the energy in this case. We consider the potentially problematic term $\dfrac{\vec\sigma\cdot\vec p}{E+m}\,\phi$.
If $E = +\sqrt{p^2 + m^2}$, clearly $\dfrac{\vec\sigma\cdot\vec p}{E+m} \to 0$ as $\vec p \to 0$.
On the other hand, let $\vec p = p\,\hat n$, with $\hat n$ a unit vector, and let $E = -\sqrt{p^2 + m^2}$. Then
$$\frac{\vec\sigma\cdot\vec p}{E + m} = \frac{p}{m - \sqrt{p^2 + m^2}}\;\vec\sigma\cdot\hat n \;\longrightarrow\; -\frac{2m}{p}\,\vec\sigma\cdot\hat n \quad\text{as } p \to 0,$$
which diverges. Hence the negative solution clearly has to be omitted, and $E = +\sqrt{p^2 + m^2}$. End derivation.
Assembling these pieces, the full positive energy solution is conventionally written as
$$\psi^{(+)}(x) = \sqrt{E + m}\begin{pmatrix} \phi \\ \dfrac{\vec\sigma\cdot\vec p}{E + m}\,\phi \end{pmatrix} e^{-i p\cdot x}.$$
The above introduces a normalization factor $\sqrt{E + m}$, derived in the next section.
Solving instead the 1st equation for $\phi$, a different set of solutions is found:
$$\omega = \begin{pmatrix} -\dfrac{\vec\sigma\cdot\vec p}{-E + m}\,\chi \\ \chi \end{pmatrix}.$$
In this case, one needs to enforce that $E = -\sqrt{p^2 + m^2}$ for this solution to be valid in a frame where the particle has $\vec p = 0$. The proof follows analogously to the previous case. This is the so-called negative energy solution. It can sometimes become confusing to carry around an explicitly negative energy, and so it is conventional to flip the sign on both the energy and the momentum, and to write this as
$$\psi^{(-)}(x) = \sqrt{E + m}\begin{pmatrix} \dfrac{\vec\sigma\cdot\vec p}{E + m}\,\chi \\ \chi \end{pmatrix} e^{+i p\cdot x}.$$
In further development, the -type solutions are referred to as the particle solutions, describing a positive-mass spin-1/2 particle carrying positive energy, and the -type solutions are referred to as the antiparticle solutions, again describing a positive-mass spin-1/2 particle, again carrying positive energy. In the laboratory frame, both are considered to have positive mass and positive energy, although they are still very much dual to each other, with the flipped sign on the antiparticle plane-wave suggesting that it is "travelling backwards in time". The interpretation of "backwards-time" is a bit subjective and imprecise, amounting to hand-waving when one's only evidence are these solutions. It does gain stronger evidence when considering the quantized Dirac field. A more precise meaning for these two sets of solutions being "opposite to each other" is given in the section on charge conjugation, below.
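A numerical sketch in the Dirac representation (the momentum and two-spinor values are illustrative), checking that the positive-energy spinor constructed above satisfies the momentum-space Dirac equation p̸ u = m u:

```python
import numpy as np

# Pauli matrices and Dirac-representation gamma matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, sk], [-sk, Z2]]) for sk in (s1, s2, s3)]

m = 1.0
p = np.array([0.3, -0.2, 0.5])
E = np.sqrt(m**2 + p @ p)                    # positive-energy branch
phi = np.array([1.0, 0.0], dtype=complex)    # arbitrary two-spinor
sp = p[0] * s1 + p[1] * s2 + p[2] * s3       # Pauli vector sigma . p
u = np.sqrt(E + m) * np.concatenate([phi, (sp @ phi) / (E + m)])

slash_p = E * g0 - sum(pk * gk for pk, gk in zip(p, gs))
print(np.allclose(slash_p @ u, m * u))       # True: (pslash - m) u = 0
```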
Chiral basis
In the chiral representation for the gamma matrices, the solution space is parametrised by a $\mathbb{C}^2$ vector $\xi$, with Dirac spinor solution
$$u(\vec p) = \begin{pmatrix} \sqrt{p\cdot\sigma}\;\xi \\ \sqrt{p\cdot\bar\sigma}\;\xi \end{pmatrix},$$
where $\sigma^\mu = (I_2, \vec\sigma)$ and $\bar\sigma^\mu = (I_2, -\vec\sigma)$ are the Pauli 4-vectors and $\sqrt{\cdot}$ denotes the Hermitian matrix square-root.
Spin orientation
Two-spinors
In the Dirac representation, the most convenient definitions for the two-spinors are:
$$\phi^1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad\text{and}\qquad \phi^2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
since these form an orthonormal basis with respect to a (complex) inner product.
Pauli matrices
The Pauli matrices are
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
Using these, one obtains what is sometimes called the Pauli vector:
$$\vec\sigma\cdot\vec p = \sigma_1 p_1 + \sigma_2 p_2 + \sigma_3 p_3 = \begin{pmatrix} p_3 & p_1 - i p_2 \\ p_1 + i p_2 & -p_3 \end{pmatrix}.$$
Orthogonality
The Dirac spinors provide a complete and orthogonal set of solutions to the Dirac equation. This is most easily demonstrated by writing the spinors in the rest frame, where this becomes obvious, and then boosting to an arbitrary Lorentz coordinate frame. In the rest frame, where the three-momentum vanishes, $\vec p = 0$, one may define four spinors
$$u^{(s)}(0) = \sqrt{2m}\begin{pmatrix} \phi^{(s)} \\ 0 \end{pmatrix}, \qquad v^{(s)}(0) = \sqrt{2m}\begin{pmatrix} 0 \\ \phi^{(s)} \end{pmatrix}, \qquad s = 1, 2.$$
Introducing the Feynman slash notation
$$p\!\!\!/ = \gamma^\mu p_\mu,$$
the boosted spinors can be written as
$$u^{(s)}(p) = \frac{p\!\!\!/ + m}{\sqrt{2m\,(E + m)}}\; u^{(s)}(0)$$
and
$$v^{(s)}(p) = \frac{-p\!\!\!/ + m}{\sqrt{2m\,(E + m)}}\; v^{(s)}(0).$$
The conjugate spinors are defined as $\bar\psi = \psi^{\dagger}\gamma^0$, which may be shown to solve the conjugate Dirac equation
$$\bar\psi\left(i\,\partial\!\!\!/ + m\right) = 0,$$
with the derivative understood to be acting towards the left. The conjugate spinors are then
$$\bar u^{(s)}(p) = \bar u^{(s)}(0)\;\frac{p\!\!\!/ + m}{\sqrt{2m\,(E + m)}}$$
and
$$\bar v^{(s)}(p) = \bar v^{(s)}(0)\;\frac{-p\!\!\!/ + m}{\sqrt{2m\,(E + m)}}.$$
The normalization chosen here is such that the scalar invariant $\bar\psi\psi$ really is invariant in all Lorentz frames. Specifically, this means
$$\bar u^{(s)}(p)\, u^{(t)}(p) = 2m\,\delta^{st}, \qquad \bar v^{(s)}(p)\, v^{(t)}(p) = -2m\,\delta^{st}.$$
Completeness
The four rest-frame spinors indicate that there are four distinct, real, linearly independent solutions to the Dirac equation. That they are indeed solutions can be made clear by observing that, when written in momentum space, the Dirac equation has the form
$$\left(p\!\!\!/ - m\right) u^{(s)}(p) = 0$$
and
$$\left(p\!\!\!/ + m\right) v^{(s)}(p) = 0.$$
This follows because
$$p\!\!\!/\; p\!\!\!/ = p_\mu p_\nu\,\gamma^\mu\gamma^\nu = p^2 = m^2,$$
which in turn follows from the anti-commutation relations for the gamma matrices:
$$\left\{\gamma^\mu, \gamma^\nu\right\} = \gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu = 2\,\eta^{\mu\nu},$$
with $\eta^{\mu\nu}$ the metric tensor in flat space (in curved space, the gamma matrices can be viewed as being a kind of vielbein, although this is beyond the scope of the current article). It is perhaps useful to note that the Dirac equation, written in the rest frame, takes the form
$$\left(\gamma^0 - 1\right) u^{(s)}(0) = 0$$
and
$$\left(\gamma^0 + 1\right) v^{(s)}(0) = 0,$$
so that the rest-frame spinors can correctly be interpreted as solutions to the Dirac equation. There are four equations here, not eight. Although 4-spinors are written as four complex numbers, thus suggesting 8 real variables, only four of them have dynamical independence; the other four have no significance and can always be parameterized away. That is, one could take each of the four vectors and multiply each by a distinct global phase This phase changes nothing; it can be interpreted as a kind of global gauge freedom. This is not to say that "phases don't matter", as of course they do; the Dirac equation must be written in complex form, and the phases couple to electromagnetism. Phases even have a physical significance, as the Aharonov–Bohm effect implies: the Dirac field, coupled to electromagnetism, is a U(1) fiber bundle (the circle bundle), and the Aharonov–Bohm effect demonstrates the holonomy of that bundle. All this has no direct impact on the counting of the number of distinct components of the Dirac field. In any setting, there are only four real, distinct components.
With an appropriate choice of the gamma matrices, it is possible to write the Dirac equation in a purely real form, having only real solutions: this is the Majorana equation. However, it has only two linearly independent solutions. These solutions do not couple to electromagnetism; they describe a massive, electrically neutral spin-1/2 particle. Apparently, coupling to electromagnetism doubles the number of solutions. But of course, this makes sense: coupling to electromagnetism requires taking a real field, and making it complex. With some effort, the Dirac equation can be interpreted as the "complexified" Majorana equation. This is most easily demonstrated in a generic geometrical setting, outside the scope of this article.
Energy eigenstate projection matrices
It is conventional to define a pair of projection matrices $\Lambda_+$ and $\Lambda_-$, that project out the positive and negative energy eigenstates. Given a fixed Lorentz coordinate frame (i.e. a fixed momentum), these are
$$\Lambda_{\pm}(p) = \frac{\pm p\!\!\!/ + m}{2m}.$$
These are a pair of 4×4 matrices. They sum to the identity matrix:
$$\Lambda_+ + \Lambda_- = I,$$
are orthogonal
$$\Lambda_+ \Lambda_- = \Lambda_- \Lambda_+ = 0,$$
and are idempotent
$$\Lambda_{\pm}^2 = \Lambda_{\pm}.$$
It is convenient to notice their trace:
$$\operatorname{tr}\Lambda_{\pm} = 2.$$
Note that the trace, and the orthonormality properties hold independent of the Lorentz frame; these are Lorentz covariants.
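A short numerical check of these four properties in the Dirac representation (the momentum value is illustrative):

```python
import numpy as np

# Dirac-representation gamma matrices (as in the previous sketch).
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

m, p = 1.0, np.array([0.3, -0.2, 0.5])
E = np.sqrt(m**2 + p @ p)
slash_p = E * g0 - sum(pk * gk for pk, gk in zip(p, gs))

I4 = np.eye(4, dtype=complex)
Lp = (slash_p + m * I4) / (2 * m)     # projects onto positive-energy states
Lm = (-slash_p + m * I4) / (2 * m)    # projects onto negative-energy states
print(np.allclose(Lp + Lm, I4),       # sum to the identity
      np.allclose(Lp @ Lm, 0 * I4),   # orthogonal
      np.allclose(Lp @ Lp, Lp),       # idempotent
      np.isclose(np.trace(Lp).real, 2.0))  # trace 2
```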
Charge conjugation
Charge conjugation transforms the positive-energy spinor into the negative-energy spinor. Charge conjugation is a mapping (an involution) $\psi \mapsto \psi_c$ having the explicit form
$$\psi_c = \eta\, C\,\bar\psi^{\mathsf T},$$
where $(\cdot)^{\mathsf T}$ denotes the transpose, $C$ is a 4×4 matrix, and $\eta$ is an arbitrary phase factor, $\eta^{*}\eta = 1$. The article on charge conjugation derives the above form, and demonstrates why the word "charge" is the appropriate word to use: it can be interpreted as the electrical charge. In the Dirac representation for the gamma matrices, the matrix $C$ can be written as
$$C = i\,\gamma^2\gamma^0.$$
Thus, a positive-energy solution (dropping the spin superscript to avoid notational overload)
$$\psi = \sqrt{E + m}\begin{pmatrix} \phi \\ \dfrac{\vec\sigma\cdot\vec p}{E + m}\,\phi \end{pmatrix} e^{-i p\cdot x}$$
is carried to its charge conjugate
$$\psi_c = \eta\,\sqrt{E + m}\begin{pmatrix} i\sigma_2\,\dfrac{\left(\vec\sigma\cdot\vec p\right)^{*}}{E + m}\,\phi^{*} \\ -i\sigma_2\,\phi^{*} \end{pmatrix} e^{+i p\cdot x}.$$
Note the stray complex conjugates. These can be consolidated with the identity
$$\sigma_2\left(\vec\sigma\cdot\vec p\right)^{*}\sigma_2 = -\,\vec\sigma\cdot\vec p$$
to obtain
$$\psi_c = \eta\,\sqrt{E + m}\begin{pmatrix} \dfrac{\vec\sigma\cdot\vec p}{E + m}\,\chi \\ \chi \end{pmatrix} e^{+i p\cdot x},$$
with the 2-spinor being
$$\chi = -i\sigma_2\,\phi^{*}.$$
As this has precisely the form of the negative energy solution, it becomes clear that charge conjugation exchanges the particle and anti-particle solutions. Note that not only is the energy reversed, but the momentum is reversed as well. Spin-up is transmuted to spin-down. It can be shown that the parity is also flipped. Charge conjugation is very much a pairing of Dirac spinor to its "exact opposite".
See also
Dirac equation
Weyl equation
Majorana equation
Helicity basis
Spin(1,3), the double cover of SO(1,3) by a spin group
References
Quantum mechanics
Quantum field theory
Spinors
Spinor | Dirac spinor | [
"Physics"
] | 2,496 | [
"Quantum field theory",
"Theoretical physics",
"Quantum mechanics"
] |
292,852 | https://en.wikipedia.org/wiki/Symplectic%20vector%20space | In mathematics, a symplectic vector space is a vector space over a field (for example the real numbers ) equipped with a symplectic bilinear form.
A symplectic bilinear form is a mapping $\omega : V \times V \to F$ that is
Bilinear: linear in each argument separately;
Alternating: $\omega(v, v) = 0$ holds for all $v \in V$; and
Non-degenerate: $\omega(u, v) = 0$ for all $v \in V$ implies that $u = 0$.
If the underlying field has characteristic not 2, alternation is equivalent to skew-symmetry. If the characteristic is 2, the skew-symmetry is implied by, but does not imply alternation. In this case every symplectic form is a symmetric form, but not vice versa.
Working in a fixed basis, ω can be represented by a matrix. The conditions above are equivalent to this matrix being skew-symmetric, nonsingular, and hollow (all diagonal entries are zero). This should not be confused with a symplectic matrix, which represents a symplectic transformation of the space. If V is finite-dimensional, then its dimension must necessarily be even, since every skew-symmetric, hollow matrix of odd size has determinant zero. Notice that the condition that the matrix be hollow is not redundant if the characteristic of the field is 2. A symplectic form behaves quite differently from a symmetric form, for example, the scalar product on Euclidean vector spaces.
Standard symplectic space
The standard symplectic space is $\mathbb{R}^{2n}$ with the symplectic form given by a nonsingular, skew-symmetric matrix. Typically ω is chosen to be the block matrix
$$\omega = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix},$$
where In is the n × n identity matrix. In terms of basis vectors $(x_1, \ldots, x_n, y_1, \ldots, y_n)$:
$$\omega(x_i, y_j) = -\omega(y_j, x_i) = \delta_{ij}, \qquad \omega(x_i, x_j) = \omega(y_i, y_j) = 0.$$
A modified version of the Gram–Schmidt process shows that any finite-dimensional symplectic vector space has a basis such that takes this form, often called a Darboux basis or symplectic basis.
Sketch of process:
Start with an arbitrary basis $v_1, \ldots, v_n$, and represent the dual of each basis vector by the dual basis: $\omega(v_i, \cdot) = \sum_j \omega(v_i, v_j)\, v_j^{*}$. This gives us an n×n matrix with entries $\omega(v_i, v_j)$. Solve for its null space. Now for any $w$ in the null space we have $\omega(w, v) = 0$ for all $v \in V$, so the null space gives us the degenerate subspace $V_0$.
Now arbitrarily pick a complementary $W$ such that $V = V_0 \oplus W$, and let $w_1, \ldots, w_m$ be a basis of $W$. Since $\omega$ is nondegenerate on $W$, and $\omega(w_1, w_1) = 0$, WLOG $\omega(w_1, w_2) \ne 0$. Now scale $w_2$ so that $\omega(w_1, w_2) = 1$. Then define $w' = w - \omega(w, w_2)\, w_1 + \omega(w, w_1)\, w_2$ for each of $w = w_3, \ldots, w_m$. Iterate.
Notice that this method applies for a symplectic vector space over any field, not just the field of real numbers.
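A minimal sketch of this pairing-and-projection process over the reals (the random test form is illustrative; the projection step u ↦ u − ω(u, f)e + ω(u, e)f is the one described above):

```python
import numpy as np

def darboux_basis(Omega, tol=1e-10):
    """Symplectic Gram-Schmidt: build a Darboux basis for the nondegenerate
    alternating form w(u, v) = u^T Omega v."""
    N = Omega.shape[0]
    w = lambda u, v: u @ Omega @ v
    pool = [np.eye(N)[:, k] for k in range(N)]   # start from the standard basis
    es, fs = [], []
    while pool:
        e = pool.pop(0)
        # By nondegeneracy some partner pairs nontrivially with e; normalize it.
        j = next(i for i, v in enumerate(pool) if abs(w(e, v)) > tol)
        f = pool.pop(j)
        f = f / w(e, f)                          # now w(e, f) = 1
        es.append(e); fs.append(f)
        # Project the rest onto the symplectic complement of span{e, f}.
        pool = [u - w(u, f) * e + w(u, e) * f for u in pool]
    return np.column_stack(es + fs)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Omega = A - A.T                  # a generic skew form; nondegenerate in even dimension
B = darboux_basis(Omega)
print(np.round(B.T @ Omega @ B, 10))   # the standard block form [[0, I], [-I, 0]]
```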
Case of real or complex field:
When the space is over the field of real numbers, then we can modify the modified Gram–Schmidt process as follows: Start the same way. Let $w_1, \ldots, w_m$ be an orthonormal basis (with respect to the usual inner product on $\mathbb{R}^n$) of $W$. Since $\omega$ is nondegenerate on $W$, and $\omega(w_1, w_1) = 0$, WLOG $\omega(w_1, w_2) \ne 0$. Now multiply $w_2$ by a sign, so that $\omega(w_1, w_2) > 0$. Then define $w' = w - \omega(w, w_2)\, w_1 + \omega(w, w_1)\, w_2$ for each of $w = w_3, \ldots, w_m$, then scale each $w'$ so that it has norm one. Iterate.
Similarly, for the field of complex numbers, we may choose a unitary basis. This proves the spectral theory of antisymmetric matrices.
Lagrangian form
There is another way to interpret this standard symplectic form. Since the model space R2n used above carries much canonical structure which might easily lead to misinterpretation, we will use "anonymous" vector spaces instead. Let V be a real vector space of dimension n and V∗ its dual space. Now consider the direct sum of these spaces equipped with the following form:
Now choose any basis (v1, ..., vn) of V and consider its dual basis (v1∗, ..., vn∗).
We can interpret the basis vectors as lying in W if we write xi = (vi, 0) and yi = (0, vi∗). Taken together, these form a complete basis of W: (x1, ..., xn, y1, ..., yn).
The form ω defined here can be shown to have the same properties as in the beginning of this section. On the other hand, every symplectic structure is isomorphic to one of the form V ⊕ V∗. The subspace V is not unique, and a choice of subspace V is called a polarization. The subspaces that give such an isomorphism are called Lagrangian subspaces or simply Lagrangians.
Explicitly, given a Lagrangian subspace L as defined below, a choice of basis (x1, ..., xn) of L defines a dual basis (y1, ..., yn) for a complement, by requiring ω(xi, yj) = δij.
Analogy with complex structures
Just as every symplectic structure is isomorphic to one of the form V ⊕ V∗, every complex structure on a vector space is isomorphic to one of the form V ⊕ V. Using these structures, the tangent bundle of an n-manifold, considered as a 2n-manifold, has an almost complex structure, and the cotangent bundle of an n-manifold, considered as a 2n-manifold, has a symplectic structure: T(T∗M) = TM ⊕ T∗M.
The complex analog to a Lagrangian subspace is a real subspace, a subspace L whose complexification is the whole space: W = L ⊕ iL. As can be seen from the standard symplectic form above, every symplectic form on R2n is isomorphic to the imaginary part of the standard complex (Hermitian) inner product on Cn (with the convention of the first argument being anti-linear).
Volume form
Let ω be an alternating bilinear form on an n-dimensional real vector space V. Then ω is non-degenerate if and only if n is even and ω^(n/2) = ω ∧ ⋯ ∧ ω is a volume form. A volume form on an n-dimensional vector space V is a non-zero multiple of the n-form e1∗ ∧ e2∗ ∧ ⋯ ∧ en∗, where e1, e2, ..., en is a basis of V.
For the standard basis defined in the previous section, ω^n is a non-zero multiple of x1∗ ∧ y1∗ ∧ x2∗ ∧ y2∗ ∧ ⋯ ∧ xn∗ ∧ yn∗.
By reordering, one can write it as a non-zero multiple of x1∗ ∧ x2∗ ∧ ⋯ ∧ xn∗ ∧ y1∗ ∧ y2∗ ∧ ⋯ ∧ yn∗.
Authors variously define ω^n or (−1)^(n(n−1)/2) ω^n as the standard volume form. An occasional factor of n! may also appear, depending on whether the definition of the alternating product contains a factor of n! or not. The volume form defines an orientation on the symplectic vector space (V, ω).
Symplectic map
Suppose that (V, ω) and (W, μ) are symplectic vector spaces. Then a linear map f : V → W is called a symplectic map if the pullback preserves the symplectic form, i.e. f∗μ = ω, where the pullback form is defined by (f∗μ)(u, v) = μ(f(u), f(v)). Symplectic maps are volume- and orientation-preserving.
Symplectic group
If V = W, then a symplectic map is called a linear symplectic transformation of V. In particular, in this case one has ω(f(u), f(v)) = ω(u, v) for all u, v ∈ V, and so the linear transformation f preserves the symplectic form. The set of all symplectic transformations forms a group and in particular a Lie group, called the symplectic group and denoted by Sp(V) or sometimes Sp(V, ω). In matrix form symplectic transformations are given by symplectic matrices.
Subspaces
Let W be a linear subspace of V. Define the symplectic complement of W to be the subspace
W⊥ = {v ∈ V : ω(v, w) = 0 for all w ∈ W}.
The symplectic complement satisfies:
(W⊥)⊥ = W and dim W + dim W⊥ = dim V.
However, unlike orthogonal complements, W⊥ ∩ W need not be 0. We distinguish four cases:
W is symplectic if W⊥ ∩ W = {0}. This is true if and only if ω restricts to a nondegenerate form on W. A symplectic subspace with the restricted form is a symplectic vector space in its own right.
W is isotropic if W ⊆ W⊥. This is true if and only if ω restricts to 0 on W. Any one-dimensional subspace is isotropic.
W is coisotropic if W⊥ ⊆ W. W is coisotropic if and only if ω descends to a nondegenerate form on the quotient space W/W⊥. Equivalently W is coisotropic if and only if W⊥ is isotropic. Any codimension-one subspace is coisotropic.
W is Lagrangian if W = W⊥. A subspace is Lagrangian if and only if it is both isotropic and coisotropic. In a finite-dimensional vector space, a Lagrangian subspace is an isotropic one whose dimension is half that of V. Every isotropic subspace can be extended to a Lagrangian one.
Referring to the canonical vector space R2n above,
the subspace spanned by {x1, y1} is symplectic
the subspace spanned by {x1, x2} is isotropic
the subspace spanned by {x1, x2, ..., xn, y1} is coisotropic
the subspace spanned by {x1, x2, ..., xn} is Lagrangian.
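These cases can be checked mechanically: a subspace is symplectic exactly when ω restricted to it has full rank, and isotropic exactly when that restriction is zero. A small illustrative Python snippet for the R4 examples above (the helper names restricted_form and span are our own):

```python
import numpy as np

n = 2
I2, Z2 = np.eye(n), np.zeros((n, n))
Omega = np.block([[Z2, I2], [-I2, Z2]])   # standard form, basis (x1, x2, y1, y2)

def restricted_form(W):
    """Gram matrix of omega restricted to the span of W's columns."""
    return W.T @ Omega @ W

e = np.eye(2 * n)
x1, x2, y1 = e[:, 0], e[:, 1], e[:, 2]
span = lambda *vs: np.column_stack(vs)

# rank = dim -> symplectic; rank = 0 -> isotropic (Lagrangian when dim = n)
print(np.linalg.matrix_rank(restricted_form(span(x1, y1))))   # 2: symplectic
print(np.linalg.matrix_rank(restricted_form(span(x1, x2))))   # 0: Lagrangian
```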
Heisenberg group
A Heisenberg group can be defined for any symplectic vector space, and this is the typical way that Heisenberg groups arise.
A vector space can be thought of as a commutative Lie group (under addition), or equivalently as a commutative Lie algebra, meaning with trivial Lie bracket. The Heisenberg group is a central extension of such a commutative Lie group/algebra: the symplectic form defines the commutation, analogously to the canonical commutation relations (CCR), and a Darboux basis corresponds to canonical coordinates – in physics terms, to momentum operators and position operators.
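For concreteness, the Heisenberg group of (V, ω) can be presented (one standard convention among several; a polarized form without the 1/2 is also common) on the set V × R with the product and resulting group commutator:

```latex
(v,\,s)\cdot(w,\,t) = \Bigl(v + w,\; s + t + \tfrac{1}{2}\,\omega(v, w)\Bigr),
\qquad
\bigl[(v,0),\,(w,0)\bigr] = \bigl(0,\; \omega(v, w)\bigr).
```

The second identity shows that two Darboux-paired basis vectors commute only up to a central element whose size is the symplectic pairing, mirroring the canonical commutation relations.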
Indeed, by the Stone–von Neumann theorem, every representation satisfying the CCR (every representation of the Heisenberg group) is of this form, or more properly unitarily conjugate to the standard one.
Further, the group algebra of (the dual to) a vector space is the symmetric algebra, and the group algebra of the Heisenberg group (of the dual) is the Weyl algebra: one can think of the central extension as corresponding to quantization or deformation.
Formally, the symmetric algebra of a vector space V over a field F is the group algebra of the dual, Sym(V) = F[V∗], and the Weyl algebra is the group algebra of the (dual) Heisenberg group, W(V) = F[H(V∗)]. Since passing to group algebras is a contravariant functor, the central extension map H(V) → V becomes an inclusion Sym(V) → W(V).
See also
A symplectic manifold is a smooth manifold with a smoothly-varying closed symplectic form on each tangent space.
Maslov index
A symplectic representation is a group representation where each group element acts as a symplectic transformation.
References
Claude Godbillon (1969) "Géométrie différentielle et mécanique analytique", Hermann
Paulette Libermann and Charles-Michel Marle (1987) "Symplectic Geometry and Analytical Mechanics", D. Reidel
Jean-Marie Souriau (1997) "Structure of Dynamical Systems, A Symplectic View of Physics", Springer
Linear algebra
Symplectic geometry
Bilinear forms | Symplectic vector space | [
"Mathematics"
] | 2,159 | [
"Linear algebra",
"Algebra"
] |
292,864 | https://en.wikipedia.org/wiki/G2%20%28mathematics%29 | {{DISPLAYTITLE:G2 (mathematics)}}
In mathematics, G2 is the name of three simple Lie groups (a complex form, a compact real form and a split real form), their Lie algebras, as well as some algebraic groups. They are the smallest of the five exceptional simple Lie groups. G2 has rank 2 and dimension 14. It has two fundamental representations, of dimensions 7 and 14.
The compact form of G2 can be described as the automorphism group of the octonion algebra or, equivalently, as the subgroup of SO(7) that preserves any chosen particular vector in its 8-dimensional real spinor representation (a spin representation).
History
The Lie algebra , being the smallest exceptional simple Lie algebra, was the first of these to be discovered in the attempt to classify simple Lie algebras. On May 23, 1887, Wilhelm Killing wrote a letter to Friedrich Engel saying that he had found a 14-dimensional simple Lie algebra, which we now call .
In 1893, Élie Cartan published a note describing an open set in equipped with a 2-dimensional distribution—that is, a smoothly varying field of 2-dimensional subspaces of the tangent space—for which the Lie algebra appears as the infinitesimal symmetries. In the same year, in the same journal, Engel noticed the same thing. Later it was discovered that the 2-dimensional distribution is closely related to a ball rolling on another ball. The space of configurations of the rolling ball is 5-dimensional, with a 2-dimensional distribution that describes motions of the ball where it rolls without slipping or twisting.
In 1900, Engel discovered that a generic antisymmetric trilinear form (or 3-form) on a 7-dimensional complex vector space is preserved by a group isomorphic to the complex form of G2.
In 1908 Cartan mentioned that the automorphism group of the octonions is a 14-dimensional simple Lie group. In 1914 he stated that this is the compact real form of G2.
In older books and papers, G2 is sometimes denoted by E2.
Real forms
There are 3 simple real Lie algebras associated with this root system:
The underlying real Lie algebra of the complex Lie algebra G2 has dimension 28. It has complex conjugation as an outer automorphism and is simply connected. The maximal compact subgroup of its associated group is the compact form of G2.
The Lie algebra of the compact form is 14-dimensional. The associated Lie group has no outer automorphisms, no center, and is simply connected and compact.
The Lie algebra of the non-compact (split) form has dimension 14. The associated simple Lie group has fundamental group of order 2 and its outer automorphism group is the trivial group. Its maximal compact subgroup is . It has a non-algebraic double cover that is simply connected.
Algebra
Dynkin diagram and Cartan matrix
The Dynkin diagram for G2 consists of two nodes connected by a triple edge.
Its Cartan matrix is:
[ 2, −3; −1, 2 ]
Roots of G2
A set of simple roots for G2 can be read directly from the Cartan matrix above. These are (2, −3) and (−1, 2); however, the integer lattice spanned by those is not the one pictured above (for the obvious reason that the hexagonal lattice in the plane cannot be generated by integer vectors). The diagram above is obtained from a different pair of roots.
The remaining (positive) roots are .
Although they do span a 2-dimensional space, as drawn, it is much more symmetric to consider them as vectors in a 2-dimensional subspace of a three-dimensional space. In this identification α corresponds to e₁−e₂, β to −e₁+2e₂−e₃, A to e₂−e₃, and so on. In Euclidean coordinates these vectors are as listed below.
The corresponding set of simple roots is:
e₁−e₂ = (1,−1,0), and −e₁+2e₂−e₃ = (−1,2,−1)
Note: α and A together form a root system identical to A₂, while the system formed by β and B is isomorphic to A₂.
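The full set of twelve G2 roots can be generated mechanically by closing the two simple roots above under their own reflections, since the Weyl group is generated by the two simple reflections. A small illustrative Python script (the function name reflect is our own choice):

```python
import numpy as np

def reflect(x, r):
    """Reflect x in the hyperplane orthogonal to the root r."""
    return x - 2 * np.dot(x, r) / np.dot(r, r) * r

alpha = np.array([1, -1, 0], dtype=float)   # e1 - e2
beta = np.array([-1, 2, -1], dtype=float)   # -e1 + 2e2 - e3

roots = {(1, -1, 0), (-1, 2, -1)}
changed = True
while changed:
    changed = False
    for x in list(roots):
        for r in (alpha, beta):
            y = tuple(int(round(c)) for c in reflect(np.array(x, dtype=float), r))
            if y not in roots:
                roots.add(y)
                changed = True

print(sorted(roots))
print(len(roots))    # 12 roots: 6 short (squared length 2), 6 long (squared length 6)
```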
Weyl/Coxeter group
Its Weyl/Coxeter group is the dihedral group of order 12. It has minimal faithful degree 5.
Special holonomy
G2 is one of the possible special groups that can appear as the holonomy group of a Riemannian metric. The manifolds of G2 holonomy are also called G2-manifolds.
Polynomial invariant
G2 is the automorphism group of the following two polynomials in 7 non-commutative variables.
(± permutations)
which comes from the octonion algebra. The variables must be non-commutative otherwise the second polynomial would be identically zero.
Generators
Adding a representation of the 14 generators with coefficients A, ..., N gives the matrix:
It is exactly the Lie algebra of the group G2.
There are 480 different representations of G2, corresponding to the 480 representations of the octonions. The calibrated form has 30 different forms, and each has 16 different signed variations. Each of the signed variations generates signed differences, and each is an automorphism of all 16 corresponding octonions. Hence there are really only 30 different representations of G2. These can all be constructed with Clifford algebra, using an invertible form for the octonions. For other signed variations of the form, there are remainders that classify 6 other non-associative algebras that show partial symmetry. An analogous calibration in higher dimensions leads to sedenions and at least 11 other related algebras.
Representations
The characters of finite-dimensional representations of the real and complex Lie algebras and Lie groups are all given by the Weyl character formula. The dimensions of the smallest irreducible representations are:
1, 7, 14, 27, 64, 77 (twice), 182, 189, 273, 286, 378, 448, 714, 729, 748, 896, 924, 1254, 1547, 1728, 1729, 2079 (twice), 2261, 2926, 3003, 3289, 3542, 4096, 4914, 4928 (twice), 5005, 5103, 6630, 7293, 7371, 7722, 8372, 9177, 9660, 10206, 10556, 11571, 11648, 12096, 13090....
The 14-dimensional representation is the adjoint representation, and the 7-dimensional one is action of G2 on the imaginary octonions.
There are two non-isomorphic irreducible representations of dimensions 77, 2079, 4928, 30107, etc. The fundamental representations are those with dimensions 14 and 7 (corresponding to the two nodes in the Dynkin diagram in the order such that the triple arrow points from the first to the second).
The (infinite-dimensional) unitary irreducible representations of the split real form of G2 have also been described in the literature.
The embeddings of the maximal subgroups of G2 up to dimension 77 are shown to the right.
Finite groups
The group G2(q) is the points of the algebraic group G2 over the finite field Fq. These finite groups were first introduced by Leonard Eugene Dickson, for odd q and then for even q. The order of G2(q) is q⁶(q⁶ − 1)(q² − 1). When q ≠ 2, the group is simple, and when q = 2, it has a simple subgroup of index 2 isomorphic to ²A₂(3²), and is the automorphism group of a maximal order of the octonions. The Janko group J1 was first constructed as a subgroup of G2(11). Ree introduced the twisted Ree groups ²G₂(q) of order q³(q³ + 1)(q − 1) for q = 3^(2n+1), an odd power of 3.
See also
Cartan matrix
Dynkin diagram
Exceptional Jordan algebra
Fundamental representation
G2-structure
Lie group
Seven-dimensional cross product
Simple Lie group
Star of David
References
See section 4.1: G2; an online HTML version of which is available at http://math.ucr.edu/home/baez/octonions/node14.html.
Leonard E. Dickson reported groups of type G2 in fields of odd characteristic.
Leonard E. Dickson reported groups of type G2 in fields of even characteristic.
Algebraic groups
Lie groups
Octonions
Exceptional Lie algebras | G2 (mathematics) | [
"Mathematics"
] | 1,734 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures"
] |
292,877 | https://en.wikipedia.org/wiki/F4%20%28mathematics%29 | {{DISPLAYTITLE:F4 (mathematics)}}
In mathematics, F4 is the name of a Lie group and also its Lie algebra f4. It is one of the five exceptional simple Lie groups. F4 has rank 4 and dimension 52. The compact form is simply connected and its outer automorphism group is the trivial group. Its fundamental representation is 26-dimensional.
The compact real form of F4 is the isometry group of a 16-dimensional Riemannian manifold known as the octonionic projective plane OP2. This can be seen systematically using a construction known as the magic square, due to Hans Freudenthal and Jacques Tits.
There are 3 real forms: a compact one, a split one, and a third one. They are the isometry groups of the three real Albert algebras.
The F4 Lie algebra may be constructed by adding 16 generators transforming as a spinor to the 36-dimensional Lie algebra so(9), in analogy with the construction of E8.
In older books and papers, F4 is sometimes denoted by E4.
Algebra
Dynkin diagram
The Dynkin diagram for F4 is a chain of four nodes, with a double edge, directed from the second node to the third, between the two middle nodes.
Weyl/Coxeter group
Its Weyl/Coxeter group is the symmetry group of the 24-cell: it is a solvable group of order 1152. It has minimal faithful degree 24, which is realized by the action on the 24-cell. The group has ID (1152,157478) in the small groups library.
Cartan matrix
Its Cartan matrix is:
[ 2, −1, 0, 0; −1, 2, −1, 0; 0, −2, 2, −1; 0, 0, −1, 2 ]
F4 lattice
The F4 lattice is a four-dimensional body-centered cubic lattice (i.e. the union of two hypercubic lattices, each lying in the center of the other). Its lattice points form a ring called the Hurwitz quaternion ring. The 24 Hurwitz quaternions of norm 1 form the vertices of a 24-cell centered at the origin.
Roots of F4
The 48 root vectors of F4 can be found as the vertices of the 24-cell in two dual configurations, representing the vertices of a disphenoidal 288-cell if the edge lengths of the 24-cells are equal:
24-cell vertices:
24 roots by (±1, ±1, 0, 0), permuting coordinate positions
Dual 24-cell vertices:
8 roots by (±1, 0, 0, 0), permuting coordinate positions
16 roots by (±1/2, ±1/2, ±1/2, ±1/2).
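A quick enumeration (an illustrative Python snippet, not from the article) confirms that the three families above yield exactly 48 distinct roots:

```python
from itertools import permutations, product

roots = set()
# 24 roots of the form (±1, ±1, 0, 0), over all coordinate positions
for s1, s2 in product((1, -1), repeat=2):
    roots.update(permutations((s1, s2, 0, 0)))
# 8 roots of the form (±1, 0, 0, 0), over all coordinate positions
for s in (1, -1):
    roots.update(permutations((s, 0, 0, 0)))
# 16 roots of the form (±1/2, ±1/2, ±1/2, ±1/2)
roots.update(product((0.5, -0.5), repeat=4))

print(len(roots))   # 48
```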
Simple roots
One choice of simple roots for F4 is given by the rows of the following matrix:
[ 0, 1, −1, 0; 0, 0, 1, −1; 0, 0, 0, 1; 1/2, −1/2, −1/2, −1/2 ]
The Hasse diagram for the F4 root poset is shown below right.
F4 polynomial invariant
Just as O(n) is the group of automorphisms which keep the quadratic polynomial x1² + ... + xn² invariant, F4 is the group of automorphisms of the following set of 3 polynomials in 27 variables. (The first can easily be substituted into the other two, making 26 variables.)
Where x, y, z are real-valued and X, Y, Z are octonion valued. Another way of writing these invariants is as (combinations of) Tr(M), Tr(M2) and Tr(M3) of the hermitian octonion matrix:
The set of polynomials defines a 24-dimensional compact surface.
Representations
The characters of finite-dimensional representations of the real and complex Lie algebras and Lie groups are all given by the Weyl character formula. The dimensions of the smallest irreducible representations are:
1, 26, 52, 273, 324, 1053 (twice), 1274, 2652, 4096, 8424, 10829, 12376, 16302, 17901, 19278, 19448, 29172, 34749, 76076, 81081, 100776, 106496, 107406, 119119, 160056 (twice), 184756, 205751, 212992, 226746, 340119, 342056, 379848, 412776, 420147, 627912...
The 52-dimensional representation is the adjoint representation, and the 26-dimensional one is the trace-free part of the action of F4 on the exceptional Albert algebra of dimension 27.
There are two non-isomorphic irreducible representations of dimensions 1053, 160056, 4313088, etc. The fundamental representations are those with dimensions 52, 1274, 273, 26 (corresponding to the four nodes in the Dynkin diagram in the order such that the double arrow points from the second to the third).
Embeddings of the maximal subgroups of F4 up to dimension 273 with associated projection matrix are shown below.
See also
24-cell
Albert algebra
Cayley plane
Dynkin diagram
Fundamental representation
Simple Lie group
References
John Baez, The Octonions, Section 4.2: F4, Bull. Amer. Math. Soc. 39 (2002), 145-205. Online HTML version at http://math.ucr.edu/home/baez/octonions/node15.html.
Algebraic groups
Lie groups
Exceptional Lie algebras | F4 (mathematics) | [
"Mathematics"
] | 1,085 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures"
] |
292,906 | https://en.wikipedia.org/wiki/Biofeedback | Biofeedback is the technique of gaining greater awareness of many physiological functions of one's own body by using electronic or other instruments, and with a goal of being able to manipulate the body's systems at will. Humans conduct biofeedback naturally all the time, at varied levels of consciousness and intentionality. Biofeedback and the biofeedback loop can also be thought of as self-regulation. Some of the processes that can be controlled include brainwaves, muscle tone, skin conductance, heart rate and pain perception.
Biofeedback may be used to improve health, performance, and the physiological changes that often occur in conjunction with changes to thoughts, emotions, and behavior. Recently, technologies have provided assistance with intentional biofeedback. Eventually, these changes may be maintained without the use of extra equipment, for no equipment is necessarily required to practice biofeedback.
Meta-analyses of different biofeedback treatments have shown some benefit in the treatment of headaches, migraines, and ADHD, though most of the studies in these meta-analyses did not make comparisons with alternative treatments.
Information coded biofeedback
Information coded biofeedback is an evolving form and methodology in the field of biofeedback. It may be applied in the areas of health, wellness, and awareness. Biofeedback has its modern conventional roots in the early 1970s.
Over the years, biofeedback as a discipline and a technology has continued to mature and express new versions of the method with novel interpretations in areas utilizing the electromyograph, electrodermograph, electroencephalograph and electrocardiogram among others.
The concept of biofeedback is based on the fact that a wide variety of ongoing intrinsic natural functions of the organism occur at a level of awareness generally called the "unconscious". The biofeedback process is designed to interface with select aspects of these "unconscious" processes.
The definition reads:
Biofeedback is a process that enables an individual to learn how to change physiological activity for the purposes of improving health and performance. Precise instruments measure physiological activity such as brainwaves, heart function, breathing, muscle activity, and skin temperature. These instruments rapidly and accurately feed back information to the user. The presentation of this information—often in conjunction with changes in thinking, emotions, and behavior—supports desired physiological changes. Over time, these changes can endure without continued use of an instrument.
A more simple definition could be:
Biofeedback is the process of gaining greater awareness of many physiological functions primarily using instruments that provide information on the activity of those same systems, with a goal of being able to manipulate them at will. (Emphasis added by author.)
In both of these definitions, a cardinal feature of the concept is the association of the "will" with the result of a new cognitive "learning" skill. Some examine this concept and do not necessarily ascribe it simply to a willful acquisition of a new learned skill but also extend the dynamics into the realms of a behavioristic conditioning. Behaviorism contends that it is possible to change the actions and functions of an organism by exposing it to a number of conditions or influences. Key to the concept is not only that the functions are unconscious but that conditioning processes themselves may be unconscious to the organism. Information coded biofeedback relies primarily on the behavior conditioning aspect of biofeedback in promoting significant changes in the functioning of the organism.
The principle of information is both complex and, in part, controversial. The term itself is derived from the Latin verb informare, which means literally 'to bring into form or shape'. The meaning of information is largely affected by the context of usage. Probably the simplest and perhaps most insightful definition of information was given by Gregory Bateson: "Information is news of change", or alternatively "the difference that makes a difference". Information may also be thought of as "any type of pattern that influences the formation or transformation of other patterns". Recognizing the inherent complexity of an organism, information coded biofeedback applies algorithmic calculations in a stochastic approach to identify significant probabilities in a limited set of possibilities.
Sensor modalities
Electromyograph
An electromyograph (EMG) uses surface electrodes to detect muscle action potentials from underlying skeletal muscles that initiate muscle contraction. Clinicians record the surface electromyogram (SEMG) using one or more active electrodes that are placed over a target muscle and a reference electrode that is placed within six inches of either active electrode. The SEMG is measured in microvolts (millionths of a volt).
In addition to surface electrodes, clinicians may also insert wires or needles intramuscularly to record an EMG signal. While this is more painful and often costly, the signal is more reliable since surface electrodes pick up cross talk from nearby muscles. The use of surface electrodes is also limited to superficial muscles, making the intramuscular approach beneficial to access signals from deeper muscles. The electrical activity picked up by the electrodes is recorded and displayed in the same fashion as the surface electrodes. Prior to placing surface electrodes, the skin is normally shaved, cleaned and exfoliated to get the best signal. Raw EMG signals resemble noise (electrical signal not coming from the muscle of interest) and the voltage fluctuates; therefore, they are processed normally in three ways: rectification, filtering, and integration. This processing allows for a unified signal that is then able to be compared to other signals using the same processing techniques.
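A minimal sketch of that three-step chain (rectification, filtering, integration) in Python, assuming NumPy and SciPy are available; the function name emg_envelope, the 4th-order Butterworth filter, and the 6 Hz cutoff are illustrative choices, not a clinical standard:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs, cutoff_hz=6.0):
    """Toy SEMG processing chain: rectify, low-pass filter, integrate.
    raw: 1-D array of EMG samples (volts); fs: sampling rate in Hz."""
    rectified = np.abs(raw)                      # full-wave rectification
    b, a = butter(4, cutoff_hz / (fs / 2))       # 4th-order low-pass filter
    envelope = filtfilt(b, a, rectified)         # zero-phase "linear envelope"
    integrated = np.trapz(envelope, dx=1 / fs)   # area under the envelope
    return envelope, integrated
```

Two signals processed this way can then be compared on a common footing, which is the point made above.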
Biofeedback therapists use EMG biofeedback when treating anxiety and worry, chronic pain, computer-related disorder, essential hypertension, headache (migraine, mixed headache, and tension-type headache), low back pain, physical rehabilitation (cerebral palsy, incomplete spinal cord lesions, and stroke), temporomandibular joint dysfunction (TMD), torticollis, and fecal incontinence, urinary incontinence, and pelvic pain. Physical therapists have also used EMG biofeedback for evaluating muscle activation and providing feedback for their patients.
Feedback thermometer
A feedback thermometer detects skin temperature with a thermistor (a temperature-sensitive resistor) that is usually attached to a finger or toe and measured in degrees Celsius or Fahrenheit. Skin temperature mainly reflects arteriole diameter. Hand-warming and hand-cooling are produced by separate mechanisms, and their regulation involves different skills. Hand-warming involves arteriole vasodilation produced by a beta-2 adrenergic hormonal mechanism. Hand-cooling involves arteriole vasoconstriction produced by the increased firing of sympathetic C-fibers.
Biofeedback therapists use temperature biofeedback when treating chronic pain, edema, headache (migraine and tension-type headache), essential hypertension, Raynaud's disease, anxiety, and stress.
Electrodermograph
An electrodermograph (EDG) measures skin electrical activity directly (skin conductance and skin potential) and indirectly (skin resistance) using electrodes placed over the digits or hand and wrist. Orienting responses to unexpected stimuli, arousal and worry, and cognitive activity can increase eccrine sweat gland activity, increasing the conductivity of the skin for electric current.
In skin conductance, an electrodermograph imposes an imperceptible current across the skin and measures how easily it travels through the skin. When anxiety raises the level of sweat in a sweat duct, conductance increases. Skin conductance is measured in microsiemens (millionths of a siemens). In skin potential, a therapist places an active electrode over an active site (e.g., the palmar surface of the hand) and a reference electrode over a relatively inactive site (e.g., forearm). Skin potential is the voltage that develops between eccrine sweat glands and internal tissues and is measured in millivolts (thousandths of a volt). In skin resistance, also called galvanic skin response (GSR), an electrodermograph imposes a current across the skin and measures the amount of opposition it encounters. Skin resistance is measured in kΩ (thousands of ohms).
Biofeedback therapists use electrodermal biofeedback when treating anxiety disorders, hyperhidrosis (excessive sweating), and stress. Electrodermal biofeedback is used as an adjunct to psychotherapy to increase client awareness of their emotions. In addition, electrodermal measures have long served as one of the central tools in polygraphy (lie detection) because they reflect changes in anxiety or emotional activation.
Electroencephalograph
An electroencephalograph (EEG) measures the electrical activation of the brain from scalp sites located over the human cortex. The EEG shows the amplitude of electrical activity at each cortical site, the amplitude and relative power of various wave forms at each site, and the degree to which each cortical site fires in conjunction with other cortical sites (coherence and symmetry).
The EEG uses precious metal electrodes to detect a voltage between at least two electrodes located on the scalp. The EEG records both excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs) that largely occur in dendrites in pyramidal cells located in macrocolumns, several millimeters in diameter, in the upper cortical layers. Neurofeedback monitors both slow and fast cortical potentials.
Slow cortical potentials are gradual changes in the membrane potentials of cortical dendrites that last from 300 ms to several seconds. These potentials include the contingent negative variation (CNV), readiness potential, movement-related potentials (MRPs), and P300 and N400 potentials.
Fast cortical potentials range from 0.5 Hz to 100 Hz. The main frequency ranges include delta, theta, alpha, the sensorimotor rhythm, low beta, high beta, and gamma. The thresholds or boundaries defining the frequency ranges vary considerably among professionals. Fast cortical potentials can be described by their predominant frequencies, but also by whether they are synchronous or asynchronous wave forms. Synchronous wave forms occur at regular periodic intervals, whereas asynchronous wave forms are irregular.
The synchronous delta rhythm ranges from 0.5 to 3.5 Hz. Delta is the dominant frequency from ages 1 to 2, and is associated in adults with deep sleep, critical for memory, cognition, sleep maintenance, and mental health. Disorders that disrupt sleep such as insomnia, traumatic brain injury, obstructive sleep apnea, and other neuropsychiatric conditions are also associated with the delta rhythm.
The synchronous theta rhythm ranges from 4 to 7 Hz. Theta is the dominant frequency in healthy young children and is associated with drowsiness or starting to sleep, REM sleep, hypnagogic imagery (intense imagery experienced before the onset of sleep), hypnosis, attention, and processing of cognitive and perceptual information.
The synchronous alpha rhythm ranges from 8 to 13 Hz and is defined by its waveform and not by its frequency. Alpha activity can be observed in about 75% of awake, relaxed individuals and is replaced by low-amplitude desynchronized beta activity during movement, complex problem-solving, and visual focusing. This phenomenon is called alpha blocking.
The synchronous sensorimotor rhythm (SMR) ranges from 12 to 15 Hz and is located over the sensorimotor cortex (central sulcus). The sensorimotor rhythm is associated with the inhibition of movement and reduced muscle tone.
The beta rhythm consists of asynchronous waves and can be divided into low beta and high beta ranges (13–21 Hz and 20–32 Hz). Low beta is associated with activation and focused thinking. High beta is associated with anxiety, hypervigilance, panic, peak performance, and worry.
EEG activity from 36 to 44 Hz is also referred to as gamma. Gamma activity is associated with perception of meaning and meditative awareness.
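For illustration, band powers like those described above can be estimated from a digitized EEG channel with Welch's method (a hypothetical sketch; the band edges follow the text, which notes that the boundaries vary considerably among professionals):

```python
import numpy as np
from scipy.signal import welch

# Band edges follow the text; boundaries vary considerably among professionals.
BANDS = {"delta": (0.5, 3.5), "theta": (4, 7), "alpha": (8, 13),
         "SMR": (12, 15), "low beta": (13, 21), "high beta": (20, 32),
         "gamma": (36, 44)}

def band_powers(eeg, fs):
    """Absolute power per band for one EEG channel, via Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])
    return powers
```

A theta-to-beta ratio of the kind discussed later in this article would then be powers['theta'] / powers['low beta'], under whatever band definitions the clinician adopts.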
Neurotherapists use EEG biofeedback when treating addiction, attention deficit hyperactivity disorder (ADHD), learning disability, anxiety disorders (including worry, obsessive-compulsive disorder and posttraumatic stress disorder), depression, migraine, and generalized seizures.
Photoplethysmograph
A photoplethysmograph (PPG) measures the relative blood flow through a digit using a photoplethysmographic (PPG) sensor attached by a Velcro band to the fingers or to the temple to monitor the temporal artery. An infrared light source is transmitted through or reflected off the tissue, detected by a phototransistor, and quantified in arbitrary units. Less light is absorbed when blood flow is greater, increasing the intensity of light reaching the sensor.
A photoplethysmograph can measure blood volume pulse (BVP), which is the phasic change in blood volume with each heartbeat, heart rate, and heart rate variability (HRV), which consists of beat-to-beat differences in intervals between successive heartbeats.
A photoplethysmograph can provide useful feedback when temperature feedback shows minimal change. This is because the PPG sensor is more sensitive than a thermistor to minute blood flow changes. Biofeedback therapists can use a photoplethysmograph to supplement temperature biofeedback when treating chronic pain, edema, headache (migraine and tension-type headache), essential hypertension, Raynaud's disease, anxiety, and stress.
Electrocardiogram
The electrocardiogram (ECG) uses electrodes placed on the torso, wrists, or legs to measure the electrical activity of the heart and the interbeat interval (the distance between successive R-wave peaks in the QRS complex). Sixty divided by the interbeat interval, in seconds, gives the heart rate at that moment. The statistical variability of that interbeat interval is what we call heart rate variability. The ECG method is more accurate than the PPG method in measuring heart rate variability.
Biofeedback therapists use heart rate variability (HRV) biofeedback when treating asthma, COPD, depression, anxiety, fibromyalgia, heart disease, and unexplained abdominal pain. Research shows that HRV biofeedback can also be used to improve physiological and psychological wellbeing in healthy individuals.
HRV data from both photoplethysmographs and electrocardiograms are analyzed via mathematical transformations such as the commonly used fast Fourier transform (FFT). The FFT splits the HRV data into a power spectrum, revealing the waveform's constituent frequencies. Among those constituent frequencies, high-frequency (HF) and low-frequency (LF) components are defined as above and below 0.15 Hz, respectively. As a rule of thumb, the LF component of HRV represents sympathetic activity, and the HF component represents parasympathetic activity. The two main components are often expressed as an LF/HF ratio and used to describe sympathovagal balance. Some researchers consider a third, medium-frequency (MF) component from 0.08 Hz to 0.15 Hz, which has been shown to increase in power during times of appreciation.
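A sketch of this analysis in Python (illustrative only; the function name lf_hf_ratio, the 4 Hz resampling rate, and the 0.04–0.15 Hz / 0.15–0.40 Hz band edges are common conventions rather than fixed standards):

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms):
    """LF/HF ratio from a series of RR (interbeat) intervals in milliseconds.
    The unevenly spaced RR series is resampled onto a 4 Hz grid first."""
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    fs = 4.0
    grid = np.arange(t[0], t[-1], 1 / fs)
    rr_even = np.interp(grid, t, rr_ms)            # evenly sampled tachogram
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs,
                       nperseg=min(256, len(rr_even)))
    lf_mask = (freqs >= 0.04) & (freqs < 0.15)     # "sympathetic" band
    hf_mask = (freqs >= 0.15) & (freqs <= 0.40)    # "parasympathetic" band
    lf = np.trapz(psd[lf_mask], freqs[lf_mask])
    hf = np.trapz(psd[hf_mask], freqs[hf_mask])
    return lf / hf
```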
Pneumograph
A pneumograph or respiratory strain gauge uses a flexible sensor band that is placed around the chest, abdomen, or both. The strain gauge method can provide feedback about the relative expansion/contraction of the chest and abdomen, and can measure respiratory rate (the number of breaths per minute). Clinicians can use a pneumograph to detect and correct dysfunctional breathing patterns and behaviors. Dysfunctional breathing patterns include clavicular breathing (breathing that primarily relies on the external intercostals and the accessory muscles of respiration to inflate the lungs), reverse breathing (breathing where the abdomen expands during exhalation and contracts during inhalation), and thoracic breathing (shallow breathing that primarily relies on the external intercostals to inflate the lungs). Dysfunctional breathing behaviors include apnea (suspension of breathing), gasping, sighing, and wheezing.
A pneumograph is often used in conjunction with an electrocardiograph (ECG) or photoplethysmograph (PPG) in heart rate variability (HRV) training.
Biofeedback therapists use pneumograph biofeedback with patients diagnosed with anxiety disorders, asthma, chronic pulmonary obstructive disorder (COPD), essential hypertension, panic attacks, and stress.
Capnometer
A capnometer or capnograph uses an infrared detector to measure end-tidal CO2 (the partial pressure of carbon dioxide in expired air at the end of expiration) exhaled through the nostril into a latex tube. The average value of end-tidal CO2 for a resting adult is about 5%. A capnometer is a sensitive index of the quality of patient breathing. Shallow, rapid, and effortful breathing lowers end-tidal CO2, while deep, slow, effortless breathing increases it.
Biofeedback therapists use capnometric biofeedback to supplement respiratory strain gauge biofeedback with patients diagnosed with anxiety disorders, asthma, chronic pulmonary obstructive disorder (COPD), essential hypertension, panic attacks, and stress.
Rheoencephalograph
Rheoencephalography (REG), or brain blood flow biofeedback, is a biofeedback technique of conscious control of blood flow. An electronic device called a rheoencephalograph [from Greek rheos 'stream, anything flowing' and rhein 'to flow'] is utilized in brain blood flow biofeedback. Electrodes are attached to the skin at certain points on the head and permit the device to measure continuously the electrical conductivity of the tissues of structures located between the electrodes. The brain blood flow technique is based on a non-invasive method of measuring bio-impedance. Changes in bio-impedance are generated by blood volume and blood flow and registered by a rheographic device. The pulsatile bio-impedance changes directly reflect the total blood flow of the deep structures of the brain due to the high-frequency impedance measurements.
Hemoencephalography
Hemoencephalography or HEG biofeedback is a functional infrared imaging technique. As its name describes, it measures the differences in the color of light reflected back through the scalp based on the relative amount of oxygenated and unoxygenated blood in the brain. Research continues to determine its reliability, validity, and clinical applicability. HEG is used to treat ADHD and migraine, and for research.
Pressure
Pressure can be monitored as a patient performs exercises while resting against an air-filled cushion. This is pertinent to physiotherapy. Alternatively, the patient may actively grip or press against an air-filled cushion of custom shape.
Applications
Urinary incontinence
Mowrer detailed the use of a bedwetting alarm that sounds when children urinate while asleep. This simple biofeedback device can quickly teach children to wake up when their bladders are full and to contract the urinary sphincter and relax the detrusor muscle, preventing further urine release. Through classical conditioning, sensory feedback from a full bladder replaces the alarm and allows children to continue sleeping without urinating.
Kegel developed the perineometer in 1947 to treat urinary incontinence (urine leakage) in women whose pelvic floor muscles are weakened during pregnancy and childbirth. The perineometer, which is inserted into the vagina to monitor pelvic floor muscle contraction, satisfies all the requirements of a biofeedback device and enhances the effectiveness of the popular Kegel exercises. Contradicting this, a 2023 systematic review of the literature, including eight studies, found the scientific evidence insufficient to compare pelvic floor muscle training for urinary and anal incontinence after childbirth with and without biofeedback.
In 1992, the United States Agency for Health Care Policy and Research recommended biofeedback as a first-line treatment for adult urinary incontinence.
In 2019, the National Institute for Health and Care Excellence recommended against the routine use of biofeedback in managing urinary incontinence in women who can actively contract the pelvic floor. It may be considered, though, to aid motivation and adherence to therapy.
Fecal incontinence, constipation and anismus
Biofeedback is a treatment for anismus (paradoxical contraction of puborectalis during defecation). This therapy directly evolved from the investigation of anorectal manometry, where a probe that can record pressure is placed in the anal canal. Biofeedback therapy is also a commonly used and researched therapy for fecal incontinence, but the benefits are uncertain. Biofeedback therapy varies in the way it is delivered. It is also unknown if one type has benefits over another. The aims have been described as to enhance either the rectoanal inhibitory reflex (RAIR), rectal sensitivity (by discrimination of progressively smaller volumes of a rectal balloon and promptly contracting the external anal sphincter (EAS)), or the strength and endurance of the EAS contraction. Three general types of biofeedback have been described, though they are not mutually exclusive, with many protocols combining these elements. Similarly, there is variance in the length of both the individual sessions and the overall length of the training, and in whether and how home exercises are performed in addition. In rectal sensitivity training, a balloon is placed in the rectum, and is gradually distended until there is a sensation of rectal filling. Successively smaller volume reinflations of the balloon aim to help the person detect rectal distension at a lower threshold, giving more time to contract the EAS and prevent incontinence, or to journey to the toilet. Alternatively, in those with urge incontinence/rectal hypersensitivity, training is aimed at teaching the person to tolerate progressively larger volumes. Strength training may involve electromyography (EMG) skin electrodes, manometric pressures, intra-anal EMG, or endoanal ultrasound. One of these measures is used to relay the muscular activity or anal canal pressure during anal sphincter exercise. Performance and progress can be monitored in this manner. Co-ordination training involves the placing of 3 balloons, in the rectum and in the upper and lower anal canal. The rectal balloon is inflated to trigger the RAIR, an event often followed by incontinence. Co-ordination training aims to teach voluntary contraction of the EAS when the RAIR occurs (i.e. when there is rectal distension).
There is some research that shows the effects of biofeedback on irritable bowel syndrome. However, there may be some adverse effects when using these devices.
In 2010 and 2017, the National Institute for Health and Care Excellence recommended against the use of biofeedback in managing constipation in children.
EEG
Caton recorded spontaneous electrical potentials from the exposed cortical surface of monkeys and rabbits, and was the first to measure event-related potentials (EEG responses to stimuli) in 1875.
Danilevsky published Investigations in the Physiology of the Brain, which explored the relationship between the EEG and states of consciousness in 1877.
Beck published studies of spontaneous electrical potentials detected from the brains of dogs and rabbits, and was the first to document alpha blocking, where light alters rhythmic oscillations, in 1890.
Sherrington introduced the terms neuron and synapse and published the Integrative Action of the Nervous System in 1906.
Pravdich-Neminsky photographed the EEG and event related potentials from dogs, demonstrated a 12–14 Hz rhythm that slowed during asphyxiation, and introduced the term electrocerebrogram in 1912.
Forbes reported the replacement of the string galvanometer with a vacuum tube to amplify the EEG in 1920. The vacuum tube became the de facto standard by 1936.
Berger (1924) published the first human EEG data. He recorded electrical potentials from his son Klaus's scalp. At first he believed that he had discovered the physical mechanism for telepathy but was disappointed that the electromagnetic variations disappear only millimeters away from the skull. (He did continue to believe in telepathy throughout his life, however, having had a particularly confirming event regarding his sister). He viewed the EEG as analogous to the ECG and introduced the term Elektrenkephalogramm. He believed that the EEG had diagnostic and therapeutic promise in measuring the impact of clinical interventions. Berger showed that these potentials were not due to scalp muscle contractions. He first identified the alpha rhythm, which he called the Berger rhythm, and later identified the beta rhythm and sleep spindles. He demonstrated that alterations in consciousness are associated with changes in the EEG and associated the beta rhythm with alertness. He described interictal activity (EEG potentials between seizures) and recorded a partial complex seizure in 1933. Finally, he performed the first QEEG, which is the measurement of the signal strength of EEG frequencies.
Adrian and Matthews confirmed Berger's findings in 1934 by recording their own EEGs using a cathode-ray oscilloscope. Their demonstration of EEG recording at the 1935 Physiological Society meetings in England caused its widespread acceptance. Adrian used himself as a subject and demonstrated the phenomenon of alpha blocking, where opening his eyes suppressed alpha rhythms.
Gibbs, Davis, and Lennox inaugurated clinical electroencephalography in 1935 by identifying abnormal EEG rhythms associated with epilepsy, including interictal spike waves and 3 Hz activity in absence seizures.
Bremer used the EEG to show how sensory signals affect vigilance in 1935.
Walter (1937, 1953) named the delta waves and theta waves, and the contingent negative variation (CNV), a slow cortical potential that may reflect expectancy, motivation, intention to act, or attention. He located an occipital lobe source for alpha waves and demonstrated that delta waves can help locate brain lesions like tumors. He improved Berger's electroencephalograph and pioneered EEG topography.
Kleitman has been recognized as the "Father of American sleep research" for his seminal work in the regulation of sleep-wake cycles, circadian rhythms, the sleep patterns of different age groups, and the effects of sleep deprivation. He discovered the phenomenon of rapid eye movement (REM) sleep with his graduate student Aserinsky in 1953.
Dement, another of Kleitman's students, described the EEG architecture and phenomenology of sleep stages and the transitions between them in 1955, associated REM sleep with dreaming in 1957, and documented sleep cycles in another species, cats, in 1958, which stimulated basic sleep research. He established the Stanford University Sleep Research Center in 1970.
Andersen and Andersson (1968) proposed that thalamic pacemakers project synchronous alpha rhythms to the cortex via thalamocortical circuits.
Kamiya (1968) demonstrated that the alpha rhythm in humans could be operantly conditioned. He published an influential article in Psychology Today that summarized research that showed that subjects could learn to discriminate when alpha was present or absent, and that they could use feedback to shift the dominant alpha frequency about 1 Hz. Almost half of his subjects reported experiencing a pleasant "alpha state" characterized as an "alert calmness." These reports may have contributed to the perception of alpha biofeedback as a shortcut to a meditative state. He also studied the EEG correlates of meditative states.
Brown (1970) demonstrated the clinical use of alpha-theta biofeedback. In research designed to identify the subjective states associated with EEG rhythms, she trained subjects to increase the abundance of alpha, beta, and theta activity using visual feedback and recorded their subjective experiences when the amplitude of these frequency bands increased. She also helped popularize biofeedback by publishing a series of books, including New Mind, New body (1974) and Stress and the Art of Biofeedback (1977).
Mulholland and Peper (1971) showed that occipital alpha increases with eyes open and not focused, and is disrupted by visual focusing; a rediscovery of alpha blocking.
Green and Green (1986) investigated voluntary control of internal states by individuals like Swami Rama and American Indian medicine man Rolling Thunder both in India and at the Menninger Foundation. They brought portable biofeedback equipment to India and monitored practitioners as they demonstrated self-regulation. A film containing footage from their investigations was released as Biofeedback: The Yoga of the West (1974). They developed alpha-theta training at the Menninger Foundation from the 1960s to the 1990s. They hypothesized that theta states allow access to unconscious memories and increase the impact of prepared images or suggestions. Their alpha-theta research fostered Peniston's development of an alpha-theta addiction protocol.
Sterman (1972) showed that cats and human subjects could be operantly trained to increase the amplitude of the sensorimotor rhythm (SMR) recorded from the sensorimotor cortex. He demonstrated that SMR production protects cats against drug-induced generalized seizures (tonic-clonic seizures involving loss of consciousness) and reduces the frequency of seizures in humans diagnosed with epilepsy. He found that his SMR protocol, which uses visual and auditory EEG biofeedback, normalizes their EEGs (SMR increases while theta and beta decrease toward normal values) even during sleep. Sterman also co-developed the Sterman-Kaiser (SKIL) QEEG database.
Birbaumer and colleagues (1981) have studied feedback of slow cortical potentials since the late 1970s. They have demonstrated that subjects can learn to control these DC potentials and have studied the efficacy of slow cortical potential biofeedback in treating ADHD, epilepsy, migraine, and schizophrenia.
Lubar (1989) studied SMR biofeedback to treat attention disorders and epilepsy in collaboration with Sterman. He demonstrated that SMR training can improve attention and academic performance in children diagnosed with Attention Deficit Disorder with Hyperactivity (ADHD). He documented the importance of theta-to-beta ratios in ADHD and developed theta suppression-beta enhancement protocols to decrease these ratios and improve student performance. The Neuropsychiatric EEG-Based Assessment Aid (NEBA) System, a device used to measure the theta-to-beta ratio, was approved as a tool to assist in the diagnosis of ADHD on July 15, 2013. A 2019 systematic review studied the use of quantitative EEG as a biomarker for diagnosing ADHD and other child psychiatric disorders, and showed a higher theta/beta ratio in ADHD versus healthy controls.
Electrodermal system
Feré demonstrated the exosomatic method of recording of skin electrical activity by passing a small current through the skin in 1888.
Tarchanoff used the endosomatic method by recording the difference in skin electrical potential from points on the skin surface in 1889; no external current was applied.
Jung employed the galvanometer, which used the exosomatic method, in 1907 to study unconscious emotions in word-association experiments.
Marjorie and Hershel Toomim (1975) published a landmark article about the use of GSR biofeedback in psychotherapy.
Meyer and Reich discussed similar material in a British publication.
Musculoskeletal system
Jacobson (1930) developed hardware to measure EMG voltages over time, showed that cognitive activity (like imagery) affects EMG levels, introduced the deep relaxation method Progressive Relaxation, and wrote Progressive Relaxation (1929) and You Must Relax (1934). He prescribed daily Progressive Relaxation practice to treat diverse psychophysiological disorders like hypertension.
Several researchers showed that human subjects could learn precise control of individual motor units (motor neurons and the muscle fibers they control). Lindsley (1935) found that relaxed subjects could suppress motor unit firing without biofeedback training.
Harrison and Mortensen (1962) trained subjects using visual and auditory EMG biofeedback to control individual motor units in the tibialis anterior muscle of the leg.
Basmajian (1963) instructed subjects using unfiltered auditory EMG biofeedback to control separate motor units in the abductor pollicis muscle of the thumb in his Single Motor Unit Training (SMUT) studies. His best subjects coordinated several motor units to produce drum rolls. Basmajian demonstrated practical applications for neuromuscular rehabilitation, pain management, and headache treatment.
Marinacci (1960) applied EMG biofeedback to neuromuscular disorders (where proprioception is disrupted) including Bell Palsy (one-sided facial paralysis), polio, and stroke.
"While Marinacci used EMG to treat neuromuscular disorders, his colleagues used the EMG only for diagnosis. They were unable to recognize its potential as a teaching tool even when the evidence stared them in the face! Many electromyographers who performed nerve conduction studies used visual and auditory feedback to reduce interference when a patient recruited too many motor units. Even though they used EMG biofeedback to guide the patient to relax so that clean diagnostic EMG tests could be recorded, they were unable to envision EMG biofeedback treatment of motor disorders."
Whatmore and Kohli (1968) introduced the concept of dysponesis (misplaced effort) to explain how functional disorders (where body activity is disturbed) develop. Bracing your shoulders when you hear a loud sound illustrates dysponesis, since this action does not protect against injury. These clinicians applied EMG biofeedback to diverse functional problems like headache and hypertension. They reported case follow-ups ranging from 6 to 21 years. This was long compared with typical 0–24 month follow-ups in the clinical literature. Their data showed that skill in controlling misplaced efforts was positively related to clinical improvement. Last, they wrote The Pathophysiology and Treatment of Functional Disorders (1974) that outlined their treatment of functional disorders.
Wolf (1983) integrated EMG biofeedback into physical therapy to treat stroke patients and conducted landmark stroke outcome studies.
Peper (1997) applied SEMG to the workplace, studied the ergonomics of computer use, and promoted "healthy computing."
Taub (1999, 2006) demonstrated the clinical efficacy of constraint-induced movement therapy (CIMT) for the treatment of spinal cord-injured and stroke patients.
Cardiovascular system
Shearn (1962) operantly trained human subjects to increase their heart rates by 5 beats-per-minute to avoid electric shock. In contrast to Shearn's slight heart rate increases, Swami Rama used yoga to produce atrial flutter at an average 306 beats per minute before a Menninger Foundation audience. This briefly stopped his heart's pumping of blood and silenced his pulse.
Engel and Chism (1967) operantly trained subjects to decrease, increase, and then decrease their heart rates (this was analogous to ON-OFF-ON EEG training). He then used this approach to teach patients to control their rate of premature ventricular contractions (PVCs), where the ventricles contract too soon. Engel conceptualized this training protocol as illness onset training, since patients were taught to produce and then suppress a symptom. Peper has similarly taught asthmatics who wheeze to better control their breathing.
Schwartz (1971, 1972) examined whether specific patterns of cardiovascular activity are easier to learn than others due to biological constraints. He examined the constraints on learning integrated (two autonomic responses change in the same direction) and differentiated (two autonomic responses change inversely) patterns of blood pressure and heart rate change.
Schultz and Luthe (1969) developed Autogenic Training, which is a deep relaxation exercise derived from hypnosis. This procedure combines passive volition with imagery in a series of three treatment procedures (standard Autogenic exercises, Autogenic neutralization, and Autogenic meditation). Clinicians at the Menninger Foundation coupled an abbreviated list of standard exercises with thermal biofeedback to create autogenic biofeedback. Luthe (1973) also published a series of six volumes titled Autogenic therapy.
Fahrion and colleagues (1986) reported on an 18–26 session treatment program for hypertensive patients. The Menninger program combined breathing modification, autogenic biofeedback for the hands and feet, and frontal EMG training. The authors reported that 89% of their medication patients discontinued or reduced medication by one-half while significantly lowering blood pressure. While this study did not include a double-blind control, the outcome rate was impressive.
Freedman and colleagues (1991) demonstrated that hand-warming and hand-cooling are produced by different mechanisms. The primary hand-warming mechanism is beta-adrenergic (hormonal), while the main hand-cooling mechanism is alpha-adrenergic and involves sympathetic C-fibers. This contradicts the traditional view that finger blood flow is controlled exclusively by sympathetic C-fibers. The traditional model asserts that, when firing is slow, hands warm; when firing is rapid, hands cool. Freedman and colleagues' studies support the view that hand-warming and hand-cooling represent entirely different skills.
Vaschillo and colleagues (1983) published the first studies of heart rate variability (HRV) biofeedback with cosmonauts and treated patients diagnosed with psychiatric and psychophysiological disorders. Lehrer collaborated with Smetankin and Potapova in treating pediatric asthma patients and published influential articles on HRV asthma treatment in the medical journal Chest. The most direct effect of HRV biofeedback is on the baroreflex, a homeostatic reflex that helps control blood pressure fluctuations. When blood pressure goes up, the baroreflex makes heart rate go down. The opposite happens when blood pressure goes down. Because it takes about 5 seconds for blood pressure to change after changes in heart rate (think of different amounts of blood flowing through the same sized tube), the baroreflex produces a rhythm in heart rate with a period of about 10 seconds. Another rhythm in heart rate is caused by respiration (respiratory sinus arrhythmia), such that heart rate rises during inhalation and falls during exhalation. During HRV biofeedback, these two reflexes stimulate each other, exciting the resonance properties of the cardiovascular system caused by the inherent rhythm in the baroreflex, and thus causing very large oscillations in heart rate and large-amplitude stimulation of the baroreflex. Thus HRV biofeedback exercises the baroreflex and strengthens it. This apparently has the effect of modulating autonomic reactivity to stimulation. Because the baroreflex is controlled through brain stem mechanisms that communicate directly with the insula and amygdala, which control emotion, HRV biofeedback also appears to modulate emotional reactivity and to help people with anxiety, stress, and depression.
Emotions are intimately linked to heart health, which is linked to physical and mental health. In general, good mental and physical health are correlated with positive emotions and high heart rate variability (HRV) dominated by high-frequency components. High HRV has been correlated with increased executive-functioning skills such as memory and reaction time. Biofeedback that increased HRV and shifted power toward the HF (high-frequency) range has been shown to lower blood pressure.
On the other hand, LF (low-frequency) power in HRV is associated with sympathetic and mixed sympathetic–vagal activity, which is known to increase the risk of heart attack.
LF-dominated HRV power spectra are also directly associated with higher mortality rates in healthy individuals, and among individuals with mood disorders.
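As an illustration of how these frequency bands are quantified, the sketch below estimates LF and HF power from an evenly resampled RR-interval series using Welch's method (via scipy). The band edges follow the conventional HRV definitions (LF 0.04–0.15 Hz, HF 0.15–0.40 Hz); the sampling rate, synthetic data, and function names are assumptions made for this example, not taken from any study cited here.

```python
# Minimal sketch: LF/HF power of heart rate variability from an evenly
# resampled RR-interval series (synthetic data, for illustration only).
import numpy as np
from scipy.signal import welch

FS = 4.0  # assumed resampling rate in Hz, a common choice for HRV analysis

def band_power(freqs, psd, lo, hi):
    """Approximate integral of the power spectral density over [lo, hi) Hz."""
    df = freqs[1] - freqs[0]            # Welch output is uniformly spaced
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]) * df)

def lf_hf(rr_ms):
    """Return LF power, HF power, and LF/HF ratio for RR intervals in ms."""
    rr = rr_ms - np.mean(rr_ms)         # remove the mean before spectral analysis
    freqs, psd = welch(rr, fs=FS, nperseg=min(256, len(rr)))
    lf = band_power(freqs, psd, 0.04, 0.15)
    hf = band_power(freqs, psd, 0.15, 0.40)
    return lf, hf, lf / hf

# Synthetic 5-minute series: a 0.1 Hz baroreflex-resonance rhythm plus a
# faster 0.25 Hz respiratory rhythm riding on an 800 ms mean RR interval.
t = np.arange(0, 300, 1 / FS)
rr = 800 + 40 * np.sin(2 * np.pi * 0.10 * t) + 15 * np.sin(2 * np.pi * 0.25 * t)
lf, hf, ratio = lf_hf(rr)
print(f"LF = {lf:.0f} ms^2, HF = {hf:.0f} ms^2, LF/HF = {ratio:.1f}")
```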
Anger and frustration increase the LF range of HRV. Other studies have shown anger to increase the risk of heart attack.
Because emotions have such an impact on cardiac function, which cascades to numerous other biological processes, emotional regulation techniques are able to effect practical, psychophysiological change.
McCraty et al. discovered that feelings of gratitude increased HRV and moved its power spectrum toward the MF (mid-frequency) and HF (high-frequency) ranges, while decreasing LF (low-frequency) power.
Other techniques that have been claimed to increase HRV include strenuous aerobic exercise, and meditation.
Pain
In 2021, the National Institute for Health and Care Excellence recommended against the use of biofeedback in managing chronic pain in adults.
Chronic back pain
Newton-John, Spence, and Schotte (1994) compared the effectiveness of cognitive behavior therapy (CBT) and electromyographic biofeedback (EMG biofeedback) for 44 participants with chronic low back pain. They split the participants into two groups, then measured pain intensity, perceived disability, and depression before treatment, after treatment, and again six months later. They found no significant differences between the group that received CBT and the group that received EMG biofeedback, suggesting that biofeedback is as effective as CBT for chronic low back pain. Comparison of pre- and post-treatment results indicates that EMG biofeedback reduced pain, disability, and depression by as much as half.
Muscle pain
Budzynski and Stoyva (1969) showed that EMG biofeedback could reduce frontalis muscle (forehead) contraction. They demonstrated in 1973 that analog (proportional) and binary (ON or OFF) visual EMG biofeedback were equally helpful in lowering masseter SEMG levels.
McNulty, Gevirtz, Hubbard, and Berkoff (1994) proposed that sympathetic nervous system innervation of muscle spindles underlies trigger points.
Tension headache
Budzynski, Stoyva, Adler, and Mullaney (1973) reported that auditory frontalis EMG biofeedback combined with home relaxation practice lowered tension headache frequency and frontalis EMG levels. A control group that received noncontingent (false) auditory feedback did not improve. This study helped make the frontalis muscle the placement-of-choice in EMG assessment and treatment of headache and other psychophysiological disorders.
Migraine
Sargent, Green, and Walters (1972, 1973) demonstrated that hand-warming could abort migraines and that autogenic biofeedback training could reduce headache activity. The early Menninger migraine studies, although methodologically weak (no pretreatment baselines, control groups, or random assignment to conditions), strongly influenced migraine treatment.
A 2013 review classified biofeedback among the techniques that might be of benefit in the management of chronic migraine.
Phantom-limb pain
Flor (2002) trained amputees to detect the location and frequency of shocks delivered to their stumps, which resulted in an expansion of corresponding cortical regions and significant reduction of their phantom limb pain.
Financial decision-making
Financial traders use biofeedback as a tool for regulating their level of emotional arousal in order to make better financial decisions. The technology company Philips and the Dutch bank ABN AMRO developed a biofeedback device for retail investors based on a galvanic skin response sensor. Astor et al. (2013) developed a biofeedback based serious game in which financial decision makers can learn how to effectively regulate their emotions using heart rate measurements.
Stress reduction
A randomized study by Sutarto et al. assessed the effect of resonant-breathing biofeedback (training users to recognize and control their heart rate variability) among manufacturing operators; depression, anxiety, and stress decreased significantly. Heart rate variability data can also be analyzed with deep neural networks to predict stress levels; this approach has been combined with mindfulness techniques in a mobile app aimed at stress reduction.
Anxiety management
A meta-analysis from the University of Cambridge compiled previous studies on the use of biofeedback in the management and control of anxiety. The included studies were evaluated for validity and relevance to the question of how effective biofeedback is when used in tandem with other forms of therapy to reduce anxiety to manageable levels. The analysis concluded that biofeedback in the form of HRV monitoring consistently produced large self-reported reductions in anxiety across the included studies.
Clinical effectiveness
Research
Moss, LeVaque, and Hammond (2004) observed that "Biofeedback and neurofeedback seem to offer the kind of evidence-based practice that the healthcare establishment is demanding." "From the beginning biofeedback developed as a research-based approach emerging directly from laboratory research on psychophysiology and behavior therapy. The ties of biofeedback/neurofeedback to the biomedical paradigm and to research are stronger than is the case for many other behavioral interventions" (p. 151).
The Association for Applied Psychophysiology and Biofeedback (AAPB) and the International Society for Neurofeedback and Research (ISNR) have collaborated in validating and rating treatment protocols to address questions about the clinical efficacy of biofeedback and neurofeedback applications, like ADHD and headache. In 2001, Donald Moss, then president of the Association for Applied Psychophysiology and Biofeedback, and Jay Gunkelman, president of the International Society for Neurofeedback and Research, appointed a task force to establish standards for the efficacy of biofeedback and neurofeedback.
The Task Force document was published in 2002, and a series of white papers followed, reviewing the efficacy of biofeedback for individual disorders. The white papers established the efficacy of biofeedback for functional anorectal disorders, attention deficit disorder, facial pain and temporomandibular joint dysfunction, hypertension, urinary incontinence, Raynaud's phenomenon, substance abuse, and headache.
A broader review was published and later updated, applying the same efficacy standards to the entire range of medical and psychological disorders. The 2008 edition reviewed the efficacy of biofeedback for over 40 clinical disorders, ranging from alcoholism/substance abuse to vulvar vestibulitis. The ratings for each disorder depend on the nature of research studies available on each disorder, ranging from anecdotal reports to double blind studies with a control group. Thus, a lower rating may reflect the lack of research rather than the ineffectiveness of biofeedback for the problem.
A randomized trial by Dehli et al. tested whether injection of a bulking agent into the anal canal was superior to sphincter training with biofeedback for treating fecal incontinence. Both methods led to an improvement of FI, but comparison of St Mark's scores between the groups showed no difference in effect between the treatments.
Following their reviews, the National Institute for Health and Care Excellence has recommended against the use of biofeedback in the treatment of constipation in children, urinary incontinence in women, and chronic pain.
Efficacy
Yucha and Montgomery's (2008) ratings are listed for the five levels of efficacy recommended by a joint Task Force and adopted by the Boards of Directors of the Association for Applied Psychophysiology and Biofeedback (AAPB) and the International Society for Neuronal Regulation (ISNR). From weakest to strongest, these levels are: not empirically supported, possibly efficacious, probably efficacious, efficacious, and efficacious and specific.
Level 1: Not empirically supported. This designation includes applications supported by anecdotal reports and/or case studies in non-peer-reviewed venues. Yucha and Montgomery (2008) assigned eating disorders, immune function, spinal cord injury, and syncope to this category.
Level 2: Possibly efficacious. This designation requires at least one study of sufficient statistical power with well-identified outcome measures but lacking randomized assignment to a control condition internal to the study. Yucha and Montgomery (2008) assigned asthma, autism, Bell palsy, cerebral palsy, COPD, coronary artery disease, cystic fibrosis, depression, erectile dysfunction, fibromyalgia, hand dystonia, irritable bowel syndrome, PTSD, repetitive strain injury, respiratory failure, stroke, tinnitus, and urinary incontinence in children to this category.
Level 3: Probably efficacious. This designation requires multiple observational studies, clinical studies, waitlist-controlled studies, and within subject and intrasubject replication studies that demonstrate efficacy. Yucha and Montgomery (2008) assigned alcoholism and substance abuse, arthritis, diabetes mellitus, fecal disorders in children, fecal incontinence in adults, insomnia, pediatric headache, traumatic brain injury, urinary incontinence in males, and vulvar vestibulitis (vulvodynia) to this category.
Level 4: Efficacious. This designation requires the satisfaction of six criteria:
(a) In a comparison with a no-treatment control group, alternative treatment group, or sham (placebo) control using randomized assignment, the investigational treatment is shown to be statistically significantly superior to the control condition or the investigational treatment is equivalent to a treatment of established efficacy in a study with sufficient power to detect moderate differences.
(b) The studies have been conducted with a population treated for a specific problem, for whom inclusion criteria are delineated in a reliable, operationally defined manner.
(c) The study used valid and clearly specified outcome measures related to the problem being treated.
(d) The data are subjected to appropriate data analysis.
(e) The diagnostic and treatment variables and procedures are clearly defined in a manner that permits replication of the study by independent researchers.
(f) The superiority or equivalence of the investigational treatment has been shown in at least two independent research settings.
Yucha and Montgomery (2008) assigned attention deficit hyperactivity disorder (ADHD), anxiety, chronic pain, epilepsy, constipation (adult), headache (adult), hypertension, motion sickness, Raynaud's disease, and temporomandibular joint dysfunction to this category.
Level 5: Efficacious and specific. The investigational treatment must be shown to be statistically superior to credible sham therapy, pill, or alternative bona fide treatment in at least two independent research settings. Yucha and Montgomery (2008) assigned urinary incontinence (females) to this category.
Criticisms
In a healthcare environment that emphasizes cost containment and evidence-based practice, critics question how these treatments compare with conventional behavioral and medical interventions on efficacy and cost. A review of a meta-analysis of biofeedback treatments noted the lack of comparisons with existing treatments in most of the studies included.
Organizations
The Association for Applied Psychophysiology and Biofeedback (AAPB) is a non-profit scientific and professional society for biofeedback and neurofeedback. The International Society for Neurofeedback and Research (ISNR) is a non-profit scientific and professional society for neurofeedback. The Biofeedback Foundation of Europe (BFE) sponsors international education, training, and research activities in biofeedback and neurofeedback. The Northeast Regional Biofeedback Association (NRBS) sponsors theme-centered educational conferences, political advocacy for biofeedback-friendly legislation, and research activities in biofeedback and neurofeedback in the northeastern United States. The Southeast Biofeedback and Clinical Neuroscience Association (SBCNA) is a non-profit regional organization supporting biofeedback professionals with continuing education, ethics guidelines, and public awareness promoting the efficacy and safety of professional biofeedback. The SBCNA offers an annual conference for professional continuing education as well as promoting biofeedback as an adjunct to the allied health professions. The SBCNA was formerly the North Carolina Biofeedback Society (NCBS), which had served biofeedback practitioners since the 1970s. In 2013, the NCBS reorganized as the SBCNA, supporting and representing biofeedback and neurofeedback in the southeastern United States.
Certification
The Biofeedback Certification International Alliance (formerly the Biofeedback Certification Institute of America) is a non-profit organization that is a member of the Institute for Credentialing Excellence (ICE). BCIA offers certification in biofeedback, in neurofeedback (also called EEG biofeedback), and in pelvic muscle dysfunction biofeedback. BCIA certifies individuals who meet education and training standards in biofeedback and neurofeedback and progressively recertifies those satisfying continuing-education requirements. BCIA certification has been endorsed by the Mayo Clinic, the Association for Applied Psychophysiology and Biofeedback (AAPB), the International Society for Neurofeedback and Research (ISNR), and the Washington State Legislature.
The BCIA didactic education requirement includes a 48-hour course from a regionally-accredited academic institution or a BCIA-approved training program that covers the complete General Biofeedback Blueprint of Knowledge and study of human anatomy and physiology. The General Biofeedback Blueprint of Knowledge areas include: I. Orientation to Biofeedback, II. Stress, Coping, and Illness, III. Psychophysiological Recording, IV. Surface Electromyographic (SEMG) Applications, V. Autonomic Nervous System (ANS) Applications, VI. Electroencephalographic (EEG) Applications, VII. Adjunctive Interventions, and VIII. Professional Conduct.
Applicants may demonstrate their knowledge of human anatomy and physiology by completing a course in human anatomy, human physiology, or human biology provided by a regionally-accredited academic institution or a BCIA-approved training program or by successfully completing an Anatomy and Physiology exam covering the organization of the human body and its systems.
Applicants must also document practical-skills training that includes 20 contact hours supervised by a BCIA-approved mentor, designed to teach them how to apply clinical biofeedback skills through self-regulation training, 50 patient/client sessions, and case-conference presentations. Distance learning allows applicants to complete didactic coursework over the internet. Distance mentoring trains candidates from their residence or office. Certificants must recertify every 4 years, either completing 55 hours of continuing education during each review period or passing the written exam, and attest that their license/credential (or their supervisor's license/credential) has not been suspended, investigated, or revoked.
History
Claude Bernard proposed in 1865 that the body strives to maintain a steady state in the internal environment (milieu intérieur), introducing the concept of homeostasis. In 1885, J. R. Tarchanoff showed that voluntary control of heart rate could be fairly direct (cortical-autonomic) and did not depend on "cheating" by altering breathing rate. In 1901, J. H. Bair studied voluntary control of the retrahens aurem muscle that wiggles the ear, discovering that subjects learned this skill by inhibiting interfering muscles, and demonstrating that skeletal muscles are self-regulated. Alexander Graham Bell attempted to teach the deaf to speak through the use of two devices: the phonautograph, created by Édouard-Léon Scott, and a manometric flame. The former translated sound vibrations into tracings on smoked glass to show their acoustic waveforms, while the latter allowed sound to be displayed as patterns of light. After World War II, the mathematician Norbert Wiener developed cybernetic theory, which proposed that systems are controlled by monitoring their results. The participants at the landmark 1969 conference at the Surfrider Inn in Santa Monica coined the term biofeedback from Wiener's feedback. The conference resulted in the founding of the Bio-Feedback Research Society, which permitted normally isolated researchers to contact and collaborate with each other, and popularized the term biofeedback. The work of B. F. Skinner led researchers to apply operant conditioning to biofeedback and to determine which responses could be voluntarily controlled and which could not. In the first experimental demonstration of biofeedback, Shearn applied these procedures to heart rate. The effects of the perception of autonomic nervous system activity were initially explored by George Mandler's group in 1958. In 1965, Maia Lisina combined classical and operant conditioning to train subjects to change blood-vessel diameter, eliciting and displaying reflexive blood-flow changes to teach subjects how to voluntarily control the temperature of their skin. In 1974, H. D. Kimmel trained subjects to sweat using the galvanic skin response.
Timeline
1958 – G. Mandler's group studied the process of autonomic feedback and its effects.
1962 – D. Shearn used feedback instead of conditioned stimuli to change heart rate.
1962 – Publication of Muscles Alive by John Basmajian and Carlo De Luca
1968 – Annual Veteran's Administration research meeting in Denver that brought together several biofeedback researchers
1969 – April: Conference on Altered States of Consciousness, Council Grove, KS; October: formation and first meeting of the Biofeedback Research Society (BRS), Surfrider Inn, Santa Monica, CA; co-founder Barbara B. Brown becomes the society's first president
1972 – Review and analysis of early biofeedback studies by D. Shearn in the 'Handbook of Psychophysiology'.
1974 – Publication of The Alpha Syllabus: A Handbook of Human EEG Alpha Activity and the first popular book on biofeedback, New Mind, New Body (December), both by Barbara B. Brown
1975 – American Association of Biofeedback Clinicians founded; publication of The Biofeedback Syllabus: A Handbook for the Psychophysiologic Study of Biofeedback by Barbara B. Brown
1976 – BRS renamed the Biofeedback Society of America (BSA)
1977 – Publication of Beyond Biofeedback by Elmer and Alyce Green and Biofeedback: Methods and Procedures in Clinical Practice by George Fuller and Stress and The Art of Biofeedback by Barbara B. Brown
1978 – Publication of Biofeedback: A Survey of the Literature by Francine Butler
1979 – Publication of Biofeedback: Principles and Practice for Clinicians by John Basmajian and Mind/Body Integration: Essential Readings in Biofeedback by Erik Peper, Sonia Ancoli, and Michele Quinn
1980 – First national certification examination in biofeedback offered by the Biofeedback Certification Institute of America (BCIA); publication of Biofeedback: Clinical Applications in Behavioral Medicine by David Olton and Aaron Noonberg and Supermind: The Ultimate Energy by Barbara B. Brown
1984 – Publication of Principles and Practice of Stress Management by Woolfolk and Lehrer and Between Health and Illness: New Notions on Stress and the Nature of Well Being by Barbara B. Brown
1984 – Publication of The Biofeedback Way to Starve Stress by Mark Golin in Prevention Magazine
1987 – Publication of Biofeedback: A Practitioner's Guide by Mark Schwartz
1989 – BSA renamed the Association for Applied Psychophysiology and Biofeedback
1991 – First national certification examination in stress management offered by BCIA
1994 – Brain Wave and EMG sections established within AAPB
1995 – Society for the Study of Neuronal Regulation (SSNR) founded
1996 – Biofeedback Foundation of Europe (BFE) established
1999 – SSNR renamed the Society for Neuronal Regulation (SNR)
2002 – SNR renamed the International Society for Neuronal Regulation (ISNR)
2003 – Publication of The Neurofeedback Book by Thompson and Thompson
2004 – Publication of Evidence-Based Practice in Biofeedback and Neurofeedback by Carolyn Yucha and Christopher Gilbert
2006 – ISNR renamed the International Society for Neurofeedback and Research (ISNR)
2008 – Biofeedback Neurofeedback Alliance formed to pool the resources of the AAPB, BCIA, and ISNR on joint initiatives
2008 – Biofeedback Alliance and Nomenclature Task Force define biofeedback
2009 – The International Society for Neurofeedback & Research defines neurofeedback
2010 – Biofeedback Certification Institute of America renamed the Biofeedback Certification International Alliance (BCIA)
In popular culture
Biofeedback data and biofeedback technology are used by Massimiliano Peretti in a contemporary art environment, the Amigdalae project. This project explores the way in which emotional reactions filter and distort human perception and observation. During the performance, biofeedback technologies such as EEG, body-temperature measurement, heart rate, and galvanic skin response are used to analyze an audience's emotions while they watch the video art. Using these signals, the music changes so that the resulting sound environment simultaneously mirrors and influences the viewer's emotional state. More information is available at the website of the CNRS, the French National Centre for Scientific Research.
Charles Wehrenberg implemented competitive-relaxation as a gaming paradigm with the Will Ball Games circa 1973. In the first bio-mechanical versions, comparative GSR inputs monitored each player's relaxation response and moved the Will Ball across a playing field appropriately using stepper motors. In 1984, Wehrenberg programmed the Will Ball games for Apple II computers. The Will Ball game itself is described as pure competitive-relaxation; Brain Ball is a duel between one player's left- and right-brain hemispheres; Mood Ball is an obstacle-based game; Psycho Dice is a psycho-kinetic game.
In 2001, the company Journey to Wild Divine began producing biofeedback hardware and software for the Mac and Windows operating systems. Third-party and open-source software and games are also available for the Wild Divine hardware. Tetris 64 makes use of biofeedback to adjust the speed of the tetris puzzle game.
David Rosenboom has worked to develop musical instruments that would respond to mental and physiological commands. Playing these instruments can be learned through a process of biofeedback.
In the mid-1970s, an episode of the television series The Bionic Woman featured a doctor who could "heal" himself using biofeedback techniques to communicate with his body and react to stimuli. For example, he could exhibit "super" powers, such as walking on hot coals, by feeling the heat on the soles of his feet and then convincing his body to react by sending large quantities of perspiration to compensate. He could also convince his body to deliver extremely high levels of adrenaline to provide more energy to allow him to run faster and jump higher. When injured, he could slow his heart rate to reduce blood pressure, send extra platelets to aid in clotting a wound, and direct white blood cells to an area to attack infection.
In the science-fiction book Quantum Lens by Douglas E. Richards, biofeedback is used to enhance certain abilities to detect quantum effects that give the user special powers.
See also
Direct visual feedback
Journey to Wild Divine, a biofeedback-based, multi-sensor, game-like software package
Polygraph, which uses the same sensors as biofeedback devices
Footnotes
External links
Association for Applied Psychophysiology and Biofeedback (AAPB)
Biofeedback Certification Institute of America (BCIA)
Biofeedback Foundation of Europe (BFE)
Deutsche Gesellschaft für Biofeedback e.V. (DGBfb e.V.)
International Society for Neurofeedback & Research (ISNR)
Physiology
Mind–body interventions
Devices to alter consciousness
Feedback | Biofeedback | [
"Biology"
] | 13,369 | [
"Physiology"
] |
292,941 | https://en.wikipedia.org/wiki/Polyethylene%20terephthalate | Polyethylene terephthalate (or poly(ethylene terephthalate), PET, PETE, or the obsolete PETP or PET-P) is the most common thermoplastic polymer resin of the polyester family and is used in fibres for clothing, containers for liquids and foods, thermoforming for manufacturing, and in combination with glass fibre for engineering resins.
In 2016, annual production of PET was 56 million tons. The biggest application is in fibres (in excess of 60%), with bottle production accounting for about 30% of global demand. In the context of textile applications, PET is referred to by its common name, polyester, whereas the acronym PET is generally used in relation to packaging. Polyester makes up about 18% of world polymer production and is the fourth-most-produced polymer after polyethylene (PE), polypropylene (PP) and polyvinyl chloride (PVC).
PET consists of repeating (C10H8O4) units. PET is commonly recycled, and has the digit 1 (♳) as its resin identification code (RIC). The National Association for PET Container Resources (NAPCOR) defines PET as: "Polyethylene terephthalate items referenced are derived from terephthalic acid (or dimethyl terephthalate) and mono ethylene glycol, wherein the sum of terephthalic acid (or dimethyl terephthalate) and mono ethylene glycol reacted constitutes at least 90 percent of the mass of monomer reacted to form the polymer, and must exhibit a melting peak temperature between 225 °C and 255 °C, as identified during the second thermal scan in procedure 10.1 in ASTM D3418, when heating the sample at a rate of 10 °C/minute."
Depending on its processing and thermal history, polyethylene terephthalate may exist both as an amorphous (transparent) and as a semi-crystalline polymer. The semicrystalline material might appear transparent (particle size less than 500 nm) or opaque and white (particle size up to a few micrometers) depending on its crystal structure and particle size.
One process for making PET uses bis(2-hydroxyethyl) terephthalate, which can be synthesized by the esterification reaction between terephthalic acid and ethylene glycol with water as a byproduct (this is also known as a condensation reaction), or by transesterification reaction between ethylene glycol and dimethyl terephthalate (DMT) with methanol as a byproduct. Polymerization is through a polycondensation reaction of the monomers (done immediately after esterification/transesterification) with water as the byproduct.
Uses
Textiles
Polyester fibres are widely used in the textile industry. The invention of the polyester fibre is attributed to J. R. Whinfield. It was first commercialized in the 1940s by ICI, under the brand 'Terylene'. Subsequently E. I. DuPont launched the brand 'Dacron'. As of 2022, there are many brands around the world, mostly Asian.
Polyester fibres are used in fashion apparel often blended with cotton, as heat insulation layers in thermal wear, sportswear and workwear and automotive upholstery.
Rigid packaging
Plastic bottles made from PET are widely used for soft drinks, both still and sparkling. For beverages that are degraded by oxygen, such as beer, a multilayer structure is used. PET sandwiches an additional polyvinyl alcohol (PVOH) or polyamide (PA) layer to further reduce its oxygen permeability.
Non-oriented PET sheet can be thermoformed to make packaging trays and blister packs. Crystallizable PET withstands freezing and oven baking temperatures. Both amorphous PET and BoPET are transparent to the naked eye. Color-conferring dyes can easily be formulated into PET sheet.
PET is permeable to oxygen and carbon dioxide and this imposes shelf life limitations of contents packaged in PET.
In the early 2000s, the global PET packaging market grew at a compound annual growth rate of 9% to €17 billion in 2006.
Flexible packaging
Biaxially oriented PET (BOPET) film (including brands like "Mylar") can be aluminized by evaporating a thin film of metal onto it to reduce its permeability, and to make it reflective and opaque (MPET). These properties are useful in many applications, including flexible food packaging and thermal insulation (such as space blankets).
Photovoltaic modules
BOPET is used in the backsheet of photovoltaic modules. Most backsheets consist of a layer of BOPET laminated to a fluoropolymer or a layer of UV stabilized BOPET.
PET is also used as a substrate in thin film solar cells.
Thermoplastic resins
PET can be compounded with glass fibre and crystallization accelerators, to make thermoplastic resins. These can be injection moulded into parts such as housings, covers, electrical appliance components and elements of the ignition system.
Nanodiamonds
PET is stoichiometrically a mixture of carbon and water, and has therefore been used in an experiment involving laser-driven shock compression which created nanodiamonds and superionic water. This could be a possible route to producing nanodiamonds commercially.
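The "carbon plus water" description follows directly from the repeat-unit formula. As a quick stoichiometric check,

\[
\mathrm{C_{10}H_8O_4} \;\longrightarrow\; 10\,\mathrm{C} \;+\; 4\,\mathrm{H_2O}
\]

each repeat unit contains exactly enough hydrogen and oxygen to form four water molecules, leaving ten atoms of carbon.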
Other applications
A waterproofing barrier in undersea cables.
As a film base.
As a fibre, spliced into bell rope tops to help prevent wear on the ropes as they pass through the ceiling.
Since late 2014 as liner material in type IV composite high-pressure gas cylinders. PET works as a much better barrier to oxygen than the previously used (LD)PE.
As a 3D-printing filament, as well as in the 3D-printing plastic PETG (polyethylene terephthalate glycol). PETG has become a popular 3D-printing material, used in high-end applications ranging from surgical fracture tables to the automotive and aeronautical sectors, among other industrial applications. Its surface properties can be modified to make PETG self-cleaning, for applications such as the fabrication of traffic signs and light-emitting diode (LED) spotlights.
As one of three layers in the creation of glitter: a plastic core coated with aluminum and topped with plastic to create a light-reflecting surface. As of 2021, many glitter manufacturers have begun to phase out the use of PET after calls from festival organizers for bio-friendly glitter alternatives.
Film for tape applications, such as the carrier for magnetic tape or backing for pressure-sensitive adhesive tapes. Digitalization has caused the virtual disappearance of the magnetic audio- and videotape applications.
Water-resistant paper.
History
PET was patented in 1941 by John Rex Whinfield, James Tennant Dickson and their employer the Calico Printers' Association of Manchester, England. E. I. DuPont de Nemours in Delaware, United States, first produced Dacron (PET fiber) in 1950 and used the trademark Mylar (boPET film) in June 1951 and received registration of it in 1952. It is still the best-known name used for polyester film. The current owner of the trademark is DuPont Teijin Films.
In the Soviet Union, PET was first manufactured in the laboratories of the Institute of High-Molecular Compounds of the USSR Academy of Sciences in 1949, and its name "Lavsan" is an acronym thereof (лаборатории Института высокомолекулярных соединений Академии наук СССР).
The PET bottle was invented in 1973 by Nathaniel Wyeth and patented by DuPont.
Physical properties
PET in its most stable state is a colorless, semi-crystalline resin. However it is intrinsically slow to crystallize compared to other semicrystalline polymers. Depending on processing conditions it can be formed into either non-crystalline (amorphous) or crystalline articles. Its amenability to drawing in manufacturing makes PET useful in fibre and film applications. Like most aromatic polymers, it has better barrier properties than aliphatic polymers. It is strong and impact-resistant. PET is hygroscopic and absorbs water.
About 60% crystallization is the upper limit for commercial products, with the exception of polyester fibers. Transparent products can be produced by rapidly cooling molten polymer below the glass transition temperature (Tg) to form a non-crystalline amorphous solid. Like glass, amorphous PET forms when its molecules are not given enough time to arrange themselves in an orderly, crystalline fashion as the melt is cooled. While at room temperature the molecules are frozen in place, if enough heat energy is put back into them afterward by heating the material above Tg, they can begin to move again, allowing crystals to nucleate and grow. This procedure is known as solid-state crystallization. Amorphous PET also crystallizes and becomes opaque when exposed to solvents, such as chloroform or toluene.
A more crystalline product can be produced by allowing the molten polymer to cool slowly. Rather than forming one large single crystal, this material has a number of spherulites (crystallized areas) each containing many small crystallites (grains). Light tends to scatter as it crosses the boundaries between crystallites and the amorphous regions between them, causing the resulting solid to be translucent. Orientation also renders polymers more transparent. This is why BOPET film and bottles are both crystalline, to a degree, and transparent.
Flavor absorption
PET has an affinity for hydrophobic flavors, and drinks sometimes need to be formulated with a higher flavor dosage, compared to those going into glass, to offset the flavor taken up by the container. Where heavy-gauge PET bottles are returned for re-use, as in some EU countries, the propensity of PET to absorb flavors makes it necessary to conduct a "sniffer test" on returned bottles to avoid cross-contamination of flavors.
Intrinsic viscosity
Different applications of PET require different degrees of polymerization, which can be obtained by modifying the process conditions. The molecular weight of PET is measured by solution viscosity, preferably as the intrinsic viscosity (IV) of the polymer. Intrinsic viscosity, expressed in deciliters per gram (dℓ/g), is found by extrapolating the reduced viscosity of dilute polymer solutions to zero concentration. Higher IV corresponds to higher molecular weight; bottle grades, for example, require a higher IV than most fiber grades.
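For reference, the quantity being extrapolated is defined as

\[
[\eta] \;=\; \lim_{c \to 0} \frac{\eta_{\mathrm{sp}}}{c}
\;=\; \lim_{c \to 0} \frac{\eta - \eta_0}{\eta_0\,c},
\]

where η is the viscosity of the dilute polymer solution, η₀ that of the pure solvent, and c the concentration in g/dℓ, which gives [η] its units of dℓ/g.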
Copolymers
PET is often copolymerized with other diols or diacids to optimize the properties for particular applications.
PETG
For example, cyclohexanedimethanol (CHDM) can be added to the polymer backbone in place of ethylene glycol. Since this building block is much larger (six additional carbon atoms) than the ethylene glycol unit it replaces, it does not fit in with the neighboring chains the way an ethylene glycol unit would. This interferes with crystallization and lowers the polymer's melting temperature. In general, such PET is known as PETG or PET-G (polyethylene terephthalate glycol-modified). It is a clear amorphous thermoplastic that can be injection-molded, sheet-extruded or extruded as filament for 3D printing. PETG can be colored during processing.
Isophthalic acid
Another common modifier is isophthalic acid, replacing some of the 1,4-(para-) linked terephthalate units. The 1,2-(ortho-) or 1,3-(meta-) linkage produces an angle in the chain, which also disturbs crystallinity.
Advantages
Such copolymers are advantageous for certain molding applications, such as thermoforming, which is used for example to make tray or blister packaging from co-PET film, or amorphous PET sheet (A-PET/PETA) or PETG sheet. On the other hand, crystallization is important in other applications where mechanical and dimensional stability are important, such as seat belts. For PET bottles, the use of small amounts of isophthalic acid, CHDM, diethylene glycol (DEG) or other comonomers can be useful: if only small amounts of comonomers are used, crystallization is slowed but not prevented entirely. As a result, bottles are obtainable via stretch blow molding ("SBM"), which are both clear and crystalline enough to be an adequate barrier to aromas and even gases, such as carbon dioxide in carbonated beverages.
Production
Polyethylene terephthalate is produced largely from purified terephthalic acid (PTA), as well as to a lesser extent from (mono-)ethylene glycol (MEG) and dimethyl terephthalate (DMT). As of 2022, ethylene glycol is made from ethene found in natural gas, while terephthalic acid comes from p-xylene made from crude oil. Typically an antimony or titanium compound is used as a catalyst, a phosphite is added as a stabilizer and a bluing agent such as cobalt salt is added to mask any yellowing.
Processes
Dimethyl terephthalate process
In the dimethyl terephthalate (DMT) process, DMT and excess ethylene glycol (MEG) are transesterified in the melt at 150–200 °C with a basic catalyst. Methanol (CH3OH) is removed by distillation to drive the reaction forward. Excess MEG is distilled off at higher temperature with the aid of vacuum. The second transesterification step proceeds at 270–280 °C, with continuous distillation of MEG as well.
The reactions can be summarized as follows:
First step
C6H4(CO2CH3)2 + 2 HOCH2CH2OH → C6H4(CO2CH2CH2OH)2 + 2 CH3OH
Second step
n C6H4(CO2CH2CH2OH)2 → [(CO)C6H4(CO2CH2CH2O)]n + n HOCH2CH2OH
Terephthalic acid process
In the terephthalic acid process, MEG and PTA are esterified directly at moderate pressure (2.7–5.5 bar) and high temperature (220–260 °C). Water is eliminated in the reaction, and it is also continuously removed by distillation:
n C6H4(CO2H)2 + n HOCH2CH2OH → [(CO)C6H4(CO2CH2CH2O)]n + 2n H2O
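As an illustrative mass balance for this route, the short sketch below computes how much condensation water is released per kilogram of PET formed; the molar masses are standard values, and the helper function is invented for the example.

```python
# Illustrative stoichiometry for the terephthalic acid (PTA) route:
# n C6H4(CO2H)2 + n HOCH2CH2OH -> [(CO)C6H4(CO2CH2CH2O)]n + 2n H2O
M_REPEAT = 192.17  # g/mol, PET repeat unit (C10H8O4)
M_WATER = 18.02    # g/mol

def water_per_kg_pet():
    """Condensation water released per kilogram of PET produced."""
    repeat_units = 1000.0 / M_REPEAT   # moles of repeat unit in 1 kg of PET
    return repeat_units * 2 * M_WATER  # two water molecules per repeat unit

print(f"about {water_per_kg_pet():.0f} g of water per kg of PET")  # ~188 g
```

This is the water that must be continuously distilled off to drive the esterification equilibrium toward the polymer.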
Bio-PET
Bio-PET is the bio-based counterpart of PET. Essentially in Bio-PET, the MEG is manufactured from ethylene derived from sugar cane ethanol. A better process based on oxidation of ethanol has been proposed, and it is also technically possible to make PTA from readily available bio-based furfural.
Bottle processing equipment
There are two basic molding methods for PET bottles, one-step and two-step. In two-step molding, two separate machines are used. The first machine injection molds the preform, which resembles a test tube, with the bottle-cap threads already molded into place. The body of the tube is significantly thicker, as it will be inflated into its final shape in the second step using stretch blow molding.
In the second step, the preforms are heated rapidly and then inflated against a two-part mold to form them into the final shape of the bottle. Preforms (uninflated bottles) are now also used as robust and unique containers themselves; besides novelty candy, some Red Cross chapters distribute them as part of the Vial of Life program to homeowners to store medical history for emergency responders.
The two-step process lends itself to third party production remote from the user site. The preforms can be transported and stored by the thousand in a much smaller space than would finished containers, for the second stage to be carried out on the user site on a 'just in time' basis.
In one-step machines, the entire process from raw material to finished container is conducted within one machine, making it especially suitable for molding non-standard shapes (custom molding), including jars, flat oval, flask shapes, etc. Its greatest merit is the reduction in space, product handling and energy, and far higher visual quality than can be achieved by the two-step system.
Degradation
PET is subject to degradation during processing. If the moisture level is too high, hydrolysis will reduce the molecular weight by chain scission, resulting in brittleness. If the residence time and/or the melt temperature are too high, thermal degradation or thermo-oxidative degradation will occur, resulting in discoloration and reduced molecular weight, as well as the formation of acetaldehyde and of "gel" or "fish-eye" defects through cross-linking. Mitigation measures include copolymerisation with other monomers such as CHDM or isophthalic acid, which lower the melting point and thus the melt temperature of the resin, as well as the addition of polymer stabilisers such as phosphites.
Acetaldehyde
Acetaldehyde, which can form by degradation of PET after mishandling of the material, is a colorless, volatile substance with a fruity smell. Although it forms naturally in some fruit, it can cause an off-taste in bottled water. In addition to high temperatures (PET decomposes above 300 °C or 570 °F) and long barrel residence times, high pressures and high extruder speeds (which cause shear heating, raising the temperature) can also contribute to the production of acetaldehyde. Photo-oxidation can also cause the gradual formation of acetaldehyde over the object's lifespan; this proceeds via a Norrish type II reaction.
When acetaldehyde is produced, some of it remains dissolved in the walls of a container and then diffuses into the product stored inside, altering the taste and aroma. This is not such a problem for non-consumables (such as shampoo), for fruit juices (which already contain acetaldehyde), or for strong-tasting drinks like soft drinks. For bottled water, however, low acetaldehyde content is quite important, because if nothing masks the aroma, even extremely low concentrations (10–20 parts per billion in the water) of acetaldehyde can produce an off-taste.
Safety and environmental concerns
Commentary published in Environmental Health Perspectives in April 2010 suggested that PET might yield endocrine disruptors under conditions of common use and recommended research on this topic. Proposed mechanisms include leaching of phthalates as well as leaching of antimony.
An article published in Journal of Environmental Monitoring in April 2012 concludes that antimony concentration in deionized water stored in PET bottles stays within EU's acceptable limit even if stored briefly at temperatures up to 60 °C (140 °F), while bottled contents (water or soft drinks) may occasionally exceed the EU limit after less than a year of storage at room temperature.
Antimony
Antimony (Sb) is a metalloid element that is used as a catalyst in the form of compounds such as antimony trioxide (Sb2O3) or antimony triacetate in the production of PET. After manufacturing, a detectable amount of antimony can be found on the surface of the product. This residue can be removed with washing. Antimony also remains in the material itself and can, thus, migrate out into food and drinks. Exposing PET to boiling or microwaving can increase the levels of antimony significantly, possibly above US EPA maximum contamination levels.
The drinking water limit assessed by WHO is 20 parts per billion (WHO, 2003), and the drinking water limit in the United States is 6 parts per billion. Although antimony trioxide is of low toxicity when taken orally, its presence is still of concern. The Swiss Federal Office of Public Health investigated the amount of antimony migration, comparing waters bottled in PET and glass: The antimony concentrations of the water in PET bottles were higher, but still well below the allowed maximum concentration. The Swiss Federal Office of Public Health concluded that small amounts of antimony migrate from the PET into bottled water, but that the health risk of the resulting low concentrations is negligible (1% of the "tolerable daily intake" determined by the WHO). A later (2006) but more widely publicized study found similar amounts of antimony in water in PET bottles.
The WHO has published a risk assessment for antimony in drinking water.
Fruit juice concentrates (for which no guidelines are established), however, that were produced and bottled in PET in the UK were found to contain up to 44.7 μg/L of antimony, well above the EU limits for tap water of 5 μg/L.
Shed microfibres
Clothing sheds microfibres in use, during washing and machine drying. Plastic litter slowly forms small particles. Microplastics which are present on the bottom of the river or seabed can be ingested by small marine life, thus entering the food chain. As PET has a higher density than water, a significant amount of PET microparticles may be precipitated in sewage treatment plants. PET microfibers generated by apparel wear, washing or machine drying can become airborne, and be dispersed into fields, where they are ingested by livestock or plants and end up in the human food supply. SAPEA have declared that such particles 'do not pose a widespread risk'.
PET is known to degrade when exposed to sunlight and oxygen. As of 2016, scarce information exists regarding the life-time of the synthetic polymers in the environment.
Polyester recycling
While most thermoplastics can, in principle, be recycled, PET bottle recycling is more practical than many other plastic applications because of the high value of the resin and the almost exclusive use of PET for widely used water and carbonated soft drink bottling. PET bottles lend themselves well to recycling (see below). In many countries PET bottles are recycled to a substantial degree, for example about 75% in Switzerland. The term rPET is commonly used to describe the recycled material, though it is also referred to as R-PET or post-consumer PET (POSTC-PET).
The prime uses for recycled PET are polyester fiber, strapping, and non-food containers. Because of the recyclability of PET and the relative abundance of post-consumer waste in the form of bottles, PET is also rapidly gaining market share as a carpet fiber. PET, like many plastics, is also an excellent candidate for thermal disposal (incineration), as it is composed of carbon, hydrogen, and oxygen, with only trace amounts of catalyst elements (but no sulfur). In general, PET can either be chemically recycled into its original raw materials (PTA, DMT, and EG), destroying the polymer structure completely; mechanically recycled into a different form, without destroying the polymer; or recycled in a process that includes transesterification and the addition of other glycols, polyols, or glycerol to form a new polyol. The polyol from the third method can be used in polyurethane (PU foam) production, or epoxy-based products, including paints.
In 2023 a process was announced for using PET as the basis for supercapacitor production. PET, being stoichiometrically a mixture of carbon and water, can be turned into a form of carbon containing sheets and nanospheres with a very high surface area. The process involves holding a mixture of PET, water, nitric acid, and ethanol at high temperature and pressure for eight hours, followed by centrifugation and drying.
Significant investments were announced in 2021 and 2022 for chemical recycling of PET by glycolysis, methanolysis, and enzymatic recycling to recover monomers. Initially these will also use bottles as feedstock but it is expected that fibres will also be recycled this way in future.
PET is also a desirable fuel for waste-to-energy plants, as it has a high calorific value which helps to reduce the use of primary resources for energy generation.
Biodegradation
At least one species of bacterium in the genus Nocardia can degrade PET with an esterase enzyme. Esterases are enzymes able to cleave the ester bonds that link the subunits of PET. The initial degradation of PET can also be achieved by esterases expressed by Bacillus as well as Nocardia. Japanese scientists have isolated another bacterium, Ideonella sakaiensis, that possesses two enzymes which can break down PET into smaller pieces digestible by the bacterium; a colony of I. sakaiensis can disintegrate a plastic film in about six weeks. French researchers report developing an improved PET hydrolase that can depolymerize (break apart) at least 90 percent of PET in 10 hours, breaking it down into individual monomers. In addition, researchers at the University of Texas at Austin used a machine-learning algorithm to design an enzyme based on a natural PETase that tolerates changes in pH and temperature; it was found to degrade various PET products, breaking some down in as little as 24 hours.
See also
BoPET (biaxially oriented PET)
Bioplastic
PET bottle recycling
Plastic recycling
Polycyclohexylenedimethylene terephthalate—a polyester with a similar structure to PET
Polyester
Solar water disinfection—a method of disinfecting water using only sunlight and plastic PET bottles
References
External links
Arropol commercial producer of polyol from post-consumer PET fiber
American Plastics Council: PlasticInfo.org
KenPlas Industry Ltd.: "What is PET (Polyethylene Terephthalate)"
PET vs PETg: What’s the Difference?
"WAVE Polymer Technology: PET (Polyethylene Terephthalate) flakes processing"
Biomaterials
Terephthalate esters
Commodity chemicals
English inventions
Flexible electronics
Household chemicals
Plastics
Polyesters
Polymers
Thermoplastics
Transparent materials
Food packaging | Polyethylene terephthalate | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 5,594 | [
"Biomaterials",
"Physical phenomena",
"Commodity chemicals",
"Products of chemical industry",
"Unsolved problems in physics",
"Optical phenomena",
"Materials",
"Electronic engineering",
"Transparent materials",
"Polymer chemistry",
"Polymers",
"Flexible electronics",
"Medical technology",
... |
293,065 | https://en.wikipedia.org/wiki/Desiccant | A desiccant is a hygroscopic substance that is used to induce or sustain a state of dryness (desiccation) in its vicinity; it is the opposite of a humectant. Commonly encountered pre-packaged desiccants are solids that absorb water. Desiccants for specialized purposes may be in forms other than solid, and may work through other principles, such as chemical bonding of water molecules. They are commonly encountered in foods to retain crispness. Industrially, desiccants are widely used to control the level of water in gas streams.
Types of desiccants
Although some desiccants are chemically inert, others are extremely reactive and require specialized handling techniques. The most common desiccant is silica gel, an otherwise inert, nontoxic, water-insoluble white solid. Tens of thousands of tons are produced annually for this purpose. Other common desiccants include activated charcoal, calcium sulfate, calcium chloride, and molecular sieves (typically, zeolites). Desiccants may also be categorized by their type, either I, II, III, IV, or V. These types are a function of the shape of the desiccant's moisture sorption isotherm.
Alcohols and acetone are also dehydrating agents. Diethylene glycol is an important industrial desiccant. It absorbs water from natural gas, minimizing the formation of methane hydrates, which can block pipes.
Performance efficiency
One measure of desiccant efficiency is the ratio (or percentage) of water storable in the desiccant relative to the mass of desiccant. Another measure is the residual relative humidity of the air or other fluid being dried. For drying gases, a desiccant's performance can be precisely described by the dew point of the dried product.
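As a simple numeric illustration of the first measure, the sketch below computes water capacity from hypothetical before-and-after weights of a desiccant sample; the figures and function name are invented for the example.

```python
def water_capacity_percent(dry_mass_g, saturated_mass_g):
    """Water stored per unit mass of desiccant, as a percentage."""
    absorbed_water = saturated_mass_g - dry_mass_g
    return 100.0 * absorbed_water / dry_mass_g

# Hypothetical sample: 100 g of silica gel weighing 128 g once saturated.
print(f"capacity = {water_capacity_percent(100.0, 128.0):.0f}%")  # 28%
```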
Colored saturation indicators
Sometimes a humidity indicator is included in the desiccant to show, by color changes, the degree of water-saturation of the desiccant. One commonly used indicator is cobalt chloride (CoCl2), which is blue when anhydrous but turns purple upon bonding with two water molecules (CoCl2·2H2O). Further hydration gives the pink hexaaquacobalt(II) chloride, CoCl2·6H2O, containing the [Co(H2O)6]2+ complex. However, the use of cobalt chloride raises health concerns, as it is potentially carcinogenic.
Applications
Applications of desiccants are dominated by the petrochemical industry. Hydrocarbons, including natural gas, often must be anhydrous or nearly so for processing or for transport. Catalysts that are used to convert some petroleum fractions are generally deactivated by even traces of water. Natural gas tends to form solid methane hydrates which can block pipes.
Domestic uses
One example of desiccant usage is in the manufacture of insulated windows where zeolite spheroids fill a rectangular spacer tube at the perimeter of the panes of glass. The desiccant helps to prevent the condensation of moisture between the panes. Another use of zeolites is in the "dryer" component of refrigeration systems to absorb water carried by the refrigerant, whether residual water left over from the construction of the system, or water released by the degradation of other materials over time.
Bagged desiccants are also commonly used to protect goods in barrier-sealed shipping containers against moisture damage: rust, corrosion, etc. Hygroscopic cargo, such as cocoa, coffee, various nuts and grains, and other foods can be particularly susceptible to mold and rot when exposed to condensation and humidity. Because of this, shippers often take measures by deploying desiccants to protect against loss. Pharmaceutical packaging often includes small packets of desiccant to keep the atmosphere inside the package below critical levels of water vapor.
Air conditioning systems can be based on desiccants, as drier air feels more comfortable and absorbing water itself removes heat.
Desiccants are used in livestock farming, where, for example, new-born piglets are highly susceptible to hypothermia owing to their wetness.
Laboratory uses
Desiccants are also used to remove water from solvents. Drying generally involves mixing the solvent with the solid desiccant. Molecular sieves are superior as desiccants relative to chemical drying reagents such as sodium-benzophenone. Sieves offer the advantages of being safe in air and recyclable.
See also
Desiccator
Humidity buffering
Humidity indicator card
Moisture sorption isotherm
Solar air conditioning
Oxygen scavenger (oxygen absorber)
Sorbent
Volatile corrosion inhibitor
Cromer cycle
References
Further reading
Packaging | Desiccant | [
"Physics"
] | 988 | [
"Desiccants",
"Materials",
"Matter"
] |
293,336 | https://en.wikipedia.org/wiki/Ammonium%20chloride | Ammonium chloride is an inorganic chemical compound with the chemical formula NH4Cl, also written as [NH4]Cl. It is an ammonium salt of hydrogen chloride. It consists of ammonium cations (NH4+) and chloride anions (Cl−). It is a white crystalline salt that is highly soluble in water. Solutions of ammonium chloride are mildly acidic. In its naturally occurring mineralogic form, it is known as salammoniac. The mineral is commonly formed on burning coal dumps from condensation of coal-derived gases. It is also found around some types of volcanic vents. It is mainly used as fertilizer and a flavouring agent in some types of liquorice. It is a product of the reaction of hydrochloric acid and ammonia.
Production
It is a product of the Solvay process used to produce sodium carbonate:
CO2 + 2 NH3 + 2 NaCl + H2O → 2 NH4Cl + Na2CO3
Not only is that method the principal one for the manufacture of ammonium chloride, but also it is used to minimize ammonia release in some industrial operations.
Ammonium chloride is prepared commercially by combining ammonia (NH3) with either hydrogen chloride (gas) or hydrochloric acid (water solution):
NH3 + HCl → NH4Cl
Ammonium chloride occurs naturally in volcanic regions, forming on volcanic rocks near fume-releasing vents (fumaroles). The crystals deposit directly from the gaseous state and tend to be short-lived, as they dissolve easily in water.
Reactions
Ammonium chloride appears to sublime upon heating but actually reversibly decomposes into ammonia and hydrogen chloride gas:
NH4Cl ⇌ NH3 + HCl
Ammonium chloride reacts with a strong base, like sodium hydroxide, to release ammonia gas:
NH4Cl + NaOH → NH3 + NaCl + H2O
Similarly, ammonium chloride also reacts with alkali-metal carbonates at elevated temperatures, giving ammonia and alkali-metal chloride:
2 NH4Cl + Na2CO3 → 2 NaCl + CO2 + H2O + 2 NH3
A solution of 5% by mass of ammonium chloride in water has a pH in the range 4.6 to 6.0.
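That range is consistent with a simple weak-acid estimate. Treating NH4+ as a weak acid with pKa ≈ 9.25, and taking a 5% by mass solution as roughly 0.9 mol/L (assuming a solution density near 1 g/mL):

\[
[\mathrm{H^+}] \approx \sqrt{K_a C}
= \sqrt{10^{-9.25} \times 0.9}
\approx 2.2 \times 10^{-5}\ \mathrm{M},
\qquad
\mathrm{pH} \approx 4.7
\]

which lands at the acidic end of the measured 4.6–6.0 range.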
Some reactions of ammonium chloride with other chemicals are endothermic, such as its reaction with barium hydroxide and its dissolving in water.
Applications
Agriculture
The dominant application of ammonium chloride is as a nitrogen source in fertilizers (corresponding to 90% of the world production of ammonium chloride) such as chloroammonium phosphate. The main crops fertilized this way are rice and wheat in Asia. When ammonium chloride is used as a nitrogen fertilizer, an appropriate concentration is applied to provide sufficient nutrients without causing harm. Ammonium chloride is approximately 26% nitrogen by weight and can be used to supply nitrogen to plants, especially those preferring slightly acidic conditions. A typical concentration for nitrogen fertilization in solution is between 50 and 100 milligrams of nitrogen per liter of water (mg N/L), i.e. 50–100 parts per million (ppm) of nitrogen, which translates to approximately 0.2 to 0.4 grams of ammonium chloride per liter of water. Ammonium chloride can acidify the soil over time, so soil pH is regularly monitored, especially when growing plants sensitive to acidic conditions. Some plants are sensitive to chloride ions (e.g., avocados, beans, grapes), so applying ammonium chloride to such plants requires extra caution to prevent chloride toxicity. While ammonium chloride can be beneficial as a nitrogen source, improper use can harm plants and the environment.
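The dose conversion quoted above can be checked with a few lines of arithmetic; the molar masses are standard values and the target range is the 50–100 mg N/L figure from the paragraph.

```python
# Converting a target nitrogen concentration to an ammonium chloride dose.
M_N = 14.01      # g/mol, nitrogen
M_NH4CL = 53.49  # g/mol, ammonium chloride

n_fraction = M_N / M_NH4CL  # mass fraction of N in NH4Cl, ~0.26 (26%)

for target_mg_n_per_litre in (50, 100):
    dose_g = target_mg_n_per_litre / 1000 / n_fraction
    print(f"{target_mg_n_per_litre} mg N/L -> {dose_g:.2f} g NH4Cl per litre")
# 50 mg N/L -> 0.19 g/L and 100 mg N/L -> 0.38 g/L, i.e. about 0.2-0.4 g/L
```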
Ammonium chloride solutions are generally stable and can be stored for a certain period if kept under appropriate conditions: in airtight containers (to prevent contamination, evaporation and hydrolysis), away from light (to prevent photodegradation) and away from heat sources (to reduce microbial growth and chemical degradation). In agricultural applications the solution is used shortly after preparation, for the following reasons:
Nutrient-rich solutions can promote the growth of microorganisms over time, so that microbial activity can alter the chemical composition of the solution, potentially reducing its efficacy as a fertilizer and introducing pathogens to plants.
Over time, water can evaporate from the solution, especially if not stored in a tightly sealed container, which increases the concentration of ammonium chloride, and may lead to over-fertilization and potential damage to plants when applied.
While ammonium chloride is relatively stable, prolonged storage may lead to minor changes in pH due to ongoing hydrolysis, especially if the solution is exposed to air, potentially impacting plants sensitive to acidity of the soil.
If the water used is not distilled or deionized, dissolved minerals and impurities may precipitate over time, altering the nutrient balance of the solution.
Pyrotechnics
Ammonium chloride was used in pyrotechnics in the 18th century but was superseded by safer and less hygroscopic chemicals. Its purpose was to provide a chlorine donor to enhance the green and blue colours from copper ions in the flame.
It had a secondary use to provide white smoke, but its ready double decomposition reaction with potassium chlorate, which produces the highly unstable ammonium chlorate, made its use very dangerous.
Metalwork
Ammonium chloride is used as a flux in preparing metals to be tin coated, galvanized or soldered. It works as a flux by cleaning the surface of workpieces by reacting with the metal oxides at the surface to form a volatile metal chloride. For that purpose, it is sold in blocks at hardware stores for use in cleaning the tip of a soldering iron, and it can also be included in solder as flux.
Medicine
Ammonium chloride is used as an expectorant in cough medicine. Its expectorant action is caused by irritative action on the bronchial mucosa, which causes the production of excess respiratory tract fluid, which presumably is easier to cough up. Ammonium salts are an irritant to the gastric mucosa and may induce nausea and vomiting.
Ammonium chloride is used as a systemic acidifying agent in the treatment of severe metabolic alkalosis, in the oral acid loading test to diagnose distal renal tubular acidosis, and to maintain the urine at an acid pH in the treatment of some urinary-tract disorders.
Food
Ammonium chloride, under the name sal ammoniac or salmiak, is used as a food additive under the E number E510, working as a yeast nutrient in breadmaking and as an acidifier. It is a feed supplement for cattle and an ingredient in nutritive media for yeasts and many microorganisms.
Ammonium chloride is used to spice up dark sweets called salty liquorice (popular in the Nordic countries, Benelux and northern Germany), in baking to give cookies a very crisp texture, and in the liquor Salmiakki Koskenkorva for flavouring. In Turkey, Iran, Tajikistan, India, Pakistan and Arab countries it is called "noshader" and is used to improve the crispness of snacks such as samosas and jalebi.
In the laboratory
Ammonium chloride has been used historically to produce low temperatures in cooling baths.
Ammonium chloride solutions with ammonia are used as buffer solutions including ACK (Ammonium-Chloride-Potassium) lysis buffer.
In paleontology, ammonium chloride vapor is deposited on fossils. The substance forms a brilliant white layer of tiny crystals that is easily removed, fairly harmless, and inert; it covers up any coloration the fossil may have and, when lit at an angle, greatly enhances contrast in photographic documentation of three-dimensional specimens. The same technique is applied in archaeology to eliminate reflection on glass and similar specimens for photography.
In organic synthesis saturated NH4Cl solution is typically used to quench reaction mixtures.
Ammonium chloride has a lambda transition at 242.8 K at zero pressure.
Flotation
Giant squid and some other large squid species maintain neutral buoyancy in seawater through an ammonium chloride solution which is found throughout their bodies and is less dense than seawater. This differs from the method of flotation used by most fish, which involves a gas-filled swim bladder.
Batteries
Around the turn of the 20th century, ammonium chloride was used in aqueous solution as the electrolyte in Leclanché cells that found a commercial use as the "local battery" in subscribers' telephone installations. Those cells later evolved into zinc–carbon batteries still using ammonium chloride as electrolyte.
Concrete treatments
Ammonium chloride is known to be an aggressive cleaning agent.
A mixture of ammonium chloride and ferric chloride stains concrete surfaces a penetrating, intense reddish brown; pre-treatment with acid is unnecessary.
Photography
Ammonium chloride can also be used in the process of making albumen silver prints, commonly known as albumen prints. In traditional photographic printing processes of the 19th century, ammonium chloride served as a key component in preparing the albumen solution used to coat the photographic paper. Albumen printing was the dominant photographic printing technique from the 1850s through the 1890s, prized for its fine detail and rich tonal rendition. The incorporation of ammonium chloride in the albumen solution was a significant factor in the quality and popularity of this photographic process. The process involves mixing egg whites (albumen) with ammonium chloride to create a viscous solution. This mixture is then applied as a thin layer onto paper, which, after drying, forms a smooth and glossy surface. Ammonium chloride acts as a salting agent, contributing chloride ions that are essential for forming light-sensitive silver chloride when the coated paper is subsequently sensitized with a solution of silver nitrate. Upon exposure to light, the silver chloride reduces to metallic silver, creating a visible image. The use of ammonium chloride, as opposed to sodium chloride (common salt), can influence the contrast and tonal range of the final print, often yielding warmer tones and greater image clarity.
Other applications
Ammonium chloride is used in a ~5% aqueous solution to work on oil wells with clay swelling problems. Other uses include in hair shampoo, in the glue that bonds plywood, and in cleaning products. In hair shampoo, it is used as a thickening agent in ammonium-based surfactant systems such as ammonium lauryl sulfate. Ammonium chloride is used in the textile and leather industry, in dyeing, tanning, textile printing and cotton clustering. In woodworking, a solution of ammonium chloride and water, when applied to unfinished wood, will scorch when heated with a heat gun, producing a branding-iron-like mark without the use of a branding iron. The solution can be painted onto the wood or applied with a common rubber stamp.
History
Etymology
Pliny, in Book XXXI of his Natural History, refers to a salt produced in the Roman province of Cyrenaica named hammoniacum, so called because of its proximity to the nearby Temple of Jupiter Amun (Greek Ἄμμων Ammon). However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. Nevertheless, that salt ultimately gave ammonia and ammonium compounds their name.
Ancient China
The earliest mention of ammonium chloride was in 554 in China. At that time, ammonium chloride came from two sources: (1) the vents of underground coal fires in Central Asia, specifically, in the Tian Shan mountains (which extend from Xinjiang province of northwestern China through Kyrgyzstan) as well as in the Alay (or Alai) mountains of southwestern Kyrgyzstan, and (2) the fumaroles of the volcano Mount Taftan in southeastern Iran. (Indeed, the word for ammonium chloride in several Asian languages derives from the Iranian phrase anosh adur (immortal fire), a reference to the underground fires.) Ammonium chloride was then transported along the Silk Road eastwards to China and westwards to the Muslim lands and Europe.
Jabirian alchemists
Around 800 AD, the Iranian chemist Jabir ibn Hayyan discovered ammonium chloride in the soot that resulted from burning camel dung, and this source became an alternative to those in Central Asia.
The Jabirian alchemists were the authors of the Jabirian corpus, whose dating remains tentative. The word for ammonium chloride in the Jabirian corpus was nošāder, Iranian in origin. Whereas Greek alchemical texts had been almost exclusively focused on the use of mineral substances, Jabirian alchemy pioneered the use of vegetable and animal substances, and so represented an innovative shift towards 'organic chemistry'. In the Jabirian corpus, the production of ammonium chloride from organic substances (such as plants, blood, and hair) is described. These are the oldest known instructions for deriving an inorganic compound from organic substances by chemical means.
One of the innovations in Jabirian alchemy was the addition of ammonium chloride to the category of chemical substances known as 'spirits' (i.e., strongly volatile substances). This included both naturally occurring sal ammoniac and synthetic ammonium chloride produced from organic substances. The addition of sal ammoniac to the list of 'spirits' can perhaps also be seen as a product of this new focus on organic chemistry.
Late Middle Ages
The first attested reference to sal ammoniac as ammonium chloride is in the Pseudo-Geber work De inventione veritatis, where a preparation of sal ammoniac is given in the chapter De Salis armoniaci præparatione, salis armoniaci being a common name in the Middle Ages for sal ammoniac.
External links
Calculators: surface tensions, densities, molarities and molalities of aqueous ammonium chloride
CDC - NIOSH Pocket Guide to Chemical Hazards
Ammonium compounds
Chlorides
Edible salt
Nonmetal halides
Alchemical substances
Food additives | Ammonium chloride | ["Chemistry"] | 2,911 | ["Chlorides", "Inorganic compounds", "Alchemical substances", "Salts", "Ammonium compounds", "Edible salt"] |
293,392 | https://en.wikipedia.org/wiki/Field%20electron%20emission | Field electron emission, also known as field emission (FE) and electron field emission, is emission of electrons induced by an electrostatic field. The most common context is field emission from a solid surface into a vacuum. However, field emission can take place from solid or liquid surfaces, into a vacuum, a fluid (e.g. air), or any non-conducting or weakly conducting dielectric. The field-induced promotion of electrons from the valence to conduction band of semiconductors (the Zener effect) can also be regarded as a form of field emission. The terminology is historical because related phenomena of surface photoeffect, thermionic emission (or Richardson–Dushman effect) and "cold electronic emission", i.e. the emission of electrons in strong static (or quasi-static) electric fields, were discovered and studied independently from the 1880s to 1930s. When the term field emission is used without qualifiers, it typically means "cold emission".
Field emission in pure metals occurs in high electric fields: the gradients are typically higher than 1 gigavolt per metre and strongly dependent upon the work function. While electron sources based on field emission have a number of applications, field emission is most commonly an undesirable primary source of vacuum breakdown and electrical discharge phenomena, which engineers work to prevent. Examples of applications for surface field emission include the construction of bright electron sources for high-resolution electron microscopes or the discharge of induced charges from spacecraft. Devices that eliminate induced charges are termed charge-neutralizers.
Field emission was explained by quantum tunneling of electrons in the late 1920s. This was one of the triumphs of the nascent quantum mechanics. The theory of field emission from bulk metals was proposed by Ralph H. Fowler and Lothar Wolfgang Nordheim.
A family of approximate equations, Fowler–Nordheim equations, is named after them. Strictly, Fowler–Nordheim equations apply only to field emission from bulk metals and (with suitable modification) to other bulk crystalline solids, but they are often used – as a rough approximation – to describe field emission from other materials.
Terminology and conventions
Field electron emission, field-induced electron emission, field emission and electron field emission are general names for this experimental phenomenon and its theory. The first name is used here.
Fowler–Nordheim tunneling is the wave-mechanical tunneling of electrons through a rounded triangular barrier created at the surface of an electron conductor by applying a very high electric field. Individual electrons can escape by Fowler–Nordheim tunneling from many materials in various different circumstances.
Cold field electron emission (CFE) is the name given to a particular statistical emission regime, in which the electrons in the emitter are initially in internal thermodynamic equilibrium, and in which most emitted electrons escape by Fowler–Nordheim tunneling from electron states close to the emitter Fermi level. (By contrast, in the Schottky emission regime, most electrons escape over the top of a field-reduced barrier, from states well above the Fermi level.) Many solid and liquid materials can emit electrons in a CFE regime if an electric field of an appropriate size is applied.
Fowler–Nordheim-type equations are a family of approximate equations derived to describe CFE from the internal electron states in bulk metals. The different members of the family represent different degrees of approximation to reality. Approximate equations are necessary because, for physically realistic models of the tunneling barrier, it is in principle mathematically impossible to solve the Schrödinger equation exactly in any simple way. There is no theoretical reason to believe that Fowler–Nordheim-type equations validly describe field emission from materials other than bulk crystalline solids.
For metals, the CFE regime extends to well above room temperature. There are other electron emission regimes (such as "thermal electron emission" and "Schottky emission") that require significant external heating of the emitter. There are also emission regimes where the internal electrons are not in thermodynamic equilibrium and the emission current is, partly or completely, determined by the supply of electrons to the emitting region. A non-equilibrium emission process of this kind may be called field (electron) emission if most of the electrons escape by tunneling, but strictly it is not CFE, and is not accurately described by a Fowler–Nordheim-type equation.
Care is necessary because in some contexts (e.g. spacecraft engineering), the name "field emission" is applied to the field-induced emission of ions (field ion emission), rather than electrons, and because in some theoretical contexts "field emission" is used as a general name covering both field electron emission and field ion emission.
Historically, the phenomenon of field electron emission has been known by a variety of names, including "the aeona effect", "autoelectronic emission", "cold emission", "cold cathode emission", "field emission", "field electron emission" and "electron field emission".
Equations in this article are written using the International System of Quantities (ISQ). This is the modern (post-1970s) international system, based around the rationalized-meter-kilogram-second (rmks) system of equations, which is used to define SI units. Older field emission literature (and papers that directly copy equations from old literature) often writes some equations using an older equation system that does not use the quantity ε0. In this article, all such equations have been converted to modern international form. For clarity, this should always be done.
Since work function is normally given with the unit electronvolt (eV), and for fields it is often convenient to use the unit volt per nanometer (V/nm), values of most universal constants are given here in units that involve eV, V and nm. Increasingly, this is normal practice in field emission research. However, all equations here are ISQ-compatible equations and remain dimensionally consistent, as is required by the modern international system. To indicate their status, numerical values of universal constants are given to seven significant figures. Values are derived using the 2006 values of the fundamental constants.
Early history of field electron emission
Field electron emission has a long, complicated and messy history. This section covers the early history, up to the derivation of the original Fowler–Nordheim-type equation in 1928.
In retrospect, it seems likely that the electrical discharges reported by J.H. Winkler in 1744 were started by CFE from his wire electrode. However, meaningful investigations had to wait until after J.J. Thomson's identification of the electron in 1897, and until after it was understood – from thermal emission and photo-emission work – that electrons could be emitted from inside metals (rather than from surface-adsorbed gas molecules), and that – in the absence of applied fields – electrons escaping from metals had to overcome a work function barrier.
It was suspected at least as early as 1913 that field-induced emission was a separate physical effect. However, only after vacuum and specimen cleaning techniques had significantly improved, did this become well established. Lilienfeld (who was primarily interested in electron sources for medical X-ray applications) published in 1922 the first clear account in English of the experimental phenomenology of the effect he had called "autoelectronic emission". He had worked on this topic, in Leipzig, since about 1910. Kleint describes this and other early work.
After 1922, experimental interest increased, particularly in the groups led by Millikan at the California Institute of Technology (Caltech) in Pasadena, California, and by Gossling at the General Electric Company in London. Attempts to understand autoelectronic emission included plotting experimental current–voltage (i–V) data in different ways, to look for a straight-line relationship. Current increased with voltage more rapidly than linearly, but plots of type log(i) vs. V were not straight. Walter H. Schottky suggested in 1923 that the effect might be due to thermally induced emission over a field-reduced barrier. If so, then plots of log(i) vs. √V should be straight, but they were not. Nor is Schottky's explanation compatible with the experimental observation of only very weak temperature dependence in CFE – a point initially overlooked.
A breakthrough came when C.C. Lauritsen (and J. Robert Oppenheimer independently) found that plots of log(i) vs. 1/V yielded good straight lines. This result, published by Millikan and Lauritsen in early 1928, was known to Fowler and Nordheim.
Oppenheimer had predicted that the field-induced tunneling of electrons from atoms (the effect now called field ionization) would have this i(V) dependence, had found this dependence in the published experimental field emission results of Millikan and Eyring, and proposed that CFE was due to field-induced tunneling of electrons from atomic-like orbitals in surface metal atoms. An alternative Fowler–Nordheim theory explained both the Millikan–Lauritsen finding and the very weak dependence of current on temperature. Fowler–Nordheim theory predicted both to be consequences if CFE were due to field-induced tunneling from free-electron-type states in what we would now call a metal conduction band, with the electron states occupied in accordance with Fermi–Dirac statistics.
Oppenheimer's theory was seriously incorrect in its mathematical details. There was also a small numerical error in the final equation given by Fowler–Nordheim theory for the CFE current density; this was corrected in a follow-up paper in 1929.
Strictly, if the barrier field in Fowler–Nordheim 1928 theory is exactly proportional to the applied voltage, and if the emission area is independent of voltage, then the Fowler–Nordheim 1928 theory predicts that plots of the form (log(i/V2) vs. 1/V) should be exact straight lines. However, contemporary experimental techniques were not good enough to distinguish between the Fowler–Nordheim theoretical result and the Millikan–Lauritsen experimental result.
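This near-degeneracy between the two plot forms is easy to reproduce numerically. The sketch below generates synthetic emission data of the Fowler–Nordheim form i = C·V²·exp(−B/V) (the constants B and C are arbitrary illustrative values, not measured ones) and fits straight lines in both the Millikan–Lauritsen and Fowler–Nordheim coordinates.

```python
import numpy as np

# Synthetic Fowler-Nordheim-like data: i = C * V^2 * exp(-B / V).
# B and C are arbitrary illustrative constants, not from any experiment.
B, C = 4.5e4, 1e-3
V = np.linspace(2000.0, 4000.0, 50)      # applied voltage, volts
i = C * V**2 * np.exp(-B / V)

x = 1.0 / V
for label, y in (("ML plot, ln(i)     vs 1/V", np.log(i)),
                 ("FN plot, ln(i/V^2) vs 1/V", np.log(i / V**2))):
    slope, intercept = np.polyfit(x, y, 1)
    dev = np.max(np.abs(y - (slope * x + intercept)))
    print(f"{label}: max deviation from straight line = {dev:.2e}")
# The FN plot is exactly straight for this model; the ML plot deviates
# only slightly, illustrating why 1920s data could not decide the issue.
```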
Thus, by 1928 basic physical understanding of the origin of CFE from bulk metals had been achieved, and the original Fowler–Nordheim-type equation had been derived.
The literature often presents Fowler–Nordheim work as a proof of the existence of electron tunneling, as predicted by wave-mechanics. Whilst this is correct, the validity of wave-mechanics was largely accepted by 1928. The more important role of the Fowler–Nordheim paper was that it was a convincing argument from experiment that Fermi–Dirac statistics applied to the behavior of electrons in metals, as suggested by Sommerfeld in 1927. The success of Fowler–Nordheim theory did much to support the correctness of Sommerfeld's ideas, and greatly helped to establish modern electron band theory. In particular, the original Fowler–Nordheim-type equation was one of the first to incorporate the statistical-mechanical consequences of the existence of electron spin into the theory of an experimental condensed-matter effect. The Fowler–Nordheim paper also established the physical basis for a unified treatment of field-induced and thermally induced electron emission. Prior to 1928 it had been hypothesized that two types of electrons, "thermions" and "conduction electrons", existed in metals, and that thermally emitted electron currents were due to the emission of thermions, but that field-emitted currents were due to the emission of conduction electrons. The Fowler–Nordheim 1928 work suggested that thermions did not need to exist as a separate class of internal electrons: electrons could come from a single band occupied in accordance with Fermi–Dirac statistics, but would be emitted in statistically different ways under different conditions of temperature and applied field.
The ideas of Oppenheimer, Fowler and Nordheim were also an important stimulus to the development, by George Gamow, and Ronald W. Gurney and Edward Condon, later in 1928, of the theory of the radioactive decay of nuclei (by alpha particle tunneling).
Practical applications: past and present
Field electron microscopy and related basics
As already indicated, the early experimental work on field electron emission (1910–1920) was driven by Lilienfeld's desire to develop miniaturized X-ray tubes for medical applications. However, it was too early for this technology to succeed.
After Fowler–Nordheim theoretical work in 1928, a major advance came with the development in 1937 by Erwin W. Mueller of the spherical-geometry field electron microscope (FEM) (also called the "field emission microscope"). In this instrument, the electron emitter is a sharply pointed wire, of apex radius r. This is placed, in a vacuum enclosure, opposite an image detector (originally a phosphor screen), at a distance R from it. The microscope screen shows a projection image of the distribution of current-density J across the emitter apex, with magnification approximately (R/r), typically 10⁵ to 10⁶. In FEM studies the apex radius is typically 100 nm to 1 μm. The tip of the pointed wire, when referred to as a physical object, has been called a "field emitter", a "tip", or (recently) a "Mueller emitter".
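For orientation, the projection magnification quoted above follows directly from the geometry. A minimal sketch, with plausible example values of r and R (not taken from any particular instrument):

```python
# FEM projection magnification M ~ R / r.
r = 100e-9   # emitter apex radius: 100 nm
R = 0.05     # tip-to-screen distance: 5 cm
print(f"magnification ~ {R / r:.0e}")   # 5e+05, inside the 10^5-10^6 range
```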
When the emitter surface is clean, this FEM image is characteristic of: (a) the material from which the emitter is made; (b) the orientation of the material relative to the needle/wire axis; and (c) to some extent, the shape of the emitter endform. In the FEM image, dark areas correspond to regions where the local work function φ is relatively high and/or the local barrier field F is relatively low, so J is relatively low; the light areas correspond to regions where φ is relatively low and/or F is relatively high, so J is relatively high. This is as predicted by the exponent of Fowler–Nordheim-type equations [see eq. (30) below].
The adsorption of layers of gas atoms (such as oxygen) onto the emitter surface, or part of it, can create surface electric dipoles that change the local work function of this part of the surface. This affects the FEM image; also, the change of work-function can be measured using a Fowler–Nordheim plot (see below). Thus, the FEM became an early observational tool of surface science. For example, in the 1960s, FEM results contributed significantly to discussions on heterogeneous catalysis. FEM has also been used for studies of surface-atom diffusion. However, FEM has now been almost completely superseded by newer surface-science techniques.
A consequence of FEM development, and subsequent experimentation, was that it became possible to identify (from FEM image inspection) when an emitter was "clean", and hence exhibiting its clean-surface work-function as established by other techniques. This was important in experiments designed to test the validity of the standard Fowler–Nordheim-type equation. These experiments deduced a value of the voltage-to-barrier-field conversion factor β from a Fowler–Nordheim plot (see below), assuming the clean-surface φ-value for tungsten, and compared this with values derived from electron-microscope observations of emitter shape and electrostatic modeling. Agreement to within about 10% was achieved. Only very recently has it been possible to do the comparison the other way round, by bringing a well-prepared probe so close to a well-prepared surface that approximate parallel-plate geometry can be assumed and the conversion factor can be taken as 1/W, where W is the measured probe-to-emitter separation. Analysis of the resulting Fowler–Nordheim plot yields a work-function value close to the independently known work-function of the emitter.
Field electron spectroscopy (electron energy analysis)
Energy distribution measurements of field-emitted electrons were first reported in 1939. In 1959 it was realized theoretically by Young, and confirmed experimentally by Young and Mueller that the quantity measured in spherical geometry was the distribution of the total energy of the emitted electron (its "total energy distribution"). This is because, in spherical geometry, the electrons move in such a fashion that angular momentum about a point in the emitter is very nearly conserved. Hence any kinetic energy that, at emission, is in a direction parallel to the emitter surface gets converted into energy associated with the radial direction of motion. So what gets measured in an energy analyzer is the total energy at emission.
With the development of sensitive electron energy analyzers in the 1960s, it became possible to measure fine details of the total energy distribution. These reflect fine details of the surface physics, and the technique of Field Electron Spectroscopy flourished for a while, before being superseded by newer surface-science techniques.
Field electron emitters as electron-gun sources
To achieve high-resolution in electron microscopes and other electron beam instruments (such as those used for electron beam lithography), it is helpful to start with an electron source that is small, optically bright and stable. Sources based on the geometry of a Mueller emitter qualify well on the first two criteria. The first electron microscope (EM) observation of an individual atom was made by Crewe, Wall and Langmore in 1970, using a scanning electron microscope equipped with an early field emission gun.
From the 1950s onwards, extensive effort has been devoted to the development of field emission sources for use in electron guns. [e.g., DD53] Methods have been developed for generating on-axis beams, either by field-induced emitter build-up, or by selective deposition of a low-work-function adsorbate (usually zirconium oxide, ZrO) onto the flat apex of a (100)-oriented tungsten emitter.
Sources that operate at room temperature have the disadvantage that they rapidly become covered with adsorbate molecules that arrive from the vacuum system walls, and the emitter has to be cleaned from time to time by "flashing" to high temperature. Nowadays, it is more common to use Mueller-emitter-based sources that are operated at elevated temperatures, either in the Schottky emission regime or in the so-called temperature-field intermediate regime. Many modern high-resolution electron microscopes and electron beam instruments use some form of Mueller-emitter-based electron source. Currently, attempts are being made to develop carbon nanotubes (CNTs) as electron-gun field emission sources.
The use of field emission sources in electron optical instruments has involved the development of appropriate theories of charged particle optics, and the development of related modeling. Various shape models have been tried for Mueller emitters; the best seems to be the "Sphere on Orthogonal Cone" (SOC) model introduced by Dyke, Trolan, Dolan and Barnes in 1953. Important simulations, involving trajectory tracing using the SOC emitter model, were made by Wiesener and Everhart. Nowadays, the facility to simulate field emission from Mueller emitters is often incorporated into the commercial electron-optics programmes used to design electron beam instruments. The design of efficient modern field-emission electron guns requires highly specialized expertise.
Atomically sharp emitters
Nowadays it is possible to prepare very sharp emitters, including emitters that end in a single atom. In this case, electron emission comes from an area about twice the crystallographic size of a single atom. This was demonstrated by comparing FEM and field ion microscope (FIM) images of the emitter. Single-atom-apex Mueller emitters also have relevance to scanning probe microscopy and helium scanning ion microscopy (He SIM). Techniques for preparing them have been under investigation for many years. A related important recent advance has been the development (for use in the He SIM) of an automated technique for restoring a three-atom ("trimer") apex to its original state, if the trimer breaks up.
Large-area field emission sources: vacuum nanoelectronics
Materials aspects
Large-area field emission sources have been of interest since the 1970s. In these devices, a high density of individual field emission sites is created on a substrate (originally silicon). This research area became known, first as "vacuum microelectronics", now as "vacuum nanoelectronics".
One of the original two device types, the "Spindt array", used silicon-integrated-circuit (IC) fabrication techniques to make regular arrays in which molybdenum cones were deposited in small cylindrical voids in an oxide film, with the void covered by a counterelectrode with a central circular aperture. This overall geometry has also been used with carbon nanotubes grown in the void.
The other original device type was the "Latham emitter". These were MIMIV (metal-insulator-metal-insulator-vacuum) – or, more generally, CDCDV (conductor-dielectric-conductor-dielectric-vacuum) – devices that contained conducting particulates in a dielectric film. The device field-emits because its microstructure/nanostructure has field-enhancing properties. This material had a potential production advantage, in that it could be deposited as an "ink", so IC fabrication techniques were not needed. However, in practice, uniformly reliable devices proved difficult to fabricate.
Research advanced to look for other materials that could be deposited/grown as thin films with suitable field-enhancing properties. In a parallel-plate arrangement, the "macroscopic" field FM between the plates is given by FM = V/W, where W is the plate separation and V is the applied voltage. If a sharp object is created on one plate, then the local field F at its apex is greater than FM and can be related to FM by F = γFM.
The parameter γ is called the "field enhancement factor" and is basically determined by the object's shape. Since field emission characteristics are determined by the local field F, the higher the γ-value of the object, the lower the value of FM at which significant emission occurs, and hence, for a given value of W, the lower the applied voltage V at which significant emission occurs.
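A short sketch of this relation, using hypothetical but representative numbers (a millimetre-scale gap and a γ of order 1000, as might apply to a high-aspect-ratio emitter):

```python
# Local field at a field-enhancing object: F = gamma * F_M, F_M = V / W.
V = 300.0        # applied voltage, volts (hypothetical)
W = 1e-3         # plate separation, metres (hypothetical)
gamma = 1000.0   # field enhancement factor, set by the object's shape

F_M = V / W              # macroscopic field, V/m
F = gamma * F_M          # local field at the apex, V/m
print(f"F_M = {F_M:.1e} V/m -> local F = {F / 1e9:.2f} V/nm")
# 3e5 V/m becomes 0.3 V/nm locally: without the enhancement, the same
# local field would need roughly 1000x the applied voltage.
```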
For a roughly ten year-period from the mid-1990s, there was great interest in field emission from plasma-deposited films of amorphous and "diamond-like" carbon. However, interest subsequently lessened, partly due to the arrival of CNT emitters, and partly because evidence emerged that the emission sites might be associated with particulate carbon objects created in an unknown way during the deposition process: this suggested that quality control of an industrial-scale production process might be problematic.
The introduction of CNT field emitters, both in "mat" form and in "grown array" forms, was a significant step forward. Extensive research has been undertaken into both their physical characteristics and possible technological applications. For field emission, an advantage of CNTs is that, due to their shape, with its high aspect ratio, they are "natural field-enhancing objects".
In recent years there has also been massive growth in interest in the development of other forms of thin-film emitter, both those based on other carbon forms (such as "carbon nanowalls") and on various forms of wide-band-gap semiconductor. A particular aim is to develop "high-γ" nanostructures with a sufficiently high density of individual emission sites. Thin films of nanotubes in the form of nanotube webs are also used for the development of field emission electrodes. It has been shown that by fine-tuning the fabrication parameters, these webs can achieve an optimum density of individual emission sites. Double-layered electrodes, made by depositing two layers of these webs with perpendicular alignment to each other, have been shown to lower the turn-on electric field (the electric field required for achieving an emission current of 10 μA/cm²) down to 0.3 V/μm and to provide stable field emission performance.
Common problems with all field-emission devices, particularly those that operate in "industrial vacuum conditions", are that emission performance can be degraded by the adsorption of gas atoms arriving from elsewhere in the system, and that the emitter shape can in principle be modified deleteriously by a variety of unwanted subsidiary processes, such as bombardment by ions created by the impact of emitted electrons onto gas-phase atoms and/or onto the surface of counter-electrodes. Thus, an important industrial requirement is "robustness in poor vacuum conditions"; this needs to be taken into account in research on new emitter materials.
At the time of writing, the most promising forms of large-area field emission source (certainly in terms of achieved average emission current density) seem to be Spindt arrays and the various forms of source based on CNTs.
Applications
The development of large-area field emission sources was originally driven by the wish to create new, more efficient, forms of electronic information display. These are known as "field-emission displays" or "nano-emissive displays". Although several prototypes have been demonstrated, the development of such displays into reliable commercial products has been hindered by a variety of industrial production problems not directly related to the source characteristics [En08].
Other proposed applications of large-area field emission sources include microwave generation, space-vehicle neutralization, X-ray generation, and (for array sources) multiple e-beam lithography. There are also recent attempts to develop large-area emitters on flexible substrates, in line with wider trends towards "plastic electronics".
The development of such applications is the mission of vacuum nanoelectronics. However, field emitters work best in conditions of good ultrahigh vacuum. Their most successful applications to date (FEM, FES and EM guns) have occurred in these conditions. The sad fact remains that field emitters and industrial vacuum conditions do not go well together, and the related problems of reliably ensuring good "vacuum robustness" of field emission sources used in such conditions still await better solutions (probably cleverer materials solutions) than we currently have.
Vacuum breakdown and electrical discharge phenomena
As already indicated, it is now thought that the earliest manifestations of field electron emission were the electrical discharges it caused. After Fowler–Nordheim work, it was understood that CFE was one of the possible primary underlying causes of vacuum breakdown and electrical discharge phenomena. (The detailed mechanisms and pathways involved can be very complicated, and there is no single universal cause.) Where vacuum breakdown is known to be caused by electron emission from a cathode, the original thinking was that the mechanism was CFE from small conducting needle-like surface protrusions. Procedures were (and are) used to round and smooth the surfaces of electrodes that might generate unwanted field electron emission currents. However, the work of Latham and others showed that emission could also be associated with the presence of semiconducting inclusions in smooth surfaces. The physics of how the emission is generated is still not fully understood, but suspicion exists that so-called "triple-junction effects" may be involved. Further information may be found in Latham's book and in the on-line bibliography.
Internal electron transfer in electronic devices
In some electronic devices, electron transfer from one material to another, or (in the case of sloping bands) from one band to another ("Zener tunneling"), takes place by a field-induced tunneling process that can be regarded as a form of Fowler–Nordheim tunneling. For example, Rhoderick's book discusses the theory relevant to metal–semiconductor contacts.
Fowler–Nordheim tunneling
Introduction
The next part of this article deals with the basic theory of cold field electron emission from bulk metals. This is best treated in four main stages, involving theory associated with: (1) derivation of a formula for "escape probability", by considering electron tunneling through a rounded triangular barrier; (2) an integration over internal electron states to obtain the "total energy distribution"; (3) a second integration, to obtain the emission current density as a function of local barrier field and local work function; (4) conversion of this to a formula for current as a function of applied voltage. The modified equations needed for large-area emitters, and issues of experimental data analysis, are dealt with separately.
Fowler–Nordheim tunneling is the wave-mechanical tunneling of an electron through an exact or rounded triangular barrier. Two basic situations are recognized: (1) when the electron is initially in a localized state; (2) when the electron is initially not strongly localized, and is best represented by a travelling wave. Emission from a bulk metal conduction band is a situation of the second type, and discussion here relates to this case. It is also assumed that the barrier is one-dimensional (i.e., has no lateral structure), and has no fine-scale structure that causes "scattering" or "resonance" effects. To keep this explanation of Fowler–Nordheim tunneling relatively simple, these assumptions are needed; but the atomic structure of matter is in effect being disregarded.
Motive energy
For an electron, the one-dimensional Schrödinger equation can be written in the form

d²Ψ/dx² = (2m/ħ²) M(x) Ψ(x),   M(x) ≡ U(x) − En,   (1)

where Ψ(x) is the electron wave-function, expressed as a function of distance x measured from the emitter's electrical surface, ħ is the reduced Planck constant, m is the electron mass, U(x) is the electron potential energy, En is the total electron energy associated with motion in the x-direction, and M(x) is called the electron motive energy. M(x) can be interpreted as the negative of the electron kinetic energy associated with the motion of a hypothetical classical point electron in the x-direction, and is positive in the barrier.
The shape of a tunneling barrier is determined by how M(x) varies with position in the region where M(x) > 0. Two models have special status in field emission theory: the exact triangular (ET) barrier and the Schottky–Nordheim (SN) barrier. These are given by equations (2) and (3), respectively:

M(x) = h − eFx,   (2)

M(x) = h − eFx − e²/(16πε0x).   (3)

Here h is the zero-field height (or unreduced height) of the barrier, e is the elementary positive charge, F is the barrier field, and ε0 is the electric constant. By convention, F is taken as positive, even though the classical electrostatic field would be negative. The SN equation uses the classical image potential energy to represent the physical effect "correlation and exchange".
Escape probability
For an electron approaching a given barrier from the inside, the probability of escape (or "transmission coefficient" or "penetration coefficient") is a function of h and F, and is denoted by D(h, F). The primary aim of tunneling theory is to calculate D. For physically realistic barrier models, such as the Schottky–Nordheim barrier, the Schrödinger equation cannot be solved exactly in any simple way. The following so-called "semi-classical" approach can be used. A parameter G can be defined by the JWKB (Jeffreys–Wentzel–Kramers–Brillouin) integral:

G ≡ g ∫ M^(1/2) dx,   (4)

where the integral is taken across the barrier (i.e., across the region where M > 0), and the parameter g is a universal constant given by

g = 2(2m)^(1/2)/ħ ≈ 10.24624 eV^(−1/2) nm⁻¹.   (5)
Forbes has re-arranged a result proved by Fröman and Fröman to show that, formally – in a one-dimensional treatment – the exact solution for D can be written

D = P e^(−G) / (1 + P e^(−G)),   (6)

where the tunneling pre-factor P can in principle be evaluated by complicated iterative integrations along a path in complex space. In the CFE regime we have (by definition) G ≫ 1. Also, for simple models P ≈ 1. So eq. (6) reduces to the so-called simple JWKB formula:

D ≈ e^(−G).   (7)
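The content of eqs. (2)–(7) can be checked numerically. The sketch below evaluates the JWKB integral of eq. (4) for both barrier models, in the eV / nm / (V/nm) unit system adopted in this article; the barrier height and field are example values only, and P is set to 1.

```python
import numpy as np

# Numerical JWKB exponent G (eq. 4) and escape probability D ~ exp(-G)
# (eq. 7) for the exact-triangular (ET) and Schottky-Nordheim (SN)
# barriers of eqs. (2) and (3). Units: eV, nm, V/nm.
g = 10.24624       # JWKB constant of eq. (5), eV^-1/2 nm^-1
h = 4.5            # zero-field barrier height, eV (example value)
F = 5.0            # barrier field, V/nm (example value)

def M_ET(x):       # eq. (2)
    return h - F * x

def M_SN(x):       # eq. (3); 0.359991 eV nm = e^2 / (16 pi eps0)
    return h - F * x - 0.359991 / x

xs = np.linspace(1e-4, h / F, 20001)    # spans the region where M can be > 0
dx = xs[1] - xs[0]
for name, M in (("ET", M_ET), ("SN", M_SN)):
    integrand = np.sqrt(np.clip(M(xs), 0.0, None))   # integrate where M > 0
    G = g * np.sum(integrand) * dx
    print(f"{name}: G = {G:.2f}, D ~ {np.exp(-G):.1e}")
# ET gives G ~ 13.0; the image-rounded SN barrier gives G ~ 7.6, i.e. an
# escape probability larger by more than two orders of magnitude.
```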
For the exact triangular barrier, putting eq. (2) into eq. (4) yields G^ET = b h^(3/2)/F, where

b = (4/3)(2m)^(1/2)/(eħ) ≈ 6.830890 eV^(−3/2) V nm⁻¹.   (8)

This parameter b is a universal constant sometimes called the second Fowler–Nordheim constant. For barriers of other shapes, we write

G = ν b h^(3/2)/F,   (9)

where ν is a correction factor that in general has to be determined by numerical integration, using eq. (4).
Correction factor for the Schottky–Nordheim barrier
The Schottky–Nordheim barrier, which is the barrier model used in deriving the standard Fowler–Nordheim-type equation, is a special case. In this case, it is known that the correction factor ν is a function of a single variable fh, defined by fh = F/Fh, where Fh is the field necessary to reduce the height of a Schottky–Nordheim barrier from h to 0. This field is given by

Fh = (4πε0/e³) h² ≈ h² / (1.439964 eV² V⁻¹ nm).   (10)
The parameter fh runs from 0 to 1, and may be called the scaled barrier field, for a Schottky–Nordheim barrier of zero-field height h.
For the Schottky–Nordheim barrier, ν is given by the particular value ν(fh) of a function ν(ℓ). The latter is a function of mathematical physics in its own right and has been called the principal Schottky–Nordheim barrier function. An explicit series expansion for ν(ℓ) is derived in a 2008 paper by J. Deane. The following good simple approximation for ν(fh) has been found:

ν(fh) ≈ 1 − fh + (1/6) fh ln fh.   (11)
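Approximation (11) is trivial to implement and has the right limiting behaviour at both ends of the f-range, as the sketch below (with arbitrary sample values of fh) shows.

```python
import math

# Eq. (11): v(f) ~ 1 - f + (f/6) * ln f for the principal
# Schottky-Nordheim barrier function.
def v(f: float) -> float:
    return 1.0 - f + (f / 6.0) * math.log(f)

for f in (0.05, 0.36, 0.75, 0.999):
    print(f"v({f}) ~ {v(f):.4f}")
# v -> 1 as f -> 0 (triangular-barrier limit) and v -> 0 as f -> 1
# (the barrier is pulled down to zero height), as required.
```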
Decay width
The decay width (in energy), dh, measures how fast the escape probability D decreases as the barrier height h increases; dh is defined by:

1/dh ≡ −d(ln D)/dh.   (12)

When h increases by dh, the escape probability D decreases by a factor close to e (≈ 2.718282). For an elementary model, based on the exact triangular barrier, where we put ν = 1 and P ≈ 1, we get

dh = 2F / (3b h^(1/2)).   (13)
The decay width dh derived from the more general expression (12) differs from this by a "decay-width correction factor" λd, so:

dh = λd × 2F / (3b h^(1/2)).   (14)
Usually, the correction factor can be approximated as unity.
The decay width dF for a barrier with h equal to the local work-function φ is of special interest. Numerically (with λd ≈ 1), this is given by:

dF ≈ (9.760 × 10⁻² eV) × (F / (1 V nm⁻¹)) / (φ / 1 eV)^(1/2).

For metals, the value of dF is typically of order 0.2 eV, but varies with barrier-field F.
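Numerically, with λd ≈ 1, the decay width for the example parameters used elsewhere in this article comes out as follows (a sketch, not a precision calculation):

```python
# d_F ~ (2 / 3b) * F / phi^(1/2), with b the second FN constant.
b = 6.830890     # eV^-3/2 V nm^-1
phi = 4.5        # local work function, eV (example value)
F = 5.0          # barrier field, V/nm (example value)

d_F = (2.0 / (3.0 * b)) * F / phi**0.5
print(f"d_F ~ {d_F:.3f} eV")   # ~0.23 eV, i.e. 'of order 0.2 eV'
```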
Comments
A historical note is necessary. The idea that the Schottky–Nordheim barrier needed a correction factor, as in eq. (9), was introduced by Nordheim in 1928, but his mathematical analysis of the factor was incorrect. A new (correct) function was introduced by Burgess, Kroemer and Houston in 1953, and its mathematics was developed further by Murphy and Good in 1956. This corrected function, sometimes known as a "special field emission elliptic function", was expressed as a function of a mathematical variable y known as the "Nordheim parameter". Only recently (2006 to 2008) has it been realized that, mathematically, it is much better to use the variable ℓ. And only recently has it been possible to complete the definition of ν(ℓ) by developing and proving the validity of an exact series expansion for this function (by starting from known special-case solutions of the Gauss hypergeometric differential equation). Also, approximation (11) has been found only recently. Approximation (11) outperforms, and will presumably eventually displace, all older approximations of equivalent complexity. These recent developments, and their implications, will probably have a significant impact on field emission research in due course.
The following summary brings these results together. For tunneling well below the top of a well-behaved barrier of reasonable height, the escape probability is given formally by:

D ≈ P exp[−ν b h^(3/2)/F],   (15)

where ν is a correction factor that in general has to be found by numerical integration. For the special case of a Schottky–Nordheim barrier, an analytical result exists and is given by ν(fh), as discussed above; approximation (11) for ν(fh) is more than sufficient for all technological purposes. The pre-factor P is also in principle a function of h and (maybe) F, but for the simple physical models discussed here it is usually satisfactory to make the approximation P = 1. The exact triangular barrier is a special case where the Schrödinger equation can be solved exactly, as was done by Fowler and Nordheim; for this physically unrealistic case, ν = 1, and an analytical approximation for P exists.
The approach described here was originally developed to describe Fowler–Nordheim tunneling from smooth, classically flat, planar emitting surfaces. It is adequate for smooth, classical curved surfaces of radii down to about 10 to 20 nm. It can be adapted to surfaces of sharper radius, but quantities such as ν and D then become significant functions of the parameter(s) used to describe the surface curvature. When the emitter is so sharp that atomic-level detail cannot be neglected, and/or the tunneling barrier is thicker than the emitter-apex dimensions, then a more sophisticated approach is desirable.
As noted at the beginning, the effects of the atomic structure of materials are disregarded in the relatively simple treatments of field electron emission discussed here. Taking atomic structure properly into account is a very difficult problem, and only limited progress has been made. However, it seems probable that the main influences on the theory of Fowler–Nordheim tunneling will (in effect) be to change the values of P and ν in eq. (15), by amounts that cannot easily be estimated at present.
All these remarks apply in principle to Fowler–Nordheim tunneling from any conductor where (before tunneling) the electrons may be treated as being in travelling-wave states. The approach may be adapted to apply (approximately) to situations where the electrons are initially in localized states at, or very close inside, the emitting surface, but this is beyond the scope of this article.
Total-energy distribution
The energy distribution of the emitted electrons is important both for scientific experiments that use the emitted electron energy distribution to probe aspects of the emitter surface physics and for the field emission sources used in electron beam instruments such as electron microscopes. In the latter case, the "width" (in energy) of the distribution influences how finely the beam can be focused.
The theoretical explanation here follows the approach of Forbes. If ε denotes the total electron energy relative to the emitter Fermi level, and Kp denotes the kinetic energy of the electron parallel to the emitter surface, then the electron's normal energy εn (sometimes called its "forwards energy") is defined by

εn ≡ ε − Kp.
Two types of theoretical energy distribution are recognized: the normal-energy distribution (NED), which shows how the energy εn is distributed immediately after emission (i.e., immediately outside the tunneling barrier); and the total-energy distribution, which shows how the total energy ε is distributed. When the emitter Fermi level is used as the reference zero level, both ε and εn can be either positive or negative.
Energy analysis experiments have been made on field emitters since the 1930s. However, only in the late 1950s was it realized (by Young and Mueller [YM58]) that these experiments always measured the total energy distribution, which is now usually denoted by j(ε). This is also true (or nearly true) when the emission comes from a small field-enhancing protrusion on an otherwise flat surface.
To see how the total energy distribution can be calculated within the framework of a Sommerfeld free-electron-type model, look at the P-T energy-space diagram (P-T="parallel-total").
This shows the "parallel kinetic energy" Kp on the horizontal axis and the total energy ε on the vertical axis. An electron inside the bulk metal usually has values of Kp and ε that lie within the lightly shaded area. It can be shown that each element dεdKp of this energy space makes a contribution zS fFD dε dKp to the electron current density incident on the inside of the emitter boundary. Here, zS is the universal constant (called here the Sommerfeld supply density):

zS = 4πem/hP³ ≈ 1.618311 × 10¹⁴ A m⁻² eV⁻²

(where hP is the Planck constant), and fFD is the Fermi–Dirac distribution function:

fFD(ε) = 1 / [exp(ε/kBT) + 1],
where T is thermodynamic temperature and kB is the Boltzmann constant.
This element of incident current density sees a barrier of height h given by:

h = φ − ε + Kp.
The corresponding escape probability is D(h); this may be expanded (approximately) in the form

D(h) ≈ DF exp[(ε − Kp)/dF],

where DF is the escape probability for a barrier of unreduced height equal to the local work-function φ. Hence, the element dεdKp makes a contribution zS fFD D dεdKp to the emission current density, and the total contribution made by incident electrons with total energies in the elementary range dε is thus

j(ε) dε = zS fFD(ε) dε ∫ DF exp[(ε − Kp)/dF] dKp,
where the integral is in principle taken along the strip shown in the diagram, but can in practice be extended to ∞ when the decay-width dF is very much less than the Fermi energy KF (which is always the case for a metal). The outcome of the integration can be written:

j(ε) = zS dF DF fFD(ε) exp(ε/dF),   (21)

where dF and DF are values appropriate to a barrier of unreduced height h equal to the local work function φ, and j(ε) is defined by this equation.
For a given emitter, with a given field applied to it, dF is independent of T, so eq. (21) shows that the shape of the distribution (as ε increases from a negative value well below the Fermi level) is a rising exponential, multiplied by the FD distribution function. This generates the familiar distribution shape first predicted by Young. At low temperatures, fFD(ε) goes sharply from 1 to 0 in the vicinity of the Fermi level, and the FWHM of the distribution is given approximately by:

Δε(FWHM) ≈ dF ln 2 ≈ 0.693 dF.   (22)
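The predicted line shape and its width are easy to visualise numerically. The sketch below evaluates the shape factor fFD(ε)·exp(ε/dF) of eq. (21) on a grid, using dF = 0.23 eV and a low temperature as example values, and reads off the FWHM.

```python
import numpy as np

# Shape of the CFE total energy distribution, j(eps) proportional to
# f_FD(eps) * exp(eps / d_F), and its FWHM at low temperature (eq. 22).
kB = 8.617e-5    # Boltzmann constant, eV/K
d_F = 0.23       # decay width, eV (example value)
T = 30.0         # temperature, K (low-T example)

eps = np.linspace(-1.0, 0.3, 5001)            # energy rel. Fermi level, eV
f_FD = 1.0 / (np.exp(eps / (kB * T)) + 1.0)   # Fermi-Dirac factor
j = f_FD * np.exp(eps / d_F)                  # unnormalized TED shape

half = eps[j >= 0.5 * j.max()]
print(f"FWHM = {half.max() - half.min():.3f} eV; "
      f"d_F * ln2 = {d_F * np.log(2):.3f} eV")
# The numerical FWHM approaches d_F * ln2 from above as T -> 0,
# and broadens as the temperature rises.
```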
The fact that experimental CFE total energy distributions have this basic shape is a good experimental confirmation that electrons in metals obey Fermi–Dirac statistics.
Cold field electron emission
Fowler–Nordheim-type equations
Introduction
Fowler–Nordheim-type equations, in the J–F form, are (approximate) theoretical equations derived to describe the local current density J emitted from the internal electron states in the conduction band of a bulk metal. The emission current density (ECD) J for some small uniform region of an emitting surface is usually expressed as a function of the local work-function φ and the local barrier field F that characterize the small region. For sharply curved surfaces, J may also depend on the parameter(s) used to describe the surface curvature.
Owing to the physical assumptions made in the original derivation, the term Fowler–Nordheim-type equation has long been used only for equations that describe the ECD at zero temperature. However, it is better to allow this name to include the slightly modified equations (discussed below) that are valid for finite temperatures within the CFE emission regime.
Zero-temperature form
Current density is best measured in A/m². The total current density emitted from a small uniform region can be obtained by integrating the total energy distribution j(ε) with respect to total electron energy ε. At zero temperature, the Fermi–Dirac distribution function fFD = 1 for ε < 0, and fFD = 0 for ε > 0. So the ECD at 0 K, J0, is given by integrating eq. (21) (with fFD set to its 0 K step form):

J0 = ∫ j(ε) dε = zS dF² DF = ZF DF,   (23)

where ZF [= zS dF²] is the effective supply for state F, and is defined by this equation. Strictly, the lower limit of the integral should be −KF, where KF is the Fermi energy; but if dF is very much less than KF (which is always the case for a metal) then no significant contribution to the integral comes from energies below −KF, and it can formally be extended to −∞.
Result (23) can be given a simple and useful physical interpretation by referring to Fig. 1. The electron state at point "F" on the diagram ("state F") is the "forwards moving state at the Fermi level" (i.e., it describes a Fermi-level electron moving normal to and towards the emitter surface). At 0 K, an electron in this state sees a barrier of unreduced height φ, and has an escape probability DF that is higher than that for any other occupied electron state. So it is convenient to write J0 as ZFDF, where the "effective supply" ZF is the current density that would have to be carried by state F inside the metal if all of the emission came out of state F.
In practice, the current density mainly comes out of a group of states close in energy to state F, most of which lie within the heavily shaded area in the energy-space diagram. Since, for a free-electron model, the contribution to the current density is directly proportional to the area in energy space (with the Sommerfeld supply density zS as the constant of proportionality), it is useful to think of the ECD as drawn from electron states in an area of size dF² (measured in eV²) in the energy-space diagram. That is, it is useful to think of the ECD as drawn from states in the heavily shaded area in Fig. 1. (This approximation gets slowly worse as temperature increases.)
ZF can also be written in the form:

ZF = λd² a φ⁻¹ F²,   (24)

where the universal constant a, sometimes called the First Fowler–Nordheim Constant, is given by

a = e³/(8π hP) ≈ 1.541434 × 10⁻⁶ A eV V⁻².   (25)

This shows clearly that the pre-exponential factor a φ⁻¹ F², which appears in Fowler–Nordheim-type equations, relates to the effective supply of electrons to the emitter surface, in a free-electron model.
Non-zero temperatures
To obtain a result valid for non-zero temperature, we note from eq. (23) that zS dF DF = J0/dF. So when eq. (21) is integrated at non-zero temperature, then – on making this substitution, and inserting the explicit form of the Fermi–Dirac distribution function – the ECD J can be written in the form:

J = λT J0,   (26)

where λT is a temperature correction factor given by the integral. The integral can be transformed, by writing u = ε/dF and p = kBT/dF, into the standard result:

λT = πp / sin(πp).   (27)
This is valid for p < 1 (i.e., kBT < dF). Hence, for temperatures such that kBT ≪ dF:

λT ≈ 1 + (π kBT/dF)² / 6,   (28)

where the expansion is valid only if π kBT/dF ≪ 1. An example value (for φ = 4.5 eV, F = 5 V nm⁻¹, T = 300 K) is λT ≈ 1.02. Normal thinking has been that, in the CFE regime, the correction λT − 1 is always small in comparison with other uncertainties, and that it is usually unnecessary to include it explicitly in formulae for the current density at room temperature.
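A quick numerical check of eqs. (26)–(28), using the same example parameters (the dF value is the ~0.23 eV estimated earlier):

```python
import math

# Temperature correction factor lambda_T = pi*p / sin(pi*p), p = kB*T/d_F.
kB = 8.617e-5   # Boltzmann constant, eV/K

def lambda_T(T: float, d_F: float) -> float:
    p = kB * T / d_F
    if not 0.0 < p < 1.0:
        raise ValueError("outside the validity range kB*T < d_F")
    return math.pi * p / math.sin(math.pi * p)

for T in (300.0, 1000.0):
    print(f"T = {T:6.0f} K: lambda_T = {lambda_T(T, 0.23):.3f}")
# ~1.02 at room temperature: small compared with other uncertainties,
# which is why lambda_T is usually omitted in room-temperature formulae.
```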
The emission regimes for metals are, in practice, defined by the ranges of barrier field F and temperature T for which a given family of emission equations is mathematically adequate. When the barrier field F is high enough for the CFE regime to be operating for metal emission at 0 K, the condition kBT = dF provides a formal upper bound (in temperature) to the CFE emission regime. However, it has been argued that (due to approximations made elsewhere in the derivation) a somewhat lower temperature is a better working limit: this corresponds to a λT-value of around 1.09 and, for the example case, an upper temperature limit on the CFE regime of around 1770 K. This limit is a function of barrier field.
Note that result (28) here applies for a barrier of any shape (though dF will be different for different barriers).
Physically complete Fowler–Nordheim-type equation
Result (23) also leads to some understanding of what happens when atomic-level effects are taken into account, and the band-structure is no longer free-electron like. Due to the presence of the atomic ion-cores, the surface barrier, and also the electron wave-functions at the surface, will be different. This will affect the values of the correction factor νF, the prefactor P, and (to a limited extent) the correction factor λd. These changes will, in turn, affect the values of the parameter DF and (to a limited extent) the parameter dF. For a real metal, the supply density will vary with position in energy space, and the value at point "F" may be different from the Sommerfeld supply density. We can take account of this effect by introducing an electronic-band-structure correction factor λB into eq. (23). Modinos has discussed how this factor might be calculated: he estimates that it is most likely to be between 0.1 and 1, though it might lie outside these limits.
By defining an overall supply correction factor λZ equal to λBλd2, and combining equations above, we reach the so-called physically complete Fowler–Nordheim-type equation:
J = λZaφ−1F2PF exp[−νFbφ3/2/F]
where νF [= νF(φ, F)] is the exponent correction factor for a barrier of unreduced height φ. This is the most general equation of the Fowler–Nordheim type. Other equations in the family are obtained by substituting specific expressions for the three correction factors νF, PF and λZ it contains. The so-called elementary Fowler–Nordheim-type equation, that appears in undergraduate textbook discussions of field emission, is obtained by putting νF = 1, PF = 1, λZ = 1; this does not yield good quantitative predictions because it makes the barrier stronger than it is in physical reality. The so-called standard Fowler–Nordheim-type equation, originally developed by Murphy and Good, and much used in past literature, is obtained by putting νF = vF, PF = 1, λZ = tF−2, where vF is v(f), where f is the value of fh obtained by putting h = φ, and tF is a related parameter (of value close to unity).
Within the more complete theory described here, the factor tF−2 is a component part of the correction factor λd2 [note that λd2 is denoted by λD in some of the literature]. There is no significant value in continuing the separate identification of tF−2. Probably, in the present state of knowledge, the best approximation for simple Fowler–Nordheim-type equation based modeling of CFE from metals is obtained by putting νF = vF, PF = 1, λZ = 1. This re-generates the Fowler–Nordheim-type equation used by Dyke and Dolan in 1956, and can be called the "simplified standard Fowler–Nordheim-type equation".
Recommended form for simple Fowler–Nordheim-type calculations
Explicitly, this recommended simplified standard Fowler–Nordheim-type equation, and associated formulae, are:
J = aφ−1F2 exp[−v(f)·bφ3/2/F]   (30a)
v(f) ≈ 1 − f + (1/6)f ln f   (30b)
Fφ = cS−2φ2 ≈ (0.694 V nm−1 eV−2)·φ2   (30c)
f = F/Fφ = cS2φ−2F   (30d)
where b ≈ 6.830890 eV−3/2 V nm−1 is the Second Fowler–Nordheim Constant, cS2 ≈ 1.439964 eV2 V−1 nm is the square of the Schottky constant, Fφ here is the field needed to reduce to zero a Schottky–Nordheim barrier of unreduced height equal to the local work-function φ, and f is the scaled barrier field for a Schottky–Nordheim barrier of unreduced height φ. [This quantity f could have been written more exactly as fφSN, but it makes this Fowler–Nordheim-type equation look less cluttered if the convention is adopted that simple f means the quantity denoted by fφSN in eq. (2.16) of the defining literature.] For the example case (φ = 4.5 eV, f ≈ 0.35), Fφ ≈ 14 V nm−1 and F ≈ 5 V nm−1; practical ranges for these parameters are discussed further in the literature.
Note that the variable f (the scaled barrier field) is not the same as the variable y (the Nordheim parameter) extensively used in past field emission literature, and that "v(f)" does NOT have the same mathematical meaning and values as the quantity "v(y)" that appears in field emission literature. In the context of the revised theory described here, formulae for v(y), and tables of values for v(y), should be disregarded, or treated as values of v(f1/2). If more exact values for v(f) are required, formulae are available in the literature that give values for v(f) to an absolute mathematical accuracy of better than 8×10−10. However, approximation formula (30b) above, which yields values correct to within an absolute mathematical accuracy of better than 0.0025, should give values sufficiently accurate for all technological purposes.
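A runnable sketch of eqs. (30a–d) as reconstructed here (the constant values and unit conventions are stated above; the implementation itself is illustrative, with the φ = 4.5 eV, F = 5 V nm−1 example case as a check):

# Simplified standard Fowler-Nordheim-type equation, eqs. (30a-d).
import math

A_FN = 1.541434e-6   # a, in A eV V^-2
B_FN = 6.830890      # b, in eV^-3/2 V nm^-1
C_S2 = 1.439964      # Schottky constant squared, eV^2 V^-1 nm

def v_SN(f):
    # Simple good approximation (30b) for the SN barrier function v(f).
    return 1.0 - f + (f / 6.0) * math.log(f)

def J_CFE(phi_eV, F_Vnm):
    # Local emission current density in A/m^2.
    f = C_S2 * F_Vnm / phi_eV**2        # scaled barrier field, eq. (30d)
    F_Vm = F_Vnm * 1e9                  # V/nm -> V/m for the prefactor
    return (A_FN / phi_eV) * F_Vm**2 * math.exp(-v_SN(f) * B_FN * phi_eV**1.5 / F_Vnm)

print(f"J = {J_CFE(4.5, 5.0):.2e} A/m^2")   # ~4e9 A/m^2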
Comments
A historical note on methods of deriving Fowler–Nordheim-type equations is necessary. There are several possible approaches to deriving these equations, using free-electron theory. The approach used here was introduced by Forbes in 2004 and may be described as "integrating via the total energy distribution, using the parallel kinetic energy Kp as the first variable of integration". Basically, it is a free-electron equivalent of the Modinos procedure (in a more advanced quantum-mechanical treatment) of "integrating over the surface Brillouin zone". By contrast, the free-electron treatments of CFE by Young in 1959, Gadzuk and Plummer in 1973 and Modinos in 1984, also integrate via the total energy distribution, but use the normal energy εn (or a related quantity) as the first variable of integration.
There is also an older approach, based on a seminal paper by Nordheim in 1928, that formulates the problem differently and then uses first Kp and then εn (or a related quantity) as the variables of integration: this is known as "integrating via the normal-energy distribution". This approach continues to be used by some authors. Although it has some advantages, particularly when discussing resonance phenomena, it requires integration of the Fermi–Dirac distribution function in the first stage of integration: for non-free-electron-like electronic band-structures this can lead to very complex and error-prone mathematics (as in the work of Stratton on semiconductors). Further, integrating via the normal-energy distribution does not generate experimentally measured electron energy distributions.
In general, the approach used here seems easier to understand, and leads to simpler mathematics.
It is also closer in principle to the more sophisticated approaches used when dealing with real bulk crystalline solids, where the first step is either to integrate contributions to the ECD over constant energy surfaces in a wave-vector space (k-space), or to integrate contributions over the relevant surface Brillouin zone. The Forbes approach is equivalent either to integrating over a spherical surface in k-space, using the variable Kp to define a ring-like integration element that has cylindrical symmetry about an axis in a direction normal to the emitting surface, or to integrating over an (extended) surface Brillouin zone using circular-ring elements.
CFE theoretical equations
The preceding section explains how to derive Fowler–Nordheim-type equations. Strictly, these equations apply only to CFE from bulk metals. The ideas in the following sections apply to CFE more generally, but eq. (30) will be used to illustrate them.
For CFE, basic theoretical treatments provide a relationship between the local emission current density J and the local barrier field F, at a local position on the emitting surface. Experiments measure the emission current i from some defined part of the emission surface, as a function of the voltage V applied to some counter-electrode. To relate these variables to J and F, auxiliary equations are used.
The voltage-to-barrier-field conversion factor β is defined by:
F = βV
The value of F varies from position to position on an emitter surface, and the value of β varies correspondingly.
For a metal emitter, the β−value for a given position will be constant (independent of voltage) under the following conditions: (1) the apparatus is a "diode" arrangement, where the only electrodes present are the emitter and a set of "surroundings", all parts of which are at the same voltage; (2) no significant field-emitted vacuum space-charge (FEVSC) is present (this will be true except at very high emission current densities, around 109 A/m2 or higher); (3) no significant "patch fields" exist, as a result of non-uniformities in local work-function (this is normally assumed to be true, but may not be in some circumstances). For non-metals, the physical effects called "field penetration" and "band bending" [M084] can make β a function of applied voltage, although – surprisingly – there are few studies of this effect.
The emission current density J varies from position to position across the emitter surface. The total emission current i from a defined part of the emitter is obtained by integrating J across this part. To obtain a simple equation for i(V), the following procedure is used. A reference point "r" is selected within this part of the emitter surface (often the point at which the current density is highest), and the current density at this reference point is denoted by Jr. A parameter Ar, called the notional emission area (with respect to point "r"), is then defined by:
i = ArJr = ∫J dA, i.e. Ar = Jr−1∫J dA,
where the integral is taken across the part of the emitter of interest.
This parameter Ar was introduced into CFE theory by Stern, Gossling and Fowler in 1929 (who called it a "weighted mean area"). For practical emitters, the emission current density used in Fowler–Nordheim-type equations is always the current density at some reference point (though this is usually not stated). Long-established convention denotes this reference current density by the simple symbol J, and the corresponding local field and conversion factor by the simple symbols F and β, without the subscript "r" used above; in what follows, this convention is used.
The notional emission area Ar will often be a function of the reference local field (and hence voltage), and in some circumstances might be a significant function of temperature.
Because Ar has a mathematical definition, it does not necessarily correspond to the area from which emission is observed to occur from a single-point emitter in a field electron (emission) microscope. With a large-area emitter, which contains many individual emission sites, Ar will nearly always be very much less than the "macroscopic" geometrical area (AM) of the emitter as observed visually (see below).
Incorporating these auxiliary equations into eq. (30a) yields
i = Araφ−1(βV)2 exp[−v(f)·bφ3/2/βV]
This is the simplified standard Fowler–Nordheim-type equation, in i–V form. The corresponding "physically complete" equation is obtained by multiplying by λZPF.
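A hedged sketch of this i–V form, reusing J_CFE from the sketch above (the values of Ar and β below are illustrative assumptions, not values from the source):

# i = A_r * J(F = beta*V): the simplified standard FN-type equation in i-V form.
def i_CFE(V, beta_Vnm_per_V=5e-3, A_r_m2=1e-14, phi_eV=4.5):
    return A_r_m2 * J_CFE(phi_eV, beta_Vnm_per_V * V)

print(f"i = {i_CFE(1000.0):.2e} A")   # F = 5 V/nm at 1 kV with this beta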
Modified equations for large-area emitters
The equations in the preceding section apply to all field emitters operating in the CFE regime. However, further developments are useful for large-area emitters that contain many individual emission sites.
For such emitters, the notional emission area will nearly always be very much less than the apparent "macroscopic" geometrical area (AM) of the physical emitter as observed visually. A dimensionless parameter αr, the area efficiency of emission, can be defined by
αr = Ar/AM
Also, a "macroscopic" (or "mean") emission current density JM (averaged over the geometrical area AM of the emitter) can be defined, and related to the reference current density Jr used above, by
JM = i/AM = αrJr
This leads to the following "large-area versions" of the simplified standard Fowler–Nordheim-type equation:
i = αrAMaφ−1(βV)2 exp[−v(f)·bφ3/2/βV]
JM = αraφ−1F2 exp[−v(f)·bφ3/2/F]
Both these equations contain the area efficiency of emission αr. For any given emitter this parameter has a value that is usually not well known. In general, αr varies greatly as between different emitter materials, and as between different specimens of the same material prepared and processed in different ways. Values in the range 10−10 to 10−6 appear to be likely, and values outside this range may be possible.
The presence of αr in eq. (36) accounts for the difference between the macroscopic current densities often cited in the literature (typically 10 A/m2 for many forms of large-area emitter other than Spindt arrays) and the local current densities at the actual emission sites, which can vary widely but which are thought to be generally of the order of 109 A/m2, or possibly slightly less.
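A one-line consistency check of these orders of magnitude (an illustration, not data from the source):

# Macroscopic vs local current density: alpha_r = J_M / J_r.
J_M, J_r = 10.0, 1e9        # A/m^2, typical figures quoted above
print(J_M / J_r)            # 1e-08, inside the quoted range for alpha_r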
A significant part of the technological literature on large-area emitters fails to make clear distinctions between local and macroscopic current densities, or between notional emission area Ar and macroscopic area AM, and/or omits the parameter αr from cited equations. Care is necessary in order to avoid errors of interpretation.
It is also sometimes convenient to split the conversion factor βr into a "macroscopic part" that relates to the overall geometry of the emitter and its surroundings, and a "local part" that relates to the ability of the very-local structure of the emitter surface to enhance the electric field. This is usually done by defining a "macroscopic field" FM that is the field that would be present at the emitting site in the absence of the local structure that causes enhancement. This field FM is related to the applied voltage by a "voltage-to-macroscopic-field conversion factor" βM defined by:
FM = βMV
In the common case of a system comprising two parallel plates, separated by a distance W, with emitting nanostructures created on one of them, βM = 1/W.
A "field enhancement factor" γ is then defined and related to the values of βr and βM by
γ = βr/βM
With eq. (31), this generates the following formulae:
F = γFM = γβMV; β = γβM
where, in accordance with the usual convention, the suffix "r" has now been dropped from parameters relating to the reference point. Formulae exist for the estimation of γ, using classical electrostatics, for a variety of emitter shapes, in particular the "hemisphere on a post".
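As a quick numerical illustration of these formulae (all input numbers below are assumptions chosen only to land in the CFE regime):

# Parallel-plate diode: F_M = V/W; local barrier field F = gamma*F_M.
V = 1000.0      # applied voltage, V
W = 1e-3        # plate separation, m
gamma = 4000.0  # assumed field enhancement factor of the nanostructure

F_M = V / W             # 1e6 V/m
F = gamma * F_M         # 4e9 V/m = 4 V/nm, a plausible CFE barrier field
print(F_M, F)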
Equation (40) implies that versions of Fowler–Nordheim-type equations can be written where either F or βV is everywhere replaced by γFM or γβMV. This is often done in technological applications where the primary interest is in the field enhancing properties of the local emitter nanostructure. However, in some past work, failure to make a clear distinction between barrier field F and macroscopic field FM has caused confusion or error.
More generally, the aims in technological development of large-area field emitters are to enhance the uniformity of emission by increasing the value of the area efficiency of emission αr, and to reduce the "onset" voltage at which significant emission occurs, by increasing the value of β. Eq. (41) shows that this can be done in two ways: either by trying to develop "high-γ" nanostructures, or by changing the overall geometry of the system so that βM is increased. Various trade-offs and constraints exist.
In practice, although the definition of macroscopic field used above is the commonest one, other (differently defined) types of macroscopic field and field enhancement factor are used in the literature, particularly in connection with the use of probes to investigate the i–V characteristics of individual emitters.
In technological contexts field-emission data are often plotted using (a particular definition of) FM or 1/FM as the x-coordinate. However, for scientific analysis it is usually better not to pre-manipulate the experimental data, but to plot the raw measured i–V data directly. Values of technological parameters such as (the various forms of) γ can then be obtained from the fitted parameters of the i–V data plot (see below), using the relevant definitions.
Modified equations for nanometrically sharp emitters
Most of the theoretical derivations in field emission theory are done under the assumption that the barrier takes the Schottky–Nordheim form of eq. (3). However, this barrier form is not valid for emitters with radii of curvature R comparable to the length of the tunnelling barrier. The latter depends on the work function and the field but, in cases of practical interest, the SN barrier approximation can be considered valid for emitters with sufficiently large radii, as explained in the next paragraph.
The main assumption of the SN barrier approximation is that the electrostatic potential term takes a linear form in the tunnelling region. This has been proved to hold only at distances from the emitter surface that are small compared with R. Therefore, if the tunnelling region has a length L, the linear form holds for all distances that determine the tunnelling process provided L ≪ R; in that case eq. (1) holds and the SN barrier approximation is valid. If the tunnelling probability is high enough to produce measurable field emission, L does not exceed 1–2 nm. Hence, the SN barrier is valid for emitters with radii of the order of some tens of nm.
However, modern emitters are much sharper than this, with radii of the order of a few nm. Therefore, the standard FN equation, or any version of it that assumes the SN barrier, leads to significant errors for such sharp emitters. This has been both shown theoretically and confirmed experimentally.
The above problem was tackled by Kyritsakis and Xanthakis, who generalized the SN barrier by including the electrostatic effects of the emitter curvature. The general barrier form for an emitter with radius of average curvature R (the inverse of the average of the two principal curvatures) can be asymptotically expanded as
After neglecting all terms of second and higher order in 1/R, and employing the JWKB approximation (4) for this barrier, the Gamow exponent takes a form that generalizes eq. (5)
where f is defined by (30d), v(f) is given by (30b), and the additional term involves a new function that can be approximated in a similar manner as (30b) (there are typographical mistakes in the original reference, corrected here):
Given the expression for the Gamow exponent as a function of the field-free barrier height φ, the emitted current density for cold field emission can be obtained from eq. (23). It yields
where the functions and are defined as
and
In equation (46), for completeness, λd is not approximated by unity as in (29) and (30a), although for most practical cases it is a very good approximation. Apart from this, equations (43), (44) and (46) coincide with the corresponding ones of the standard Fowler–Nordheim theory (3), (9), and (30a), in the limit R → ∞; this is expected since the former equations generalise the latter.
Finally, note that the above analysis is asymptotic in the limit of large R, similarly to the standard Fowler–Nordheim theory using the SN barrier. However, the addition of the quadratic terms renders it significantly more accurate for emitters with radii of curvature in the range ~5–20 nm. For sharper emitters there is no general approximation for the current density. In order to obtain the current density, one has to calculate the electrostatic potential and evaluate the JWKB integral numerically. For this purpose, scientific computing software has been developed (see e.g. GETELEC).
Empirical CFE i–V equation
At the present stage of CFE theory development, it is important to make a distinction between theoretical CFE equations and an empirical CFE equation. The former are derived from condensed matter physics (albeit in contexts where their detailed development is difficult). An empirical CFE equation, on the other hand, simply attempts to represent the actual experimental form of the dependence of current i on voltage V.
In the 1920s, empirical equations were used to find the power of V that appeared in the exponent of a semi-logarithmic equation assumed to describe experimental CFE results. In 1928, theory and experiment were brought together to show that (except, possibly, for very sharp emitters) this power is V−1. It has recently been suggested that CFE experiments should now be carried out to try to find the power (κ) of V in the pre-exponential of the following empirical CFE equation:
i = CVκ exp[−B/V]
where B, C and κ are treated as constants.
From eq. (42) it is readily shown that
−dln{i}/d(1/V) = κV + B
In the 1920s, experimental techniques could not distinguish between the results κ = 0 (assumed by Millikan and Lauritsen) and κ = 2 (predicted by the original Fowler–Nordheim-type equation). However, it should now be possible to make reasonably accurate measurements of dln{i}/d(1/V) (if necessary by using lock-in amplifier/phase-sensitive detection techniques and computer-controlled equipment), and to derive κ from the slope of an appropriate data plot.
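A sketch of the proposed analysis (the data below are synthetic, generated from eq. (42) with assumed constants; this is an illustration, not an experimental procedure from the source):

# Recover kappa as the slope of -d(ln i)/d(1/V) against V, per eq. (43).
import numpy as np

kappa_true, B, C = 1.23, 6.0e4, 1e-12      # illustrative constants only
V = np.linspace(2000.0, 4000.0, 200)
ln_i = np.log(C) + kappa_true * np.log(V) - B / V

d = -np.gradient(ln_i, 1.0 / V)            # numerically equals kappa*V + B
kappa_fit, B_fit = np.polyfit(V, d, 1)
print(kappa_fit, B_fit)                    # ~1.23, ~6e4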
Following the discovery of approximation (30b), it is now very clear that – even for CFE from bulk metals – the value κ = 2 is not expected. This can be shown as follows. Using eq. (30c) above, a dimensionless parameter η may be defined by
η = bφ3/2/Fφ = cS2bφ−1/2
For φ = 4.5 eV, this parameter has the value η ≈ 4.64. Since f = F/Fφ and v(f) is given by eq. (30b), the exponent in the simplified standard Fowler–Nordheim-type equation (30) can be written in an alternative form and then expanded as follows:
exp[−v(f)·bφ3/2/F] = exp[−(η/f)·(1 − f + (1/6)f ln f)] = eη·exp[−η/f]·f−η/6
Provided that the conversion factor β is independent of voltage, the parameter f has the alternative definition f = V/Vφ, where Vφ is the voltage needed, in a particular experimental system, to reduce the height of a Schottky–Nordheim barrier from φ to zero. Thus, it is clear that the factor v(f) in the exponent of the theoretical equation (30) gives rise to additional V-dependence in the pre-exponential of the empirical equation. Thus, (for effects due to the Schottky–Nordheim barrier, and for an emitter with φ = 4.5 eV) we obtain the prediction:
κ = 2 − η/6 ≈ 1.23
Since there may also be voltage dependence in other factors in a Fowler–Nordheim-type equation, in particular in the notional emission area Ar and in the local work-function, it is not necessarily expected that κ for CFE from a metal of local work-function 4.5 eV should have the value κ = 1.23, but there is certainly no reason to expect that it will have the original Fowler–Nordheim value (κ = 2).
A first experimental test of this proposal has been carried out by Kirk, who used a slightly more complex form of data analysis to find a value 1.36 for his parameter κ. His parameter κ is very similar to, but not quite the same as, the parameter κ used here, but nevertheless his results do appear to confirm the potential usefulness of this form of analysis.
Use of the empirical CFE equation (42), and the measurement of κ, may be of particular use for non-metals. Strictly, Fowler–Nordheim-type equations apply only to emission from the conduction band of bulk crystalline solids. However, empirical equations of form (42) should apply to all materials (though, conceivably, modification might be needed for very sharp emitters). It seems very likely that one way in which CFE equations for newer materials may differ from Fowler–Nordheim-type equations is that these CFE equations may have a different power of F (or V) in their pre-exponentials. Measurements of κ might provide some experimental indication of this.
Fowler–Nordheim plots and Millikan–Lauritsen plots
The original theoretical equation derived by Fowler and Nordheim has, for the last 80 years, influenced the way that experimental CFE data has been plotted and analyzed. In the very widely used Fowler–Nordheim plot, as introduced by Stern et al. in 1929, the quantity ln{i/V2} is plotted against 1/V. The original thinking was that (as predicted by the original or the elementary Fowler–Nordheim-type equation) this would generate an exact straight line of slope SFN. SFN would be related to the parameters that appear in the exponent of a Fowler–Nordheim-type equation of i–V form by:
SFN = −bφ3/2/β
Hence, knowledge of φ would allow β to be determined, or vice versa.
[In principle, in system geometries where there is local field-enhancing nanostructure present, and the macroscopic conversion factor βM can be determined, knowledge of β then allows the value of the emitter's effective field enhancement factor γ to be determined from the formula γ = β/βM. In the common case of a film emitter generated on one plate of a two-plate arrangement with plate-separation W (so βM = 1/W) then
γ = βW = −bφ3/2W/SFN
Nowadays, this is one of the most likely applications of Fowler–Nordheim plots.]
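A hedged sketch of this application (the fitted slope, work-function and plate separation below are assumptions chosen for illustration):

# From a measured FN-plot slope S_FN (in volts): beta = -b*phi^(3/2)/S_FN,
# and for a two-plate system with separation W, gamma = beta*W.
B_FN_m = 6.830890e9   # b expressed in eV^-3/2 V m^-1
phi = 4.5             # eV
S_FN = -1.6e4         # fitted slope of ln(i/V^2) vs 1/V, in volts
W = 1e-3              # m

beta = -B_FN_m * phi**1.5 / S_FN   # ~4e6 m^-1
gamma = beta * W                   # ~4000
print(beta, gamma)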
It subsequently became clear that the original thinking above is strictly correct only for the physically unrealistic situation of a flat emitter and an exact triangular barrier. For real emitters and real barriers a "slope correction factor" σFN has to be introduced, yielding the revised formula
SFN = −σFNbφ3/2/β
The value of σFN will, in principle, be influenced by any parameter in the physically complete Fowler–Nordheim-type equation for i(V) that has a voltage dependence.
At present, the only parameter that is considered important is the correction factor relating to the barrier shape, and the only barrier for which there is any well-established detailed theory is the Schottky–Nordheim barrier. In this case, σFN is given by a mathematical function called s. This function s was first tabulated correctly (as a function of the Nordheim parameter y) by Burgess, Kroemer and Houston in 1953; a modern treatment that gives s as a function of the scaled barrier field f for a Schottky–Nordheim barrier is also available. However, it has long been clear that, for practical emitter operation, the value of s lies in the range 0.9 to 1.
In practice, due to the extra complexity involved in taking the slope correction factor into detailed account, many authors (in effect) put σFN = 1 in eq. (49), thereby generating a systematic error in their estimated values of β and/or γ, thought usually to be around 5%.
However, empirical equation (42) – which in principle is more general than Fowler–Nordheim-type equations – brings with it possible new ways of analyzing field emission i–V data. In general, it may be assumed that the parameter B in the empirical equation is related to the unreduced height H of some characteristic barrier seen by tunneling electrons by
B = bH3/2/β
(In most cases, but not necessarily all, H would be equal to the local work-function; certainly this is true for metals.) The issue is how to determine the value of B by experiment. There are two obvious ways. (1) Suppose that eq. (43) can be used to determine a reasonably accurate experimental value of κ, from the slope of a plot of form [−dln{i}/d(1/V) vs. V]. In this case, a second plot, of ln{i/Vκ} vs. 1/V, should be an exact straight line of slope −B. This approach should be the most accurate way of determining B.
(2) Alternatively, if the value of κ is not exactly known, and cannot be accurately measured, but can be estimated or guessed, then a value for B can be derived from a plot of the form [ln{i} vs. 1/V]. This is the form of plot used by Millikan and Lauritsen in 1928. Rearranging eq. (43) gives
B = −dln{i}/d(1/V) − κV
Thus, B can be determined, to a good degree of approximation, by determining the mean slope of a Millikan–Lauritsen plot over some range of values of 1/V, and by applying a correction, using the value of 1/V at the midpoint of the range and an assumed value of κ.
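A minimal sketch of this correction procedure (illustrative numbers only, not measured values):

# Millikan-Lauritsen plot: the local slope of ln(i) vs 1/V is -(B + kappa*V),
# so B ~ -(mean slope) - kappa * V_midpoint.
S_ML = -6.5e4          # mean slope over the fitted range, in volts
V_mid = 3000.0         # voltage at the midpoint of the 1/V range
kappa_assumed = 1.2

B_est = -S_ML - kappa_assumed * V_mid
print(B_est)           # ~6.1e4 V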
The main advantages of using a Millikan–Lauritsen plot, and this form of correction procedure, rather than a Fowler–Nordheim plot and a slope correction factor, are seen to be the following. (1) The plotting procedure is marginally more straightforward. (2) The correction involves a physical parameter (V) that is a measured quantity, rather than a physical parameter (f) that has to be calculated [in order to then calculate a value of s(f) or, more generally σFN(f)]. (3) Both the parameter κ itself, and the correction procedure, are more transparent (and more readily understood) than the Fowler–Nordheim-plot equivalents. (4) This procedure takes into account all physical effects that influence the value of κ, whereas the Fowler–Nordheim-plot correction procedure (in the form in which it has been carried out for the last 50 years) takes into account only those effects associated with barrier shape – assuming, furthermore, that this shape is that of a Schottky–Nordheim barrier. (5) There is a cleaner separation of theoretical and technological concerns: theoreticians will be interested in establishing what information any measured values of κ provide about CFE theory; but experimentalists can simply use measured values of κ to make more accurate estimates (if needed) of field enhancement factors.
This correction procedure for Millikan–Lauritsen plots will become easier to apply when a sufficient number of measurements of κ have been made, and a better idea is available of what typical values actually are. At present, it seems probable that for most materials κ will lie in the range .
Further theoretical information
Developing the approximate theory of CFE from metals above is comparatively easy, for the following reasons. (1) Sommerfeld's free-electron theory, with its particular assumptions about the distribution of internal electron states in energy, applies adequately to many metals as a first approximation. (2) Most of the time, metals have no surface states and (in many cases) metal wave-functions have no significant "surface resonances". (3) Metals have a high density of states at the Fermi level, so the charge that generates/screens external electric fields lies mainly on the outside of the top atomic layer, and no meaningful "field penetration" occurs. (4) Metals have high electrical conductivity: no significant voltage drops occur inside metal emitters: this means that there are no factors obstructing the supply of electrons to the emitting surface, and that the electrons in this region can be both in effective local thermodynamic equilibrium and in effective thermodynamic equilibrium with the electrons in the metal support structure on which the emitter is mounted. (5) Atomic-level effects are disregarded.
The development of "simple" theories of field electron emission, and in particular the development of Fowler–Nordheim-type equations, relies on all five of the above factors being true. For materials other than metals (and for atomically sharp metal emitters) one or more of the above factors will be untrue. For example, crystalline semiconductors do not have a free-electron-like band-structure, do have surface states, are subject to field penetration and band bending, and may exhibit both internal voltage drops and statistical decoupling of the surface-state electron distribution from the electron distribution in the surface region of the bulk band-structure (this decoupling is known as "the Modinos effect").
In practice, the theory of the actual Fowler–Nordheim tunneling process is much the same for all materials (though details of barrier shape may vary, and modified theory has to be developed for initial states that are localized rather than travelling-wave-like). However, notwithstanding such differences, one expects (for thermodynamic equilibrium situations) that all CFE equations will have exponents that behave in a generally similar manner. This is why applying Fowler–Nordheim-type equations to materials outside the scope of the derivations given here often works. If interest is only in parameters (such as field enhancement factor) that relate to the slope of Fowler–Nordheim or Millikan–Lauritsen plots and to the exponent of the CFE equation, then Fowler–Nordheim-type theory will often give sensible estimates. However, attempts to derive meaningful current density values will usually or always fail.
Note that a straight line in a Fowler–Nordheim or Millikan–Lauritsen plot does not indicate that emission from the corresponding material obeys a Fowler–Nordheim-type equation: it indicates only that the emission mechanism for individual electrons is probably Fowler–Nordheim tunneling.
Different materials may have radically different distributions in energy of their internal electron states, so the process of integrating current-density contributions over the internal electron states may give rise to significantly different expressions for the current-density pre-exponentials, for different classes of material. In particular, the power of barrier field appearing in the pre-exponential may be different from the original Fowler–Nordheim value "2". Investigation of effects of this kind is an active research topic. Atomic-level "resonance" and "scattering" effects, if they occur, will also modify the theory.
Where materials are subject to field penetration and band bending, a necessary preliminary is to have good theories of such effects (for each different class of material) before detailed theories of CFE can be developed. Where voltage-drop effects occur, then the theory of the emission current may, to a greater or lesser extent, become theory that involves internal transport effects, and may become very complex.
See also
Field-emission microscope
Field emission probes
Field emitter array
Field-emission display
Franz–Keldysh effect
References
Further reading
General information
Field penetration and band bending (semiconductors)
A. Many, Y. Goldstein, and N.B. Grover, Semiconductor Surfaces (North Holland, Amsterdam, 1965).
W. Mönch, Semiconductor Surfaces and Interfaces (Springer, Berlin, 1995).
Field emitted vacuum space-charge
Field emission at high temperatures, and photo-field emission
Field-induced explosive electron emission
G.A. Mesyats, Explosive Electron Emission (URO Press, Ekaterinburg, 1998).
Quantum mechanics
Electrical engineering
Electronics concepts | Field electron emission | [
"Physics",
"Engineering"
] | 16,740 | [
"Electrical engineering",
"Theoretical physics",
"Quantum mechanics"
] |
293,504 | https://en.wikipedia.org/wiki/Orbifold | In the mathematical disciplines of topology and geometry, an orbifold (for "orbit-manifold") is a generalization of a manifold. Roughly speaking, an orbifold is a topological space that is locally a finite group quotient of a Euclidean space.
Definitions of orbifold have been given several times: by Ichirō Satake in the context of automorphic forms in the 1950s under the name V-manifold; by William Thurston in the context of the geometry of 3-manifolds in the 1970s when he coined the name orbifold, after a vote by his students; and by André Haefliger in the 1980s in the context of Mikhail Gromov's programme on CAT(k) spaces under the name orbihedron.
Historically, orbifolds arose first as surfaces with singular points long before they were formally defined. One of the first classical examples arose in the theory of modular forms with the action of the modular group on the upper half-plane: a version of the Riemann–Roch theorem holds after the quotient is compactified by the addition of two orbifold cusp points. In 3-manifold theory, the theory of Seifert fiber spaces, initiated by Herbert Seifert, can be phrased in terms of 2-dimensional orbifolds. In geometric group theory, post-Gromov, discrete groups have been studied in terms of the local curvature properties of orbihedra and their covering spaces.
In string theory, the word "orbifold" has a slightly different meaning, discussed in detail below. In two-dimensional conformal field theory, it refers to the theory attached to the fixed point subalgebra of a vertex algebra under the action of a finite group of automorphisms.
The main example of underlying space is a quotient space of a manifold under the properly discontinuous action of a possibly infinite group of diffeomorphisms with finite isotropy subgroups. In particular this applies to any action of a finite group; thus a manifold with boundary carries a natural orbifold structure, since it is the quotient of its double by an action of Z2.
One topological space can carry different orbifold structures. For example, consider the orbifold O associated with the quotient space of the 2-sphere by a rotation by π; it is homeomorphic to the 2-sphere, but the natural orbifold structure is different. It is possible to adopt most of the characteristics of manifolds to orbifolds and these characteristics are usually different from the corresponding characteristics of the underlying space. In the above example, the orbifold fundamental group of O is Z2 and its orbifold Euler characteristic is 1.
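A worked check of the figures just quoted (a standard computation, not in the source wording), in LaTeX:

% S^2 quotiented by the rotation by pi is a sphere with two cone points of
% order 2, so the orbifold Euler characteristic is
\chi_{\mathrm{orb}} = \chi(S^2) - \sum_{i=1}^{2}\Bigl(1-\tfrac{1}{n_i}\Bigr)
                    = 2 - 2\Bigl(1-\tfrac12\Bigr) = 1,
% in agreement with \chi(S^2)/|\mathbb{Z}_2| = 2/2 = 1.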
Formal definitions
Definition using orbifold atlas
Like a manifold, an orbifold is specified by local conditions; however, instead of being locally modelled on open subsets of Rn, an orbifold is locally modelled on quotients of open subsets of Rn by finite group actions. The structure of an orbifold encodes not only that of the underlying quotient space, which need not be a manifold, but also that of the isotropy subgroups.
An n-dimensional orbifold is a Hausdorff topological space X, called the underlying space, with a covering by a collection of open sets Ui, closed under finite intersection. For each Ui, there is
an open subset Vi of Rn, invariant under a faithful linear action of a finite group Γi;
a continuous map φi of Vi onto Ui invariant under Γi, called an orbifold chart, which defines a homeomorphism between Vi/Γi and Ui.
The collection of orbifold charts is called an orbifold atlas if the following properties are satisfied:
for each inclusion Ui ⊂ Uj there is an injective group homomorphism fij : Γi → Γj.
for each inclusion Ui ⊂ Uj there is a homeomorphism ψij, equivariant with respect to fij and called a gluing map, of Vi onto an open subset of Vj.
the gluing maps are compatible with the charts, i.e. φj·ψij = φi.
the gluing maps are unique up to composition with group elements, i.e. any other possible gluing map from Vi to Vj has the form g·ψij for a unique g in Γj.
As for atlases on manifolds, two orbifold atlases of are equivalent if they can be consistently combined to give a larger orbifold atlas. An orbifold structure is therefore an equivalence class of orbifold atlases.
Note that the orbifold structure determines the isotropy subgroup of any point of the orbifold up to isomorphism: it can be computed as the stabilizer of the point in any orbifold chart. If Ui ⊂ Uj ⊂ Uk, then there is a unique transition element gijk in Γk such that
gijk·ψik = ψjk·ψij
These transition elements satisfy
(Ad gijk)·fik = fjk·fij
as well as the cocycle relation (guaranteeing associativity)
fkm(gijk)·gikm = gijm·gjkm.
More generally, attached to an open covering of an orbifold by orbifold charts, there is the combinatorial data of a so-called complex of groups (see below).
Exactly as in the case of manifolds, differentiability conditions can be imposed on the gluing maps to give a definition of a differentiable orbifold. It will be a Riemannian orbifold if in addition there are invariant Riemannian metrics on the orbifold charts and the gluing maps are isometries.
Definition using Lie groupoids
Recall that a groupoid consists of a set of objects G0, a set of arrows G1, and structural maps, including the source and the target maps s, t : G1 → G0 and other maps allowing arrows to be composed and inverted. It is called a Lie groupoid if both G0 and G1 are smooth manifolds, all structural maps are smooth, and both the source and the target maps are submersions. The intersection of the source and the target fibre at a given point x of G0, i.e. the set Gx = s−1(x) ∩ t−1(x), is a Lie group, called the isotropy group of G at x. A Lie groupoid is called proper if the map (s, t) : G1 → G0 × G0 is a proper map, and étale if both the source and the target maps are local diffeomorphisms.
An orbifold groupoid is given by one of the following equivalent definitions:
a proper étale Lie groupoid;
a proper Lie groupoid whose isotropies are discrete spaces.
Since the isotropy groups of proper groupoids are automatically compact, the discreteness condition implies that the isotropies must be actually finite groups.
Orbifold groupoids play the same role as orbifold atlases in the definition above. Indeed, an orbifold structure on a Hausdorff topological space X is defined as the Morita equivalence class of an orbifold groupoid G together with a homeomorphism |G| → X, where |G| is the orbit space of the Lie groupoid (i.e. the quotient of G0 by the equivalence relation x ∼ y when there is an arrow g in G1 with s(g) = x and t(g) = y). This definition shows that orbifolds are a particular kind of differentiable stack.
Relation between the two definitions
Given an orbifold atlas on a space X, one can build a pseudogroup made up of all diffeomorphisms between open sets of the charts Vi which preserve the transition functions ψij. In turn, the space of germs of its elements is an orbifold groupoid. Moreover, since by definition of orbifold atlas each finite group Γi acts faithfully on Vi, the groupoid is automatically effective, i.e. the map sending each arrow to the germ of the local diffeomorphism it induces is injective. Two different orbifold atlases give rise to the same orbifold structure if and only if their associated orbifold groupoids are Morita equivalent. Therefore, any orbifold structure according to the first definition (also called a classical orbifold) is a special kind of orbifold structure according to the second definition.
Conversely, given an orbifold groupoid G, there is a canonical orbifold atlas over its orbit space, whose associated effective orbifold groupoid is Morita equivalent to G. Since the orbit spaces of Morita equivalent groupoids are homeomorphic, an orbifold structure according to the second definition reduces to an orbifold structure according to the first definition in the effective case.
Accordingly, while the notion of orbifold atlas is simpler and more commonly present in the literature, the notion of orbifold groupoid is particularly useful when discussing non-effective orbifolds and maps between orbifolds. For example, a map between orbifolds can be described by a homomorphism between groupoids, which carries more information than the underlying continuous map between the underlying topological spaces.
Examples
Any manifold without boundary is trivially an orbifold, where each of the groups Γi is the trivial group. Equivalently, it corresponds to the Morita equivalence class of the unit groupoid.
If N is a compact manifold with boundary, its double M can be formed by gluing together a copy of N and its mirror image along their common boundary. There is a natural reflection action of Z2 on the manifold M fixing the common boundary; the quotient space can be identified with N, so that N has a natural orbifold structure.
If M is a Riemannian n-manifold with a cocompact proper isometric action of a discrete group Γ, then the orbit space X = M/Γ has a natural orbifold structure: for each x in X take a representative m in M and an open neighbourhood Vm of m invariant under the stabiliser Γm, identified equivariantly with a Γm-subset of TmM under the exponential map at m; finitely many neighbourhoods cover X and each of their finite intersections, if non-empty, is covered by an intersection of Γ-translates gm·Vm with corresponding group gm·Γm·gm−1. Orbifolds that arise in this way are called developable or good.
A classical theorem of Henri Poincaré constructs Fuchsian groups as hyperbolic reflection groups generated by reflections in the edges of a geodesic triangle in the hyperbolic plane for the Poincaré metric. If the triangle has angles π/ni for positive integers ni, the triangle is a fundamental domain and naturally a 2-dimensional orbifold. The corresponding group is an example of a hyperbolic triangle group. Poincaré also gave a 3-dimensional version of this result for Kleinian groups: in this case the Kleinian group Γ is generated by hyperbolic reflections and the orbifold is H3/Γ.
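A worked instance (a standard computation, not from the source wording): the smallest hyperbolic case comes from angles π/2, π/3, π/7, in LaTeX:

% Orientable (2,3,7) triangle orbifold: a sphere with cone points of orders
% 2, 3, 7; its orbifold Euler characteristic is negative (hyperbolic case):
\chi_{\mathrm{orb}} = 2 - \sum_i \Bigl(1-\tfrac{1}{n_i}\Bigr)
                    = \tfrac12+\tfrac13+\tfrac17-1 = -\tfrac{1}{42}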
If M is a closed 2-manifold, new orbifold structures can be defined on M by removing finitely many disjoint closed discs from M and gluing back copies of discs D/Γi, where D is the closed unit disc and Γi is a finite cyclic group of rotations. This generalises Poincaré's construction.
Orbifold fundamental group
There are several ways to define the orbifold fundamental group. More sophisticated approaches use orbifold covering spaces or classifying spaces of groupoids. The simplest approach (adopted by Haefliger and known also to Thurston) extends the usual notion of loop used in the standard definition of the fundamental group.
An orbifold path is a path in the underlying space provided with an explicit piecewise lift of path segments to orbifold charts and explicit group elements identifying paths in overlapping charts; if the underlying path is a loop, it is called an orbifold loop. Two orbifold paths are identified if they are related through multiplication by group elements in orbifold charts. The orbifold fundamental group is the group formed by homotopy classes of orbifold loops.
If the orbifold arises as the quotient of a simply connected manifold M by a proper rigid action of a discrete group Γ, the orbifold fundamental group can be identified with Γ. In general it is an extension of Γ by π1(M).
The orbifold is said to be developable or good if it arises as the quotient by a group action; otherwise it is called bad. A universal covering orbifold can be constructed for an orbifold by direct analogy with the construction of the universal covering space of a topological space, namely as the space of pairs consisting of points of the orbifold and homotopy classes of orbifold paths joining them to the basepoint. This space is naturally an orbifold.
Note that if an orbifold chart on a contractible open subset corresponds to a group Γ, then there is a natural local homomorphism of Γ into the orbifold fundamental group.
In fact the following conditions are equivalent:
The orbifold is developable.
The orbifold structure on the universal covering orbifold is trivial.
The local homomorphisms are all injective for a covering by contractible open sets.
Orbifolds as diffeologies
Orbifolds can be defined in the general framework of diffeology and have been proved to be equivalent to Ichirô Satake's original definition:
Definition: An orbifold is a diffeological space which is locally diffeomorphic at each point to some quotient Rn/Γ, where n is an integer and Γ is a finite linear group which may change from point to point.
This definition calls for a few remarks:
This definition mimics the definition of a manifold in diffeology, which is a diffeological space locally diffeomorphic at each point to Rn.
An orbifold is regarded first as a diffeological space, a set equipped with a diffeology. Then, the diffeology is tested to be locally diffeomorphic at each point to a quotient Rn/Γ, with Γ a finite linear group.
This definition is equivalent with Haefliger orbifolds.
{Orbifolds} makes a subcategory of the category {Diffeology}, whose objects are diffeological spaces and whose morphisms are smooth maps. A smooth map between orbifolds is any map which is smooth for their diffeologies. This resolves, in the context of Satake's definition, his remark: "The notion of C∞-map thus defined is inconvenient in the point that a composite of two C∞-maps defined in a different choice of defining families is not always a C∞-map." Indeed, there are smooth maps between orbifolds that do not lift locally as equivariant maps.
Note that the fundamental group of an orbifold as a diffeological space is not the same as the fundamental group as defined above. That last one is related to the structure groupoid and its isotropy groups.
Orbispaces
For applications in geometric group theory, it is often convenient to have a slightly more general notion of orbifold, due to Haefliger. An orbispace is to topological spaces what an orbifold is to manifolds. An orbispace is a topological generalization of the orbifold concept. It is defined by replacing the model for the orbifold charts by a locally compact space with a rigid action of a finite group, i.e. one for which points with trivial isotropy are dense. (This condition is automatically satisfied by faithful linear actions, because the points fixed by any non-trivial group element form a proper linear subspace.) It is also useful to consider metric space structures on an orbispace, given by invariant metrics on the orbispace charts for which the gluing maps preserve distance. In this case each orbispace chart is usually required to be a length space with unique geodesics connecting any two points.
Let X be an orbispace endowed with a metric space structure for which the charts are geodesic length spaces. The preceding definitions and results for orbifolds can be generalized to give definitions of orbispace fundamental group and universal covering orbispace, with analogous criteria for developability. The distance functions on the orbispace charts can be used to define the length of an orbispace path in the universal covering orbispace. If the distance function in each chart is non-positively curved, then the Birkhoff curve shortening argument can be used to prove that any orbispace path with fixed endpoints is homotopic to a unique geodesic. Applying this to constant paths in an orbispace chart, it follows that each local homomorphism is injective and hence:
every non-positively curved orbispace is developable (i.e. good).
Complexes of groups
Every orbifold has associated with it an additional combinatorial structure given by a complex of groups.
Definition
A complex of groups (Y,f,g) on an abstract simplicial complex Y is given by
a finite group Γσ for each simplex σ of Y
an injective homomorphism fστ : Γτ → Γσ whenever σ ⊂ τ
for every inclusion ρ ⊂ σ ⊂ τ, a group element gρστ in Γρ such that (Ad gρστ)·fρτ = fρσ·fστ (here Ad denotes the adjoint action by conjugation)
The group elements must in addition satisfy the cocycle condition
fπρ(gρστ)·gπρτ = gπστ·gπρσ
for every chain of simplices π ⊂ ρ ⊂ σ ⊂ τ. (This condition is vacuous if Y has dimension 2 or less.)
Any choice of elements hστ in Γσ yields an equivalent complex of groups by defining
f′στ = (Ad hστ)·fστ
g′ρστ = hρσ·fρσ(hστ)·gρστ·hρτ−1
A complex of groups is called simple whenever gρστ = 1 everywhere.
An easy inductive argument shows that every complex of groups on a simplex is equivalent to a complex of groups with gρστ = 1 everywhere.
It is often more convenient and conceptually appealing to pass to the barycentric subdivision of Y. The vertices of this subdivision correspond to the simplices of Y, so that each vertex has a group attached to it. The edges of the barycentric subdivision are naturally oriented (corresponding to inclusions of simplices) and each directed edge gives an inclusion of groups. Each triangle has a transition element attached to it belonging to the group of exactly one vertex; and the tetrahedra, if there are any, give cocycle relations for the transition elements. Thus a complex of groups involves only the 3-skeleton of the barycentric subdivision; and only the 2-skeleton if it is simple.
Example
If X is an orbifold (or orbispace), choose a covering by open subsets from amongst the orbifold charts φi : Vi → Ui. Let Y be the abstract simplicial complex given by the nerve of the covering: its vertices are the sets of the cover and its n-simplices correspond to non-empty intersections Uα = Ui1 ∩ ··· ∩ Uin. For each such simplex there is an associated group Γα and the homomorphisms fij become the homomorphisms fστ. For every triple ρ ⊂ σ ⊂ τ corresponding to intersections
Ui ⊃ Ui ∩ Uj ⊃ Ui ∩ Uj ∩ Uk, there are charts φi : Vi → Ui, φij : Vij → Ui ∩ Uj and φijk : Vijk → Ui ∩ Uj ∩ Uk, and gluing maps ψ : Vij → Vi, ψ′ : Vijk → Vij and ψ″ : Vijk → Vi.
There is a unique transition element gρστ in Γi such that gρστ·ψ″ = ψ·ψ′. The relations satisfied by the transition elements of an orbifold imply those required for a complex of groups. In this way a complex of groups can be canonically associated to the nerve of an open covering by orbifold (or orbispace) charts. In the language of non-commutative sheaf theory and gerbes, the complex of groups in this case arises as a sheaf of groups associated to the covering Ui; the data gρστ is a 2-cocycle in non-commutative sheaf cohomology and the data hστ gives a 2-coboundary perturbation.
Edge-path group
The edge-path group of a complex of groups can be defined as a natural generalisation of the edge path group of a simplicial complex. In the barycentric subdivision of Y, take generators eij corresponding to the directed edges from i to j (oriented as above, so that there is an injection ψij : Γi → Γj). Let Γ be the group generated by the eij and the Γk with relations
eij −1 · g · eij = ψij(g)
for g in Γi and
eik = ejk·eij·gijk
whenever i → j → k.
For a fixed vertex i0, the edge-path group Γ(i0) is defined to be the subgroup of Γ generated by all products
g0·ei0i1·g1·ei1i2 ··· gn·eini0
where (i0, i1, ..., in, i0) is an edge-path, gk lies in Γik and eji = eij−1 if i → j.
Developable complexes
A simplicial proper action of a discrete group Γ on a simplicial complex X with finite quotient is said to be regular if it satisfies one of the following equivalent conditions:
X admits a finite subcomplex as fundamental domain;
the quotient Y = X/Γ has a natural simplicial structure;
the quotient simplicial structure on orbit-representatives of vertices is consistent;
if (v0, ..., vk) and (g0·v0, ..., gk·vk) are simplices, then g·vi = gi·vi for some g in Γ.
The fundamental domain and quotient Y = X / Γ can naturally be identified as simplicial complexes in this case, given by the stabilisers of the simplices in the fundamental domain. A complex of groups Y is said to be developable if it arises in this way.
A complex of groups is developable if and only if the homomorphisms of Γσ into the edge-path group are injective.
A complex of groups is developable if and only if for each simplex σ there is an injective homomorphism θσ from Γσ into a fixed discrete group Γ such that θτ·fστ = θσ. In this case the simplicial complex X is canonically defined: it has k-simplices (σ, xΓσ) where σ is a k-simplex of Y and x runs over Γ / Γσ. Consistency can be checked using the fact that the restriction of the complex of groups to a simplex is equivalent to one with trivial cocycle gρστ.
The action of Γ on the barycentric subdivision X ' of X always satisfies the following condition, weaker than regularity:
whenever σ and g·σ are subsimplices of some simplex τ, they are equal, i.e. σ = g·σ
Indeed, simplices in X ' correspond to chains of simplices in X, so that a subsimplex, given by a subchain of simplices, is uniquely determined by the sizes of the simplices in the subchain. When an action satisfies this condition, then g necessarily fixes all the vertices of σ. A straightforward inductive argument shows that such an action becomes regular on the barycentric subdivision; in particular
the action on the second barycentric subdivision X" is regular;
Γ is naturally isomorphic to the edge-path group defined using edge-paths and vertex stabilisers for the barycentric subdivision of the fundamental domain in X".
There is in fact no need to pass to a third barycentric subdivision: as Haefliger observes using the language of category theory, in this case the 3-skeleton of the fundamental domain of X" already carries all the necessary data – including transition elements for triangles – to define an edge-path group isomorphic to Γ.
In two dimensions this is particularly simple to describe. The fundamental domain of X" has the same structure as the barycentric subdivision Y ' of a complex of groups Y, namely:
a finite 2-dimensional simplicial complex Z;
an orientation for all edges i → j;
if i → j and j → k are edges, then i → k is an edge and (i, j, k) is a triangle;
finite groups attached to vertices, inclusions to edges and transition elements, describing compatibility, to triangles.
An edge-path group can then be defined. A similar structure is inherited by the barycentric subdivision Z ' and its edge-path group is isomorphic to that of Z.
Orbihedra
If a countable discrete group acts by a regular simplicial proper action on a simplicial complex, the quotient can be given not only the structure of a complex of groups, but also that of an orbispace. This leads more generally to the definition of "orbihedron", the simplicial analogue of an orbifold.
Definition
Let X be a finite simplicial complex with barycentric subdivision X '. An orbihedron structure consists of:
for each vertex i of X ', a simplicial complex Li' endowed with a rigid simplicial action of a finite group Γi.
a simplicial map φi of Li' onto the link Li of i in X ', identifying the quotient Li' / Γi with Li.
This action of Γi on Li' extends to a simplicial action on the simplicial cone Ci over Li' (the simplicial join of i and Li'), fixing the centre i of the cone. The map φi extends to a simplicial map of
Ci onto the star St(i) of i, carrying the centre onto i; thus φi identifies Ci / Γi, the quotient of the star of i in Ci, with St(i) and gives an orbihedron chart at i.
for each directed edge i → j of X ', an injective homomorphism fij of Γi into Γj.
for each directed edge i → j, a Γi-equivariant simplicial gluing map ψij of Ci into Cj.
the gluing maps are compatible with the charts, i.e. φj·ψij = φi.
the gluing maps are unique up to composition with group elements, i.e. any other possible gluing map from Ci to Cj has the form g·ψij for a unique g in Γj.
If i → j → k, then there is a unique transition element gijk in Γk such that
gijk·ψik = ψjk·ψij
These transition elements satisfy
(Ad gijk)·fik = fjk·fij
as well as the cocycle relation
fkm(gijk)·gikm = gijm·gjkm.
Main properties
The group theoretic data of an orbihedron gives a complex of groups on X, because the vertices i of the barycentric subdivision X ' correspond to the simplices in X.
Every complex of groups on X is associated with an essentially unique orbihedron structure on X. This key fact follows by noting that the star and link of a vertex i of X ', corresponding to a simplex σ of X, have natural decompositions: the star is isomorphic to the abstract simplicial complex given by the join of σ and the barycentric subdivision σ' of σ; and the link is isomorphic to the join of the link of σ in X and the link of the barycentre of σ in σ'. Restricting the complex of groups to the link of σ in X, all the groups Γτ come with injective homomorphisms into Γσ. Since the link of i in X ' is canonically covered by a simplicial complex on which Γσ acts, this defines an orbihedron structure on X.
The orbihedron fundamental group is (tautologically) just the edge-path group of the associated complex of groups.
Every orbihedron is also naturally an orbispace: indeed in the geometric realization of the simplicial complex, orbispace charts can be defined using the interiors of stars.
The orbihedron fundamental group can be naturally identified with the orbispace fundamental group of the associated orbispace. This follows by applying the simplicial approximation theorem to segments of an orbispace path lying in an orbispace chart: it is a straightforward variant of the classical proof that the fundamental group of a polyhedron can be identified with its edge-path group.
The orbispace associated to an orbihedron has a canonical metric structure, coming locally from the length metric in the standard geometric realization in Euclidean space, with vertices mapped to an orthonormal basis. Other metric structures are also used, involving length metrics obtained by realizing the simplices in hyperbolic space, with simplices identified isometrically along common boundaries.
The orbispace associated to an orbihedron is non-positively curved if and only if the link in each orbihedron chart has girth greater than or equal to 6, i.e. any closed circuit in the link has length at least 6. This condition, well known from the theory of Hadamard spaces, depends only on the underlying complex of groups.
When the universal covering orbihedron is non-positively curved the fundamental group is infinite and is generated by isomorphic copies of the isotropy groups. This follows from the corresponding result for orbispaces.
Triangles of groups
Historically one of the most important applications of orbifolds in geometric group theory has been to triangles of groups. This is the simplest 2-dimensional example generalising the 1-dimensional "interval of groups" discussed in Serre's lectures on trees, where amalgamated free products are studied in terms of actions on trees. Such triangles of groups arise any time a discrete group acts simply transitively on the triangles in the affine Bruhat–Tits building for SL3(Qp); in 1979 Mumford discovered the first example for p = 2 (see below) as a step in producing an algebraic surface not isomorphic to projective space, but having the same Betti numbers. Triangles of groups were worked out in detail by Gersten and Stallings, while the more general case of complexes of groups, described above, was developed independently by Haefliger. The underlying geometric method of analysing finitely presented groups in terms of metric spaces of non-positive curvature is due to Gromov. In this context triangles of groups correspond to non-positively curved 2-dimensional simplicial complexes with the regular action of a group, transitive on triangles.
A triangle of groups is a simple complex of groups consisting of a triangle with vertices A, B, C. There are groups
ΓA, ΓB, ΓC at each vertex
ΓBC, ΓCA, ΓAB for each edge
ΓABC for the triangle itself.
There are injective homomorphisms of ΓABC into all the other groups, and of each edge group ΓXY into ΓX and ΓY. The three ways of mapping ΓABC into a vertex group all agree. (Often ΓABC is the trivial group.) The Euclidean metric structure on the corresponding orbispace is non-positively curved if and only if the link of each of the vertices in the orbihedron chart has girth at least 6.
This girth at each vertex is always even and, as observed by Stallings, can be described at a vertex A, say, as the length of the smallest word in the kernel of the natural homomorphism into ΓA of the amalgamated free product over ΓABC of the edge groups ΓAB and ΓAC:

ΓAB ∗ΓABC ΓAC → ΓA.
The result using the Euclidean metric structure is not optimal. Angles α, β, γ at the vertices A, B and C were defined by Stallings as 2π divided by the girth. In the Euclidean case α, β, γ ≤ π/3. However, if it is only required that α + β + γ ≤ π, it is possible to identify the triangle with the corresponding geodesic triangle in the hyperbolic plane with the Poincaré metric (or the Euclidean plane if equality holds). It is a classical result from hyperbolic geometry that the hyperbolic medians intersect in the hyperbolic barycentre, just as in the familiar Euclidean case. The barycentric subdivision and metric from this model yield a non-positively curved metric structure on the corresponding orbispace (a numeric check of this classification is sketched after the list below). Thus, if α + β + γ ≤ π,
the orbispace of the triangle of groups is developable;
the corresponding edge-path group, which can also be described as the colimit of the triangle of groups, is infinite;
the homomorphisms of the vertex groups into the edge-path group are injections.
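The numeric check promised above is a minimal sketch, not from the source; the girth values are illustrative. Stallings' angle at a vertex is 2π divided by the girth of its link, and the comparison triangle is Euclidean when the angles sum to π and hyperbolic when the sum is smaller.

```python
# A minimal sketch: classify the metric structure of a triangle of groups
# from the girths of its three vertex links, via Stallings' angles 2*pi/girth.
import math

def stallings_angles(girths):
    return [2 * math.pi / g for g in girths]

def classify(girths):
    total = sum(stallings_angles(girths))
    if math.isclose(total, math.pi):
        return "Euclidean (angle sum equals pi)"
    if total < math.pi:
        return "hyperbolic (angle sum below pi)"
    return "angle sum exceeds pi: the non-positive curvature test fails"

print(classify([6, 6, 6]))   # girth 6 everywhere: angles pi/3, Euclidean
print(classify([6, 8, 12]))  # larger girths: smaller angles, hyperbolic
```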
Mumford's example
Let α = √−7 be given by the binomial expansion of (1 − 8)1/2 in Q2 and set K = Q(α) ⊂ Q2. Let
ζ = exp 2πi/7
λ = (α − 1)/2 = ζ + ζ2 + ζ4
μ = λ/λ*.
Let E = Q(ζ), a 3-dimensional vector space over K with basis 1, ζ, and ζ2. Define K-linear operators on E as follows:
σ is the generator of the Galois group of E over K, an element of order 3 given by σ(ζ) = ζ2
τ is the operator of multiplication by ζ on E, an element of order 7
ρ is the operator given by ρ(ζ) = 1, ρ(ζ2) = ζ and ρ(1) = μ·ζ2, so that ρ3 is scalar multiplication by μ.
The elements ρ, σ, and τ generate a discrete subgroup of GL3(K) which acts properly on the affine Bruhat–Tits building corresponding to SL3(Q2). This group acts transitively on all vertices, edges and triangles in the building. Let
σ1 = σ, σ2 = ρσρ−1, σ3 = ρ2σρ−2.
Then
σ1, σ2 and σ3 generate a subgroup Γ of SL3(K).
Γ is the smallest subgroup generated by σ and τ, invariant under conjugation by ρ.
Γ acts simply transitively on the triangles in the building.
There is a triangle Δ such that the stabiliser of its edges are the subgroups of order 3 generated by the σi's.
The stabiliser of a vertex of Δ is the Frobenius group of order 21 generated by the two order 3 elements stabilising the edges meeting at the vertex.
The stabiliser of Δ is trivial.
The elements σ and τ generate the stabiliser of a vertex. The link of this vertex can be identified with the spherical building of SL3(F2) and the stabiliser can be identified with the collineation group of the Fano plane generated by a 3-fold symmetry σ fixing a point and a cyclic permutation τ of all 7 points, satisfying στ = τ2σ. Identifying F8* with the Fano plane, σ can be taken to be the restriction of the Frobenius automorphism σ(x) = x2 of F8 and τ to be multiplication by any element not in the prime field F2, i.e. an order 7 generator of the cyclic multiplicative group of F8. This Frobenius group acts simply transitively on the 21 flags in the Fano plane, i.e. lines with marked points. The formulas for σ and τ on E thus "lift" the formulas on F8.
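The arithmetic behind this identification is small enough to verify directly. The following is a minimal sketch, assuming the arbitrary choice g = x (any element outside F2 generates the multiplicative group): it models F8 as polynomials over F2 modulo x3 + x + 1 and checks that σ(x) = x2 and multiplication by g satisfy στ = τ2σ, with σ of order 3 and τ of order 7.

```python
# F8 elements are 3-bit integers: bit i is the coefficient of x**i,
# with arithmetic modulo the irreducible polynomial x**3 + x + 1 (0b1011).
def mul(a, b):
    p = 0
    while b:                      # carry-less (polynomial) multiplication
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    for i in range(5, 2, -1):     # reduce degrees 5..3 modulo x**3 + x + 1
        if p & (1 << i):
            p ^= 0b1011 << (i - 3)
    return p

g = 0b010                         # the element x; any element outside F2 works
sigma = lambda a: mul(a, a)       # Frobenius automorphism a -> a**2
tau = lambda a: mul(g, a)         # multiplication by g

def power(f, n, a):
    for _ in range(n):
        a = f(a)
    return a

units = range(1, 8)
assert all(sigma(tau(a)) == tau(tau(sigma(a))) for a in units)  # sigma tau = tau**2 sigma
assert all(power(sigma, 3, a) == a for a in units)              # sigma has order 3
assert all(power(tau, 7, a) == a for a in units)                # tau has order 7
print("relations of the Frobenius group of order 21 verified")
```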
Mumford also obtains an action simply transitive on the vertices of the building by passing to a subgroup of Γ1 = <ρ, σ, τ, −I>. The group Γ1 preserves the Q(α)-valued Hermitian form
f(x,y) = xy* + σ(xy*) + σ2(xy*)
on Q(ζ) and can be identified with U3(f) ∩ GL3(S) where S = Z[α,1/2]. Since S/(α) = F7, there is a homomorphism of the group Γ1 into GL3(F7). This action leaves invariant a 2-dimensional subspace in F73 and hence gives rise to a homomorphism Ψ of Γ1 into SL2(F7), a group of order 16·3·7. On the other hand, the stabiliser of a vertex is a subgroup of order 21 and Ψ is injective on this subgroup. Thus if the congruence subgroup Γ0 is defined as the inverse image under Ψ of the 2-Sylow subgroup of SL2(F7), the action of Γ0 on vertices must be simply transitive.
Generalizations
Other examples of triangles or 2-dimensional complexes of groups can be constructed by variations of the above example.
Cartwright et al. consider actions on buildings that are simply transitive on vertices. Each such action produces a bijection (or modified duality) between the points x and lines x* in the flag complex of a finite projective plane and a collection of oriented triangles of points (x,y,z), invariant under cyclic permutation, such that x lies on z*, y lies on x* and z lies on y* and any two points uniquely determine the third. The groups produced have generators x, labelled by points, and relations xyz = 1 for each triangle. Generically this construction will not correspond to an action on a classical affine building.
More generally, as shown by Ballmann and Brin, similar algebraic data encodes all actions that are simply transitive on the vertices of a non-positively curved 2-dimensional simplicial complex, provided the link of each vertex has girth at least 6. This data consists of:
a generating set S containing inverses, but not the identity;
a set of relations g h k = 1, invariant under cyclic permutation.
The elements g in S label the vertices g·v in the link of a fixed vertex v; and the relations correspond to edges (g−1·v, h·v) in that link. The graph with vertices S and edges (g, h), for g−1h in S, must have girth at least 6. The original simplicial complex can be reconstructed using complexes of groups and the second barycentric subdivision.
Further examples of non-positively curved 2-dimensional complexes of groups have been constructed by Swiatkowski based on actions simply transitive on oriented edges and inducing a 3-fold symmetry on each triangle; in this case too the complex of groups is obtained from the regular action on the second barycentric subdivision. The simplest example, discovered earlier with Ballmann, starts from a finite group H with a symmetric set of generators S, not containing the identity, such that the corresponding Cayley graph has girth at least 6. The associated group is generated by H and an involution τ subject to (τg)3 = 1 for each g in S.
In fact, if Γ acts in this way, fixing an edge (v, w), there is an involution τ interchanging v and w. The link of v is made up of vertices g·w for g in a symmetric subset S of H = Γv, generating H if the link is connected. The assumption on triangles implies that
τ·(g·w) = g−1·w
for g in S. Thus, if σ = τg and u = g−1·w, then
σ(v) = w, σ(w) = u, σ(u) = v.
By simple transitivity on the triangle (v, w, u), it follows that σ3 = 1.
The second barycentric subdivision gives a complex of groups consisting of singletons or pairs of barycentrically subdivided triangles joined along their large sides: these pairs are indexed by the quotient space S/~ obtained by identifying inverses in S. The single or "coupled" triangles are in turn joined along one common "spine". All stabilisers of simplices are trivial except for the two vertices at the ends of the spine, with stabilisers H and <τ>, and the remaining vertices of the large triangles, with stabiliser generated by an appropriate σ. Three of the smaller triangles in each large triangle contain transition elements.
When all the elements of S are involutions, none of the triangles need to be doubled. If H is taken to be the dihedral group D7 of order 14, generated by an involution a and an element b of order 7 such that
ab = b−1a,
then H is generated by the 3 involutions a, ab and ab5. The link of each vertex is given by the corresponding Cayley graph, so is just the bipartite Heawood graph, i.e. exactly the same as in the affine building for SL3(Q2). This link structure implies that the corresponding simplicial complex is necessarily a Euclidean building. At present, however, it seems to be unknown whether any of these types of action can in fact be realised on a classical affine building: Mumford's group Γ1 (modulo scalars) is only simply transitive on edges, not on oriented edges.
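The girth claim for this Cayley graph can be tested computationally. The following is a minimal sketch, not taken from the source: it lists the 14 elements of D7 as pairs (r, s) standing for b^r a^s, builds the Cayley graph on the involutions a, ab and ab5, and computes the girth by breadth-first search; per the Heawood-graph identification above it should report 6.

```python
# Cayley graph of the dihedral group D7 with respect to three involutions.
from collections import deque

N = 7
def compose(u, v):
    # elements are pairs (r, s) representing b**r * a**s, with a*b = b**-1 * a
    r1, s1 = u
    r2, s2 = v
    return ((r1 + (r2 if s1 == 0 else -r2)) % N, (s1 + s2) % 2)

gens = [(0, 1), (6, 1), (2, 1)]      # a, ab = b**-1 a, ab**5 = b**2 a
elems = [(r, s) for r in range(N) for s in (0, 1)]
adj = {g: [compose(g, s) for s in gens] for g in elems}

def girth():
    best = float("inf")
    for root in elems:               # min over all roots is the exact girth
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:  # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

print(girth())   # expected: 6, the girth of the Heawood graph
```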
Two-dimensional orbifolds
Two-dimensional orbifolds have the following three types of singular points:
A boundary point
An elliptic point or gyration point of order n, such as the origin of R2 quotiented out by a cyclic group of order n of rotations.
A corner reflector of order n: the origin of R2 quotiented out by a dihedral group of order 2n.
A compact 2-dimensional orbifold has an Euler characteristic χ given by

χ = χ(X0) − (1/2) Σi (1 − 1/mi) − Σj (1 − 1/nj),

where χ(X0) is the Euler characteristic of the underlying topological manifold X0, the mi are the orders of the corner reflectors, and the nj are the orders of the elliptic points.
A 2-dimensional compact connected orbifold has a hyperbolic structure if its Euler characteristic is less than 0, a Euclidean structure if it is 0, and if its Euler characteristic is positive it is either bad or has an elliptic structure (an orbifold is called bad if it does not have a manifold as a covering space). In other words, its universal covering space has a hyperbolic, Euclidean, or spherical structure.
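A minimal sketch, not from the source, evaluating the Euler characteristic formula above in exact rational arithmetic and reading off the corresponding geometric type; the example orbifolds are standard illustrations.

```python
# Orbifold Euler characteristic of a compact 2-orbifold, from the underlying
# surface's Euler characteristic, the orders of the elliptic (cone) points,
# and the orders of the corner reflectors.
from fractions import Fraction

def orbifold_euler(chi_underlying, elliptic=(), corners=()):
    chi = Fraction(chi_underlying)
    chi -= sum(1 - Fraction(1, n) for n in elliptic)
    chi -= sum(1 - Fraction(1, m) for m in corners) / 2
    return chi

def geometry(chi):
    if chi < 0:
        return "hyperbolic"
    if chi == 0:
        return "Euclidean (parabolic)"
    return "spherical (elliptic) or bad"

# The (2,3,7) triangle orbifold: a sphere with cone points of orders 2, 3, 7.
chi = orbifold_euler(2, elliptic=(2, 3, 7))
print(chi, geometry(chi))                        # -1/42 hyperbolic

# The 2222 "pillowcase" orbifold, a quotient by a wallpaper group: chi = 0.
print(orbifold_euler(2, elliptic=(2, 2, 2, 2)))  # 0
```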
The compact 2-dimensional connected orbifolds that are not hyperbolic include the 17 parabolic orbifolds, which are the quotients of the plane by the 17 wallpaper groups.
3-dimensional orbifolds
A 3-manifold is said to be small if it is closed, irreducible and does not contain any incompressible surfaces.
Orbifold Theorem. Let M be a small 3-manifold. Let φ be a non-trivial periodic orientation-preserving diffeomorphism of M. Then M admits a φ-invariant hyperbolic or Seifert fibered structure.
This theorem is a special case of Thurston's orbifold theorem, announced without proof in 1981; it forms part of his geometrization conjecture for 3-manifolds. In particular it implies that if X is a compact, connected, orientable, irreducible, atoroidal 3-orbifold with non-empty singular locus, then X has a geometric structure (in the sense of orbifolds). A complete proof of the theorem was published by Boileau, Leeb & Porti in 2005.
Applications
Orbifolds in string theory
In string theory, the word "orbifold" has a slightly different meaning. For mathematicians, an orbifold is a generalization of the notion of manifold that allows the presence of points whose neighborhood is diffeomorphic to a quotient of Rn by a finite group, i.e. Rn/Γ. In physics, the notion of an orbifold usually describes an object that can be globally written as an orbit space M/G where M is a manifold (or a theory), and G is a group of its isometries (or symmetries) — not necessarily all of them. In string theory, these symmetries do not have to have a geometric interpretation.
A quantum field theory defined on an orbifold becomes singular near the fixed points of G. However, string theory requires us to add new parts of the closed string Hilbert space — namely the twisted sectors, where the fields defined on the closed strings are periodic up to an action of G. Orbifolding is therefore a general procedure of string theory to derive a new string theory from an old string theory in which the elements of G have been identified with the identity. Such a procedure reduces the number of states because the states must be invariant under G, but it also increases the number of states because of the extra twisted sectors. The result is usually a perfectly smooth, new string theory.
D-branes propagating on the orbifolds are described, at low energies, by gauge theories defined by the quiver diagrams. Open strings attached to these D-branes have no twisted sector, and so the number of open string states is reduced by the orbifolding procedure.
More specifically, when the orbifold group G is a discrete subgroup of spacetime isometries, then if it has no fixed point, the result is usually a compact smooth space; the twisted sector consists of closed strings wound around the compact dimension, which are called winding states.
When the orbifold group G is a discrete subgroup of spacetime isometries, and it has fixed points, then these usually have conical singularities, because Rn/Zk has such a singularity at the fixed point of Zk. In string theory, gravitational singularities are usually a sign of extra degrees of freedom which are localized at some locus in spacetime. In the case of the orbifold these degrees of freedom are the twisted states, which are strings "stuck" at the fixed points. When the fields related with these twisted states acquire a non-zero vacuum expectation value, the singularity is deformed, i.e. the metric is changed and becomes regular at this point and around it. An example of a resulting geometry is the Eguchi–Hanson spacetime.
From the point of view of D-branes in the vicinity of the fixed points, the effective theory of the open strings attached to these D-branes is a supersymmetric field theory, whose space of vacua has a singular point, where additional massless degrees of freedom exist. The fields related with the closed string twisted sector couple to the open strings in such a way as to add a Fayet–Iliopoulos term to the supersymmetric field theory Lagrangian, so that when such a field acquires a non-zero vacuum expectation value, the Fayet–Iliopoulos term is non-zero, and thereby deforms the theory (i.e. changes it) so that the singularity no longer exists.
Calabi–Yau manifolds
In superstring theory, the construction of realistic phenomenological models requires dimensional reduction because the strings naturally propagate in a 10-dimensional space whilst the observed dimension of space-time of the universe is 4. Formal constraints on the theories nevertheless place restrictions on the compactified space in which the extra "hidden" variables live: when looking for realistic 4-dimensional models with supersymmetry, the auxiliary compactified space must be a 6-dimensional Calabi–Yau manifold.
There are a large number of possible Calabi–Yau manifolds (tens of thousands), hence the use of the term "landscape" in the current theoretical physics literature to describe the baffling choice. The general study of Calabi–Yau manifolds is mathematically complex and for a long time examples have been hard to construct explicitly. Orbifolds have therefore proved very useful since they automatically satisfy the constraints imposed by supersymmetry. They provide degenerate examples of Calabi–Yau manifolds due to their singular points, but this is completely acceptable from the point of view of theoretical physics. Such orbifolds are called "supersymmetric": they are technically easier to study than general Calabi–Yau manifolds. It is very often possible to associate a continuous family of non-singular Calabi–Yau manifolds to a singular supersymmetric orbifold. In 4 dimensions this can be illustrated using complex K3 surfaces:
Every K3 surface admits 16 cycles of dimension 2 that are topologically equivalent to usual 2-spheres. Making the surface of these spheres tend to zero, the K3 surface develops 16 singularities. This limit represents a point on the boundary of the moduli space of K3 surfaces and corresponds to the orbifold obtained by taking the quotient of the torus by the symmetry of inversion.
The study of Calabi–Yau manifolds in string theory and the duality between different models of string theory (type IIA and IIB) led to the idea of mirror symmetry in 1988. The role of orbifolds was first pointed out by Dixon, Harvey, Vafa and Witten around the same time.
Music theory
Beyond their manifold and various applications in mathematics and physics, orbifolds have been applied to music theory at least as early as 1985 in the work of Guerino Mazzola and later by Dmitri Tymoczko and collaborators. One of Tymoczko's papers was the first music theory paper published by the journal Science. Mazzola and Tymoczko have participated in a debate regarding their theories, documented in a series of commentaries available at their respective web sites.
Tymoczko models musical chords consisting of n notes, which are not necessarily distinct, as points in the orbifold Tn/Sn – the space of n unordered points (not necessarily distinct) in the circle, realized as the quotient of the n-torus Tn (the space of n ordered points on the circle) by the symmetric group Sn (corresponding to moving from an ordered set to an unordered set).
Musically, this is explained as follows:
Musical tones depend on the frequency (pitch) of their fundamental, and thus are parametrized by the positive real numbers, R+.
Musical tones that differ by an octave (a doubling of frequency) are considered the same tone – this corresponds to taking the logarithm base 2 of frequencies (yielding the real numbers, as log2 maps the positive reals onto R), then quotienting by the integers (corresponding to differing by some number of octaves), yielding a circle (as R/Z is a circle).
Chords correspond to multiple tones without respect to order – thus t notes (with order) correspond to t ordered points on the circle, or equivalently a single point on the t-torus Tt, and omitting order corresponds to taking the quotient by the symmetric group St, yielding the orbifold Tt/St.
For dyads (two tones), this yields the closed Möbius strip; for triads (three tones), this yields an orbifold that can be described as a triangular prism with the top and bottom triangular faces identified with a 120° twist (a 1/3 twist) – equivalently, as a solid torus in 3 dimensions with cross-section an equilateral triangle and such a twist.
The resulting orbifold is naturally stratified by repeated tones (properly, by integer partitions of t) – the open set consists of distinct tones (the partition 1 + 1 + ... + 1), while there is a 1-dimensional singular set consisting of all tones being the same (the partition t), which topologically is a circle, and various intermediate partitions. There is also a notable circle which runs through the center of the open set, consisting of equally spaced points. In the case of triads, the three side faces of the prism correspond to two tones being the same and the third different (the partition 2 + 1), while the three edges of the prism correspond to the 1-dimensional singular set. The top and bottom faces are part of the open set, and only appear because the orbifold has been cut – if viewed as a triangular torus with a twist, these artifacts disappear.
Tymoczko argues that chords close to the center (with tones equally or almost equally spaced) form the basis of much of traditional Western harmony, and that visualizing them in this way assists in analysis. There are 4 chords on the center (equally spaced under equal temperament – spacing of 4/4/4 between tones), corresponding to the augmented triads (thought of as musical sets) C♯FA, DF♯A♯, D♯GB, and EG♯C (then they cycle: FAC♯ = C♯FA), with the 12 major chords and 12 minor chords being the points next to but not on the center – almost evenly spaced but not quite. Major chords correspond to 4/3/5 (or equivalently, 5/4/3) spacing, while minor chords correspond to 3/4/5 spacing. Key changes then correspond to movement between these points in the orbifold, with smoother changes effected by movement between nearby points.
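A small sketch, not from the source, with pitch classes modelled as integers mod 12: a chord is reduced to a point of the orbifold by forgetting order (sorting), and the spacing patterns 4/4/4, 4/3/5 and 3/4/5 mentioned above are then read off directly.

```python
# A chord becomes a point of the orbifold T**n / S_n by sorting its pitch
# classes; the spacing tuple lists the gaps between consecutive tones,
# wrapping around the octave. Equal spacing 4/4/4 marks the augmented
# triads at the centre.
def orbifold_point(chord):
    return tuple(sorted(p % 12 for p in chord))

def spacing(chord):
    pcs = orbifold_point(chord)
    n = len(pcs)
    return tuple((pcs[(i + 1) % n] - pcs[i]) % 12 for i in range(n))

print(spacing([0, 4, 7]))   # C major:            (4, 3, 5)
print(spacing([0, 3, 7]))   # C minor:            (3, 4, 5)
print(spacing([1, 5, 9]))   # C# F A (augmented): (4, 4, 4)
```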
See also
Branched covering
Euler characteristic of an orbifold
Geometric quotient
Kawasaki's Riemann–Roch formula
Orbifold notation
Orientifold
Ring of modular forms
Stack (mathematics)
Notes
References
Differential topology
Generalized manifolds
Group actions (mathematics) | Orbifold | [
"Physics",
"Mathematics"
] | 11,237 | [
"Topology",
"Differential topology",
"Group actions",
"Symmetry"
] |
293,533 | https://en.wikipedia.org/wiki/Sign%20convention | In physics, a sign convention is a choice of the physical significance of signs (plus or minus) for a set of quantities, in a case where the choice of sign is arbitrary. "Arbitrary" here means that the same physical system can be correctly described using different choices for the signs, as long as one set of definitions is used consistently. The choices made may differ between authors. Disagreement about sign conventions is a frequent source of confusion, frustration, misunderstandings, and even outright errors in scientific work. In general, a sign convention is a special case of a choice of coordinate system for the case of one dimension.
Sometimes, the term "sign convention" is used more broadly to include factors of the imaginary unit i and of 2π, rather than just choices of sign.
Relativity
Metric signature
In relativity, the metric signature can be either (+,−,−,−) or (−,+,+,+). (Throughout this article, the signs of the eigenvalues of the metric are displayed in the order that presents the timelike component first, followed by the spacelike components). A similar convention is used in higher-dimensional relativistic theories; that is, (+,−,−,−,...) or (−,+,+,+,...). A choice of signature is associated with a variety of names, physics disciplines, and notable graduate-level textbooks.
Curvature
The Ricci tensor is defined as the contraction of the Riemann tensor. Some authors use the contraction Rab = Rcacb, whereas others use the alternative Rab = Rcabc. Due to the symmetries of the Riemann tensor, these two definitions differ by a minus sign.
In fact, the second definition of the Ricci tensor is Rab = Racbc. The sign of the Ricci tensor does not change, because the two sign conventions concern the sign of the Riemann tensor. The second definition just compensates the sign, and it works together with the second definition of the Riemann tensor (see e.g. Barrett O'Neill's Semi-Riemannian Geometry).
Other sign conventions
The sign choice for time in frames of reference and proper time: + for future and − for past is universally accepted.
The choice of sign in the Dirac equation.
The sign of the electric charge, field strength tensor in gauge theories and classical electrodynamics.
Time dependence of a positive-frequency wave (see, e.g., the electromagnetic wave equation):
e−iωt (mainly used by physicists)
e+jωt (mainly used by engineers); a numeric comparison of the two conventions is sketched after this list.
The sign for the imaginary part of permittivity (in fact dictated by the choice of sign for time-dependence).
The signs of distances and radii of curvature of optical surfaces in optics.
The sign of work in the first law of thermodynamics.
The sign of the weight of a tensor density, such as the weight of the determinant of the covariant metric tensor.
The active and passive sign convention of current, voltage and power in electrical engineering.
A sign convention used for curved mirrors assigns a positive focal length to concave mirrors and a negative focal length to convex mirrors.
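The numeric comparison promised above is a sketch, not from the source; the frequency and sample times are arbitrary. It confirms that the physicists' and engineers' time-dependence conventions are complex conjugates of each other and so carry the same real signal.

```python
# The two positive-frequency conventions, e**(-i*omega*t) and e**(+j*omega*t),
# differ only by complex conjugation and share the real part cos(omega*t).
import numpy as np

omega = 2 * np.pi * 50.0               # arbitrary angular frequency (50 Hz)
t = np.linspace(0.0, 0.04, 9)          # arbitrary sample times

physics = np.exp(-1j * omega * t)      # physicists' convention
engineering = np.exp(+1j * omega * t)  # engineers' convention (j = i)

assert np.allclose(physics, np.conj(engineering))
assert np.allclose(physics.real, np.cos(omega * t))
print("conventions agree up to complex conjugation")
```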
It is often considered good form to state explicitly which sign convention is to be used at the beginning of each book or article.
See also
Orientation (vector space)
Symmetry (physics)
Gauge theory
Negative logic
References
Mathematical physics | Sign convention | [
"Physics",
"Mathematics"
] | 635 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
293,629 | https://en.wikipedia.org/wiki/Wick%20rotation | In physics, Wick rotation, named after Italian physicist Gian Carlo Wick, is a method of finding a solution to a mathematical problem in Minkowski space from a solution to a related problem in Euclidean space by means of a transformation that substitutes an imaginary-number variable for a real-number variable.
Wick rotations are useful because of an analogy between two important but seemingly distinct fields of physics: statistical mechanics and quantum mechanics. In this analogy, inverse temperature plays a role in statistical mechanics formally akin to imaginary time in quantum mechanics: that is, 1/(kBT) corresponds to it/ħ, where t is time and i is the imaginary unit (i2 = −1).
More precisely, in statistical mechanics, the Gibbs measure exp(−E(x)/(kBT)) describes the relative probability of the system to be in any given state x at temperature T, where E(x) is a function describing the energy of each state and kB is the Boltzmann constant. In quantum mechanics, the transformation exp(−itH/ħ) describes time evolution, where H is an operator describing the energy (the Hamiltonian) and ħ is the reduced Planck constant. The former expression resembles the latter when we replace 1/(kBT) with it/ħ, and this replacement is called Wick rotation.
Wick rotation is called a rotation because when we represent complex numbers as a plane, the multiplication of a complex number by the imaginary unit is equivalent to counter-clockwise rotating the vector representing that number by an angle of magnitude π/2 about the origin.
Overview
Wick rotation is motivated by the observation that the Minkowski metric in natural units (with metric signature convention (−, +, +, +))

ds2 = −dt2 + dx2 + dy2 + dz2

and the four-dimensional Euclidean metric

ds2 = dτ2 + dx2 + dy2 + dz2

are equivalent if one permits the coordinate t to take on imaginary values. The Minkowski metric becomes Euclidean when t is restricted to the imaginary axis, and vice versa. Taking a problem expressed in Minkowski space with coordinates x, y, z, t, and substituting t = −iτ, sometimes yields a problem in real Euclidean coordinates x, y, z, τ which is easier to solve. This solution may then, under reverse substitution, yield a solution to the original problem.
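The substitution can be checked symbolically. This is a minimal sketch, not from the source, substituting t = −iτ into the Minkowski line element and recovering the Euclidean one.

```python
# Substituting dt = -i*dtau turns -(dt**2) into +dtau**2, so the Minkowski
# line element becomes the Euclidean one.
import sympy as sp

dt, dtau, dx, dy, dz = sp.symbols('dt dtau dx dy dz')

minkowski = -dt**2 + dx**2 + dy**2 + dz**2
euclidean = sp.expand(minkowski.subs(dt, -sp.I * dtau))

print(euclidean)  # dtau**2 + dx**2 + dy**2 + dz**2
assert euclidean == dtau**2 + dx**2 + dy**2 + dz**2
```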
Statistical and quantum mechanics
Wick rotation connects statistical mechanics to quantum mechanics by replacing inverse temperature with imaginary time, or more precisely replacing 1/(kBT) with it/ħ, where T is temperature, kB is the Boltzmann constant, t is time, and ħ is the reduced Planck constant.
For example, consider a quantum system whose Hamiltonian H has eigenvalues Ej. When this system is in thermal equilibrium at temperature T, the probability of finding it in its jth energy eigenstate is proportional to exp(−Ej/(kBT)). Thus, the expected value of any observable A that commutes with the Hamiltonian is, up to a normalizing constant,

Σj Aj exp(−Ej/(kBT)),

where j runs over all energy eigenstates and Aj is the value of A in the jth eigenstate.
Alternatively, consider this system in a superposition of energy eigenstates, evolving for a time t under the Hamiltonian H. After time t, the relative phase change of the jth eigenstate is exp(−Ejit/ħ). Thus, the probability amplitude that a uniform (equally weighted) superposition of states

Σj |j⟩

evolves to an arbitrary superposition

Σj Aj |j⟩

is, up to a normalizing constant,

Σj Aj exp(−Ejit/ħ).

Note that this formula can be obtained from the formula for thermal equilibrium by replacing 1/(kBT) with it/ħ.
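A short numeric illustration, not from the source; the energy eigenvalues and observable values below are made up. It evaluates the same eigenstate sum twice, once with the real thermal parameter 1/(kBT) and once with the imaginary quantum parameter it/ħ.

```python
# The thermal expectation and the quantum amplitude are the same sum over
# eigenstates, with 1/(kB*T) replaced by i*t/hbar.
import numpy as np

E = np.array([0.0, 1.0, 2.5])    # arbitrary energy eigenvalues
A = np.array([0.2, -1.0, 0.7])   # arbitrary values of the observable

def eigenstate_sum(s):
    """Sum_j A_j * exp(-E_j * s) for a (possibly complex) parameter s."""
    return np.sum(A * np.exp(-E * s))

thermal = eigenstate_sum(1.25)       # s = 1/(kB*T): real Boltzmann weights
amplitude = eigenstate_sum(0.8j)     # s = i*t/hbar: complex phases

print(thermal)    # un-normalized thermal expectation value
print(amplitude)  # un-normalized transition amplitude
```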
Statics and dynamics
Wick rotation relates statics problems in n dimensions to dynamics problems in n − 1 dimensions, trading one dimension of space for one dimension of time. A simple example where n = 1 is a hanging spring with fixed endpoints in a gravitational field. The shape of the spring is a curve y(x). The spring is in equilibrium when the energy associated with this curve is at a critical point (an extremum); this critical point is typically a minimum, so this idea is usually called "the principle of least energy". To compute the energy, we integrate the energy spatial density over space:

E[y(x)] = ∫ dx [(k/2)(dy/dx)2 + V(y(x))],

where k is the spring constant, and V(y(x)) is the gravitational potential.
The corresponding dynamics problem is that of a rock thrown upwards. The path the rock follows is that which extremalizes the action; as before, this extremum is typically a minimum, so this is called the "principle of least action". Action is the time integral of the Lagrangian:

S[y(t)] = ∫ dt [(m/2)(dy/dt)2 − V(y(t))].

We get the solution to the dynamics problem (up to a factor of −i) from the statics problem by Wick rotation, replacing x by it and the spring constant k by the mass of the rock m.
Both thermal/quantum and static/dynamic
Taken together, the previous two examples show how the path integral formulation of quantum mechanics is related to statistical mechanics. From statistical mechanics, the shape of each spring in a collection at temperature T will deviate from the least-energy shape due to thermal fluctuations; the probability of finding a spring with a given shape decreases exponentially with the energy difference from the least-energy shape. Similarly, a quantum particle moving in a potential can be described by a superposition of paths, each with a phase exp(iS/ħ): the thermal variations in the shape across the collection have turned into quantum uncertainty in the path of the quantum particle.
Further details
The Schrödinger equation and the heat equation are also related by Wick rotation.
Wick rotation also relates a quantum field theory at a finite inverse temperature β to a statistical-mechanical model over the "tube" R3 × S1, with the imaginary time coordinate τ being periodic with period β. However, there is a slight difference. Statistical-mechanical n-point functions satisfy positivity, whereas Wick-rotated quantum field theories satisfy reflection positivity.
Note, however, that the Wick rotation cannot be viewed as a rotation on a complex vector space that is equipped with the conventional norm and metric induced by the inner product, as in this case the rotation would cancel out and have no effect.
Rigorous proof
Dirk Schlingemann proved that a more rigorous link between Euclidean and quantum field theory can be constructed using the Osterwalder–Schrader axioms.
See also
Complex spacetime
Imaginary time
Schwinger function
References
External links
A Spring in Imaginary Time – a worksheet in Lagrangian mechanics illustrating how replacing length by imaginary time turns the parabola of a hanging spring into the inverted parabola of a thrown particle
Euclidean Gravity – a short note by Ray Streater on the "Euclidean Gravity" programme.
Quantum field theory
Statistical mechanics | Wick rotation | [
"Physics"
] | 1,216 | [
"Quantum field theory",
"Statistical mechanics",
"Quantum mechanics"
] |
293,639 | https://en.wikipedia.org/wiki/Lift-to-drag%20ratio | In aerodynamics, the lift-to-drag ratio (or L/D ratio) is the lift generated by an aerodynamic body such as an aerofoil or aircraft, divided by the aerodynamic drag caused by moving through air. It describes the aerodynamic efficiency under given flight conditions. The L/D ratio for any given body will vary according to these flight conditions.
For an aerofoil wing or powered aircraft, the L/D is specified when in straight and level flight. For a glider it determines the glide ratio: the distance travelled for a given loss of height.
The term is calculated for any particular airspeed by measuring the lift generated, then dividing by the drag at that speed. These vary with speed, so the results are typically plotted on a 2-dimensional graph. In almost all cases the graph forms a U-shape, due to the two main components of drag. The L/D may be calculated using computational fluid dynamics or computer simulation. It is measured empirically by testing in a wind tunnel or in free flight test.
The L/D ratio is affected by both the form drag of the body and by the induced drag associated with creating a lifting force. It depends principally on the lift and drag coefficients, angle of attack to the airflow and the wing aspect ratio.
The L/D ratio is inversely proportional to the energy required for a given flightpath, so that doubling the L/D ratio will require only half of the energy for the same distance travelled. This results directly in better fuel economy.
The L/D ratio can also be used for water craft and land vehicles. The L/D ratios for hydrofoil boats and displacement craft are determined similarly to aircraft.
Lift and drag
Lift can be created when an aerofoil-shaped body travels through a viscous fluid such as air. The aerofoil is often cambered and/or set at an angle of attack to the airflow. The lift then increases as the square of the airspeed.
Whenever an aerodynamic body generates lift, this also creates lift-induced drag or induced drag. At low speeds an aircraft has to generate lift with a higher angle of attack, which results in a greater induced drag. This term dominates the low-speed side of the graph of lift versus velocity.
Form drag is caused by movement of the body through air. This type of drag, known also as air resistance or profile drag varies with the square of speed (see drag equation). For this reason profile drag is more pronounced at greater speeds, forming the right side of the lift/velocity graph's U shape. Profile drag is lowered primarily by streamlining and reducing cross section.
The total drag on any aerodynamic body thus has two components, induced drag and form drag.
Lift and drag coefficients
The rates of change of lift and drag with angle of attack (AoA) are called respectively the lift and drag coefficients CL and CD. The varying ratio of lift to drag with AoA is often plotted in terms of these coefficients.
For any given value of lift, the AoA varies with speed. Graphs of CL and CD vs. speed are referred to as drag curves. Speed is shown increasing from left to right. The lift/drag ratio is given by the slope from the origin to some point on the curve and so the maximum L/D ratio does not occur at the point of least drag coefficient, the leftmost point. Instead, it occurs at a slightly greater speed. Designers will typically select a wing design which produces an L/D peak at the chosen cruising speed for a powered fixed-wing aircraft, thereby maximizing economy. Like all things in aeronautical engineering, the lift-to-drag ratio is not the only consideration for wing design. Performance at a high angle of attack and a gentle stall are also important.
Glide ratio
As the aircraft fuselage and control surfaces will also add drag and possibly some lift, it is fair to consider the L/D of the aircraft as a whole. The glide ratio, which is the ratio of an (unpowered) aircraft's forward motion to its descent, is (when flown at constant speed) numerically equal to the aircraft's L/D. This is especially of interest in the design and operation of high performance sailplanes, which can have glide ratios almost 60 to 1 (60 units of distance forward for each unit of descent) in the best cases, but with 30:1 being considered good performance for general recreational use. Achieving a glider's best L/D in practice requires precise control of airspeed and smooth and restrained operation of the controls to reduce drag from deflected control surfaces. In zero wind conditions, L/D will equal distance traveled divided by altitude lost. Achieving the maximum distance for altitude lost in wind conditions requires further modification of the best airspeed, as does alternating cruising and thermaling. To achieve high speed across country, glider pilots anticipating strong thermals often load their gliders (sailplanes) with water ballast: the increased wing loading means optimum glide ratio at greater airspeed, but at the cost of climbing more slowly in thermals. As noted below, the maximum L/D is not dependent on weight or wing loading, but with greater wing loading the maximum L/D occurs at a faster airspeed. Also, the faster airspeed means the aircraft will fly at greater Reynolds number and this will usually bring about a lower zero-lift drag coefficient.
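A quick numeric illustration of the zero-wind relation above; a sketch, not from the source, using the glide ratios quoted in the text.

```python
# In zero wind, L/D equals distance travelled divided by altitude lost.
def glide_distance_km(altitude_m, lift_to_drag):
    return altitude_m * lift_to_drag / 1000.0

print(glide_distance_km(1000, 60))  # high-performance sailplane: 60.0 km
print(glide_distance_km(1000, 30))  # good recreational glider:   30.0 km
```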
Theory
Subsonic
Mathematically, the maximum lift-to-drag ratio can be estimated as

(L/D)max = (1/2)√(πεAR/CD,0),

where AR is the aspect ratio, ε the span efficiency factor, a number less than but close to unity for long, straight-edged wings, and CD,0 the zero-lift drag coefficient.
Most importantly, the maximum lift-to-drag ratio is independent of the weight of the aircraft, the area of the wing, or the wing loading.
It can be shown that two main drivers of maximum lift-to-drag ratio for a fixed wing aircraft are wingspan and total wetted area. One method for estimating the zero-lift drag coefficient of an aircraft is the equivalent skin-friction method. For a well designed aircraft, zero-lift drag (or parasite drag) is mostly made up of skin friction drag plus a small percentage of pressure drag caused by flow separation. The method uses the equation

CD,0 = Cfe (Swet/Sref),

where Cfe is the equivalent skin friction coefficient, Swet is the wetted area and Sref is the wing reference area. The equivalent skin friction coefficient accounts for both separation drag and skin friction drag and is a fairly consistent value for aircraft types of the same class. Substituting this into the equation for maximum lift-to-drag ratio, along with the equation for aspect ratio (AR = b2/Sref), yields the equation

(L/D)max = (1/2)√[(πε/Cfe)(b2/Swet)],

where b is wingspan. The term b2/Swet is known as the wetted aspect ratio. The equation demonstrates the importance of wetted aspect ratio in achieving an aerodynamically efficient design.
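A brief sketch, not from the source, evaluating the wetted-aspect-ratio form of the estimate; the values of Cfe, ε and the geometry are illustrative assumptions, not data for any particular aircraft.

```python
# Maximum L/D from wingspan and wetted area, per the estimate above.
import math

def max_lift_to_drag(wingspan_m, wetted_area_m2, e=0.8, cfe=0.0035):
    wetted_aspect_ratio = wingspan_m**2 / wetted_area_m2
    return 0.5 * math.sqrt(math.pi * e / cfe * wetted_aspect_ratio)

# Illustrative sailplane-like geometry: large span, modest wetted area.
print(round(max_lift_to_drag(wingspan_m=25.0, wetted_area_m2=35.0), 1))
```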
Supersonic
At supersonic speeds L/D values are lower. Concorde had a lift/drag ratio of about 7 at Mach 2, whereas a 747 has about 17 at about Mach 0.85.
Dietrich Küchemann developed an empirical relationship for predicting L/D ratio for high Mach numbers:

L/Dmax = 4(M + 3)/M,

where M is the Mach number. Windtunnel tests have shown this to be approximately accurate.
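Evaluating the Küchemann relation numerically (a sketch, not from the source): at Mach 2 it predicts an L/D of 10, the same order as Concorde's quoted value of about 7.

```python
# Kuchemann's empirical estimate L/D_max = 4*(M + 3)/M for high Mach numbers.
def kuchemann_ld_max(mach):
    return 4.0 * (mach + 3.0) / mach

print(kuchemann_ld_max(2.0))   # 10.0
print(kuchemann_ld_max(3.0))   # 8.0
```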
Examples of L/D ratios
House sparrow: 4:1
Herring gull 10:1
Common tern 12:1
Albatross 20:1
Wright Flyer 8.3:1
Boeing 747 in cruise 17.7:1.
Cruising Airbus A380 20:1
Concorde at takeoff and landing 4:1, increasing to 12:1 at Mach 0.95 and 7.5:1 at Mach 2
Helicopter 4.5:1
Cessna 172 gliding 10.9:1
Cruising Lockheed U-2 25.6:1
Rutan Voyager 27:1
Virgin Atlantic GlobalFlyer 37:1
See also
Gravity drag—rockets can have an effective lift to drag ratio while maintaining altitude.
Inductrack maglev
Lift coefficient
Range (aeronautics) – range depends on the lift/drag ratio.
Thrust specific fuel consumption – the lift to drag determines the required thrust to maintain altitude (given the aircraft weight), and the SFC permits calculation of the fuel burn rate.
Thrust-to-weight ratio
References
External links
Lift-to-drag ratio calculator
Aircraft aerodynamics
Aircraft performance
Aircraft wing design
Drag (physics)
Engineering ratios
Gliding technology
Wind power | Lift-to-drag ratio | [
"Chemistry",
"Mathematics",
"Engineering"
] | 1,688 | [
"Drag (physics)",
"Metrics",
"Engineering ratios",
"Quantity",
"Fluid dynamics"
] |
502,116 | https://en.wikipedia.org/wiki/Lifting%20bag | A lifting bag is an item of diving equipment consisting of a robust and air-tight bag with straps, which is used to lift heavy objects underwater by means of the bag's buoyancy. The heavy object can either be moved horizontally underwater by the diver or sent unaccompanied to the surface.
A lift bag's capacity should be appropriate to the task at hand: if the lift bag is grossly oversized, a runaway or otherwise out-of-control ascent may result. Commercially available lifting bags may incorporate dump valves to allow the operator to control the buoyancy during ascent, but this is a hazardous operation with a high risk of entanglement in an uncontrolled lift or sinking. If a single bag is insufficient, multiple bags may be used, and should be distributed to suit the load.
There are also lifting bags used on land as short lift jacks for lifting cars or heavy loads or lifting bags which are used in machines as a type of pneumatic actuator which provides load over a large area. These lifting bags of the AS/CR type are for example used in the brake mechanism of rollercoasters.
Physics of buoyant lifting
The volume of the bag determines its lifting capacity: each litre of air inside the bag will lift a weight of 1 kilogram, or each cubic foot will lift about 62 pounds. For example, a 100-litre bag filled with air can lift a 100-kilogram underwater object.
A partially filled bag will accelerate as it ascends, because the air in the bag expands as the pressure reduces on the ascent, following Boyle's law, increasing the bag's buoyancy; a full bag will overflow or blow off excess volume and maintain the same volume and buoyancy provided it does not descend. A bag which leaks sufficiently to start sinking will lose volume to compression and become less buoyant in a positive feedback loop until stopped by the bottom.
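A minimal sketch of the Boyle's-law expansion that drives a runaway lift; not from the source, the depths and volumes are illustrative, and the model ignores drag, bag capacity limits and venting.

```python
# Boyle's law: p*V is constant, so a partially filled open bag gains volume,
# and therefore buoyancy, as it rises. This positive feedback is what makes
# an oversized bag run away.
def bag_volume_litres(v_at_depth, depth_m, new_depth_m):
    p1 = 1.0 + depth_m / 10.0       # absolute pressure in bar (seawater approx.)
    p2 = 1.0 + new_depth_m / 10.0
    return v_at_depth * p1 / p2

v0, d0 = 400.0, 30.0                # 400 L of air placed in the bag at 30 m
for depth in (30, 20, 10, 0):
    v = bag_volume_litres(v0, d0, depth)
    print(f"{depth:>2} m: {v:6.1f} L of air, about {v:6.1f} kg of lift")
```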
Breakout
The force required to lift a submerged object from the bottom can be split into two main components:
Apparent weight, which is the weight of the object less the buoyancy of its displacement.
Breakout forces due to embedment in the bottom, which can be negligible, or in some cases the major part of the load.
Once the object has broken free of the bottom, only the apparent weight remains, and a controlled lift requires a way of managing the sudden decrease of resistance to the lifting force. There are three basic ways this can be done:
Use of mechanical or hydraulic excavation to loosen the sediments holding the load.
Use of a "Dead Man Anchor" - a large heavy weight - and restraining cable to prevent the bag from moving away too far, so that the buoyancy can be corrected to more closely match the load.
Use of shallow bags with long cables to the load to provide breakout, which will only lift a short distance before surfacing, after which the load can be lifted further by staged lifts or direct lift by close-coupled bags.
Stability of the load
Once a load is lifted off the substrate, it will rotate until the centre of gravity is in the position of lowest potential energy. If it is suspended from a single point, the apparent centre of gravity (corrected for inherent buoyancy) will be directly below the lift point. If it is undesirable for the load to rotate by a large angle as it leaves the bottom, the lifting point must be chosen to allow for this effect, and a multi-part sling or spreader bar may be needed, and it may be necessary to secure slings so they do not slip.
Types and construction
Underwater lifting bags are lifting equipment and as such may be required to comply with safety standards.
Open lift bags (parachute lift bags)
Parachute lift bags are open at the bottom. When full any extra or expanding air will spill out. The shape of an open lifting bag should distribute the volume in a vertical rather than a horizontal direction so that the open end of the bag always remains underwater. If the open end reaches the surface, air will escape from the bag and it may sink.
The simplest versions are two-sided bags, either joined round the edges or folded and joined along two sides. Webbing straps may be stitched to doubler patches which are then glued or welded to the bag on light duty bags, but on large and heavy duty bags there are usually strips of bag material bonded to the bags which form flat retaining tubes for the webbing, which is threaded through the tubes and may be withdrawn for maintenance and inspection. Heavy duty open bags are generally conical with a domed top or a reversed truncated cone top, and may have several straps from the lifting point at the bottom, through the guide tubes on the sides, to a crown ring of webbing or steel at the top, to spread the load evenly over the fabric of the bag.
Parachute lift bags cannot be overfilled and are suitable for lifts where there is a large pressure change, and where it may be necessary to capsize (invert) the bag to stop a runaway lift.
Some lift bags can be converted from open to closed by screwing a cover onto the bottom opening.
Propeller lifting bags
Installation and removal of propellers is a specialised application where there is usually very little clearance above the load, which is usually at least partly underneath a vessel. Lift bags for this application partly enclose the propeller when in use, and are required to hold the propeller in the correct alignment for fitting to the shaft.
It may be necessary to ballast the lower blade to keep the propeller upright, and details of the rigging will depend on the precise geometry of the propeller, such as aspect ratio, skew, and number of blades. The propeller lift bag will cover the upper part of the propeller, but cannot project below the line of the top of the shaft. It will support the propeller by at least two blades, for stability, and the propeller will generally be slung with a gap at the top, for maximum tip clearance. The bag may be shaped with a relatively horizontal top or a curve to follow the blade disc. Attachment to the blades must be on both sides for stability, and the slings must not harm the blade surfaces or damage the leading or trailing edge.
Closed lift bags (camels)
Closed lift bags have an overpressure valve to prevent internal pressure from exceeding ambient pressure by more than a set amount (around 10 kPa, or 1 msw). Closed lift bags are intended for use at or near the surface, as they retain the air even in rough seas. They are available in several configurations, including horizontal cylinder, vertical cylinder, teardrop and pillow.
Rapid deployment
Rapid deployment lift bags have a scuba cylinder mounted on the outside which contains sufficient air to inflate the bag at a specified depth. The bag can be attached to the load and when ready, the valve is opened and the diver swims clear. If the regulator pressure is set to a lower pressure than the over-pressure relief valve, a closed bag will automatically stop filling before the relief valve opens, but will be topped up if it leaks after reaching the surface. The regulator pressure must take into account the hydrostatic pressure difference between the top and bottom of the bag so that the bag will be completely filled.
Dump valves
Dump valves are used to release air from the bag when in the water. They can be operated manually at the valve by a diver or may be remotely operated by a pull-cord, which may be operated by a diver or attached to a weight which will automatically open the valve if the weight is lifted off the bottom. Some dump valves can be operated in both these ways. One system operates by pressing on the top or pulling a line attached to the bottom to actuate the spring-loaded valve. The dump valve may be a screw-in quick change system, and the spring tension may be adjustable.
Use
Dynamic lifts
When the empty lift bag is attached to the load and the lift is made by controlling the volume of inflation air it is referred to as a dynamic lift.
Direct lift
The bag or set of bags is used to lift the load directly to the surface. This is simple, but there is a risk that if the lift bag is too large and cannot be vented fast enough, the lift may get out of control and ascend so fast that the bag breaks the surface, capsizes and collapses, losing so much air that it then cannot support the weight of the load, which will then sink back to the bottom. If there is a marker buoy attached it will at least not be lost. A lift bag which is only slightly larger than needed to support the load will ascend more slowly, and is less likely to capsize at the surface, as excess air will be spilled continuously during the ascent.
Staged lift
Lift bags are used to bring the load up in stages: a long chain or sling is used to connect the load to a lift bag just below the surface, which is filled to break out the load and lift it until the bag reaches the surface, then a second bag is used to bring the load up further. This procedure continues until the load has been raised sufficiently. Advantages of this method are a more controlled lift and the facility to use a larger capacity for initial breakout without risk of a runaway. Disadvantages include the requirement for divers to work on or near the lifting gear when under load.
Buoyancy assisted lift
The lift is controlled by a line from the surface vessel, and the load is reduced by a lift bag with a volume too small to support the weight of the load when full. This allows a faster lift by the winch. The lifting gear must be capable of supporting the load if the bag fails, or must be arranged to fail safely.
A buoyancy assisted lift is a common procedure for recreational divers to assist the recovery of the shotline or anchor, which would otherwise be pulled up manually. A small lift bag attached to the shot is partially filled by the last diver to leave the bottom, and after surfacing the crew pull up the line and the air in the bag expands as it ascends, providing more assistance to the crew. In this application a runaway lift is not usually a problem, and the bag size is not critical.
Static lifts
Lift bags also can be used for static lifts, where the bag is anchored in place by rigging, and used as a lifting point with very high buoyancy compared to the load, which is then lifted in a controlled manner using a purchase or chain block or other suitable lifting device.
Rigging lift bags
Lift bags can not be over inflated, and can not normally exert a static buoyant force greater than their safe working load (SWL), however the rigging can be subjected to snatch loads, which can be caused by several factors.
When the bag is used in shallow water and surge or wave action causes rapid changes in dynamic loading, by pulling the bag from side to side
When the bag has lifted the load to the surface and the bag is subjected to vertical wave action
When the lift bag is incorrectly rigged
When the lift bag is snagged, then breaks free to be snubbed when the slack is taken up.
When the load is partly supported by a lifting cable and there is a sudden variation on the tension in the cable due to vessel movement or cable slip.
When lifting with more than one bag, allowance should be made for reduced filling capacity if bags are attached in such a way that they press against each other.
Incorrect rigging can cause load concentration on attachment points which may exceed the SWL
An inverter or capsizing line can be attached to the top of the bag. This line should be long enough and strong enough to attach to an independent anchor point so that if the lift bag or rigging fails the bag will be inverted and the air will escape, preventing a runaway lift. This procedure is generally used for short-distance transport near the bottom, as when aligning large components for assembly. With this method the load will generally hold the bag in the upright position, so the tripping line may be subjected to considerable load and the bag may not invert. An inverter line can also be attached to the load, so it will capsize the bag if it breaks free of the load, but this will not stop a runaway lift, and a holdback line is used for this purpose.
A holdback line is used to prevent the lift bag and load from floating away when used for short distance lifts at the bottom. The holdback is attached to the lifting ring of the bag, or to the load, and should be attached by a strong cable to an anchor point it cannot lift. The SWL of the holdback should be at least equal to the lifting capacity of the bag. A holdback and inverter are often used together.
A spreader bar may be used to distribute the lifting load more evenly between bags or along the load.
When filling bags, if each bag is completely filled before starting to fill the next bag, it is less likely that a runaway will be initiated, as only one of the bags can increase lift as it ascends. This will be the last bag, and the divers will be monitoring it most carefully while they are filling it, so will be more likely to react in time to regain control of the lift.
A weighted dump line will automatically open the valve and start dumping excess air if the weight is lifted off the bottom. This will stop the lift from ascending any further if the dump valve releases air fast enough. When the weight is lowered back to the bottom by the sinking load, the valve will close again, and should hold the load steady. The stability of this system depends on the preload of the valve spring and the size of the valve opening.
Filling lift bags
The amount of air required for a lift bag depends on the apparent weight of the load and the depth of the bag. Approximately 1 m3 of air at ambient pressure is required per tonne of lift. Free air volume follows Boyle's law, and is proportional to absolute ambient pressure in bar or ata.
For example: a 5 tonne lift with the bags to be filled at 20 m requires 5 m3 of air at 3 bar, which is 15 m3 at surface pressure.
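A minimal sketch of this calculation, not from the source, using the 1 m3 per tonne rule of thumb above and the approximation of 1 bar of added pressure per 10 m of seawater.

```python
# Free air needed at the surface to fill lift bags at depth: the volume at
# ambient pressure (roughly 1 m**3 per tonne of lift) scaled by the absolute
# pressure at the filling depth.
def free_air_m3(load_tonnes, depth_m):
    ambient_bar = 1.0 + depth_m / 10.0   # absolute pressure in seawater (approx.)
    return load_tonnes * ambient_bar

print(free_air_m3(5, 20))   # the worked example above: 15.0 m**3
```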
Filling air is usually supplied from the surface from a low pressure compressor, but for small lifts the diver may carry a cylinder of air for the purpose. It is considered bad practice in some jurisdictions to fill a lift bag from the diver's breathing gas cylinder, particularly if the diver has decompression obligations or only one cylinder, as the risk of using up too much air and leaving the diver without sufficient air for a safe ascent is considered unacceptable.
Bags can be inflated from a safer distance by use of an air-lance, a rigid pipe which can be inserted into the opening of the bag.
Hazards of use
Snagging of the diver or diver's umbilical or lifeline in the lifting equipment, resulting in an uncontrolled rapid ascent.
Using too much air from a scuba diver's breathing gas supply, resulting in an out of air incident.
Using too much volume in the lift bags, resulting in a positive feedback expansion on ascent and a runaway lift.
Leaks in lift bags, causing loss of buoyancy and sinking of the load after lifting. The load may then sink increasingly rapidly as the air in the bags compresses, and may be a hazard to divers below or working on the load at the surface, or the load may be lost.
Unbalanced attachment of lifting gear may cause the load to be unstable once lifted free of the bottom. Subsequent capsize or shifting of the load may break it free of the rigging, or damage the load or the lift bag. Similarly, poorly chosen or inadequate lifting points may result in overstressing the cargo and causing damage.
Marking the lift
If there is a significant risk of the bag losing buoyancy and sinking again with the load, a surface marker can be attached that will remain at the surface and mark the new position of the load so that it can be recovered again. The marker buoy line should be long enough to reach the surface if the load sinks anywhere in the vicinity.
Securing the load at the surface
Once at the surface, the load may be secured by adding more buoyancy, lifting on board a vessel or other method.
Gallery
References
External links
Lift Bag Size/Volume Calculator - Online calculator to determine the lift bag capacity and air volume required to recover an underwater object from fresh or salt water.
Underwater work equipment
Lifting equipment | Lifting bag | [
"Physics",
"Technology"
] | 3,351 | [
"Physical systems",
"Machines",
"Lifting equipment"
] |
502,806 | https://en.wikipedia.org/wiki/Yukawa%20potential | In particle, atomic and condensed matter physics, a Yukawa potential (also called a screened Coulomb potential) is a potential named after the Japanese physicist Hideki Yukawa. The potential is of the form:
V_Yukawa(r) = −g² e^(−αmr) / r,
where g is a magnitude scaling constant, i.e. the amplitude of the potential, m is the mass of the particle, r is the radial distance to the particle, and α is another scaling constant, so that 1/(αm) is the approximate range. The potential is monotonically increasing in r and it is negative, implying the force is attractive. In the SI system, the unit of the Yukawa potential is the inverse meter.
The Coulomb potential of electromagnetism is an example of a Yukawa potential with the factor e^(−αmr) equal to 1 everywhere. This can be interpreted as saying that the photon mass m is equal to 0. The photon is the force-carrier between interacting, charged particles.
In interactions between a meson field and a fermion field, the constant g is equal to the gauge coupling constant between those fields. In the case of the nuclear force, the fermions would be a proton and another proton or a neutron.
History
Prior to Hideki Yukawa's 1935 paper, physicists struggled to explain the results of James Chadwick's atomic model, which consisted of positively charged protons and neutrons packed inside of a small nucleus, with a radius on the order of 10−14 meters. Physicists knew that electromagnetic forces at these lengths would cause these protons to repel each other and for the nucleus to fall apart. Thus came the motivation for further explaining the interactions between elementary particles. In 1932, Werner Heisenberg proposed a "Platzwechsel" (migration) interaction between the neutrons and protons inside the nucleus, in which neutrons were composite particles of protons and electrons. These composite neutrons would emit electrons, creating an attractive force with the protons, and then turn into protons themselves. When, in 1933 at the Solvay Conference, Heisenberg proposed his interaction, physicists suspected it to take either of two forms:
on account of its short range. However, there were many issues with his theory. For one, it is impossible for an electron of spin 1/2 and a proton of spin 1/2 to add up to the neutron spin of 1/2. The way Heisenberg treated this issue would go on to form the ideas of isospin.
Heisenberg's idea of an exchange interaction (rather than a Coulombic force) between particles inside the nucleus led Fermi to formulate his ideas on beta-decay in 1934. Fermi's neutron-proton interaction was not based on the "migration" of neutrons and protons between each other. Instead, Fermi proposed the emission and absorption of two light particles: the neutrino and electron, rather than just the electron (as in Heisenberg's theory). While Fermi's interaction solved the issue of the conservation of linear and angular momentum, Soviet physicists Igor Tamm and Dmitri Ivanenko demonstrated that the force associated with the neutrino and electron emission was not strong enough to bind the protons and neutrons in the nucleus.
In his February 1935 paper, Hideki Yukawa combines both the idea of Heisenberg's short-range force interaction and Fermi's idea of an exchange particle in order to fix the issue of the neutron–proton interaction. He deduced a potential which includes an exponential decay term (e^(−αmr)) and an electromagnetic term (1/r). In analogy to quantum field theory, Yukawa knew that the potential and its corresponding field must be a result of an exchange particle. In the case of QED, this exchange particle was a photon of 0 mass. In Yukawa's case, the exchange particle had some mass, which was related to the range of interaction (given by 1/(αm)). Since the range of the nuclear force was known, Yukawa used his equation to predict the mass of the mediating particle as about 200 times the mass of the electron. Physicists called this particle the "meson," as its mass was in the middle of the proton and electron. Yukawa's meson was found in 1947, and came to be known as the pion.
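The step from range to mass can be made explicit. The following worked equation is a sketch, assuming a nuclear-force range of roughly 2 fm and using \( \hbar c \approx 197\ \text{MeV·fm} \) (the range value is an illustrative assumption, not from the source):

\[
m c^2 \;\approx\; \frac{\hbar c}{r} \;\approx\; \frac{197\ \text{MeV·fm}}{2\ \text{fm}} \;\approx\; 100\ \text{MeV} \;\approx\; 200\, m_e c^2 ,
\]

consistent with the roughly 200 electron masses quoted above, since \( m_e c^2 \approx 0.511\ \text{MeV} \).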
Relation to Coulomb potential
If the particle has no mass (i.e., m = 0), then the Yukawa potential reduces to a Coulomb potential, and the range is said to be infinite. In fact, we have:
m = 0  ⟹  e^(−αmr) = 1.
Consequently, the equation
V_Yukawa(r) = −g² e^(−αmr) / r
simplifies to the form of the Coulomb potential
V_Coulomb(r) = −g² / r,
where we set the scaling constant to be:
g² = q₁q₂ / (4πε₀).
A comparison of the long range potential strength for Yukawa and Coulomb is shown in Figure 2. It can be seen that the Coulomb potential has effect over a greater distance whereas the Yukawa potential approaches zero rather quickly. However, any Yukawa potential or Coulomb potential is non-zero for any large r.
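The comparison just described is easy to reproduce numerically. A minimal sketch (not from the source; the coupling and screening constants are arbitrary illustrative values):

```python
import math

def yukawa(r: float, g2: float = 1.0, am: float = 1.0) -> float:
    """Yukawa potential V(r) = -g^2 * exp(-alpha*m*r) / r."""
    return -g2 * math.exp(-am * r) / r

def coulomb(r: float, g2: float = 1.0) -> float:
    """Coulomb potential V(r) = -g^2 / r (the massless limit)."""
    return -g2 / r

# At large r the screened potential becomes negligible next to the Coulomb one.
for r in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"r={r:5.1f}  Yukawa={yukawa(r):+.6f}  Coulomb={coulomb(r):+.6f}")
```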
Fourier transform
The easiest way to understand that the Yukawa potential is associated with a massive field is by examining its Fourier transform. One has
where the integral is performed over all possible values of the 3-vector momenta k. In this form, and setting the scaling factor to one, α = 1, the fraction 1/(k² + m²) is seen to be the propagator or Green's function of the Klein–Gordon equation.
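The key step is a standard three-dimensional integral. The following is a sketch, assuming the convention \( \tilde V(\mathbf{k}) = \int V(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, d^3r \) (other conventions move factors of \((2\pi)^3\) around):

\[
\int e^{-i\mathbf{k}\cdot\mathbf{r}}\,\frac{e^{-\alpha m r}}{r}\,d^3r \;=\; \frac{4\pi}{k^2 + (\alpha m)^2},
\]

so that, up to normalization, the transform of the Yukawa potential is proportional to \(1/\bigl(k^2 + (\alpha m)^2\bigr)\), the momentum-space Green's function of the Klein–Gordon equation.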
Feynman amplitude
The Yukawa potential can be derived as the lowest order amplitude of the interaction of a pair of fermions. The Yukawa interaction couples the fermion field ψ(x) to the meson field φ(x) with the coupling term
g ψ̄(x) ψ(x) φ(x).
The scattering amplitude for two fermions, one with initial momentum p₁ and the other with momentum p₂, exchanging a meson with momentum k, is given by the Feynman diagram on the right.
The Feynman rules for each vertex associate a factor of g with the amplitude; since this diagram has two vertices, the total amplitude will have a factor of g². The line in the middle, connecting the two fermion lines, represents the exchange of a meson. The Feynman rule for a particle exchange is to use the propagator; the propagator for a massive meson is −4π/(k² + m²). Thus, we see that the Feynman amplitude for this graph is nothing more than
V(k) = −g² 4π/(k² + m²).
From the previous section, this is seen to be the Fourier transform of the Yukawa potential.
Eigenvalues of Schrödinger equation
The radial Schrödinger equation with Yukawa potential can be solved perturbatively. Using the radial Schrödinger equation in the form
and the Yukawa potential in the power-expanded form
and setting , one obtains for the angular momentum the expression
for , where
Setting all coefficients except equal to zero, one obtains the well-known expression for the Schrödinger eigenvalue for the Coulomb potential, and the radial quantum number is a positive integer or zero as a consequence of the boundary conditions which the wave functions of the Coulomb potential have to satisfy. In the case of the Yukawa potential the imposition of boundary conditions is more complicated. Thus in the Yukawa case is only an approximation and the parameter that replaces the integer is really an asymptotic expansion like that above with first approximation the integer value of the corresponding Coulomb case.
The above expansion for the orbital angular momentum or Regge trajectory can be reversed to obtain the energy eigenvalues or equivalently . One obtains:
The above asymptotic expansion of the angular momentum in descending powers of can also be derived with the WKB method. In that case, however, as in the case of the Coulomb potential the expression in the centrifugal term of the Schrödinger equation has to be replaced by , as was argued originally by Langer, the reason being that the singularity is too strong for an unchanged application of the WKB method. That this reasoning is correct follows from the WKB derivation of the correct result in the Coulomb case (with the Langer correction), and even of the above expansion in the Yukawa case with higher order WKB approximations.
Cross section
We can calculate the differential cross section between a proton or neutron and the pion by making use of the Yukawa potential. We use the Born approximation, which tells us that, in a spherically symmetrical potential, we can approximate the outgoing scattered wave function as the sum of the incoming plane wave function and a small perturbation:
ψ(r) ≈ e^(ik·r) + f(θ) e^(ikr)/r,
where k is the particle's incoming momentum. The function f(θ) is given by:
f(θ) = −(2m/ħ²) (1/q) ∫₀^∞ r V(r) sin(qr) dr,
where ħq = ħ|k′ − k| is the momentum transfer, k′ is the particle's outgoing scattered momentum and m is the incoming particle's mass (not to be confused with the pion's mass m_π). We calculate f(θ) by plugging in V(r) = −g² e^(−αm_π r)/r:
f(θ) = (2mg²/ħ²) (1/q) ∫₀^∞ e^(−αm_π r) sin(qr) dr.
Evaluating the integral gives
f(θ) = 2mg² / (ħ²(q² + (αm_π)²)).
Energy conservation implies
|k′| = |k| = k,
so that
q = 2k sin(θ/2).
Plugging in, we get:
f(θ) = 2mg² / (ħ²(4k² sin²(θ/2) + (αm_π)²)).
We thus get a differential cross section of:
dσ/dΩ = |f(θ)|² = 4m²g⁴ / (ħ⁴(4k² sin²(θ/2) + (αm_π)²)²).
Integrating, the total cross section is:
σ = ∫ (dσ/dΩ) dΩ = (16π m²g⁴ / ħ⁴) · 1 / ((αm_π)²(4k² + (αm_π)²)).
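As a numerical sanity check on the closed-form result, a minimal sketch (not from the source; all constants are set to 1 for illustration, so the numbers carry no physical units):

```python
import math

M, G2, HBAR, MU, K = 1.0, 1.0, 1.0, 1.0, 1.0  # m, g^2, hbar, alpha*m_pi, k

def dsigma_domega(theta: float) -> float:
    """Born-approximation Yukawa differential cross section dsigma/dOmega."""
    q2 = 4.0 * K**2 * math.sin(theta / 2.0) ** 2      # squared momentum transfer
    return (2.0 * M * G2) ** 2 / (HBAR**4 * (q2 + MU**2) ** 2)

# Midpoint-rule integration over solid angle: sigma = 2*pi * int f(theta) sin(theta) dtheta
n = 100_000
sigma_num = sum(
    dsigma_domega((i + 0.5) * math.pi / n) * math.sin((i + 0.5) * math.pi / n)
    for i in range(n)
) * 2.0 * math.pi * (math.pi / n)

# Closed form: sigma = 16*pi*M^2*G2^2 / (HBAR^4 * MU^2 * (4*K^2 + MU^2))
sigma_closed = 16.0 * math.pi * M**2 * G2**2 / (HBAR**4 * MU**2 * (4.0 * K**2 + MU**2))
print(sigma_num, sigma_closed)  # the two values should agree closely
```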
See also
Yukawa interaction
Screened Poisson equation
Bessel potential
References
Sources
Gauge theories
Scattering theory
Quantum mechanical potentials | Yukawa potential | [
"Physics",
"Chemistry"
] | 1,827 | [
"Quantum mechanical potentials",
"Scattering",
"Scattering theory",
"Quantum mechanics"
] |
504,109 | https://en.wikipedia.org/wiki/Hausdorff%20measure | In mathematics, Hausdorff measure is a generalization of the traditional notions of area and volume to non-integer dimensions, specifically fractals and their Hausdorff dimensions. It is a type of outer measure, named for Felix Hausdorff, that assigns a number in [0,∞] to each set in ℝⁿ or, more generally, in any metric space.
The zero-dimensional Hausdorff measure is the number of points in the set (if the set is finite) or ∞ if the set is infinite. Likewise, the one-dimensional Hausdorff measure of a simple curve in ℝⁿ is equal to the length of the curve, and the two-dimensional Hausdorff measure of a Lebesgue-measurable subset of ℝ² is proportional to the area of the set. Thus, the concept of the Hausdorff measure generalizes the Lebesgue measure and its notions of counting, length, and area. It also generalizes volume. In fact, there are d-dimensional Hausdorff measures for any d ≥ 0, which is not necessarily an integer. These measures are fundamental in geometric measure theory. They appear naturally in harmonic analysis or potential theory.
Definition
Let (X, ρ) be a metric space. For any subset U ⊂ X, let diam U denote its diameter, that is
diam U := sup{ρ(x, y) : x, y ∈ U},  diam ∅ := 0.
Let S be any subset of X, and δ > 0 a real number. Define
H^d_δ(S) = inf{ Σᵢ (diam Uᵢ)^d : S ⊆ ∪ᵢ Uᵢ, diam Uᵢ < δ },
where the infimum is over all countable covers of S by sets Uᵢ ⊆ X satisfying diam Uᵢ < δ.
Note that H^d_δ(S) is monotone nonincreasing in δ, since the larger δ is, the more collections of sets are permitted, making the infimum not larger. Thus, the limit exists but may be infinite. Let
H^d(S) := lim_{δ→0} H^d_δ(S) = sup_{δ>0} H^d_δ(S).
It can be seen that H^d is an outer measure (more precisely, it is a metric outer measure). By Carathéodory's extension theorem, its restriction to the σ-field of Carathéodory-measurable sets is a measure. It is called the d-dimensional Hausdorff measure of X. Due to the metric outer measure property, all Borel subsets of X are measurable.
In the above definition the sets in the covering are arbitrary. However, we can require the covering sets to be open or closed, or in normed spaces even convex; that will yield the same numbers, hence the same measure. Restricting the covering sets to be balls may change the measures but does not change the dimension of the measured sets.
Properties of Hausdorff measures
Note that if d is a positive integer, the d-dimensional Hausdorff measure of ℝᵈ is a rescaling of the usual d-dimensional Lebesgue measure λ_d, which is normalized so that the Lebesgue measure of the unit cube [0,1]ᵈ is 1. In fact, for any Borel set E,
λ_d(E) = 2⁻ᵈ α_d H^d(E),
where α_d is the volume of the unit d-ball; it can be expressed using Euler's gamma function:
α_d = π^(d/2) / Γ(d/2 + 1).
This is
λ_d(E) = β_d H^d(E),
where β_d = 2⁻ᵈ α_d is the volume of the d-ball of unit diameter.
Remark. Some authors adopt a definition of Hausdorff measure slightly different from the one chosen here, the difference being that the value defined above is multiplied by the factor β_d, so that Hausdorff d-dimensional measure coincides exactly with Lebesgue measure in the case of Euclidean space.
Relation with Hausdorff dimension
It turns out that H^d(S) may have a finite, nonzero value for at most one d. That is, the Hausdorff measure is zero for any value above a certain dimension and infinity below a certain dimension, analogous to the idea that the area of a line is zero and the length of a 2D shape is in some sense infinity. This leads to one of several possible equivalent definitions of the Hausdorff dimension:
dim_Haus(S) := inf{d ≥ 0 : H^d(S) = 0} = sup{d ≥ 0 : H^d(S) = ∞},
where we take
inf ∅ = +∞
and
sup ∅ = 0.
Note that it is not guaranteed that the Hausdorff measure must be finite and nonzero for some d, and indeed the measure at the Hausdorff dimension may still be zero; in this case, the Hausdorff dimension still acts as a change point between measures of zero and infinity.
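A standard worked example (a sketch, not from the source) makes the change-point behaviour concrete. For the middle-thirds Cantor set \(C\), covering \(C\) by the \(2^n\) intervals of length \(3^{-n}\) that survive at stage \(n\) shows why \(d = \log 2 / \log 3\) is the critical exponent:

\[
H^{d}_{3^{-n}}(C) \;\le\; 2^n \left(3^{-n}\right)^{d} \;=\; 2^n \cdot 2^{-n} \;=\; 1,
\qquad d = \frac{\log 2}{\log 3},
\]

so \(H^d(C) \le 1\); a harder matching lower bound gives \(H^d(C) = 1\), while \(H^s(C) = \infty\) for \(s < d\) and \(H^s(C) = 0\) for \(s > d\).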
Generalizations
In geometric measure theory and related fields, the Minkowski content is often used to measure the size of a subset of a metric measure space. For suitable domains in Euclidean space, the two notions of size coincide, up to overall normalizations depending on conventions. More precisely, a subset of ℝⁿ is said to be m-rectifiable if it is the image of a bounded set in ℝᵐ under a Lipschitz function. If m < n, then the m-dimensional Minkowski content of a closed m-rectifiable subset of ℝⁿ is equal to 2⁻ᵐ αₘ times the m-dimensional Hausdorff measure.
In fractal geometry, some fractals with Hausdorff dimension have zero or infinite -dimensional Hausdorff measure. For example, almost surely the image of planar Brownian motion has Hausdorff dimension 2 and its two-dimensional Hausdorff measure is zero. In order to "measure" the "size" of such sets, the following variation on the notion of the Hausdorff measure can be considered:
In the definition of the measure, (diam Uᵢ)^d is replaced with h(diam Uᵢ), where h is any monotone increasing function h: [0, ∞) → [0, ∞] satisfying h(0) = 0.
This is the Hausdorff measure of S with gauge function h, or h-Hausdorff measure. A d-dimensional set A may satisfy H^d(A) = 0 or ∞, but 0 < H^h(A) < ∞ with an appropriate h. Examples of gauge functions include
h(t) = t² log log(1/t)  and  h(t) = t² log(1/t) log log log(1/t).
The former gives almost surely positive and σ-finite measure to the Brownian path in ℝⁿ when n ≥ 3, and the latter when n = 2.
See also
Hausdorff dimension
Geometric measure theory
Measure theory
Outer measure
References
External links
Hausdorff dimension at Encyclopedia of Mathematics
Hausdorff measure at Encyclopedia of Mathematics
Fractals
Measures (measure theory)
Metric geometry
Dimension theory | Hausdorff measure | [
"Physics",
"Mathematics"
] | 1,125 | [
"Functions and mappings",
"Mathematical analysis",
"Physical quantities",
"Measures (measure theory)",
"Quantity",
"Mathematical objects",
"Fractals",
"Size",
"Mathematical relations"
] |
849,543 | https://en.wikipedia.org/wiki/Darcy%27s%20law | Darcy's law is an equation that describes the flow of a fluid through a porous medium and through a Hele-Shaw cell. The law was formulated by Henry Darcy based on results of experiments on the flow of water through beds of sand, forming the basis of hydrogeology, a branch of earth sciences. It is analogous to Ohm's law in electrostatics, linearly relating the volume flow rate of the fluid to the hydraulic head difference (which is often just proportional to the pressure difference) via the hydraulic conductivity. In fact, Darcy's law is a special case of the Stokes equation for the momentum flux, in turn deriving from the momentum Navier–Stokes equation.
Background
Darcy's law was first determined experimentally by Darcy, but has since been derived from the Navier–Stokes equations via homogenization methods. It is analogous to Fourier's law in the field of heat conduction, Ohm's law in the field of electrical networks, and Fick's law in diffusion theory.
One application of Darcy's law is in the analysis of water flow through an aquifer; Darcy's law along with the equation of conservation of mass simplifies to the groundwater flow equation, one of the basic relationships of hydrogeology.
Morris Muskat first refined Darcy's equation for a single-phase flow by including viscosity in the single (fluid) phase equation of Darcy. It can be understood that viscous fluids have more difficulty permeating through a porous medium than less viscous fluids. This change made it suitable for researchers in the petroleum industry. Based on experimental results by his colleagues Wyckoff and Botset, Muskat and Meres also generalized Darcy's law to cover a multiphase flow of water, oil and gas in the porous medium of a petroleum reservoir. The generalized multiphase flow equations by Muskat and others provide the analytical foundation for reservoir engineering that exists to this day.
Description
In the integral form, Darcy's law, as refined by Morris Muskat, in the absence of gravitational forces and in a homogeneously permeable medium, is given by a simple proportionality relationship between the volumetric flow rate Q and the pressure drop Δp through a porous medium. The proportionality constant is linked to the permeability k of the medium, the dynamic viscosity of the fluid μ, the given distance L over which the pressure drop is computed, and the cross-sectional area A, in the form:
Q = (k A / μ) (Δp / L).
Note that the ratio:
R = μ L / (k A)
can be defined as the Darcy's law hydraulic resistance.
Darcy's law can be generalised to a local form:
q = −(k/μ) ∇p,
or, in terms of the hydraulic head h, q = −K ∇h, where ∇h is the hydraulic gradient and q is the volumetric flux, which here is also called superficial velocity.
Note that the ratio:
K = k ρ g / μ
can be thought of as the Darcy's law hydraulic conductivity.
In the (less general) integral form, the volumetric flux and the pressure gradient correspond to the ratios:
q = Q / A,  ∇p ↔ Δp / L.
In case of an anisotropic porous media, the permeability is a second order tensor, and in tensor notation one can write the more general law:
qᵢ = −(kᵢⱼ / μ) ∂p/∂xⱼ.
Notice that the quantity q, often referred to as the Darcy flux or Darcy velocity, is not the velocity at which the fluid is travelling through the pores. It is the specific discharge, or flux per unit area. The flow velocity (v) is related to the flux (q) by the porosity (φ) with the following equation:
v = q / φ.
Darcy's constitutive equation, for single phase (fluid) flow, is the defining equation for absolute permeability (single phase permeability).
With reference to the diagram to the right, the flow velocity v is in SI units m/s, and since the porosity φ is a nondimensional number, the Darcy flux q, or discharge per unit area, is also defined in units m/s; the permeability k in units m², the dynamic viscosity μ in units Pa·s, and the hydraulic gradient ∇p in units Pa/m.
In the integral form, the total pressure drop Δp is in units Pa, and L is the length of the sample in units m, the Darcy's volumetric flow rate Q, or discharge, is also defined in units m³/s and the cross-sectional area A in units m². A number of these parameters are used in alternative definitions below. A negative sign is used in the definition of the flux following the standard physics convention that fluids flow from regions of high pressure to regions of low pressure. Note that the elevation head must be taken into account if the inlet and outlet are at different elevations. If the change in pressure is negative, then the flow will be in the positive direction. There have been several proposals for a constitutive equation for absolute permeability, and the most famous one is probably the Kozeny equation (also called Kozeny–Carman equation).
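As a concrete illustration of the integral form, a minimal sketch (not from the source; the sample values are arbitrary but plausible for a coarse sand core):

```python
def darcy_flow_rate(k_m2: float, area_m2: float, dp_pa: float,
                    mu_pa_s: float, length_m: float) -> float:
    """Volumetric flow rate Q = k*A*dp / (mu*L) through a homogeneous sample."""
    return k_m2 * area_m2 * dp_pa / (mu_pa_s * length_m)

# A 0.1 m long, 0.01 m^2 sand core with k = 1e-10 m^2 under 10 kPa of water
Q = darcy_flow_rate(k_m2=1e-10, area_m2=0.01, dp_pa=1e4,
                    mu_pa_s=1e-3, length_m=0.1)
print(f"Q = {Q:.3e} m^3/s")  # 1e-10 * 0.01 * 1e4 / (1e-3 * 0.1) = 1.000e-04 m^3/s
```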
By considering the relation for static fluid pressure (Stevin's law):
p = ρ g h,
one can recast the integral form also into the equation:
Q = (k ρ g / μ) (A Δh / L) = (k g / ν) (A Δh / L),
where ν is the kinematic viscosity, ν = μ/ρ.
The corresponding hydraulic conductivity is therefore:
K = k ρ g / μ = k g / ν.
Darcy's law is a simple mathematical statement which neatly summarizes several familiar properties that groundwater flowing in aquifers exhibits, including:
if there is no pressure gradient over a distance, no flow occurs (these are hydrostatic conditions),
if there is a pressure gradient, flow will occur from high pressure towards low pressure (opposite the direction of increasing gradient — hence the negative sign in Darcy's law),
the greater the pressure gradient (through the same formation material), the greater the discharge rate, and
the discharge rate of fluid will often be different — through different formation materials (or even through the same material, in a different direction) — even if the same pressure gradient exists in both cases.
A graphical illustration of the use of the steady-state groundwater flow equation (based on Darcy's law and the conservation of mass) is in the construction of flownets, to quantify the amount of groundwater flowing under a dam.
Darcy's law is only valid for slow, viscous flow; however, most groundwater flow cases fall in this category. Typically any flow with a Reynolds number less than one is clearly laminar, and it would be valid to apply Darcy's law. Experimental tests have shown that flow regimes with Reynolds numbers up to 10 may still be Darcian, as in the case of groundwater flow. The Reynolds number (a dimensionless parameter) for porous media flow is typically expressed as
Re = q d₃₀ / ν,
where ν is the kinematic viscosity of water, q is the specific discharge (not the pore velocity — with units of length per time), and d₃₀ is a representative grain diameter for the porous media (the standard choice is d₃₀, which is the 30% passing size from a grain size analysis using sieves — with units of length).
Derivation
For stationary, creeping, incompressible flow, i.e. ρ Du/Dt ≈ 0, the Navier–Stokes equation simplifies to the Stokes equation, which by neglecting the bulk term is:
μ ∇² uᵢ − ∂ᵢ p = 0,
where μ is the viscosity, uᵢ is the velocity in the i direction, and p is the pressure. Assuming the viscous resisting force is linear with the velocity we may write:
−(k⁻¹)ᵢⱼ μ φ uⱼ − ∂ᵢ p = 0,
where φ is the porosity, and kᵢⱼ is the second order permeability tensor. This gives the velocity in the i direction,
uᵢ = −(kᵢⱼ / (φ μ)) ∂ⱼ p,
which gives Darcy's law for the volumetric flux density in the i direction,
qᵢ = φ uᵢ = −(kᵢⱼ / μ) ∂ⱼ p.
In isotropic porous media the off-diagonal elements in the permeability tensor are zero, kᵢⱼ = 0 for i ≠ j, and the diagonal elements are identical, kᵢᵢ = k, and the common form is obtained as below, which enables the determination of the liquid flow velocity by solving a set of equations in a given region:
q = −(k/μ) ∇p.
The above equation is a governing equation for single-phase fluid flow in a porous medium.
Use in petroleum engineering
Another derivation of Darcy's law is used extensively in petroleum engineering to determine the flow through permeable media — the most simple of which is for a one-dimensional, homogeneous rock formation with a single fluid phase and constant fluid viscosity.
Almost all oil reservoirs have a water zone below the oil leg, and some also have a gas cap above the oil leg. When the reservoir pressure drops due to oil production, water flows into the oil zone from below, and gas flows into the oil zone from above (if the gas cap exists), and we get a simultaneous flow and immiscible mixing of all fluid phases in the oil zone. The oil field operator may also inject water (or gas) to improve oil production. The petroleum industry is, therefore, using a generalized Darcy equation for multiphase flow developed by Muskat et alios. Because Darcy's name is so widespread and strongly associated with flow in porous media, the multiphase equation is denoted Darcy's law for multiphase flow or generalized Darcy equation (or law) or simply Darcy's equation (or law) or flow equation if the context says that the text is discussing the multiphase equation of Muskat. Multiphase flow in oil and gas reservoirs is a comprehensive topic, and one of many articles about this topic is Darcy's law for multiphase flow.
Use in coffee brewing
A number of papers have utilized Darcy's law to model the physics of brewing in a moka pot, specifically how the hot water percolates through the coffee grinds under pressure, starting with a 2001 paper by Varlamov and Balestrino, and continuing with a 2007 paper by Gianino, a 2008 paper by Navarini et al., and a 2008 paper by W. King. The papers will either take the coffee permeability to be constant as a simplification or will measure change through the brewing process.
Additional forms
Differential expression
Darcy's law can be expressed very generally as:
q = −K ∇h,
where q is the volume flux vector of the fluid at a particular point in the medium, h is the total hydraulic head, and K is the hydraulic conductivity tensor, at that point. The hydraulic conductivity can often be approximated as a scalar. (Note the analogy to Ohm's law in electrostatics. The flux vector is analogous to the current density, head is analogous to voltage, and hydraulic conductivity is analogous to electrical conductivity.)
Quadratic law
For flows in porous media with Reynolds numbers greater than about 1 to 10, inertial effects can also become significant. Sometimes an inertial term is added to Darcy's equation, known as the Forchheimer term. This term is able to account for the non-linear behavior of the pressure difference vs flow data:
−∂p/∂x = (μ/k) q + (ρ/k₁) q²,
where the additional term, with k₁ known as the inertial permeability, in units of length m, accounts for the inertial losses (ρ being the fluid density).
The flow in the middle of a sandstone reservoir is so slow that Forchheimer's equation is usually not needed, but the gas flow into a gas production well may be high enough to justify using it. In this case, the inflow performance calculations for the well, not the grid cell of the 3D model, are based on the Forchheimer equation. The effect of this is that an additional rate-dependent skin appears in the inflow performance formula.
Some carbonate reservoirs have many fractures, and Darcy's equation for multiphase flow is generalized in order to govern both flow in fractures and flow in the matrix (i.e. the traditional porous rock). The irregular surface of the fracture walls and high flow rate in the fractures may justify the use of Forchheimer's equation.
Correction for gases in fine media (Knudsen diffusion or Klinkenberg effect)
For gas flow in small characteristic dimensions (e.g., very fine sand, nanoporous structures etc.), the particle-wall interactions become more frequent, giving rise to additional wall friction (Knudsen friction). For a flow in this region, where both viscous and Knudsen friction are present, a new formulation needs to be used. Knudsen presented a semi-empirical model for flow in transition regime based on his experiments on small capillaries. For a porous medium, the Knudsen equation can be given as
where is the molar flux, is the gas constant, is the temperature, is the effective Knudsen diffusivity of the porous media. The model can also be derived from the first-principle-based binary friction model (BFM). The differential equation of transition flow in porous media based on BFM is given as
This equation is valid for capillaries as well as porous media. The terminology of the Knudsen effect and Knudsen diffusivity is more common in mechanical and chemical engineering. In geological and petrochemical engineering, this effect is known as the Klinkenberg effect. Using the definition of molar flux, the above equation can be rewritten as
This equation can be rearranged into the following equation
Comparing this equation with conventional Darcy's law, a new formulation can be given as
where
This is equivalent to the effective permeability formulation proposed by Klinkenberg:
k_eff = k (1 + b/p),
where b is known as the Klinkenberg parameter, which depends on the gas and the porous medium structure. This is quite evident if we compare the above formulations. The Klinkenberg parameter b is dependent on permeability, Knudsen diffusivity and viscosity (i.e., both gas and porous medium properties).
Darcy's law for short time scales
For very short time scales, a time derivative of flux may be added to Darcy's law, which results in valid solutions at very small times (in heat transfer, this is called the modified form of Fourier's law),
τ ∂q/∂t + q = −K ∇h,
where τ is a very small time constant which causes this equation to reduce to the normal form of Darcy's law at "normal" times (> nanoseconds). The main reason for doing this is that the regular groundwater flow equation (diffusion equation) leads to singularities at constant head boundaries at very small times. This form is more mathematically rigorous but leads to a hyperbolic groundwater flow equation, which is more difficult to solve and is only useful at very small times, typically out of the realm of practical use.
Brinkman form of Darcy's law
Another extension to the traditional form of Darcy's law is the Brinkman term, which is used to account for transitional flow between boundaries (introduced by Brinkman in 1949),
∇p = −(μ/k) q + μₑ ∇² q,
where μₑ is an effective viscosity term. This correction term accounts for flow through a medium where the grains of the media are porous themselves, but is difficult to use, and is typically neglected.
Validity of Darcy's law
Darcy's law is valid for laminar flow through sediments. In fine-grained sediments, the dimensions of interstices are small; thus, the flow is laminar. Coarse-grained sediments also behave similarly, but in very coarse-grained sediments, the flow may be turbulent. Hence Darcy's law is not always valid in such sediments.
For flow through commercial circular pipes, the flow is laminar when the Reynolds number is less than 2000 and turbulent when it is more than 4000, but in some sediments, it has been found that flow is laminar when the value of the Reynolds number is less than 1.
See also
The darcy, a unit of fluid permeability
Hydrogeology
Groundwater flow equation
Mathematical model
Black-oil equations
Fick's law
Ergun equation
References
Water
Civil engineering
Soil mechanics
Soil physics
Hydrology
Transport phenomena | Darcy's law | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,163 | [
"Transport phenomena",
"Physical phenomena",
"Hydrology",
"Applied and interdisciplinary physics",
"Chemical engineering",
"Soil mechanics",
"Soil physics",
"Construction",
"Civil engineering",
"Environmental engineering",
"Water"
] |
849,843 | https://en.wikipedia.org/wiki/Nano-RAM | Nano-RAM is a proprietary computer memory technology from the company Nantero. It is a type of nonvolatile random-access memory based on the position of carbon nanotubes deposited on a chip-like substrate. In theory, the small size of the nanotubes allows for very high density memories. Nantero also refers to it as NRAM.
Technology
The first generation Nantero NRAM technology was based on a three-terminal semiconductor device where a third terminal is used to switch the memory cell between memory states. The second generation NRAM technology is based on a two-terminal memory cell. The two-terminal cell has advantages such as a smaller cell size, better scalability to sub-20 nm nodes (see semiconductor device fabrication), and the ability to passivate the memory cell during fabrication.
In a non-woven fabric matrix of carbon nanotubes (CNTs), crossed nanotubes can either be touching or slightly separated depending on their position. When touching, the carbon nanotubes are held together by Van der Waals forces. Each NRAM "cell" consists of an interlinked network of CNTs located between two electrodes as illustrated in Figure 1. The CNT fabric, located between two metal electrodes, is defined and etched by photolithography to form the NRAM cell.
The NRAM acts as a resistive non-volatile random-access memory (RAM) and can be placed in two or more resistive modes depending on the resistive state of the CNT fabric. When the CNTs are not in contact the resistance state of the fabric is high and represents an "off" or "0" state. When the CNTs are brought into contact, the resistance state of the fabric is low and represents an "on" or "1" state. NRAM acts as a memory because the two resistive states are very stable. In the 0 state, the CNTs (or a portion of them) are not in contact and remain in a separated state due to the stiffness of the CNTs resulting in a high resistance or low current measurement state between the top and bottom electrodes. In the 1 state, the CNTs (or a portion of them) are in contact and remain contacted due to Van der Waals forces between the CNTs, resulting in a low resistance or high current measurement state between the top and bottom electrodes. Note that other sources of resistance such as contact resistance between electrode and CNT can be significant and also need to be considered.
To switch the NRAM between states, a small voltage greater than the read voltage is applied between top and bottom electrodes. If the NRAM is in the 0 state, the voltage applied will cause an electrostatic attraction between the CNTs close to each other causing a SET operation. After the applied voltage is removed, the CNTs remain in a 1 or low resistance state due to physical adhesion (Van der Waals force) with an activation energy (Ea) of approximately 5eV. If the NRAM cell is in the 1 state, applying a voltage greater than the read voltage will generate CNT phonon excitations with sufficient energy to separate the CNT junctions. This is the phonon driven RESET operation. The CNTs remain in the OFF or high resistance state due to the high mechanical stiffness (Young's Modulus 1 TPa) with an activation energy (Ea) much greater than 5 eV. Figure 2 illustrates both states of an individual pair of CNTs involved in the switch operation. Due to the high activation energy (> 5eV) required for switching between states, the NRAM switch resists outside interference like radiation and operating temperature that can erase or flip conventional memories like DRAM.
NRAMs are fabricated by depositing a uniform layer of CNTs onto a prefabricated array of drivers such as transistors as shown in Figure 1. The bottom electrode of the NRAM cell is in contact with the underlying via (electronics) connecting the cell to the driver. The bottom electrode may be fabricated as part of the underlying via or it may be fabricated simultaneously with the NRAM cell, when the cell is photolithographically defined and etched. Before the cell is photolithographically defined and etched, the top electrode is deposited as a metal film onto the CNT layer so that the top metal electrode is patterned and etched during the definition of the NRAM cell. Following the dielectric passivation and fill of the array, the top metal electrode is exposed by etching back the overlying dielectric using a smoothing process such as chemical-mechanical planarization. With the top electrode exposed, the next level of metal wiring interconnect is fabricated to complete the NRAM array. Figure 3 illustrates one circuit method to select a single cell for writing and reading. Using a cross-grid interconnect arrangement, the NRAM and driver, (the cell), forms a memory array similar to other memory arrays. A single cell can be selected by applying the proper voltages to the word line (WL), bit line (BL), and select lines (SL) without disturbing the other cells in the array. Alternatively between the bottom electrode and top metal layer they may be two layers of CNTs: one with uniformly arranged CNTs, and another with randomly arranged CNTs. The uniformly arranged CNTs are used to protect the randomly arranged CNTs from the top metal layer.
Characteristics
NRAM has a density, at least in theory, similar to that of DRAM. DRAM includes capacitors, which are essentially two small metal plates with a thin insulator between them. NRAM has terminals and electrodes roughly the same size as the plates in a DRAM, the nanotubes between them being so much smaller they add nothing to the overall size. However it seems there is a minimum size at which a DRAM can be built, below which there is simply not enough charge being stored on the plates. NRAM appears to be limited only by lithography. This means that NRAM may be able to become much denser than DRAM, perhaps also less expensive. Unlike DRAM, NRAM does not require power to "refresh" it, and will retain its memory even after power is removed. Thus the power needed to write and retain the memory state of the device is much lower than DRAM, which has to build up charge on the cell plates. This means that NRAM might compete with DRAM in terms of cost, but also require less power, and as a result also be much faster because write performance is largely determined by the total charge needed. NRAM can theoretically reach performance similar to SRAM, which is faster than DRAM but much less dense, and thus much more expensive.
Comparison with other non-volatile memory
Compared with other non-volatile random-access memory (NVRAM) technologies, NRAM has several advantages. In flash memory, the common form of NVRAM, each cell resembles a MOSFET transistor with a control gate (CG) modulated by a floating gate (FG) interposed between the CG and the FG. The FG is surrounded by an insulating dielectric, typically an oxide. Since the FG is electrically isolated by the surrounding dielectric, any electrons placed on the FG will be trapped on the FG which screens the CG from the channel of the transistor and modifies the threshold voltage (VT) of the transistor. By writing and controlling the amount of charge placed on the FG, the FG controls the conduction state of the MOSFET flash device depending on the VT of the cell selected. The current flowing through the MOSFET channel is sensed to determine the state of the cell forming a binary code where a 1 state (current flow) when an appropriate CG voltage is applied and a 0 state (no current flow) when the CG voltage is applied.
After being written to, the insulator traps electrons on the FG, locking it into the 0 state. However, in order to change that bit, the insulator has to be "overcharged" to erase any charge already stored in it. This requires higher voltage, about 10 volts, much more than a battery can provide. Flash systems include a "charge pump" that slowly builds up power and releases it at higher voltage. This process is not only slow, but degrades the insulators. For this reason flash has a limited number of writes before the device will no longer operate effectively.
NRAM reads and writes are both "low energy" in comparison to flash (or DRAM for that matter due to "refresh"), meaning NRAM could have longer battery life. It may also be much faster to write than either, meaning it may be used to replace both. Modern phones include flash memory for storing phone numbers, DRAM for higher performance working memory because flash is too slow, and some SRAM for even higher performance. Some NRAM could be placed on the CPU to act as the CPU cache, and more in other chips replacing both the DRAM and flash.
NRAM is one of a variety of new memory systems, many of which claim to be "universal" in the same fashion as NRAM – replacing everything from flash to DRAM to SRAM.
An alternative memory ready for use is ferroelectric RAM (FRAM or FeRAM). FeRAM adds a small amount of a ferro-electric material to a DRAM cell. The state of the field in the material encodes the bit in a non-destructive format. FeRAM has advantages of NRAM, although the smallest possible cell size is much larger than for NRAM. FeRAM is used in applications where the limited number of writes of flash is an issue. FeRAM read operations are destructive, requiring a restoring write operation afterwards.
Other more speculative memory systems include magnetoresistive random-access memory (MRAM) and phase-change memory (PRAM). MRAM is based on a grid of magnetic tunnel junctions. MRAM reads the memory using the tunnel magnetoresistance effect, allowing it to read the memory both non-destructively and with very little power. Early MRAM used field-induced writing, which reached a limit in terms of size that kept it much larger than flash devices. However, new MRAM techniques might overcome the size limitation to make MRAM competitive even with flash memory. The techniques are Thermal Assisted Switching (TAS), developed by Crocus Technology, and Spin-transfer torque, on which Crocus, Hynix, IBM, and other companies were working in 2009.
PRAM is based on a technology similar to that in a writable CD or DVD, using a phase-change material that changes its magnetic or electrical properties instead of its optical ones. The PRAM material itself is scalable but requires a larger current source.
History
Nantero was founded in 2001, and headquartered in Woburn, Massachusetts. Due to the massive investment in flash semiconductor fabrication plants, no alternative memory has replaced flash in the marketplace, despite predictions as early as 2003 of the impending speed and density of NRAM.
In 2005, NRAM was promoted as universal memory, and Nantero predicted it would be in production by the end of 2006.
In August 2008, Lockheed Martin acquired an exclusive license for government applications of Nantero's intellectual property.
By early 2009, Nantero had 30 US patents and 47 employees, but was still in the engineering phase. In May 2009, a radiation-resistant version of NRAM was tested on the STS-125 mission of the US Space Shuttle Atlantis.
The company was quiet until another round of funding and collaboration with the Belgian research center imec was announced in November 2012.
Nantero raised a total of over $42 million through the November 2012 series D round.
Investors included Charles River Ventures, Draper Fisher Jurvetson, Globespan Capital Partners, Stata Venture Partners and Harris & Harris Group.
In May 2013, Nantero completed series D with an investment by Schlumberger.
EE Times listed Nantero as one of "10 top startups to watch in 2013".
31 Aug 2016: Two Fujitsu semiconductor businesses are licensing Nantero NRAM technology, with joint Nantero–Fujitsu development to produce chips announced in 2018. The chips are claimed to have several-thousand-times faster rewrites and many thousands of times more rewrite cycles than embedded flash memory. As of 2024, these products have been announced but have not reached the market.
See also
RAM
Magnetoresistive random-access memory
Phase-change memory
Ferroelectric RAM
References
External links
Nantero's NRAM page
Non-volatile random-access memory
Nanomaterials | Nano-RAM | [
"Materials_science"
] | 2,627 | [
"Nanotechnology",
"Nanomaterials"
] |
849,891 | https://en.wikipedia.org/wiki/NLGI%20consistency%20number | The NLGI consistency number or NLGI grade expresses a measure of the relative hardness of a grease used for lubrication, as specified by the standard classification of lubricating grease established by the National Lubricating Grease Institute (NLGI). Reproduced in standards ASTM D4950 (“standard classification and specification of automotive service greases”) and SAE J310 (“automotive lubricating greases”), NLGI's classification is widely used. The NLGI consistency number is also a component of the code specified in standard ISO 6743-9, “lubricants, industrial oils and related products (class L) — classification — part 9: family X (greases)”.
The NLGI consistency number alone is not sufficient for specifying the grease required by a particular application. However, it complements other classifications (such as those of ASTM D4950 and ISO 6743-9). Besides consistency, other properties (such as structural and mechanical stability, apparent viscosity, resistance to oxidation, etc.) can be tested to determine the suitability of a grease to a specific application.
Test method
NLGI's classification defines nine grades, each associated to a range of ASTM worked penetration values, measured using the test defined by standard ASTM D217, “cone penetration of lubricating grease”. This involves two test apparatus. The first apparatus consists of a closed container and a piston-like plunger. The face of the plunger is perforated to allow grease to flow from one side of the plunger to another as the plunger is worked up and down. The test grease is inserted into the container and the plunger is stroked while the test apparatus and grease are maintained at a temperature of 25 °C (77 °F).
Once worked, the grease is placed in a penetration test apparatus. This apparatus consists of a container, a specially-configured cone and a dial indicator. The container is filled with the grease and the top surface of the grease is smoothed over. The cone is placed so that its tip just touches the grease surface and the dial indicator is set to zero at this position. When the test starts, the weight of the cone will cause it to penetrate into the grease. After a specific time interval the depth of penetration is measured.
Classification
The following table shows the NLGI classification and compares each grade with household products of similar consistency.
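A sketch of the commonly published grade boundaries follows, assuming the usual ASTM D217 worked-penetration ranges at 25 °C (in tenths of a millimetre); the household analogies are customary approximations and may differ from the source table:

NLGI grade | ASTM worked penetration (0.1 mm) | Consistency (approximate household analogy)
000 | 445–475 | fluid, like cooking oil
00 | 400–430 | semi-fluid, like apple sauce
0 | 355–385 | very soft, like brown mustard
1 | 310–340 | soft, like tomato paste
2 | 265–295 | "normal" grease, like peanut butter
3 | 220–250 | firm, like vegetable shortening
4 | 175–205 | very firm, like frozen yogurt
5 | 130–160 | hard, like smooth pâté
6 | 85–115 | very hard, like cheddar cheese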
Common greases are in the range 1 through 3. Those with an NLGI No. of 000 to 1 are used in low viscosity applications. Examples include enclosed gear drives operating at low speeds and open gearing. Grades 0, 1 and 2 are used in highly loaded gearing. Grades 1 through 4 are often used in rolling contact bearings. Greases with a higher number are firmer, tend to stay in place and are a good choice when leakage is a concern.
References
Lubricants
Tribology
ASTM standards
Automotive standards | NLGI consistency number | [
"Chemistry",
"Materials_science",
"Engineering"
] | 565 | [
"Tribology",
"Mechanical engineering",
"Materials science",
"Surface science"
] |
850,009 | https://en.wikipedia.org/wiki/Micrographia | Micrographia: or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses. With Observations and Inquiries Thereupon is a historically significant book by Robert Hooke about his observations through various lenses. It was the first book to include illustrations of insects and plants as seen through microscopes.
Published in January 1665, the first major publication of the Royal Society, it became the first scientific best-seller, inspiring a wide public interest in the new science of microscopy. The book originated the biological term cell.
Observations
Hooke most famously describes a fly's eye and a plant cell (where he coined that term because plant cells, which are walled, reminded him of the cells in a honeycomb). Known for its spectacular copperplate engravings of the miniature world, particularly its fold-out plates of insects, the text itself reinforces the tremendous power of the new microscope. The plates of insects fold out to be larger than the large folio itself, the engraving of the louse in particular folding out to four times the size of the book. Although the book is best known for demonstrating the power of the microscope, Micrographia also describes distant planetary bodies, the wave theory of light, the organic origin of fossils, and other philosophical and scientific interests of its author.
Hooke also selected several objects of human origin; among these objects were the jagged edge of a honed razor and the point of a needle, seeming blunt under the microscope. His goal may well have been to contrast the flawed products of mankind with the perfection of nature (and hence, in the spirit of the times, of biblical creation).
Reception
Published under the aegis of the Royal Society, the popularity of the book helped further the society's image and mission of being England's leading scientific organization. Micrographia illustrations of the miniature world captured the public's imagination in a radically new way; Samuel Pepys called it "the most ingenious book that ever I read in my life".
Methods
In 2007, Janice Neri, a professor of art history and visual culture, studied Hooke's artistic influences and processes with the help of some newly rediscovered notes and drawings that appear to show some of his work leading up to Micrographia. She observes, "Hooke's use of the term "schema" to identify his plates indicates that he approached his images in a diagrammatic manner and implies the study or visual dissection of the objects portrayed." Identifying Hooke's schema as 'organization tools,' she emphasizes:
Additionally: "Hooke often enclosed the objects he presented within a round frame, thus offering viewers an evocation of the experience of looking through the lens of a microscope."
Bibliography
Robert Hooke. Micrographia: or, Some physiological descriptions of minute bodies made by magnifying glasses. London: J. Martyn and J. Allestry, 1665. (first edition).
References
External links
Engraved copperplate illustrations from a first edition of Micrographia: or Some physiological descriptions of minute bodies made by magnifying glasses. With observations and inquiries thereupon (all images freely available for download in a variety of formats from the Science History Institute's Digital Collections)
Project Gutenberg Micrographia text
Turning the Pages - virtual copy of the book from the National Library of Medicine
Micrographia - full digital facsimile at Linda Hall Library
Transcribing the Hooke Folio
English non-fiction literature
Biology books
1665 books
Microscopes
Microscopy
Cell imaging
Royal Society
1665 in science | Micrographia | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 716 | [
"Microscopes",
"Cell imaging",
"Measuring instruments",
"Microscopy"
] |
850,048 | https://en.wikipedia.org/wiki/Gas%20lighting | Gas lighting is the production of artificial light from combustion of a fuel gas such as methane, propane, butane, acetylene, ethylene, hydrogen, carbon monoxide, coal gas (town gas) or natural gas. The light is produced either directly by the flame, generally by using special mixes (typically propane or butane) of illuminating gas to increase brightness, or indirectly with other components such as the gas mantle or the limelight, with the gas primarily functioning to heat the mantle or the lime to incandescence.
Before electricity became sufficiently widespread and economical to allow for general public use, gas lighting was prevalent for outdoor and indoor use in cities and suburbs where the infrastructure for distribution of gas was practical. At that time, the most common fuels for gas lighting were wood gas, coal gas and, in limited cases, water gas. Early gas lights were ignited manually by lamplighters, although many later designs are self-igniting.
Gas lighting now is frequently used for camping, for which the high energy density of the hydrocarbon fuel, and the modular canisters on which camping lights are built, brings bright and long lasting light without complex equipment. In addition, some urban historical districts retain gas street lighting, and gas lighting is used indoors or outdoors to create or preserve a nostalgic effect.
History of gas lighting
Background
Prior to use of gaseous fuels for lighting, the early lighting fuels consisted of olive oil, beeswax, fish oil, whale oil, sesame oil, nut oil, or other similar substances, which were all liquid fuels. These were the most commonly used fuels until the late 18th century. Whale oil was especially widely used for lighting in European cities such as London through the early 19th century.
Chinese records dating back 1,700 years indicate the use of natural gas in homes for lighting and heating. The natural gas was transported by means of bamboo pipes to homes. The ancient Chinese of the Spring and Autumn period made the first practical use of natural gas for lighting purposes around 500 B.C. in which they used bamboo pipelines to transport both brine and natural gas for many miles, such as the ones in Zigong salt mines.
Public illumination preceded by centuries the development and widespread adoption of gas lighting. In 1417, Sir Henry Barton, Lord Mayor of London, ordained "Lanthornes with lights to bee hanged out on the Winter evening betwixt Hallowtide and Candlemassee." Paris was first illuminated by an order issued in 1524, and, in the beginning of the 16th century, the inhabitants were ordered to keep lights burning in the windows of all houses that faced streets. In 1668, when some regulations were made for improving the streets of London, the residents were reminded to hang out their lanterns at the usual time, and, in 1690, an order was issued to hang out a light, or lamp, every night at nightfall, from Michaelmas to Christmas. By an Act of the Common Council in 1716, all housekeepers, whose houses faced any street, lane, or passage, were required to hang out, every dark night, one or more lights, to burn from six to eleven o'clock, under the penalty of one shilling as a fine for failing to do so.
Accumulating and escaping gases were known originally among coal miners for their adverse effects rather than their useful characteristics. Coal miners described two types of gases, one called the choke damp and the other fire damp. In 1667, a paper detailing the effects of these gases was entitled, "A Description of a Well and Earth in Lancashire taking Fire, by a Candle approaching to it. Imparted by Thomas Shirley, Esq an eye-witness."
British clergyman and scientist Stephen Hales experimented with the actual distillation of coal, thereby obtaining a flammable liquid. He reported his results in the first volume of his Vegetable Statics, published in 1726. From the distillation of "one hundred and fifty-eight grains [10.2 g] of Newcastle coal, he stated that he obtained 180 cubic inches [2.9 L] of gas, which weighed 51 grains [3.3 g], being nearly one third of the whole." Hales's results garnered attention decades later as the unique chemical properties of various gases became understood through the work of Joseph Black, Henry Cavendish, Alessandro Volta, and others.
A 1733 publication by Sir James Lowther in the Philosophical Transactions of the Royal Society detailed some properties of coal gas, including its flammability. Lowther demonstrated the principal properties of coal gas to different members of the Royal Society. He showed that the gas retained its flammability after storage for some time. The demonstration did not result in identification of utility.
Minister and experimentalist John Clayton referred to coal gas as the "spirit" of coal. He discovered its flammability by an accident. The "spirit" he isolated from coal caught fire by coming in contact with a candle as it escaped from a fracture in one of his distillation vessels. He stored the coal gas in bladders, and at times he entertained his friends by demonstrating the flammability of the gas. Clayton published his findings in Philosophical Transactions.
Early technology
It took nearly 200 years for gas to become accessible for commercial use. A Flemish alchemist, Jan Baptista van Helmont, was the first person to formally recognize gas as a state of matter. He would go on to identify several types of gases, including carbon dioxide. Over one hundred years later in 1733, Sir James Lowther had some of his miners working on a water pit for his mine. While digging the pit they hit a pocket of gas. Lowther took a sample of the gas and took it home to do some experiments. He noted, "The said air being put into a bladder … and tied close, may be carried away, and kept some days, and being afterwards pressed gently through a small pipe into the flame of a candle, will take fire, and burn at the end of the pipe as long as the bladder is gently pressed to feed the flame, and when taken from the candle after it is so lighted, it will continue burning till there is no more air left in the bladder to supply the flame." Lowther had basically discovered the principle behind gas lighting.
Later in the 18th century William Murdoch (sometimes spelled "Murdock") stated: "the gas obtained by distillation from coal, peat, wood and other inflammable substances burnt with great brilliancy upon being set fire to … by conducting it through tubes, it might be employed as an economical substitute for lamps and candles." Murdoch's first invention was a lantern with a gas-filled bladder attached to a jet. He would use this to walk home at night. After seeing how well this worked he decided to light his home with gas. In 1797, Murdoch installed gas lighting in his new home as well as the workshop in which he worked. “This work was of a large scale, and he next experimented to find better ways of producing, purifying, and burning the gas.” The foundation had been laid for companies to start producing gas and other inventors to start playing with ways of using the new technology.
Murdoch was the first to exploit the flammability of gas for the practical application of lighting. He worked for Matthew Boulton and James Watt at their Soho Foundry steam engine works in Birmingham, England. In the early 1790s, while overseeing the use of his company's steam engines in tin mining in Cornwall, Murdoch began experimenting with various types of gas, finally settling on coal gas as the most effective. He first lit his own house in Redruth, Cornwall in 1792. In 1798, he used gas to light the main building of the Soho Foundry and in 1802 lit the outside in a public display of gas lighting, the lights astonishing the local population. One of the employees at the Soho Foundry, Samuel Clegg, saw the potential of this new form of lighting. Clegg left his job to set up his own gas lighting business, the Gas Light and Coke Company.
A "thermolampe" using gas distilled from wood was patented in 1799, while German inventor Friedrich Winzer (Frederick Albert Winsor) was the first person to patent coal-gas lighting in 1804.
In 1801, Philippe Lebon of Paris had also used gas lights to illuminate his house and gardens, and was considering how to light all of Paris. In 1820, Paris adopted gas street lighting.
In 1804, Dr Henry delivered a course of lectures on chemistry, at Manchester, in which he showed the mode of producing gas from coal, and the facility and advantage of its use. Dr Henry analysed the composition and investigated the properties of carburetted hydrogen gas (i.e. methane). His experiments were numerous and accurate and made upon a variety of substances; having obtained the gas from wood, peat, different kinds of coal, oil, wax, etc., he quantified the intensity of the light from each source.
In 1806, the Philips and Lee factory and a portion of Chapel Street in Salford, Lancashire were lit by gas, thought to be the first use of gas street lighting in the world.
Josiah Pemberton, an inventor, had for some time been experimenting on the nature of gas. A resident of Birmingham, his attention may have been roused by the exhibition at Soho. About 1806, he exhibited gas lights in a variety of forms and with great brilliance at the front of his factory in Birmingham. In 1808 he constructed an apparatus, applicable for several uses, for Benjamin Cooke, a manufacturer of brass tubes, gilt toys, and other articles.
In 1808, Murdoch presented to the Royal Society a paper entitled "Account of the Application of Gas from Coal to Economical Purposes" in which he described his successful application of coal gas to light the extensive establishment of Messrs. Phillips and Lea. For this paper he was awarded Count Rumford's gold medal. Murdoch's statements threw great light on the comparative advantage of gas and candles, and contained much useful information on the expenses of production and management.
Although the history is uncertain, David Melville has been credited with the first house and street lighting in the United States, in either 1805 or 1806 in Newport, Rhode Island.
In 1809, the first application was made to Parliament to incorporate a company in order to accelerate the process, but the bill failed to pass. In 1810, however, the application was renewed by the same parties, and though some opposition was encountered and considerable expense incurred, the bill passed, but not without great alterations; and the London and Westminster Gas Light and Coke Company was established. On 31 December 1813, Westminster Bridge was lit by gas.
By 1816, Samuel Clegg obtained the patent for his horizontal rotative retort, his apparatus for purifying coal gas with cream of lime, and for his rotative gas meter and self-acting governor.
Widespread use
Among the economic impacts of gas lighting were much longer working hours in factories. This was particularly important in Great Britain during the winter months, when nights are significantly longer. Factories could even work continuously over 24 hours, resulting in increased production. Following successful commercialization, gas lighting spread to other countries.
In England, the first place outside London to have gas lighting was Preston, Lancashire, in 1816; this was due to the Preston Gaslight Company run by Joseph Dunn, who devised an improved method of producing a brighter gas light. The parish church there was the first religious building to be lit by gas lighting.
In Bristol, a Gas Light Company was founded on 15 December 1815. Under the supervision of the engineer John Brelliat, extensive works were conducted in 1816–17 to build a gasholder, mains and street lights. Many of the principal streets in the centre of the city, as well as nearby houses, had switched to gas lighting by the end of 1817.
In America, Seth Bemis lit his factory with gas illumination from 1812 to 1813. The use of gas lights in Rembrandt Peale's Museum in Baltimore in 1816 was a great success. Baltimore was the first American city with gas street lights; Peale's Gas Light Company of Baltimore on 7 February 1817 lit its first street lamp at Market and Lemon Streets (currently Baltimore and Holliday Streets). The first private residence in the US illuminated by gas has been variously identified as that of David Melville (c. 1806), as described above, or of William Henry, a coppersmith, at 200 Lombard Street, Philadelphia, Pennsylvania, in 1816.
In 1817, at the three stations of the Chartered Gas Company in London, 25 chaldrons (24 m3) of coal were carbonized daily, producing 300,000 cubic feet (8,500 m3) of gas. This supplied gas lamps equal to 75,000 Argand lamps each yielding the light of six candles. At the City Gas Works, in Dorset Street, Blackfriars, three chaldrons of coal were carbonized each day, providing the gas equivalent of 9,000 Argand lamps. Thus 28 chaldrons of coal were carbonized daily, and 84,000 lights supplied, by those two companies alone.
At this period the principal difficulty in gas manufacture was purification. Mr. D. Wilson, of Dublin, patented a method for purifying coal gas by means of the chemical action of ammoniacal gas. Another plan was devised by Reuben Phillips, of Exeter, who patented the purification of coal gas by the use of dry lime. G. Holworthy, in 1818, patented a method of purifying it by passing the gas, in a highly condensed state, through iron retorts heated to a dark red.
By 1820, the Swedish inventor Johan Patrik Ljungström had developed a gas lighting installation with copper apparatus and chandeliers of brass and crystal, reportedly one of the first public installations of gas lighting in the region; it adorned a triumphal arch at the city gate for a royal visit of Charles XIV John of Sweden in 1820.
By 1823, numerous towns and cities throughout Britain were lit by gas. Gas light cost up to 75% less than oil lamps or candles, which helped to accelerate its development and deployment. By 1859, gas lighting was to be found all over Britain and about a thousand gas works had sprung up to meet the demand for the new fuel. The brighter lighting which gas provided allowed people to read more easily and for longer. This helped to stimulate literacy and learning, speeding up the second Industrial Revolution.
In 1824 the English Association for Gas Lighting on the Continent, a sizeable business producing gas for several cities in mainland Europe, including Berlin, was established, with Sir William Congreve, 2nd Baronet as general manager.
The Bude-Light, invented in 1839, provided a brighter and more economical lamp.
Oil-gas appeared in the field as a rival of coal gas. In 1815, John Taylor patented an apparatus for the decomposition of "oil" and other animal substances. Public attention was attracted to "oil-gas" by the display of the patent apparatus at Apothecary's Hall, by Taylor & Martineau.
In 1891 the gas mantle was invented by the Austrian chemist Carl Auer von Welsbach. This eliminated the need for special illuminating gas (a synthetic mixture of hydrogen and hydrocarbon gases produced by destructive distillation of bituminous coal or peat) to get bright shining flames. Acetylene was also used from about 1898 for gas lighting on a smaller scale.
Illuminating gas was used for gas lighting, as it produces a much brighter light than natural gas or water gas. Illuminating gas was much less toxic than other forms of coal gas, but less could be produced from a given quantity of coal. The experiments with distilling coal were described by John Clayton in 1684. George Dixon's pilot plant exploded in 1760, setting back the production of illuminating gas a few years. The first commercial application was in a Manchester cotton mill in 1806. In 1901, studies of the defoliant effect of leaking gas pipes led to the discovery that ethylene is a plant hormone.
Throughout the 19th century and into the first decades of the 20th, the gas was manufactured by the gasification of coal. Later in the 19th century, natural gas began to replace coal gas, first in the US, and then in other parts of the world. In the United Kingdom, coal gas was used until the early 1970s.
Russia
The history of the Russian gas industry began with retired Lieutenant Pyotr Sobolevsky (1782–1841), who improved Philippe Lebon's design for a "thermolamp" and presented it to Emperor Alexander I in 1811; in January 1812, Sobolevsky was instructed to draw up a plan for gas street-lighting for St. Petersburg. The French invasion of Russia delayed implementation, but St. Petersburg's Governor General Mikhail Miloradovich, who had seen the gas lighting of Vienna, Paris and other European cities, initiated experimental work on gas lighting for the capital, using British apparatus for obtaining gas from pit coal, and by the autumn of 1819, Russia's first gas street light was lit on one of the streets on Aptekarsky Island.
In February 1835, the Company for Gas Lighting St. Petersburg was founded; towards the end of that year, a factory for the production of lighting gas was constructed near the Obvodny Canal, using pit coal brought in by ship from Cardiff; and 204 gas lamps were ceremonially lit in St. Petersburg on 27 September 1839.
Over the next 10 years, their numbers almost quadrupled, to reach 800. By the middle of the 19th century, the central streets and buildings of the capital were illuminated: the Palace Square, Bolshaya and Malaya Morskaya streets, Nevsky and Tsarskoselsky Avenues, Passage Arcade, Noblemen's Assembly, the Technical Institute and Peter and Paul Fortress.
Theatrical use
It took many years of development and testing before gas lighting for the stage was commercially available. Gas technology was then installed in just about every major theatre in the world. But gas lighting was short-lived because the electric light bulb soon followed.
In the 19th century, gas stage lighting went from a crude experiment to the most popular way of lighting theatrical stages. In 1804, Frederick Albert Winsor first demonstrated the way to use gas to light the stage in London at the Lyceum Theatre. Although the demonstration and all the leading research were being done in London, "in 1816 at the Chestnut Street Theatre in Philadelphia was the earliest gas lit theatre in world". In 1817 the Lyceum, Drury Lane, and Covent Garden theatres were all lit by gas. Gas was brought into the building by "miles of rubber tubing from outlets in the floor called 'water joints'" which "carried the gas to border-lights and wing lights". But before it was distributed, the gas came through a central distribution point called a "gas table", which varied the brightness by regulating the gas supply and allowed separate control of different parts of the stage. Thus it became the first stage 'switchboard'.
By the 1850s, gas lighting in theatres had spread practically all over the United States and Europe. Some of the largest installations of gas lighting were in large auditoriums, like the Théâtre du Chatelet, built in 1862. In 1875, the new Paris Opera was constructed. "Its lighting system contained more than twenty-eight miles [45 km] of gas piping, and its gas table had no fewer than eighty-eight stopcocks, which controlled nine hundred and sixty gas jets." The theatre that used the most gas lighting was Astley's Equestrian Amphitheatre in London. According to the Illustrated London News, "Everywhere white and gold meets the eye, and about 200,000 gas jets add to the glittering effect of the auditorium … such a blaze of light and splendour has scarcely ever been witnessed, even in dreams."
Theatres switched to gas lighting because it was more economical than using candles and also required less labour to operate. With gas lighting, theatres no longer needed people tending to candles during a performance, or to light each candle individually. "It was easier to light a row of gas jets than a greater quantity of candles high in the air." Theatres also no longer needed to worry about wax dripping on the actors during a show.
Gas lighting also had an effect on the actors. As the stage was brighter, they could now use less make-up and their motions did not have to be as exaggerated. Half-lit stages had become fully lit stages. Production companies were so impressed with the new technology that one said, "This light is perfect for the stage. One can obtain gradation of brightness that is really magical."
The best result was the improved respect from the audience: there was no more shouting or rioting. The light pushed the actors further upstage, behind the proscenium, helping the audience concentrate on the action taking place on stage rather than on what was going on in the house. Management had more authority over what went on during the show because they could now see the audience. Gaslight was a leading cause of this change in theatre behaviour: theatres were no longer places for mingling and orange-selling, but places of respectable entertainment.
Types of lighting instruments
There were six types of burners, of which four were seriously experimented with:
The first burner used was the single-jet burner, which produced a small flame. The tip of the burner was made out of lead, which absorbed heat, causing the flame to be smaller in size. It was discovered that the flame would burn brighter if the metal was mixed with other components, such as porcelain.
Flat burners were invented mainly to distribute the gas, and hence the light, evenly.
The fishtail burner was similar to the flat burner, but it produced a brighter flame and conducted less heat.
The last burner that was experimented with was the Welsbach burner. Around this time the Bunsen burner was in use, along with some forms of electricity. The Welsbach was based on the idea of the Bunsen burner, still using gas. A cotton mesh impregnated with cerium and thorium was embedded in the Welsbach. This source of light was named the gas mantle; it produced three times more light than the naked flame.
Several different instruments were used for stage lighting in the 19th century; these included footlights, border lights, groundrows, lengths, bunch lights, conical reflector floods, and limelight spots. These mechanisms sat directly on the stage, blinding the eyes of the audience.
Footlights caused the actors' costumes to catch fire if they got too close. These lights also caused bothersome heat that affected both audience members and actors. Again, the actors had to adapt to these changes. They started fireproofing their costumes and placing wire mesh in front of the footlights.
Border lights, also known as striplights, were a row of lights that hung horizontally in the flies. Color was added later by dying cotton, wool, and silk cloth.
Lengths were constructed the same way as border lights, but mounted vertically in the rear where the wings were.
Bunch lights were a cluster of burners that sat on a vertical base that was fuelled directly from the gas line.
The conical reflector can be related to the Fresnel lens used today. This adjustable box of light reflected a beam whose size could be altered by a barndoor.
Limelight spots are similar to today's spotlight systems. This instrument was used in scene shops as well as on the stage.
Gas lighting did have some disadvantages. "Several hundred theatres are said to have burned down in America and Europe between 1800 and the introduction of electricity in the late 1800s. The increased heat was objectionable, and the border lights and wing lights had to be lighted by a long stick with a flaming wad of cotton at the end. For many years, an attendant or gas boy moved along the long row of jets, lighting them individually while gas was escaping from the whole row. Both actors and audiences complained of the escaping gas, and explosions sometimes resulted from its accumulation."
These problems with gas lighting led to the rapid adoption of electric lighting. By 1881, the Savoy Theatre in London was using incandescent lighting. While electric lighting was introduced to theatre stages, the gas mantle was developed in 1885 for gas-lit theatres. "This was a beehive-shaped mesh of knitted thread impregnated with lime that, in miniature, converted the naked gas flame into, in effect, a lime-light." Electric lighting slowly took over in theatres. In the 20th century, it enabled better and safer theatre productions, with no smell, relatively very little heat, and more freedom for designers.
Decline
In the early 20th century, most cities in North America and Europe had gaslit streets, and most railway station platforms had gas lights too. However, around 1880 gas lighting for streets and train stations began giving way to high voltage (3,000–6,000 volt) direct current and alternating current arc lighting systems. This time period also saw the development of the first electric power utility designed for indoor use. The new system by inventor Thomas Edison was designed to function similarly to gas lighting. For reasons of safety and simplicity it used direct current (DC) at a relatively low 110 volts to light incandescent light bulbs. Voltage in wires steadily declines as distance increases, and at this low voltage power plants needed to be within about a mile (1.6 km) of the lamps. This voltage drop problem made DC distribution relatively expensive, and gas lighting retained widespread usage, with new buildings sometimes constructed with dual systems of gas piping and electrical wiring connected to each room to diversify the power sources for lighting.
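The distance limit can be made concrete with a rough resistive-loss estimate. The sketch below is illustrative only: the wire cross-section, load current, and distances are assumed values, not historical figures for Edison's network.

```python
# Illustrative estimate of resistive voltage drop in a low-voltage DC
# distribution line. Wire size, load current, and distances are
# assumed for illustration; they are not historical data.

RESISTIVITY_CU = 1.68e-8  # ohm-metres, copper at room temperature

def voltage_drop(distance_m, current_a, wire_area_m2):
    """Round-trip resistive drop along a two-wire feeder."""
    resistance = RESISTIVITY_CU * (2 * distance_m) / wire_area_m2
    return current_a * resistance

for km in (0.5, 1.0, 2.0):
    drop = voltage_drop(km * 1000, current_a=10, wire_area_m2=1e-5)
    print(f"{km:.1f} km feeder: {drop:5.1f} V lost of 110 V")
```

Because the loss grows linearly with distance at a fixed current, a 110 V system quickly becomes uneconomical beyond roughly a kilometre, whereas high-voltage AC transmission moves the same power at far lower current and hence far lower loss.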
The development of new alternating current power transmission systems in the 1880s and 90s by companies such as Ganz and AEG in Europe and Westinghouse Electric and Thomson-Houston in the US solved the voltage and distance problem by using high transmission line voltages, and transformers to drop the voltage for distribution for indoor lighting. Alternating current technology overcame many of the limitations of direct current, enabling the rapid growth of reliable, low-cost electrical power networks which finally spelled the end of widespread usage of gas lighting.
Modern usage
Outdoors
In some cities, gas lighting is preserved or restored as a vintage nostalgic feature to support the historic atmosphere of their historic centres.
In the 20th century, most cities with gas streetlights replaced them with new electric streetlights. For example, Baltimore, the first US city to install gas streetlights, removed nearly all of them. A sole, token gas lamp is located at N. Holliday Street and E. Baltimore Street as a monument to the first gas lamp in America, erected at that location.
However, gas lighting of streets has not disappeared completely from some cities, and the few municipalities that retained gas lighting now find that it provides a pleasing nostalgic effect. Gas lighting is also seeing a resurgence in the luxury home market for those in search of historical authenticity.
The largest gas lighting network in the world is that of Berlin. With about 23,000 lamps (2022), it holds more than half of all working gas street lamps in the world, followed by Düsseldorf with 14,000 lamps (2020), of which at least 10,000 are to be retained.
In London there were about 1,500 working gas street lamps, although there were plans to replace 299 of those in Westminster (the first city in the world lit by gas) with LED lighting by 2023, which sparked public opposition.
In the United States, more than 2,800 gas lights in Boston operate in the historic districts of Beacon Hill, Back Bay, Bay Village, Charlestown, and parts of other neighbourhoods. In Cincinnati, Ohio, more than 1,100 gas lights operate in areas that have been named historic districts. In New Orleans, gas lights operate in parts of the famed French Quarter and outside historic homes throughout the city.
Zagreb, the capital of Croatia, has used gas candelabras since 1863. At one time, Zagreb was illuminated by 60,000 lamps, but as of 1987, only 248 street lamps illuminated the old parts of the city. Zagreb's gas lamps are managed manually by lamplighters in historic uniforms ("nažigači").
Prague, where gas lighting was introduced on 15 September 1847, had about 10,000 gas streetlamps in the 1940s. The last historic gas candelabras were electrified in 1985. However, in 2002–2014, streetlamps along the Royal Route and some other streets in the centre were rebuilt to use gas (using replicas of the historic poles and lanterns), several historic candelabras (Hradčanské náměstí, Loretánská street, Dražického náměstí etc.) were also converted back to gas lamps, and five new gas lamps were installed in the Michle Gasworks as a promotion. In 2018, there were 417 points (about 650 lanterns) of street gas lighting in Prague. During Advent and Christmas, lanterns on the Charles Bridge are managed manually by a lamplighter in historic uniform. The plan to reintroduce gas lights in Old Prague was proposed in 2002, and adopted by the Municipality of Prague in January 2004.
Indoors
The use of natural gas (methane) for indoor lighting is nearly extinct. Besides producing a lot of heat, the combustion of methane tends to release significant amounts of carbon monoxide, a colourless and odourless gas that is more readily absorbed by the blood than oxygen, and can be deadly. Historically, lamps of all types were used for shorter periods than electric lights are today, and buildings were far draughtier, so the danger was of less concern. There are suppliers of new mantle gas lamps set up for use with natural gas; some old homes still have fixtures installed, and some period restorations have salvaged fixtures installed, more for decoration than for use.
New fixtures are still made and available for propane (sometimes called "bottle(d) gas"), a product of oil refining, which under most circumstances burns more completely to carbon dioxide and water vapour. In some locations where public utility electricity or kerosene is not readily accessible or desirable, propane gas mantle lamps are still used, although the increasing availability of alternative energy sources, such as solar panels and small-scale wind turbines, combined with the increasing efficiency of lighting products, such as compact fluorescent lamps and LEDs, has reduced their use.
Other uses
Perforated tubes bent into the shape of letters were used to form gas lit advertising signs, prior to the introduction of neon lights, as early as 1857 in Grand Rapids, Michigan. Gas lighting is still in common use for camping lights. Small portable gas lamps, connected to a portable gas cylinder, are a common item on camping trips. Mantle lamps powered by vaporized petrol, such as the Coleman lantern, are also available.
Image gallery
See also
Carbochemistry
Gaslaternen-Freilichtmuseum Berlin, an outdoor gas lantern museum in Berlin
History of manufactured fuel gases
List of light sources
Thomas Thorp
References
Notes
Bibliography
Further reading
External links
Pro Gaslicht e.V. : Association for the Preservation of the European Gas-light Culture (German). Listing of the cities with gaslight.
Gaslaternen-Freilichtmuseum Berlin Open-air museum on gas lighting in Berlin (German).
Chinese inventions
English inventions
Gas technologies
Infrastructure
Industrial gases
Lighting
Scottish inventions
Types of lamp | Gas lighting | [
"Chemistry",
"Engineering"
] | 6,465 | [
"Chemical process engineering",
"Industrial gases",
"Construction",
"Infrastructure"
] |
850,783 | https://en.wikipedia.org/wiki/Hypericin | Hypericin is a naphthodianthrone, an anthraquinone derivative which, together with hyperforin, is one of the principal active constituents of Hypericum (Saint John's wort). Hypericin is believed to act as an antibiotic, antiviral and non-specific kinase inhibitor. Hypericin may inhibit the action of the enzyme dopamine β-hydroxylase, leading to increased dopamine levels and thus possibly decreasing norepinephrine and epinephrine levels.
It was initially believed that the anti-depressant pharmacological activity of hypericin was due to inhibition of the monoamine oxidase enzyme. The crude extract of Hypericum is a weak inhibitor of MAO-A and MAO-B. Isolated hypericin does not display this activity, but does have some affinity for NMDA receptors. This suggests that other constituents are responsible for the MAOI effect. The current belief is that the mechanism of antidepressant activity is due to the inhibition of re-uptake of certain neurotransmitters.
The large chromophore system in the molecule means that it can cause photosensitivity when ingested beyond threshold amounts. Photosensitivity is often seen in animals that have been allowed to graze on St. John's Wort. Because hypericin accumulates preferentially in cancerous tissues, it is also used as an indicator of cancerous cells. In addition, hypericin is under research as an agent in photodynamic therapy, whereby a biochemical is absorbed by an organism to be later activated with spectrum-specific light from specialized lamps or laser sources, for therapeutic purposes. The antibacterial and antiviral effects of hypericin are also believed to arise from its ability for photo-oxidation of cells and viral particles.
Hypericin derives from cyclisation of polyketides.
The biosynthesis of hypericins proceeds through the polyketide pathway, in which an octaketide chain undergoes successive cyclizations and decarboxylations to form emodin anthrone, believed to be the precursor of hypericin. Oxidation reactions yield the protoforms (protohypericin and protopseudohypericin), which are then converted into hypericin and pseudohypericin. These reactions are photosensitive, taking place under exposure to light and involving the enzyme Hyp-1.
References
Virucides
Polyketides
Polyols
Chemicals in Hypericum
Biological pigments
3-Hydroxypropenals within hydroxyquinones | Hypericin | [
"Chemistry",
"Biology"
] | 531 | [
"Biomolecules by chemical classification",
"Natural products",
"Virucides",
"Polyketides",
"Biological pigments",
"Biocides",
"Pigmentation"
] |
850,807 | https://en.wikipedia.org/wiki/Sidewinding | Sidewinding is a type of locomotion unique to snakes, used to move across loose or slippery substrates. It is most often used by the Saharan horned viper, Cerastes cerastes, the Mojave sidewinder rattlesnake, Crotalus cerastes, and the Namib desert sidewinding adder, Bitis peringueyi, to move across loose desert sands, and also by Homalopsine snakes in Southeast Asia to move across tidal mud flats. Any number of caenophidian snakes can be induced to sidewind on smooth surfaces, though the difficulty in getting them to do so and their proficiency at it vary greatly.
The method of movement is derived from lateral undulation, and is very similar, in spite of appearances. A picture of a snake performing lateral undulation would show something like a sine wave, with straight segments of the body having either a positive or negative slope. Sidewinding is accomplished by undulating vertically as well as laterally, with the head tracing out an ellipse in a vertical plane nearly perpendicular to the direction of movement and with all the segments that have a significantly non-zero slope (and alternating segments that have a zero slope) lifted off the ground.
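The two-wave description can be sketched numerically. The snippet below superposes a lateral and a vertical traveling wave roughly 90° out of phase and marks the grounded segments; all amplitudes, wavelengths, and phases are illustrative assumptions rather than measured values for any species.

```python
import numpy as np

# Two-wave model of sidewinding: a lateral and a vertical traveling
# wave along the body, about 90 degrees out of phase. Segments where
# the vertical wave lifts the body carry no ground contact; the rest
# are in static contact. Parameters are illustrative only.

s = np.linspace(0, 1, 200)       # normalized arc length along the body
k = 2 * np.pi * 1.5              # about 1.5 waves on the body
t = 0.0                          # a single snapshot in time

lateral = 0.10 * np.sin(k * s - t)               # side-to-side offset
vertical = 0.03 * np.sin(k * s - t + np.pi / 2)  # lift, phase-shifted

in_contact = vertical <= 0       # grounded where the lift wave is down
print(f"peak lateral excursion: {lateral.max():.2f} body lengths")
print(f"fraction of body in ground contact: {in_contact.mean():.2f}")
```

Reducing the vertical amplitude increases the grounded fraction, which is the control knob the slope-ascent observations described below suggest the snakes use.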
The ventral scales of sidewinding snakes are short and have small, microscopic holes in them to reduce friction, as opposed to the more spike-shaped scales of other snakes. These features are more prominent in the African horned viper and sand vipers than in the American sidewinder, theorised to be because the former's environments are millions of years older.
In the resultant movement, the snake's body is always in static (as opposed to sliding) contact when touching the ground. The head seems to be "thrown" forward, and the body follows, being lifted from the prior position and moved forward to lie on the ground ahead of where it was originally. Meanwhile, the head is being thrown forward again. In this way, the snake slowly progresses at an angle, leaving a series of mostly straight, J-shaped tracks. Because the snake's body is in static contact with the ground, without slip, imprints of the belly scales can be seen in the tracks, and each track is almost exactly as long as the snake.
Sidewinder rattlesnakes can use sidewinding to ascend sandy slopes by increasing the portion of the body in contact with the sand to match the reduced yielding force of the inclined sand, allowing them to ascend up to the maximum possible sand slope without slip. Implementing this control scheme in a snakebot capable of sidewinding allowed the robot to replicate the success of the snakes.
One can determine the line of movement of the snake by drawing a line connecting either the right or left tips of the tracks.
References
Terrestrial locomotion
Wave mechanics | Sidewinding | [
"Physics"
] | 578 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
850,859 | https://en.wikipedia.org/wiki/Benzoyl%20peroxide | Benzoyl peroxide is a chemical compound (specifically, an organic peroxide) with structural formula (C6H5CO)2O2, often abbreviated as (BzO)2. In terms of its structure, the molecule can be described as two benzoyl (C6H5CO–, Bz) groups connected by a peroxide (–O–O–). It is a white granular solid with a faint odour of benzaldehyde, poorly soluble in water but soluble in acetone, ethanol, and many other organic solvents. Benzoyl peroxide is an oxidizer, which is principally used in the production of polymers.
Benzoyl peroxide is mainly used in production of plastics and for bleaching flour, hair, plastics and textiles.
As a bleach, it has been used as a medication and a water disinfectant.
As a medication, benzoyl peroxide is mostly used to treat acne, either alone or in combination with other treatments. Some versions are sold mixed with antibiotics such as clindamycin. It is on the World Health Organization's List of Essential Medicines. It is available as an over-the-counter and generic medication. It is also used in dentistry for teeth whitening. In 2021, it was the 284th most commonly prescribed medication in the United States, with more than 700,000 prescriptions.
History
Benzoyl peroxide was first prepared and described by Justus von Liebig in 1858.
The Welsh organic chemist Donald Holroyde Hey (1904–1987) later inferred that the decomposition of benzoyl peroxide generates free phenyl radicals.
Structure and reactivity
The original 1858 synthesis by Liebig reacted benzoyl chloride with barium peroxide, a reaction that probably follows this equation:
2 C6H5C(O)Cl + BaO2 → (C6H5CO)2O2 + BaCl2
Benzoyl peroxide is usually prepared by treating hydrogen peroxide with benzoyl chloride under alkaline conditions.
2 C6H5COCl + H2O2 + 2 NaOH → (C6H5CO)2O2 + 2 NaCl + 2 H2O
The oxygen–oxygen bond in peroxides is weak. Thus, benzoyl peroxide readily undergoes homolysis (symmetrical fission), forming free radicals:
(C6H5CO)2O2 → 2
The symbol • indicates that the products are radicals; i.e., they contain at least one unpaired electron. Such species are highly reactive. The homolysis is usually induced by heating. The half-life of benzoyl peroxide is one hour at 92 °C. At 131 °C, the half-life is one minute.
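The two half-lives quoted above are enough for a back-of-the-envelope estimate of the effective activation energy of the homolysis, assuming simple first-order Arrhenius kinetics; the sketch below is an estimate under that assumption, not a literature value.

```python
import math

# Effective Arrhenius activation energy for benzoyl peroxide
# homolysis, estimated from the two half-lives quoted in the text,
# assuming first-order kinetics (k = ln 2 / t_half).

R = 8.314  # gas constant, J/(mol K)

T1, t_half1 = 92 + 273.15, 3600.0   # one hour at 92 C
T2, t_half2 = 131 + 273.15, 60.0    # one minute at 131 C

k1 = math.log(2) / t_half1
k2 = math.log(2) / t_half2

# Arrhenius: ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)
Ea = R * math.log(k2 / k1) / (1 / T1 - 1 / T2)
print(f"effective activation energy: {Ea / 1000:.0f} kJ/mol")  # ~129
```

The steep temperature dependence this implies is consistent with the weak oxygen–oxygen bond being the rate-determining cleavage.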
In 1901, it was observed that the compound made tincture of guaiacum turn blue, a sign of oxygen being released. Around 1905, Loevenhart reported on the successful use of benzoyl peroxide to treat various skin conditions, including burns, chronic varicose leg ulcers, and tinea sycosis. He also reported animal experiments that showed the relatively low toxicity of the compound.
Treatment with benzoyl peroxide was proposed for wounds in 1929, and for sycosis vulgaris and acne varioliformis in 1934. However, preparations were often of questionable quality. It was officially approved for the treatment of acne in the US in 1960.
Polymerization
Benzoyl peroxide is mainly used as a radical initiator to induce chain-growth polymerization reactions, such as for polyester and poly(methyl methacrylate) (PMMA) resins and dental cements and restoratives. It is the most important among the various organic peroxides used for this purpose, a relatively safe alternative to the much more hazardous methyl ethyl ketone peroxide. It is also used in rubber curing and as a finishing agent for some acetate yarns.
Other uses
Benzoyl peroxide is effective for treating acne lesions. It does not induce antibiotic resistance. It may be combined with salicylic acid, sulfur, erythromycin or clindamycin (antibiotics), or adapalene (a synthetic retinoid). Two common combination drugs include benzoyl peroxide/clindamycin and adapalene/benzoyl peroxide, adapalene being a chemically stable retinoid that can be combined with benzoyl peroxide unlike tezarotene and tretinoin. Combination products such as benzoyl peroxide/clindamycin and benzoyl peroxide/salicylic acid appear to be slightly more effective than benzoyl peroxide alone for the treatment of acne lesions. The combination tretinoin/benzoyl peroxide was approved for medical use in the United States in 2021.
Benzoyl peroxide for acne treatment is typically applied to the affected areas as a gel, cream, or liquid, in concentrations ranging from 2.5% and 5% up to 10%. No strong evidence supports the idea that higher concentrations of benzoyl peroxide are more effective than lower concentrations.
Mechanism of action
Classically, benzoyl peroxide is thought to have a three-fold activity in treating acne. It is sebostatic, comedolytic, and inhibits growth of Cutibacterium acnes, the main bacterium associated with acne. In general, acne vulgaris is a hormone-mediated inflammation of sebaceous glands and hair follicles. Hormone changes cause an increase in keratin and sebum production, leading to blocked drainage. C. acnes has many lytic enzymes that break down the proteins and lipids in the sebum, leading to an inflammatory response. The free-radical reaction of benzoyl peroxide can break down the keratin, therefore unblocking the drainage of sebum (comedolytic). It can cause nonspecific peroxidation of C. acnes, making it bactericidal, and it was thought to decrease sebum production, but disagreement exists within the literature on this.
Some evidence suggests that benzoyl peroxide has an anti-inflammatory effect as well. In micromolar concentrations it prevents neutrophils from releasing reactive oxygen species, part of the inflammatory response in acne.
Side effects
Application of benzoyl peroxide to the skin may result in redness, burning, and irritation. This side effect is dose-dependent.
Because of these possible side effects, it is recommended to start with a low concentration and build up as appropriate, as the skin gradually develops tolerance to the medication. Skin sensitivity typically resolves after a few weeks of continuous use. Irritation can also be reduced by avoiding harsh facial cleansers and wearing sunscreen prior to sun exposure.
One in 500 people experience hypersensitivity to benzoyl peroxide and are liable to experience burning, itching, crusting, and possibly swelling. About one-third of people experience phototoxicity under exposure to ultraviolet (UVB) light.
Dosage
In the US, the typical concentration for benzoyl peroxide is 2.5% to 10% for both prescription and over-the-counter drug preparations that are used in treatment for acne.
Other medical uses
Benzoyl peroxide is used in dentistry as a tooth whitening product.
Safety
Explosion hazard
Benzoyl peroxide is potentially explosive like other organic peroxides, and can cause fires without external ignition. The hazard is acute for the pure material, so the compound is generally used as a solution or a paste. For example, cosmetics contain only a small percentage of benzoyl peroxide and pose no explosion risk.
Toxicity
Benzoyl peroxide breaks down in contact with skin, producing benzoic acid and oxygen, neither of which is very toxic.
The carcinogenic potential of benzoyl peroxide has been investigated. A 1981 study published in the journal Science found that although benzoyl peroxide is not a carcinogen, it does promote cell growth when applied to an initiated tumor. The study concluded, "caution should be recommended in the use of this and other free radical-generating compounds".
A 1999 IARC review of carcinogenicity studies found no convincing evidence linking benzoyl peroxide acne medication to skin cancers in humans. However, some animal studies found that the compound could act as a carcinogen and enhance the effect of known carcinogens.
Benzoyl peroxide can break down into the carcinogen benzene at temperatures above 50 °C.
Skin irritation
In a 1977 study using a human maximization test, 76% of subjects acquired a contact sensitization to benzoyl peroxide. Formulations of 5% and 10% were used.
The US National Institute for Occupational Safety and Health has developed criteria for a recommended standard for occupational exposure to benzoyl peroxide.
Cloth staining
Contact with fabrics or hair, such as from still-moist acne medication, can cause permanent color dampening almost immediately. Even secondary contact can cause bleaching, for example, contact with a towel that has been used to wash off benzoyl peroxide-containing hygiene products.
See also
Clearasil
References
External links
Organic Peroxide Producers Safety Division (OPPSD)
Acne treatments
Anti-acne preparations
Benzene derivatives
IARC Group 3 carcinogens
Organic peroxides
Radical initiators
Wikipedia medicine articles ready to translate
World Health Organization essential medicines | Benzoyl peroxide | [
"Chemistry",
"Materials_science"
] | 1,970 | [
"Radical initiators",
"Organic compounds",
"Polymer chemistry",
"Reagents for organic chemistry",
"Organic peroxides"
] |
851,008 | https://en.wikipedia.org/wiki/Black%20hole%20information%20paradox | The black hole information paradox is a paradox that appears when the predictions of quantum mechanics and general relativity are combined. The theory of general relativity predicts the existence of black holes that are regions of spacetime from which nothing—not even light—can escape. In the 1970s, Stephen Hawking applied the semiclassical approach of quantum field theory in curved spacetime to such systems and found that an isolated black hole would emit a form of radiation (now called Hawking radiation in his honor). He also argued that the detailed form of the radiation would be independent of the initial state of the black hole, and depend only on its mass, electric charge and angular momentum.
The information paradox appears when one considers a process in which a black hole is formed through a physical process and then evaporates away entirely through Hawking radiation. Hawking's calculation suggests that the final state of radiation would retain information only about the total mass, electric charge and angular momentum of the initial state. Since many different states can have the same mass, charge and angular momentum, this suggests that many initial physical states could evolve into the same final state. Therefore, information about the details of the initial state would be permanently lost; however, this violates a core precept of both classical and quantum physics: that, in principle, the state of a system at one point in time should determine its state at any other time. Specifically, in quantum mechanics the state of the system is encoded by its wave function. The evolution of the wave function is determined by a unitary operator, and unitarity implies that the wave function at any instant of time can be used to determine the wave function either in the past or the future. In 1993, Don Page argued that if a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. This is called the Page curve.
It is now generally believed that information is preserved in black-hole evaporation. For many researchers, deriving the Page curve is synonymous with solving the black hole information puzzle. But views differ as to precisely how Hawking's original semiclassical calculation should be corrected. In recent years, several extensions of the original paradox have been explored. Taken together, these puzzles about black hole evaporation have implications for how gravity and quantum mechanics must be combined. The information paradox remains an active field of research in quantum gravity.
Relevant principles
In quantum mechanics, the evolution of the state is governed by the Schrödinger equation. The Schrödinger equation obeys two principles that are relevant to the paradox—quantum determinism, which means that given a present wave function, its future changes are uniquely determined by the evolution operator, and reversibility, which refers to the fact that the evolution operator has an inverse, meaning that the past wave functions are similarly unique. The combination of the two means that information must always be preserved. In this context "information" means all the details of the state, and the statement that information must be preserved means that details corresponding to an earlier time can always be reconstructed at a later time.
Mathematically, the Schrödinger equation implies that the wavefunction at a time t1 can be related to the wavefunction at a time t2 by means of a unitary operator U: ψ(t2) = U(t2, t1) ψ(t1).
Since the unitary operator is bijective, the wavefunction at t2 can be obtained from the wavefunction at t1 and vice versa.
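A minimal numerical illustration of this reversibility, using an arbitrary single-qubit unitary not tied to any particular Hamiltonian: applying U and then its adjoint recovers the initial wavefunction exactly.

```python
import numpy as np

# Reversibility of unitary evolution: psi(t2) = U psi(t1) and
# psi(t1) = U^dagger psi(t2). The unitary is an arbitrary
# single-qubit rotation chosen purely for illustration.

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

psi_t1 = np.array([1.0, 0.0], dtype=complex)  # initial pure state
psi_t2 = U @ psi_t1                           # evolve forward
recovered = U.conj().T @ psi_t2               # evolve backward

print(np.allclose(recovered, psi_t1))         # True: nothing is lost
```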
The reversibility of time evolution described above applies only at the microscopic level, since the wavefunction provides a complete description of the state. It should not be conflated with thermodynamic irreversibility. A process may appear irreversible if one keeps track only of the system's coarse-grained features and not of its microscopic details, as is usually done in thermodynamics. But at the microscopic level, the principles of quantum mechanics imply that every process is completely reversible.
Starting in the mid-1970s, Stephen Hawking and Jacob Bekenstein put forward theoretical arguments that suggested that black-hole evaporation loses information, and is therefore inconsistent with unitarity. Crucially, these arguments were meant to apply at the microscopic level and suggested that black-hole evaporation is not only thermodynamically but microscopically irreversible. This contradicts the principle of unitarity described above and leads to the information paradox. Since the paradox suggested that quantum mechanics would be violated by black-hole formation and evaporation, Hawking framed the paradox in terms of the "breakdown of predictability in gravitational collapse".
The arguments for microscopic irreversibility were backed by Hawking's calculation of the spectrum of radiation that isolated black holes emit. This calculation utilized the framework of general relativity and quantum field theory. The calculation of Hawking radiation is performed at the black hole horizon and does not account for the backreaction of spacetime geometry; for a large enough black hole the curvature at the horizon is small and therefore both these theories should be valid. Hawking relied on the no-hair theorem to arrive at the conclusion that radiation emitted by black holes would depend only on a few macroscopic parameters, such as the black hole's mass, charge, and spin, but not on the details of the initial state that led to the formation of the black hole. In addition, the argument for information loss relied on the causal structure of the black hole spacetime, which suggests that information in the interior should not affect any observation in the exterior, including observations performed on the radiation the black hole emits. If so, the region of spacetime outside the black hole would lose information about the state of the interior after black-hole evaporation, leading to the loss of information.
Today, some physicists believe that the holographic principle (specifically the AdS/CFT duality) demonstrates that Hawking's conclusion was incorrect, and that information is in fact preserved. Moreover, recent analyses indicate that in semiclassical gravity the information loss paradox cannot be formulated in a self-consistent manner due to the impossibility of simultaneously realizing all of the necessary assumptions required for its formulation.
Black hole evaporation
Hawking radiation
In 1973–1975, Stephen Hawking showed that black holes should slowly radiate away energy, and he later argued that this leads to a contradiction with unitarity. Hawking used the classical no-hair theorem to argue that the form of this radiation—called Hawking radiation—would be completely independent of the initial state of the star or matter that collapsed to form the black hole. He argued that the process of radiation would continue until the black hole had evaporated completely. At the end of this process, all the initial energy in the black hole would have been transferred to the radiation. But, according to Hawking's argument, the radiation would retain no information about the initial state and therefore information about the initial state would be lost.
More specifically, Hawking argued that the pattern of radiation emitted from the black hole would be random, with a probability distribution controlled only by the black hole's initial temperature, charge, and angular momentum, not by the initial state of the collapse. The state produced by such a probabilistic process is called a mixed state in quantum mechanics. Therefore, Hawking argued that if the star or material that collapsed to form the black hole started in a specific pure quantum state, the process of evaporation would transform the pure state into a mixed state. This is inconsistent with the unitarity of quantum-mechanical evolution discussed above.
The loss of information can be quantified in terms of the change in the fine-grained von Neumann entropy of the state. A pure state is assigned a von Neumann entropy of 0, whereas a mixed state has a finite entropy. The unitary evolution of a state according to Schrödinger's equation preserves the entropy. Therefore Hawking's argument suggests that the process of black-hole evaporation cannot be described within the framework of unitary evolution. Although this paradox is often phrased in terms of quantum mechanics, the evolution from a pure state to a mixed state is also inconsistent with Liouville's theorem in classical physics.
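The entropy bookkeeping can be made concrete with a small numerical sketch (the example states are arbitrary): a pure state has vanishing von Neumann entropy, while a mixed state does not, so a pure-to-mixed evolution necessarily raises the fine-grained entropy.

```python
import numpy as np

# Von Neumann entropy S(rho) = -Tr(rho ln rho): zero for a pure
# state, positive for a mixed one. The density matrices below are
# arbitrary illustrations.

def von_neumann_entropy(rho):
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]          # discard numerical zeros
    return float(-np.sum(eigs * np.log(eigs)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
mixed = 0.5 * np.eye(2)                     # maximally mixed qubit

print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ln 2 ~ 0.693
```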
In equations, Hawking showed that if one denotes the creation and annihilation operators at a frequency ω for a quantum field propagating in the black-hole background by a† and a, then the expectation value of the product of these operators in the state formed by the collapse of a black hole would satisfy ⟨a†a⟩ = 1/(e^(ħω/(k_B T)) − 1), where k_B is the Boltzmann constant and T is the temperature of the black hole. This formula has two important aspects. The first is that the form of the radiation depends only on a single parameter, temperature, even though the initial state of the black hole cannot be characterized by one parameter. Second, the formula implies that the black hole radiates mass at a rate given by dM/dt = −C/M², where C is a constant related to fundamental constants, including the Stefan–Boltzmann constant and certain properties of the black hole spacetime called its greybody factors.
The temperature of the black hole is in turn dependent on its mass, charge, and angular momentum. For a Schwarzschild black hole the temperature is given by T = ħc³/(8πGMk_B).
This means that if the black hole starts out with an initial mass M, it evaporates completely in a time proportional to M³.
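Plugging numbers into the formulas above shows how slow the process is for astrophysical black holes. The sketch below evaluates the Schwarzschild temperature and the standard lifetime estimate t = 5120πG²M³/(ħc⁴) for a solar-mass hole, ignoring greybody factors.

```python
import math

# Hawking temperature and evaporation time of a Schwarzschild black
# hole: T = hbar c^3 / (8 pi G M k_B), t = 5120 pi G^2 M^3 / (hbar c^4).
# Greybody factors are ignored in this estimate.

hbar, c, G, k_B = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23
M = 1.989e30  # one solar mass, kg

T = hbar * c**3 / (8 * math.pi * G * M * k_B)
lifetime_s = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(f"T ~ {T:.1e} K")                               # ~6e-8 K
print(f"lifetime ~ {lifetime_s / 3.15e7:.1e} years")  # ~2e67 years
```

A temperature of order 10⁻⁸ K is far below that of the cosmic microwave background, so such a hole currently absorbs more energy than it emits; the paradox concerns the in-principle endpoint of evaporation.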
The important aspect of these formulas is that they suggest that the final gas of radiation formed through this process depends only on the black hole's temperature and is independent of other details of the initial state. This leads to the following paradox. Consider two distinct initial states that collapse to form a Schwarzschild black hole of the same mass. Even though the states were distinct at first, since the mass (and hence the temperature) of the black holes is the same, they will emit the same Hawking radiation. Once they evaporate completely, in both cases, one will be left with a featureless gas of radiation. This gas cannot be used to distinguish between the two initial states, and therefore information has been lost.
Page curve
During the same time period in the 1970s, Don Page was a doctoral student of Stephen Hawking. He objected to Hawking's reasoning leading to the paradox above, initially on the basis of violation of CPT symmetry. In 1993, Page focused on the combined system of a black hole with its Hawking radiation as one entangled system, a bipartite system, evolving over the lifetime of the black hole evaporation. Lacking the ability to make a full quantum analysis, he nonetheless made a powerful observation: If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy or entanglement entropy of the Hawking radiation initially increases from zero and then must decrease back to zero when the black hole to which the radiation is entangled has totally evaporated. This is known as the Page curve; and the time corresponding to the maximum or turnover point of the curve, which occurs at about half the black-hole lifetime, is called the Page time. In short, if black hole evaporation is unitary, then the radiation entanglement entropy follows the Page curve. After the Page time, correlations appear and the radiation becomes increasingly information rich.
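Page's qualitative claim can be checked numerically by computing the entanglement entropy of a Haar-random pure state across a bipartition into "radiation" and "black hole" qubits: the entropy rises, peaks near the halfway point, then falls. The sketch below uses an arbitrary qubit count purely for illustration.

```python
import numpy as np

# Numerical Page curve: entanglement entropy of a Haar-random pure
# state of n qubits as a function of how many qubits are assigned to
# the "radiation" subsystem.

rng = np.random.default_rng(0)
n = 10
dim = 2**n

# Haar-random pure state of the full system.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

for m in range(1, n):                    # m qubits of "radiation"
    mat = psi.reshape(2**m, 2**(n - m))  # bipartition via reshape
    p = np.linalg.svd(mat, compute_uv=False) ** 2  # Schmidt weights
    p = p[p > 1e-15]
    S = -np.sum(p * np.log(p))
    print(f"{m:2d} qubits radiated: S = {S:.3f}")
```

The printed entropies trace the rising-then-falling shape of the Page curve, with the turnover near m ≈ n/2, the analogue of the Page time.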
Recent progress in deriving the Page curve for unitary black hole evaporation is a significant step towards finding both a resolution to the information paradox and a more general understanding of unitarity in quantum gravity. Many researchers consider deriving the Page curve as synonymous with solving the black hole information paradox.
Popular culture
The information paradox has received coverage in the popular media and has been described in popular-science books. Some of this coverage resulted from a widely publicized bet made in 1997 between John Preskill on the one hand with Hawking and Kip Thorne on the other that information was not lost in black holes. The scientific debate on the paradox was described in Leonard Susskind's 2008 book The Black Hole War. (The book carefully notes that the 'war' was purely a scientific one, and that, at a personal level, the participants remained friends.) Susskind writes that Hawking was eventually persuaded that black-hole evaporation was unitary by the holographic principle, which was first proposed by 't Hooft, further developed by Susskind, and later given a precise string theory interpretation by the AdS/CFT correspondence. In 2004, Hawking also conceded the 1997 bet, paying Preskill with a baseball encyclopedia "from which information can be retrieved at will". Thorne refused to concede.
Solutions
Since the 1997 proposal of the AdS/CFT correspondence, the predominant belief among physicists is that information is indeed preserved in black hole evaporation. There are broadly two main streams of thought about how this happens. Within what might broadly be termed the "string theory community", the dominant idea is that Hawking radiation is not precisely thermal but receives quantum correlations that encode information about the black hole's interior. This viewpoint has been the subject of extensive recent research and received further support in 2019 when researchers amended the computation of the entropy of the Hawking radiation in certain models and showed that the radiation is in fact dual to the black hole interior at late times. Hawking himself was influenced by this view and in 2004 published a paper that assumed the AdS/CFT correspondence and argued that quantum perturbations of the event horizon could allow information to escape from a black hole, which would resolve the information paradox. In this perspective, it is the event horizon of the black hole that is important and not the black-hole singularity. The GISR (Gravity Induced Spontaneous Radiation) mechanism, described below, can be considered an implementation of this idea, but with the quantum perturbations of the event horizon replaced by the microscopic states of the black hole.
On the other hand, within what might broadly be termed the "loop quantum gravity community", the dominant belief is that to resolve the information paradox, it is important to understand how the black-hole singularity is resolved. These scenarios are broadly called remnant scenarios since information does not emerge gradually but remains in the black-hole interior only to emerge at the end of black-hole evaporation.
Researchers also study other possibilities, including a modification of the laws of quantum mechanics to allow for non-unitary time evolution.
Some of these solutions are described at greater length below.
GISR mechanism resolution to the paradox
This resolution takes GISR as the underlying mechanism for Hawking radiation, considering the latter only as a resultant effect. The physics ingredients of GISR are reflected in an explicitly Hermitian Hamiltonian consisting of three terms.
The first term is a diagonal matrix representing the microscopic states of black holes no heavier than the initial one. The second term describes vacuum fluctuations of particles around the black hole and is represented by many harmonic oscillators. The third term couples the vacuum fluctuation modes to the black hole, such that for each mode whose energy matches the difference between two states of the black hole, the latter transitions with an amplitude proportional to the similarity factor of their microscopic wave functions. Transitions from higher-energy states to lower-energy states, and vice versa, are equally permitted at the Hamiltonian level. This coupling mimics the photon–atom coupling in the Jaynes–Cummings model of atomic physics, replacing the photon's vector potential with the binding energy of particles to be radiated in the black hole case, and the dipole moment of initial-to-final state transitions in atoms with the similarity factor of the initial and final states' wave functions in black holes. Despite its ad hoc nature, this coupling introduces no new interactions beyond gravity, and it is deemed necessary irrespective of the future development of quantum gravitational theories.
From the Hamiltonian of GISR and the standard Schrödinger equation controlling the evolution of the system's wave function, the state at any time can be written as a superposition of black-hole states of reduced mass paired with sets of radiated particles carrying the corresponding total energy. In the case of short-time evolution or single quantum emission, the Wigner–Weisskopf approximation allows one to show that the power spectrum of GISR is exactly of thermal type and the corresponding temperature equals that of Hawking radiation. However, in the case of long-time evolution or continuous quantum emission, the process is out of equilibrium and is characterised by an initial-state-dependent curve of black-hole mass or temperature versus time. Distant observers can retrieve the information stored in the initial black hole from this curve.
The hamiltonian and wave function description of GISR allows one to calculate the entanglement entropy between the black hole and its Hawking particles explicitly.
Since the Hamiltonian of GISR is explicitly Hermitian, the resulting Page curve is naturally expected, except for some late-time Rabi-type oscillations. These oscillations arise from the equal probability of emission and absorption transitions as the black hole approaches the vanishing stage. The most important lesson from this calculation is that the intermediate state of an evaporating black hole cannot be considered a semiclassical object with a time-dependent mass. Instead, it must be viewed as a superposition of many different mass-ratio combinations of the black hole and Hawking particles. A Schrödinger-cat-type thought experiment illustrates this fact: an initial black hole is bound to a group of living cats, and each emitted Hawking particle kills one cat from the group. In the quantum description, because the exact timing and number of particles radiated by a black hole cannot be determined definitively, the intermediate state of the evaporating black hole must be considered a superposition of many cat groups, each with a different ratio of dead members. The biggest flaw in the argument for the information loss paradox is ignoring this superposition.
Small-corrections resolution to the paradox
This idea suggests that Hawking's computation fails to keep track of small corrections that are eventually sufficient to preserve information about the initial state. This can be thought of as analogous to what happens during the mundane process of "burning": the radiation produced appears to be thermal, but its fine-grained features encode the precise details of the object that was burnt. This idea is consistent with reversibility, as required by quantum mechanics. It is the dominant idea in what might broadly be termed the string-theory approach to quantum gravity.
More precisely, this line of resolution suggests that Hawking's computation is corrected so that the two-point correlator computed by Hawking and described above becomes
$$\langle \mathcal{O}(x_1)\,\mathcal{O}(x_2)\rangle = \langle \mathcal{O}(x_1)\,\mathcal{O}(x_2)\rangle_{\text{Hawking}} + \epsilon_2(x_1, x_2),$$
and higher-point correlators are similarly corrected,
$$\langle \mathcal{O}(x_1)\cdots\mathcal{O}(x_n)\rangle = \langle \mathcal{O}(x_1)\cdots\mathcal{O}(x_n)\rangle_{\text{Hawking}} + \epsilon_n(x_1, \ldots, x_n).$$
The equations above use a concise notation, and the correction factors $\epsilon_n$ may depend on the temperature, the frequencies of the operators that enter the correlation function, and other details of the black hole.
Maldacena initially explored such corrections in a simple version of the paradox. They were then analyzed by Papadodimas and Raju, who showed that corrections to low-point correlators (such as the two-point function above) that are exponentially suppressed in the black-hole entropy are sufficient to preserve unitarity, and that significant corrections are required only for very high-point correlators. The mechanism that allowed the right small corrections to form was initially postulated in terms of a loss of exact locality in quantum gravity, so that the black-hole interior and the radiation are described by the same degrees of freedom. Recent developments suggest that such a mechanism can be realized precisely within semiclassical gravity and allows information to escape. See § Recent developments.
Fuzzball resolution to the paradox
Some researchers, most notably Samir Mathur, have argued that the small corrections required to preserve information cannot be obtained while preserving the semiclassical form of the black-hole interior and instead require a modification of the black-hole geometry to a fuzzball.
The defining characteristic of the fuzzball is that it has structure at the horizon scale. This should be contrasted with the conventional picture of the black-hole interior as a largely featureless region of space. For a large enough black hole, tidal effects are very small at the black-hole horizon and remain small in the interior until one approaches the black-hole singularity. Therefore, in the conventional picture, an observer who crosses the horizon may not even realize they have done so until they start approaching the singularity. In contrast, the fuzzball proposal suggests that the black hole horizon is not empty. Consequently, it is also not information-free, since the details of the structure at the surface of the horizon preserve information about the black hole's initial state. This structure also affects the outgoing Hawking radiation and thereby allows information to escape from the fuzzball.
The fuzzball proposal is supported by the existence of a large number of gravitational solutions called microstate geometries.
The firewall proposal can be thought of as a variant of the fuzzball proposal that posits that the black-hole interior is replaced by a firewall rather than a fuzzball. Operationally, the difference between the fuzzball and the firewall proposals has to do with whether an observer crossing the horizon of the black hole encounters high-energy matter, as suggested by the firewall proposal, or merely low-energy structure, as suggested by the fuzzball proposal. The firewall proposal also originated with an exploration of Mathur's argument that small corrections are insufficient to resolve the information paradox.
The fuzzball and firewall proposals have been questioned for lacking an appropriate mechanism that can generate structure at the horizon scale.
Strong-quantum-effects resolution to the paradox
In the final stages of black-hole evaporation, quantum effects become important and cannot be ignored. The precise understanding of this phase of black-hole evaporation requires a complete theory of quantum gravity. Within what might be termed the loop-quantum-gravity approach to black holes, it is believed that understanding this phase of evaporation is crucial to resolving the information paradox.
This perspective holds that Hawking's computation is reliable until the final stages of black-hole evaporation, when information suddenly escapes. Another possibility along the same lines is that black-hole evaporation simply stops when the black hole becomes Planck-sized. Such scenarios are called "remnant scenarios".
An appealing aspect of this perspective is that a significant deviation from classical and semiclassical gravity is needed only in the regime in which the effects of quantum gravity are expected to dominate. On the other hand, this idea implies that just before the sudden escape of information, a very small black hole must be able to store an arbitrary amount of information and have a very large number of internal states. Therefore, researchers who follow this idea must take care to avoid the common criticism of remnant-type scenarios, which is that they may violate the Bekenstein bound and lead to a violation of effective field theory due to the production of remnants as virtual particles in ordinary scattering events.
Soft-hair resolution to the paradox
In 2016, Hawking, Perry and Strominger noted that black holes must contain "soft hair". Particles that have no rest mass, like photons and gravitons, can exist with arbitrarily low energy and are called soft particles. The soft-hair resolution posits that information about the initial state is stored in such soft particles. The existence of such soft hair is a peculiarity of four-dimensional asymptotically flat space, and therefore this resolution to the paradox does not carry over to black holes in anti-de Sitter space or to black holes in other dimensions.
Information is irretrievably lost
A minority view in the theoretical physics community is that information is genuinely lost when black holes form and evaporate. This conclusion follows if one assumes that the predictions of semiclassical gravity and the causal structure of the black-hole spacetime are exact.
But this conclusion leads to the loss of unitarity. Banks, Susskind and Peskin argue that, in some cases, loss of unitarity also implies violation of energy–momentum conservation or locality, but this argument may possibly be evaded in systems with a large number of degrees of freedom. According to Roger Penrose, loss of unitarity in quantum systems is not a problem: quantum measurements are by themselves already non-unitary. Penrose claims that quantum systems will in fact no longer evolve unitarily as soon as gravitation comes into play, precisely as in black holes. The Conformal Cyclic Cosmology Penrose advocates critically depends on the condition that information is in fact lost in black holes. This new cosmological model might be tested experimentally by detailed analysis of the cosmic microwave background radiation (CMB): if true, the CMB should exhibit circular patterns with slightly lower or slightly higher temperatures. In November 2010, Penrose and V. G. Gurzadyan announced they had found evidence of such circular patterns in data from the Wilkinson Microwave Anisotropy Probe (WMAP), corroborated by data from the BOOMERanG experiment. The significance of these findings was debated.
Along similar lines, Modak, Ortíz, Peña, and Sudarsky have argued that the paradox can be dissolved by invoking foundational issues of quantum theory, often called the measurement problem of quantum mechanics. This work built on an earlier proposal by Okon and Sudarsky on the benefits of objective collapse theory in a much broader context. The original motivation of these studies was Penrose's long-standing proposal wherein collapse of the wave function is said to be inevitable in the presence of black holes (and even under the influence of a gravitational field). Experimental verification of collapse theories is an ongoing effort.
Other proposed resolutions
Some other resolutions to the paradox have also been explored. These are listed briefly below.
Information is stored in a large remnant: This idea suggests that Hawking radiation stops before the black hole reaches the Planck size. Since the black hole never evaporates, information about its initial state can remain inside the black hole and the paradox disappears. But there is no accepted mechanism that would allow Hawking radiation to stop while the black hole remains macroscopic.
Information is stored in a baby universe that separates from our own universe: Some models of gravity, such as the Einstein–Cartan theory of gravity, which extends general relativity to matter with intrinsic angular momentum (spin), predict the formation of such baby universes. No violation of known general principles of physics is needed. There are no physical constraints on the number of the universes, even though only one remains observable. The Einstein–Cartan theory is difficult to test because its predictions are significantly different from general-relativistic ones only at extremely high densities.
Information is encoded in the correlations between future and past: The final-state proposal suggests that boundary conditions must be imposed at the black-hole singularity, which, from a causal perspective, is to the future of all events in the black-hole interior. This helps reconcile black-hole evaporation with unitarity but contradicts the intuitive idea of causality and locality of time-evolution.
Quantum-channel theory: In 2014, Chris Adami argued that analysis using quantum channel theory causes any apparent paradox to disappear; Adami rejects black hole complementarity, arguing instead that no space-like surface contains duplicated quantum information.
Topological Invariants and Recursive Dynamics: The K-Line Theory offers a novel resolution to the black hole information paradox through a framework based on recursive state evolution, topological invariant dynamics, and probabilistic equivalence. By employing symmetry indices and dynamic constants, the theory ensures that information is preserved throughout the black hole's lifecycle, including during its evaporation via Hawking radiation. Unlike previous proposals, which often fail to provide a mechanism for information preservation without introducing confounding drawbacks, K-Line Theory addresses these issues by maintaining a hierarchical structure of states and encoding quantum correlations coherently. Numerical simulations within the K-Line framework produce entropy behaviours consistent with the Page curve, demonstrating information retrieval and coherence. This approach bridges quantum mechanics and general relativity without violating established physical principles, providing a comprehensive mechanism for information conservation in black hole physics.
Recent developments
Significant progress was made in 2019, when, starting with work by Penington and by Almheiri, Engelhardt, Marolf and Maxfield, researchers were able to compute the von Neumann entropy of the radiation black holes emit in specific models of quantum gravity. These calculations showed that, in these models, the entropy of this radiation first rises and then falls back to zero. As explained above, one way to frame the information paradox is that Hawking's calculation appears to show that the von Neumann entropy of Hawking radiation increases throughout the lifetime of the black hole. But if the black hole formed from a pure state with zero entropy, unitarity implies that the entropy of the Hawking radiation must decrease back to zero once the black hole evaporates completely, i.e., it must follow the Page curve. Therefore, the results above provide a resolution to the information paradox, at least in the specific models of gravity considered.
These calculations compute the entropy by first analytically continuing the spacetime to a Euclidean spacetime and then using the replica trick. The path integral that computes the entropy receives contributions from novel Euclidean configurations called "replica wormholes". (These wormholes exist in a Wick rotated spacetime and should not be conflated with wormholes in the original spacetime.) The inclusion of these wormhole geometries in the computation prevents the entropy from increasing indefinitely.
These calculations also imply that for sufficiently old black holes, one can perform operations on the Hawking radiation that affect the black hole interior. This result has implications for the related firewall paradox, and provides evidence for the physical picture suggested by the ER=EPR proposal, black hole complementarity, and the Papadodimas–Raju proposal.
It has been noted that the models used to perform the Page curve computations above have consistently involved theories where the graviton has mass, unlike the real world, where the graviton is massless. These models have also involved a "nongravitational bath", which can be thought of as an artificial interface where gravity ceases to act. It has also been argued that a key technique used in the Page-curve computations, the "island proposal", is inconsistent in standard theories of gravity with a Gauss law. This would suggest that the Page curve computations are inapplicable to realistic black holes and work only in special toy models of gravity. The validity of these criticisms remains under investigation; there is no consensus in the research community.
In 2020, Laddha, Prabhu, Raju, and Shrivastava argued that, as a result of the effects of quantum gravity, information should always be available outside the black hole. This would imply that the von Neumann entropy of the region outside the black hole always remains zero, as opposed to the proposal above, where the von Neumann entropy first rises and then falls. Extending this, Raju argued that Hawking's error was to assume that the region outside the black hole would have no information about its interior.
Hawking formalized this assumption in terms of a "principle of ignorance". The principle of ignorance is correct in classical gravity, when quantum-mechanical effects are neglected, by virtue of the no-hair theorem. It is also correct when only quantum-mechanical effects are considered and gravitational effects are neglected. But Raju argued that when both quantum mechanical and gravitational effects are accounted for, the principle of ignorance should be replaced by a "principle of holography of information" that would imply just the opposite: all the information about the interior can be regained from the exterior through suitably precise measurements.
The two recent resolutions of the information paradox described above—via replica wormholes and the holography of information—share the feature that observables in the black-hole interior also describe observables far from the black hole. This implies a loss of exact locality in quantum gravity. Although this loss of locality is very small, it persists over large distance scales. This feature has been challenged by some researchers.
See also
AdS/CFT correspondence
Beyond black holes
Black hole complementarity
Cosmic censorship hypothesis
Firewall (physics)
Fuzzball (string theory)
Holographic principle
List of paradoxes
Maxwell's demon
No-hair theorem
No-hiding theorem
Thorne–Hawking–Preskill bet
References
External links
Black Hole Information Loss Problem, a USENET physics FAQ page
Discusses methods of attack on the problem, and their apparent shortcomings.
Report on Hawking's 2004 theory in Nature.
Stephen Hawking's purported solution to the black hole unitarity paradox.
Hawking and unitarity: a July 2005 discussion of the information loss paradox and Stephen Hawking's role in it
The Hawking Paradox - BBC Horizon documentary (2005)
A Black Hole Mystery Wrapped in a Firewall Paradox
Black holes
Physical paradoxes
Relativistic paradoxes | Black hole information paradox | [
"Physics",
"Astronomy"
] | 6,778 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
851,547 | https://en.wikipedia.org/wiki/Pressure%20coefficient | In fluid dynamics, the pressure coefficient is a dimensionless number which describes the relative pressures throughout a flow field. The pressure coefficient is used in aerodynamics and hydrodynamics. Every point in a fluid flow field has its own unique pressure coefficient, $C_p$.
In many situations in aerodynamics and hydrodynamics, the pressure coefficient at a point near a body is independent of body size. Consequently, an engineering model can be tested in a wind tunnel or water tunnel, pressure coefficients can be determined at critical locations around the model, and these pressure coefficients can be used with confidence to predict the fluid pressure at those critical locations around a full-size aircraft or boat.
Definition
The pressure coefficient is a parameter for studying both incompressible and compressible fluids such as water and air. The relationship between the dimensionless coefficient and the dimensional quantities is
$$C_p = \frac{p - p_\infty}{\tfrac{1}{2}\rho_\infty V_\infty^2}$$
where:
$p$ is the static pressure at the point at which the pressure coefficient is being evaluated
$p_\infty$ is the static pressure in the freestream (i.e. remote from any disturbance)
$\rho_\infty$ is the freestream fluid density (air at sea level and 15 °C is 1.225 kg/m³)
$V_\infty$ is the freestream velocity of the fluid, or the velocity of the body through the fluid
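As an illustration of this definition, a short script (with hypothetical wind-tunnel values, not data from any source) converts measured static pressures into pressure coefficients:

```python
# Pressure coefficient from static-pressure measurements (illustrative values).
p_inf = 101325.0   # freestream static pressure, Pa
rho_inf = 1.225    # freestream air density (sea level, 15 degrees C), kg/m^3
V_inf = 50.0       # freestream speed, m/s

q_inf = 0.5 * rho_inf * V_inf**2   # dynamic pressure, Pa

def pressure_coefficient(p):
    """Return C_p = (p - p_inf) / q_inf for a measured static pressure p in Pa."""
    return (p - p_inf) / q_inf

# A tap at a stagnation point reads p_inf + q_inf, giving C_p = 1 in incompressible flow.
print(pressure_coefficient(p_inf + q_inf))        # -> 1.0
# A suction-side tap below freestream pressure gives a negative C_p.
print(pressure_coefficient(p_inf - 2.0 * q_inf))  # -> -2.0
```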
Incompressible flow
Using Bernoulli's equation, the pressure coefficient can be further simplified for potential flows (inviscid and steady):
$$C_p = 1 - \left(\frac{u}{u_\infty}\right)^2$$
where:
$u$ is the flow speed at the point at which the pressure coefficient is being evaluated
$M$ is the Mach number, which is taken in the limit of zero
$p_0$ is the flow's stagnation pressure
This relationship is valid for the flow of incompressible fluids where variations in speed and pressure are sufficiently small that variations in fluid density can be neglected. This assumption is commonly made in engineering practice when the Mach number is less than about 0.3.
A $C_p$ of zero indicates the pressure is the same as the freestream pressure.
A $C_p$ of one corresponds to the stagnation pressure and indicates a stagnation point.
The most negative values of $C_p$ in a liquid flow can be summed to the cavitation number to give the cavitation margin. If this margin is positive, the flow is locally fully liquid, while if it is zero or negative the flow is cavitating or gas.
Locations where $C_p = -1$ are significant in the design of gliders because this indicates a suitable location for a "Total energy" port for supply of signal pressure to the Variometer, a special Vertical Speed Indicator which reacts to vertical movements of the atmosphere but does not react to vertical maneuvering of the glider.
In an incompressible fluid flow field around a body, there will be points having positive pressure coefficients up to one, and negative pressure coefficients including coefficients less than minus one.
Compressible flow
In the flow of compressible fluids such as air, and particularly the high-speed flow of compressible fluids, $\tfrac{1}{2}\rho_\infty V_\infty^2$ (the dynamic pressure) is no longer an accurate measure of the difference between stagnation pressure and static pressure. Also, the familiar relationship that stagnation pressure is equal to total pressure does not always hold true. (It is always true in isentropic flow, but the presence of shock waves can cause the flow to depart from isentropic.) As a result, pressure coefficients can be greater than one in compressible flow.
Perturbation theory
The pressure coefficient can be estimated for irrotational and isentropic flow by introducing the potential $\Phi$ and the perturbation potential $\phi$, normalized by the free-stream velocity $u_\infty$:
$$\Phi = u_\infty x + \phi(x, y, z)$$
Using Bernoulli's equation,
$$\frac{\partial \Phi}{\partial t} + \frac{\nabla\Phi\cdot\nabla\Phi}{2} + \frac{c^2}{\gamma - 1} = \text{constant}$$
which can be rewritten as
$$\frac{\partial \Phi}{\partial t} + \frac{\nabla\Phi\cdot\nabla\Phi}{2} + \frac{c^2}{\gamma - 1} = \frac{u_\infty^2}{2} + \frac{c_\infty^2}{\gamma - 1}$$
where $c$ is the sound speed.
The pressure coefficient becomes
$$C_p = \frac{2}{\gamma M_\infty^2}\left[\left(\frac{c^2}{c_\infty^2}\right)^{\frac{\gamma}{\gamma - 1}} - 1\right]$$
where $c_\infty$ is the far-field sound speed and $c$ is obtained from the Bernoulli relation above.
Local piston theory
The classical piston theory is a powerful aerodynamic tool. From the use of the momentum equation and the assumption of isentropic perturbations, one obtains the following basic piston theory formula for the surface pressure:
$$p = p_\infty\left(1 + \frac{\gamma - 1}{2}\,\frac{w}{c}\right)^{\frac{2\gamma}{\gamma - 1}}$$
where $w$ is the downwash speed and $c$ is the sound speed.
The surface is defined as
$$z = f(x, y, t)$$
The slip velocity boundary condition requires the flow velocity normal to the surface to match the velocity of the surface itself. The downwash speed is approximated as
$$w \approx \frac{\partial f}{\partial t} + u_\infty \frac{\partial f}{\partial x}$$
Hypersonic flow
In hypersonic flow, the pressure coefficient can be accurately calculated for a vehicle using Newton's corpuscular theory of fluid motion, which is inaccurate for low-speed flow and relies on three assumptions:
The flow can be modeled as a stream of particles in rectilinear motion
Upon impact with a surface, all normal momentum is lost
All tangential momentum is conserved, and flow follows the body
For a freestream velocity $V_\infty$ impacting a surface of area $A$, which is inclined at an angle $\theta$ relative to the freestream, the change in normal momentum per unit mass is $V_\infty \sin\theta$ and the mass flux incident on the surface is $\rho_\infty V_\infty A \sin\theta$, with $\rho_\infty$ being the freestream air density. Then the momentum flux, equal to the force $N$ exerted on the surface, from Newton's second law is equal to:
$$N = \rho_\infty V_\infty^2 A \sin^2\theta$$
Dividing by the surface area, it is clear that the force per unit area is equal to the pressure difference between the surface pressure $p$ and the freestream pressure $p_\infty$, leading to the relation:
$$\frac{N}{A} = p - p_\infty = \rho_\infty V_\infty^2 \sin^2\theta$$
The last equation may be identified as the pressure coefficient, meaning that Newtonian theory predicts that the pressure coefficient in hypersonic flow is:
$$C_p = 2\sin^2\theta$$
For very high speed flows, and vehicles with sharp surfaces, the Newtonian theory works very well.
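The Newtonian result lends itself to a one-line evaluation; the sketch below adds the standard "shadow" convention that surfaces facing away from the flow take $C_p = 0$, which is an additional assumption rather than part of the derivation above:

```python
import math

def cp_newtonian(theta_rad):
    """Newtonian C_p = 2 sin^2(theta) for a surface inclined at theta to the
    freestream; surfaces in aerodynamic shadow (theta <= 0) are assigned C_p = 0."""
    return 2.0 * math.sin(theta_rad) ** 2 if theta_rad > 0.0 else 0.0

print(cp_newtonian(math.radians(15.0)))  # 15-degree wedge face: ~0.134
print(cp_newtonian(math.radians(90.0)))  # stagnation point: the Newtonian maximum, 2.0
```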
Modified Newtonian law
A modification to the Newtonian theory, specifically for blunt bodies, was proposed by Lester Lees:
$$C_p = C_{p,\max}\sin^2\theta$$
where $C_{p,\max}$ is the maximum value of the pressure coefficient at the stagnation point behind a normal shock wave:
$$C_{p,\max} = \frac{p_{0,2} - p_\infty}{\tfrac{1}{2}\rho_\infty V_\infty^2} = \frac{2}{\gamma M_\infty^2}\left(\frac{p_{0,2}}{p_\infty} - 1\right)$$
where $p_{0,2}$ is the stagnation pressure behind the shock and $\gamma$ is the ratio of specific heats. The last relation is obtained from the ideal gas law $p = \rho R T$, Mach number $M = V/c$, and speed of sound $c = \sqrt{\gamma R T}$. The Rayleigh pitot tube formula for a calorically perfect normal shock says that the ratio of the stagnation and freestream pressure is:
$$\frac{p_{0,2}}{p_\infty} = \left[\frac{(\gamma + 1)^2 M_\infty^2}{4\gamma M_\infty^2 - 2(\gamma - 1)}\right]^{\frac{\gamma}{\gamma - 1}} \frac{1 - \gamma + 2\gamma M_\infty^2}{\gamma + 1}$$
Therefore, it follows that the maximum pressure coefficient for the modified Newtonian law is:
$$C_{p,\max} = \frac{2}{\gamma M_\infty^2}\left\{\left[\frac{(\gamma + 1)^2 M_\infty^2}{4\gamma M_\infty^2 - 2(\gamma - 1)}\right]^{\frac{\gamma}{\gamma - 1}} \frac{1 - \gamma + 2\gamma M_\infty^2}{\gamma + 1} - 1\right\}$$
In the limit when $M_\infty \to \infty$, the maximum pressure coefficient becomes:
$$C_{p,\max} = \frac{4}{\gamma + 1}\left[\frac{(\gamma + 1)^2}{4\gamma}\right]^{\frac{\gamma}{\gamma - 1}}$$
And as $\gamma \to 1$, $C_{p,\max} \to 2$, recovering the pressure coefficient from Newtonian theory at very high speeds. The modified Newtonian theory is substantially more accurate than the Newtonian model for calculating the pressure distribution over blunt bodies.
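A short numerical check of these limits; the function below simply evaluates the Rayleigh pitot relation quoted above for a calorically perfect gas:

```python
import math

def cp_max(M, gamma=1.4):
    """Stagnation-point pressure coefficient behind a normal shock (valid for M > 1)."""
    p0_over_p = ((gamma + 1)**2 * M**2 / (4.0*gamma*M**2 - 2.0*(gamma - 1.0)))**(gamma/(gamma - 1.0)) \
                * (1.0 - gamma + 2.0*gamma*M**2) / (gamma + 1.0)
    return 2.0 / (gamma * M**2) * (p0_over_p - 1.0)

for M in (2.0, 5.0, 10.0, 25.0):
    print(M, round(cp_max(M), 4))
# The values rise toward the M -> infinity limit, about 1.839 for gamma = 1.4;
# that limit itself tends to 2 as gamma -> 1, recovering the Newtonian result.
```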
Pressure distribution
An airfoil at a given angle of attack will have what is called a pressure distribution. This pressure distribution is simply the pressure at all points around an airfoil. Typically, graphs of these distributions are drawn so that negative numbers are higher on the graph, as the for the upper surface of the airfoil will usually be farther below zero and will hence be the top line on the graph.
Relationship with aerodynamic coefficients
All three aerodynamic coefficients are integrals of the pressure coefficient curve along the chord.
The coefficient of lift for a two-dimensional airfoil section with strictly horizontal surfaces can be calculated from the coefficient of pressure distribution by integration, or by calculating the area between the lines on the distribution (a numerical sketch is given after the list below):
$$c_l = \frac{1}{c}\int_{x_{LE}}^{x_{TE}} \left(C_{p_l}(x) - C_{p_u}(x)\right)\,dx$$
This expression is not suitable for direct numeric integration using the panel method of lift approximation, as it does not take into account the direction of pressure-induced lift. This equation is true only for zero angle of attack.
where:
$C_{p_l}$ is the pressure coefficient on the lower surface
$C_{p_u}$ is the pressure coefficient on the upper surface
$x_{LE}$ is the leading edge location
$x_{TE}$ is the trailing edge location
When the lower surface is higher (more negative) on the distribution it counts as a negative area as this will be producing down force rather than lift.
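A numerical sketch of this integration, using assumed (not measured) chordwise distributions for the two surfaces on a unit chord:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)                 # chordwise stations, unit chord
cp_lower = 0.4 * (1.0 - x)                     # assumed lower-surface C_p
cp_upper = -1.2 * np.sqrt(x) * (1.0 - x)       # assumed upper-surface C_p

# Section lift coefficient as the area between the two curves:
# c_l = (1/c) * integral of (C_p,lower - C_p,upper) dx, with chord c = 1.
cl = np.trapz(cp_lower - cp_upper, x)
print(round(cl, 3))   # ~0.52 for these assumed distributions
```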
See also
Lift coefficient
Drag coefficient
Pitching moment coefficient
References
Further reading
Abbott, I.H. and Von Doenhoff, A.E. (1959) Theory of Wing Sections, Dover Publications, Inc. New York, Standard Book No. 486-60586-8
Anderson, John D (2001) Fundamentals of Aerodynamics, 3rd Edition, McGraw-Hill.
Aerospace engineering
Aircraft aerodynamics
Dimensionless numbers of fluid mechanics
Fluid dynamics | Pressure coefficient | [
"Chemistry",
"Engineering"
] | 1,563 | [
"Piping",
"Aerospace engineering",
"Chemical engineering",
"Fluid dynamics"
] |
852,089 | https://en.wikipedia.org/wiki/Gravitational%20time%20dilation | Gravitational time dilation is a form of time dilation, an actual difference of elapsed time between two events, as measured by observers situated at varying distances from a gravitating mass. The lower the gravitational potential (the closer the clock is to the source of gravitation), the slower time passes, speeding up as the gravitational potential increases (the clock moving away from the source of gravitation). Albert Einstein originally predicted this in his theory of relativity, and it has since been confirmed by tests of general relativity.
This effect has been demonstrated by noting that atomic clocks at differing altitudes (and thus different gravitational potential) will eventually show different times. The effects detected in such Earth-bound experiments are extremely small, with differences being measured in nanoseconds. Relative to Earth's age in billions of years, Earth's core is in effect 2.5 years younger than its surface. Demonstrating larger effects would require measurements at greater distances from the Earth, or a larger gravitational source.
Gravitational time dilation was first described by Albert Einstein in 1907 as a consequence of special relativity in accelerated frames of reference. In general relativity, it is considered to be a difference in the passage of proper time at different positions as described by a metric tensor of spacetime. The existence of gravitational time dilation was first confirmed directly by the Pound–Rebka experiment in 1959, and later refined by Gravity Probe A and other experiments.
Gravitational time dilation is closely related to gravitational redshift, in which the closer a body emitting light of constant frequency is to a gravitating body, the more its time is slowed by gravitational time dilation, and the lower (more "redshifted") would seem to be the frequency of the emitted light, as measured by a fixed observer.
Definition
Clocks that are far from massive bodies (or at higher gravitational potentials) run more quickly, and clocks close to massive bodies (or at lower gravitational potentials) run more slowly. For example, considered over the total time-span of Earth (4.6 billion years), a clock set in a geostationary position at an altitude of 9,000 meters above sea level, such as perhaps at the top of Mount Everest (prominence 8,848m), would be about 39 hours ahead of a clock set at sea level. This is because gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle, in the gravitational field of massive objects.
According to general relativity, inertial mass and gravitational mass are the same, and all accelerated reference frames (such as a uniformly rotating reference frame with its proper time dilation) are physically equivalent to a gravitational field of the same strength.
Consider a family of observers along a straight "vertical" line, each of whom experiences a distinct constant g-force directed along this line (e.g., a long accelerating spacecraft, a skyscraper, a shaft on a planet). Let $g(h)$ be the dependence of g-force on "height" $h$, a coordinate along the aforementioned line. The equation with respect to a base observer at $h = 0$ is
$$T_d(h) = \exp\left[\frac{1}{c^2}\int_0^h g(h')\,dh'\right]$$
where $T_d(h)$ is the total time dilation at a distant position $h$, $g(h)$ is the dependence of g-force on "height" $h$, $c$ is the speed of light, and $\exp$ denotes exponentiation by $e$.
For simplicity, in a Rindler family of observers in a flat spacetime, the dependence would be
$$g(h) = \frac{c^2}{H + h}$$
with constant $H$, which yields
$$T_d(h) = \exp\left[\int_0^h \frac{dh'}{H + h'}\right] = \frac{H + h}{H}\,.$$
On the other hand, when $g$ is nearly constant and $gh$ is much smaller than $c^2$, the linear "weak field" approximation $T_d = 1 + gh/c^2$ can also be used.
See Ehrenfest paradox for application of the same formula to a rotating reference frame in flat spacetime.
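A short numerical comparison of the exact exponential form and its weak-field linearization, for an illustrative 100 m height difference at Earth-surface gravity:

```python
import math

c = 2.998e8   # speed of light, m/s
g = 9.81      # nearly constant g-force, m/s^2
h = 100.0     # height difference, m

Td_exact = math.exp(g * h / c**2)   # exponential form
Td_linear = 1.0 + g * h / c**2      # weak-field approximation

print(Td_exact - 1.0, Td_linear - 1.0)   # both ~1.09e-14, i.e. ~34 microseconds per century
```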
Outside a non-rotating sphere
A common equation used to determine gravitational time dilation is derived from the Schwarzschild metric, which describes spacetime in the vicinity of a non-rotating massive spherically symmetric object. The equation is
$$t_0 = t_f\sqrt{1 - \frac{2GM}{rc^2}} = t_f\sqrt{1 - \frac{r_s}{r}} = t_f\sqrt{1 - \left(\frac{v_e}{c}\right)^2} = t_f\sqrt{1 - \beta_e^2}$$
where
$t_0$ is the proper time between two events for an observer close to the massive sphere, i.e. deep within the gravitational field
$t_f$ is the coordinate time between the events for an observer at an arbitrarily large distance from the massive object (this assumes the far-away observer is using Schwarzschild coordinates, a coordinate system where a clock at infinite distance from the massive sphere would tick at one second per second of coordinate time, while closer clocks would tick at less than that rate),
$G$ is the gravitational constant,
$M$ is the mass of the object creating the gravitational field,
$r$ is the radial coordinate of the observer within the gravitational field (this coordinate is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate; the equation in this form has real solutions for $r > r_s$),
$c$ is the speed of light,
$r_s = 2GM/c^2$ is the Schwarzschild radius of $M$,
$v_e = \sqrt{2GM/r}$ is the escape velocity, and
$\beta_e = v_e/c$ is the escape velocity, expressed as a fraction of the speed of light $c$.
To illustrate then, without accounting for the effects of rotation, proximity to Earth's gravitational well will cause a clock on the planet's surface to accumulate around 0.0219 fewer seconds over a period of one year than would a distant observer's clock. In comparison, a clock on the surface of the Sun will accumulate around 66.4 fewer seconds in one year.
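The quoted figures can be reproduced with a few lines (approximate physical constants assumed; small differences in the constants shift the last digits):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
YEAR = 365.25 * 24 * 3600.0

def seconds_lost_per_year(M, r):
    """Seconds a clock at Schwarzschild radial coordinate r lags a distant
    observer's clock over one year, for a non-rotating mass M."""
    return (1.0 - math.sqrt(1.0 - 2.0 * G * M / (r * c * c))) * YEAR

print(seconds_lost_per_year(5.972e24, 6.371e6))  # Earth's surface: ~0.022 s
print(seconds_lost_per_year(1.989e30, 6.957e8))  # Sun's surface:   ~66-67 s
```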
Circular orbits
In the Schwarzschild metric, free-falling objects can be in circular orbits if the orbital radius is larger than $\tfrac{3}{2} r_s$ (the radius of the photon sphere). The formula for a clock at rest is given above; the formula below gives the general relativistic time dilation for a clock in a circular orbit:
$$t_0 = t_f\sqrt{1 - \frac{3}{2}\cdot\frac{r_s}{r}}$$
Both dilations are shown in the figure below.
Important features of gravitational time dilation
According to the general theory of relativity, gravitational time dilation is copresent with the existence of an accelerated reference frame. Additionally, all physical phenomena in similar circumstances undergo time dilation equally according to the equivalence principle used in the general theory of relativity.
The speed of light in a locale is always equal to c according to the observer who is there. That is, every infinitesimal region of spacetime may be assigned its own proper time, and the speed of light according to the proper time at that region is always c. This is the case whether or not a given region is occupied by an observer. A time delay can be measured for photons which are emitted from Earth, bend near the Sun, travel to Venus, and then return to Earth along a similar path. There is no violation of the constancy of the speed of light here, as any observer observing the speed of photons in their region will find the speed of those photons to be c, while the speed at which we observe light travel finite distances in the vicinity of the Sun will differ from c.
If an observer is able to track the light in a remote, distant locale which intercepts a remote, time dilated observer nearer to a more massive body, that first observer tracks that both the remote light and that remote time dilated observer have a slower time clock than other light which is coming to the first observer at c, like all other light the first observer really can observe (at their own location). If the other, remote light eventually intercepts the first observer, it too will be measured at c by the first observer.
Gravitational time dilation in a gravitational well is equal to the velocity time dilation for the speed that is needed to escape that gravitational well (given that the metric is of the form $g = g_{tt}\,dt^2 + g_{ij}\,dx^i dx^j$, i.e. it is time invariant and there are no "movement" terms $dt\,dx^i$). To show that, one can apply Noether's theorem to a body that freely falls into the well from infinity. Then the time invariance of the metric implies conservation of the quantity $g_{tt}v^t$, where $v^t$ is the time component of the 4-velocity of the body. At infinity $g_{tt} = 1$ and $v^t = 1$, so $g_{tt}v^t = 1$ throughout the fall; in coordinates adjusted to the local time dilation this gives a local Lorentz factor $\sqrt{g_{tt}}\,v^t = 1/\sqrt{g_{tt}}$; that is, the time dilation due to acquired velocity (as measured at the falling body's position) equals the gravitational time dilation in the well the body fell into. Applying this argument more generally one gets that (under the same assumptions on the metric) the relative gravitational time dilation between two points equals the time dilation due to the velocity needed to climb from the lower point to the higher.
Experimental confirmation
Gravitational time dilation has been experimentally measured using atomic clocks on airplanes, such as the Hafele–Keating experiment. The clocks aboard the airplanes were slightly faster than clocks on the ground. The effect is significant enough that the Global Positioning System's artificial satellites need to have their clocks corrected.
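The size of the GPS correction can be estimated by combining the gravitational effect with the orbital-velocity effect (approximate constants; Earth's rotation and oblateness are ignored in this sketch):

```python
import math

G, c = 6.674e-11, 2.998e8
M, R_earth = 5.972e24, 6.371e6      # Earth's mass and mean radius
r_gps = 2.6571e7                    # GPS orbital radius, m (~20,200 km altitude)
v_gps = math.sqrt(G * M / r_gps)    # circular orbital speed, ~3.9 km/s

grav = G * M / c**2 * (1.0 / R_earth - 1.0 / r_gps)  # satellite clock runs fast
vel = 0.5 * (v_gps / c)**2                           # satellite clock runs slow

print((grav - vel) * 86400.0 * 1e6)  # net drift: ~38 microseconds gained per day
```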
Additionally, time dilations due to height differences of less than one metre have been experimentally verified in the laboratory.
Gravitational time dilation in the form of gravitational redshift has also been confirmed by the Pound–Rebka experiment and observations of the spectra of the white dwarf Sirius B.
Gravitational time dilation has been measured in experiments with time signals sent to and from the Viking 1 Mars lander.
See also
Clock hypothesis
Gravitational redshift
Hafele–Keating experiment
Relative velocity time dilation
Twin paradox
Barycentric Coordinate Time
References
Further reading
Effects of gravity
Theory of relativity | Gravitational time dilation | [
"Physics"
] | 1,878 | [
"Theory of relativity"
] |
852,522 | https://en.wikipedia.org/wiki/Representation%20of%20a%20Lie%20superalgebra | In the mathematical field of representation theory, a representation of a Lie superalgebra is an action of a Lie superalgebra L on a Z2-graded vector space V, such that if A and B are any two pure elements of L and X and Y are any two pure elements of V, then
$$(c_1 A + c_2 B)[X] = c_1 A[X] + c_2 B[X]$$
$$A[c_1 X + c_2 Y] = c_1 A[X] + c_2 A[Y]$$
$$[A, B][X] = A[B[X]] - (-1)^{AB}\, B[A[X]]$$
Equivalently, a representation of L is a Z2-graded representation of the universal enveloping algebra of L which respects the third equation above.
Unitary representation of a star Lie superalgebra
A * Lie superalgebra is a complex Lie superalgebra equipped with an involutive antilinear map $*$ such that $*$ respects the grading and
$$[a, b]^* = [b^*, a^*].$$
A unitary representation of such a Lie algebra is a Z2 graded Hilbert space which is a representation of a Lie superalgebra as above together with the requirement that self-adjoint elements of the Lie superalgebra are represented by Hermitian transformations.
This is a major concept in the study of supersymmetry, together with representation of a Lie superalgebra on an algebra. Say A is a *-algebra representation of the Lie superalgebra (together with the additional requirement that $*$ respects the grading and $L[a]^* = -(-1)^{La}\,L^*[a^*]$), and H is the unitary representation; also, H is a unitary representation of A.
These three representations are all compatible if, for pure elements a in A, $|\psi\rangle$ in H, and L in the Lie superalgebra,
$$L[a|\psi\rangle] = (L[a])|\psi\rangle + (-1)^{La}\, a\,(L[|\psi\rangle]).$$
Sometimes, the Lie superalgebra is embedded within A in the sense that there is a homomorphism from the universal enveloping algebra of the Lie superalgebra to A. In that case, the equation above reduces to
$$L[a] = La - (-1)^{La}\,aL.$$
This approach avoids working directly with a Lie supergroup, and hence avoids the use of auxiliary Grassmann numbers.
See also
Graded vector space
Lie algebra representation
Representation theory of Hopf algebras
Representation theory of Lie algebras
Supersymmetry | Representation of a Lie superalgebra | [
"Physics",
"Mathematics"
] | 468 | [
"Algebra stubs",
"Algebra",
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum physics stubs",
"Physics beyond the Standard Model",
"Supersymmetry",
"Symmetry"
] |
852,588 | https://en.wikipedia.org/wiki/List%20of%20quantum%20field%20theories | This is a list of quantum field theories. The first few sections are organized according to their matter content, that is, the types of fields appearing in the theory. This is just one of many ways to organize quantum field theories, but reflects the way the subject is taught pedagogically.
Scalar field theory
Theories whose matter content consists of only scalar fields
Klein-Gordon: free scalar field theory
φ4 theory
Sine-Gordon
Toda field theory
Spinor field theory
Theories whose matter content consists only of spinor fields
Dirac theory: free spinor field theory
Thirring model
Nambu–Jona-Lasinio model
Gross–Neveu model
Gauge field theory
Theories whose matter content consists only of gauge fields
Yang–Mills theory
Proca theory
Chern–Simons theory
Interacting theories
Spinor and scalar
Yukawa model
Scalar and gauge
Scalar electrodynamics
Scalar chromodynamics
Yang–Mills–Higgs
Spinor and gauge
Quantum electrodynamics (QED)
Schwinger model (1+1D case of QED)
Quantum chromodynamics (QCD)
Scalar, spinor and gauge
Standard Model
Sigma models
Chiral model
Non-linear sigma model
Wess–Zumino–Witten model
Supersymmetric quantum field theories
Wess–Zumino model
Supersymmetric Yang–Mills
4D N = 1 global supersymmetry
Seiberg–Witten theory
Super QCD (sQCD)
Superconformal quantum field theories
N = 4 supersymmetric Yang–Mills theory
ABJM superconformal field theory
6D (2,0) superconformal field theory
Supergravity quantum field theories
Pure 4D N = 1 supergravity
4D N = 1 supergravity
Type I supergravity
Type IIA supergravity
Type IIB supergravity
Eleven-dimensional supergravity
String theories
Theories studied in the branch of quantum field theory known as string theory. These theories are without supersymmetry.
Polyakov action
Nambu-Goto action
Bosonic string theory
Other quantum field theories
Kondo model (s-d model)
Minimal model (Virasoro minimal model)
Branches of quantum field theory
String theory
Conformal field theory
Supersymmetry
Topological quantum field theory
Noncommutative quantum field theory
Local quantum field theory (also known as Algebraic quantum field theory or AQFT)
Quantum field theory
Supersymmetric quantum field theory
String theory | List of quantum field theories | [
"Physics",
"Astronomy"
] | 510 | [
"Quantum field theory",
"Astronomical hypotheses",
"Supersymmetric quantum field theory",
"Quantum mechanics",
"String theory",
"Supersymmetry",
"Symmetry"
] |
24,038,774 | https://en.wikipedia.org/wiki/1964%20PRL%20symmetry%20breaking%20papers | The 1964 PRL symmetry breaking papers were written by three teams who proposed related but different approaches to explain how mass could arise in local gauge theories. These three papers were written by: Robert Brout and François Englert; Peter Higgs; and Gerald Guralnik, C. Richard Hagen, and Tom Kibble (GHK). They are credited with the theory of the Higgs mechanism and the prediction of the Higgs field and Higgs boson. Together, these provide a theoretical means by which Goldstone's theorem (a problematic limitation affecting early modern particle physics theories) can be avoided. They showed how gauge bosons can acquire non-zero masses as a result of spontaneous symmetry breaking within gauge invariant models of the universe.
As such, these form the key element of the electroweak theory that forms part of the Standard Model of particle physics, and of many models, such as the Grand Unified Theory, that go beyond it. The papers that introduce this mechanism were published in Physical Review Letters (PRL) and were each recognized as milestone papers by PRL's 50th anniversary celebration. All of the six physicists were awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work; Brout, Englert and Higgs received the 2004 Wolf Prize in Physics; and in 2013 Englert and Higgs received the Nobel Prize in Physics.
On 4 July 2012, the two main experiments at the Large Hadron Collider (ATLAS and CMS) at CERN confirmed independently the existence of a previously unknown particle with a mass of about 125 GeV/c² (about 133 proton masses, on the order of 10⁻²⁵ kg), which is "consistent with the Higgs boson" and widely believed to be the Higgs boson.
Introduction
A gauge theory of elementary particles is a very attractive potential framework for constructing the Grand Unified Theory of physics. Such a theory has the very desirable property of being potentially renormalizable—shorthand for saying that all calculational infinities encountered can be consistently absorbed into a few parameters of the theory. However, as soon as one gives mass to the gauge fields, renormalizability is lost, and the theory rendered useless. Spontaneous symmetry breaking is a promising mechanism, which could be used to give mass to the vector gauge particles. A significant difficulty which one encounters, however, is Goldstone's theorem, which states that in any quantum field theory which has a spontaneously broken symmetry there must occur a zero-mass particle. So the problem arises—how can one break a symmetry and at the same time not introduce unwanted zero-mass particles. The resolution of this dilemma lies in the observation that in the case of gauge theories, the Goldstone theorem can be avoided by working in the so-called radiation gauge. This is because the proof of Goldstone's theorem requires manifest Lorentz covariance, a property not possessed by the radiation gauge.
History
Particle physicists study matter made from fundamental particles whose interactions are mediated by exchange particles known as force carriers. At the beginning of the 1960s a number of these particles had been discovered or proposed, along with theories suggesting how they relate to each other, some of which had already been reformulated as field theories in which the objects of study are not particles and forces, but quantum fields and their symmetries. However, attempts to unify known fundamental forces such as the electromagnetic force and the weak nuclear force were known to be incomplete. One known omission was that gauge invariant approaches, including non-abelian models such as Yang–Mills theory (1954), which held great promise for unified theories, also seemed to predict known massive particles as massless. Goldstone's theorem, relating to continuous symmetries within some theories, also appeared to rule out many obvious solutions, since it appeared to show that zero-mass particles would have to also exist that were "simply not seen". According to Gerald Guralnik, physicists had "no understanding" how these problems could be overcome in 1964. In 2014, Guralnik and Carl Hagen wrote a paper that contended that even after 50 years there is still widespread misunderstanding, by physicists and the Nobel Committee, of the Goldstone boson role. This paper, published in Modern Physics Letters A, turned out to be Guralnik's last published work.
Particle physicist and mathematician Peter Woit summarised the state of research at the time:
"Yang and Mills work on non-abelian gauge theory had one huge problem: in perturbation theory it has massless particles which don't correspond to anything we see. One way of getting rid of this problem is now fairly well-understood, the phenomenon of confinement realized in QCD, where the strong interactions get rid of the massless "gluon" states at long distances. By the very early sixties, people had begun to understand another source of massless particles: spontaneous symmetry breaking of a continuous symmetry. What Philip Anderson realized and worked out in the summer of 1962 was that, when you have both gauge symmetry and spontaneous symmetry breaking, the Nambu–Goldstone massless mode can combine with the massless gauge field modes to produce a physical massive vector field. This is what happens in superconductivity, a subject about which Anderson was (and is) one of the leading experts." [text condensed]
The Higgs mechanism is a process by which vector bosons can get rest mass without explicitly breaking gauge invariance, as a byproduct of spontaneous symmetry breaking. The mathematical theory behind spontaneous symmetry breaking was initially conceived and published within particle physics by Yoichiro Nambu in 1960, the concept that such a mechanism could offer a possible solution for the "mass problem" was originally suggested in 1962 by Philip Anderson, and Abraham Klein and Benjamin Lee showed in March 1964 that Goldstone's theorem could be avoided this way in at least some non-relativistic cases and speculated it might be possible in truly relativistic cases.
These approaches were quickly developed into a full relativistic model, independently and almost simultaneously, by three groups of physicists: by François Englert and Robert Brout in August 1964; by Peter Higgs in October 1964; and by Gerald Guralnik, Carl Hagen, and Tom Kibble (GHK) in November 1964. Higgs also wrote a response published in September 1964 to an objection by Walter Gilbert, which showed that if calculating within the radiation gauge, Goldstone's theorem and Gilbert's objection would become inapplicable. (Higgs later described Gilbert's objection as prompting his own paper.) Properties of the model were further considered by Guralnik in 1965, by Higgs in 1966, by Kibble in 1967, and further by GHK in 1967. The original three 1964 papers showed that when a gauge theory is combined with an additional field that spontaneously breaks the symmetry, the gauge bosons can consistently acquire a finite mass. In 1967, Steven Weinberg and Abdus Salam independently showed how a Higgs mechanism could be used to break the electroweak symmetry of Sheldon Glashow's unified model for the weak and electromagnetic interactions (itself an extension of work by Julian Schwinger), forming what became the Standard Model of particle physics. Weinberg was the first to observe that this would also provide mass terms for the fermions.
However, the seminal papers on spontaneous breaking of gauge symmetries were at first largely ignored, because it was widely believed that the (non-Abelian gauge) theories in question were a dead-end, and in particular that they could not be renormalised. In 1971–1972, Martinus J. G. Veltman and Gerard 't Hooft proved renormalisation of Yang–Mills was possible in two papers covering massless, and then massive, fields. Their contribution, and others' work on the renormalization group, was eventually "enormously profound and influential", but even with all key elements of the eventual theory published there was still almost no wider interest. For example, Sidney Coleman found in a study that "essentially no-one paid any attention" to Weinberg's paper prior to 1971 – now the most cited in particle physics – and even in 1970, according to David Politzer, Glashow's teaching of the weak interaction contained no mention of Weinberg's, Salam's, or Glashow's own work. In practice, Politzer states, almost everyone learned of the theory due to physicist Benjamin Lee, who combined the work of Veltman and 't Hooft with insights by others, and popularised the completed theory. In this way, from 1971, interest and acceptance "exploded" and the ideas were quickly absorbed in the mainstream.
The significance of requiring manifest covariance
Most students who have taken a course in electromagnetism have encountered the Coulomb potential. It basically states that two charged particles attract or repel each other by a force which varies according to the inverse square of their separation. This is fairly unambiguous for particles at rest, but if one or the other is following an arbitrary trajectory the question arises whether one should compute the force using the instantaneous positions of the particles or the so-called retarded positions. The latter recognizes that information cannot propagate instantaneously, rather it propagates at the speed of light. However, the radiation gauge says that one uses the instantaneous positions of the particles, but doesn't violate causality because there are compensating terms in the force equation. In contrast, the Lorenz gauge imposes manifest covariance (and thus causality) at all stages of a calculation. Predictions of observable quantities are identical in the two gauges, but the radiation gauge formulation of quantum field theory avoids Goldstone's theorem.
Summary and impact of the PRL papers
The three papers written in 1964 were each recognised as milestone papers during Physical Review Letters' 50th anniversary celebration. Their six authors were also awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work. (A controversy also arose the same year, because in the event of a Nobel Prize only up to three scientists could be recognised, with six being credited for the papers.) Two of the three PRL papers (by Higgs and by GHK) contained equations for the hypothetical field that eventually would become known as the Higgs field and its hypothetical quantum, the Higgs boson. Higgs's subsequent 1966 paper showed the decay mechanism of the boson; only a massive boson can decay, and the decays can prove the mechanism.
Each of these papers is unique and demonstrates different approaches to showing how mass arises in gauge particles. Over the years, the differences between these papers are no longer widely understood, due to the passage of time and acceptance of end-results by the particle physics community. A study of citation indices is interesting—more than 40 years after the 1964 publication in Physical Review Letters there is little noticeable pattern of preference among them, with the vast majority of researchers in the field mentioning all three milestone papers.
In the paper by Higgs the boson is massive, and in a closing sentence Higgs writes that "an essential feature" of the theory "is the prediction of incomplete multiplets of scalar and vector bosons". (Frank Close comments that 1960s gauge theorists were focused on the problem of massless vector bosons, and the implied existence of a massive scalar boson was not seen as important; only Higgs directly addressed it.) In the paper by GHK the boson is massless and decoupled from the massive states. In reviews dated 2009 and 2011, Guralnik states that in the GHK model the boson is massless only in a lowest-order approximation, but it is not subject to any constraint and acquires mass at higher orders, and adds that the GHK paper was the only one to show that there are no massless Goldstone bosons in the model and to give a complete analysis of the general Higgs mechanism. All three reached similar conclusions, despite their very different approaches: Higgs' paper essentially used classical techniques, Englert and Brout's involved calculating vacuum polarization in perturbation theory around an assumed symmetry-breaking vacuum state, and GHK used operator formalism and conservation laws to explore in depth the ways in which Goldstone's theorem explicitly fails.
In addition to explaining how mass is acquired by vector bosons, the Higgs mechanism also predicts the ratio between the W boson and Z boson masses as well as their couplings with each other and with the Standard Model quarks and leptons. Subsequently, many of these predictions have been verified by precise measurements performed at the Large Electron-Positron Collider (LEP) and the Stanford Linear Collider (SLC), thus overwhelmingly confirming that some kind of Higgs mechanism does take place in nature, but the exact manner by which it happens has not yet been discovered. The results of searching for the Higgs boson are expected to provide evidence about how this is realized in nature.
Consequences of the papers
The resulting electroweak theory and Standard Model have correctly predicted (among other discoveries) weak neutral currents, three bosons, the top and charm quarks, and with great precision, the mass and other properties of some of these. Many of those involved eventually won Nobel Prizes or other renowned awards. A 1974 paper in Reviews of Modern Physics commented that "while no one doubted the [mathematical] correctness of these arguments, no one quite believed that nature was diabolically clever enough to take advantage of them". By 1986 and again in the 1990s it became possible to write that understanding and proving the Higgs sector of the Standard Model was "the central problem today in particle physics."
See also
Notes
References
Further reading
External links
The Hunt for the Higgs at Tevatron
CERN Courier Letter from GHK – December 2008
In CERN Courier, Steven Weinberg reflects on spontaneous symmetry breaking
Blog Not Even Wrong, Review of Massive by Ian Sample
Blog Not Even Wrong, Anderson-Higgs Mechanism
Ian Sample on Controversy and Nobel Reform
PRL symmetry breaking papers
Physics papers
Works originally published in American magazines
1964 documents
Works originally published in science and technology magazines
Standard Model | 1964 PRL symmetry breaking papers | [
"Physics"
] | 2,948 | [
"Standard Model",
"Particle physics"
] |
24,039,902 | https://en.wikipedia.org/wiki/Ectopic%20recombination | Ectopic recombination is an atypical form of recombination in which a crossing over takes place between two homologous DNA sequences located at non-allelic chromosomal positions. Such recombination often results in dramatic chromosomal rearrangement, which is generally harmful to the organism. Some research, however, has suggested that ectopic recombination can result in mutated chromosomes that benefit the organism. Ectopic recombination can occur during both meiosis and mitosis, although it is more likely to occur during meiosis. It occurs relatively frequently—in at least one yeast species (Saccharomyces cerevisiae) the frequency of ectopic recombination is roughly on par with that of allelic (or traditional) recombination. If the alleles at two loci are heterozygous, then ectopic recombination is relatively likely to occur, whereas if the alleles are homozygous, they will almost certainly undergo allelic recombination. Ectopic recombination does not require the loci involved to be close to one another; it can occur between loci that are widely separated on a single chromosome, and has even been known to occur across chromosomes. Neither does it require high levels of homology between sequences—the lower limit required for it to occur has been estimated to be as low as 2.2 kb of homologous stretches of DNA nucleotides.
In tobacco plant somatic cells, DNA double-strand break-induced recombination between ectopic homologous sequences appears to serve as a minor DNA repair pathway for double-strand breaks.
The role of transposable elements in ectopic recombination is an area of active inquiry. Transposable elements—repetitious sequences of DNA that can insert themselves into any part of the genome—can encourage ectopic recombination at repeated homologous sequences of nucleotides. However, according to one proposed model, ectopic recombination might serve as an inhibitor of high transposable element copy numbers. The frequency of ectopic recombination of transposable elements has been linked to both higher copy numbers of transposable elements and the longer lengths of those elements. Since ectopic recombination is generally deleterious, anything that increases its odds of occurring is selected against, including the aforementioned higher copy numbers and longer lengths. This model, however, can only be applied to single families of transposable elements in the genome, as the probability of ectopic recombination occurring in one TE family is independent of it occurring in another. It follows that transposable elements that are shorter, transpose themselves less often, and have mutation rates high enough to disrupt the homology between transposable element sequences sufficiently to prevent ectopic recombination from occurring are selected for.
References
Cellular processes
Modification of genetic information
Molecular genetics | Ectopic recombination | [
"Chemistry",
"Biology"
] | 625 | [
"Modification of genetic information",
"Molecular genetics",
"Cellular processes",
"Molecular biology"
] |
1,307,911 | https://en.wikipedia.org/wiki/Bootstrap%20aggregating | Bootstrap aggregating, also called bagging (from bootstrap aggregating) or bootstrapping, is a machine learning (ML) ensemble meta-algorithm designed to improve the stability and accuracy of ML classification and regression algorithms. It also reduces variance and overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method. Bagging is a special case of the ensemble averaging approach.
Description of the technique
Given a standard training set $D$ of size $n$, bagging generates $m$ new training sets $D_i$, each of size $n'$, by sampling from $D$ uniformly and with replacement. By sampling with replacement, some observations may be repeated in each $D_i$. If $n' = n$, then for large $n$ the set $D_i$ is expected to have the fraction $(1 - 1/e)$ (~63.2%) of the unique samples of $D$, the rest being duplicates. This kind of sample is known as a bootstrap sample. Sampling with replacement ensures each bootstrap is independent from its peers, as it does not depend on previously chosen samples when sampling. Then, $m$ models are fitted using the above $m$ bootstrap samples and combined by averaging the output (for regression) or voting (for classification).
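A minimal sketch of the procedure, using scikit-learn decision trees as the base learner (the dataset and model choices here are illustrative assumptions):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def bagged_fit(X, y, n_models=50):
    """Fit n_models base learners, each on a bootstrap sample of (X, y)."""
    n = len(X)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)  # sample n indices with replacement
        models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return models

def bagged_predict(models, X):
    """Average the ensemble outputs (majority voting would be used for classification)."""
    return np.mean([m.predict(X) for m in models], axis=0)

# Toy regression problem: a noisy sine wave.
X = np.linspace(0.0, 6.0, 200).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0.0, 0.3, 200)
ensemble = bagged_fit(X, y)
print(bagged_predict(ensemble, X[:5]))
```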
Bagging leads to "improvements for unstable procedures", which include, for example, artificial neural networks, classification and regression trees, and subset selection in linear regression. Bagging was shown to improve preimage learning. On the other hand, it can mildly degrade the performance of stable methods such as k-nearest neighbors.
Process of the algorithm
Key Terms
There are three types of datasets in bootstrap aggregating. These are the original, bootstrap, and out-of-bag datasets. Each section below will explain how each dataset is made except for the original dataset. The original dataset is whatever information is given.
Creating the bootstrap dataset
The bootstrap dataset is made by randomly picking objects from the original dataset. It must also be the same size as the original dataset. However, the difference is that the bootstrap dataset can have duplicate objects. Here is a simple example to demonstrate how it works:
Suppose the original dataset is a group of 12 people. Their names are Emily, Jessie, George, Constantine, Lexi, Theodore, John, James, Rachel, Anthony, Ellie, and Jamal.
By randomly picking a group of names, let us say our bootstrap dataset had James, Ellie, Constantine, Lexi, John, Constantine, Theodore, Constantine, Anthony, Lexi, Constantine, and Theodore. In this case, the bootstrap sample contained Constantine four times and Lexi and Theodore twice each.
Creating the out-of-bag dataset
The out-of-bag dataset represents the remaining people who were not in the bootstrap dataset. It can be calculated by taking the difference between the original and the bootstrap datasets. In this case, the remaining samples who were not selected are Emily, Jessie, George, Rachel, and Jamal. Keep in mind that since both datasets are treated as sets, the duplicate names in the bootstrap dataset are ignored when taking the difference.
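The set difference is straightforward to express in code; a Python sketch using the names above (ours):

original = {"Emily", "Jessie", "George", "Constantine", "Lexi", "Theodore",
            "John", "James", "Rachel", "Anthony", "Ellie", "Jamal"}
bootstrap = ["James", "Ellie", "Constantine", "Lexi", "John", "Constantine",
             "Theodore", "Constantine", "Anthony", "Lexi", "Constantine", "Theodore"]
out_of_bag = original - set(bootstrap)  # duplicates in the bootstrap collapse away
print(sorted(out_of_bag))  # ['Emily', 'George', 'Jamal', 'Jessie', 'Rachel']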
Importance
Creating the bootstrap and out-of-bag datasets is crucial since they are used to test the accuracy of ensemble algorithms such as random forest. For example, a model that produces 50 trees using the bootstrap/out-of-bag datasets will generally have better accuracy than one that produces only 10 trees. Since the algorithm generates multiple trees, and therefore multiple bootstrap datasets, the chance that an object is left out of every bootstrap dataset is low. The next few sections describe how the random forest algorithm works in more detail.
Creation of Decision Trees
The next step of the algorithm involves the generation of decision trees from the bootstrapped dataset. To achieve this, the process examines each gene/feature and determines for how many samples the feature's presence or absence yields a positive or negative result. This information is then used to compute a confusion matrix, which lists the true positives, false positives, true negatives, and false negatives of the feature when used as a classifier. These features are then ranked according to various classification metrics based on their confusion matrices. Some common metrics include estimate of positive correctness (calculated by subtracting false positives from true positives), measure of "goodness", and information gain. These features are then used to partition the samples into two sets: those who possess the top feature, and those who do not.
The diagram below shows a decision tree of depth two being used to classify data. For example, a data point that exhibits Feature 1, but not Feature 2, will be given a "No". Another point that does not exhibit Feature 1, but does exhibit Feature 3, will be given a "Yes".
This process is repeated recursively for successive levels of the tree until the desired depth is reached. At the very bottom of the tree, samples that test positive for the final feature are generally classified as positive, while those that lack the feature are classified as negative. These trees are then used as predictors to classify new data.
Random Forests
The next part of the algorithm involves introducing yet another element of variability amongst the bootstrapped trees. In addition to each tree only examining a bootstrapped set of samples, only a small but consistent number of unique features are considered when ranking them as classifiers. This means that each tree only knows about the data pertaining to a small constant number of features, and a variable number of samples that is less than or equal to that of the original dataset. Consequently, the trees are more likely to return a wider array of answers, derived from more diverse knowledge. This results in a random forest, which possesses numerous benefits over a single decision tree generated without randomness. In a random forest, each tree "votes" on whether or not to classify a sample as positive based on its features. The sample is then classified based on majority vote. An example of this is given in the diagram below, where the four trees in a random forest vote on whether or not a patient with mutations A, B, F, and G has cancer. Since three out of four trees vote yes, the patient is then classified as cancer positive.
Because of their properties, random forests are considered one of the most accurate data mining algorithms, are less likely to overfit their data, and run quickly and efficiently even for large datasets. They are primarily useful for classification as opposed to regression, which attempts to draw observed connections between statistical variables in a dataset. This makes random forests particularly useful in such fields as banking, healthcare, the stock market, and e-commerce where it is important to be able to predict future results based on past data. One of their applications would be as a useful tool for predicting cancer based on genetic factors, as seen in the above example.
There are several important factors to consider when designing a random forest. If the trees in the random forests are too deep, overfitting can still occur due to over-specificity. If the forest is too large, the algorithm may become less efficient due to an increased runtime. Random forests also do not generally perform well when given sparse data with little variability. However, they still have numerous advantages over similar data classification algorithms such as neural networks, as they are much easier to interpret and generally require less data for training. As an integral component of random forests, bootstrap aggregating is very important to classification algorithms, and provides a critical element of variability that allows for increased accuracy when analyzing new data, as discussed below.
Improving Random Forests and Bagging
While the techniques described above utilize random forests and bagging (otherwise known as bootstrapping), there are certain techniques that can be used in order to improve their execution and voting time, their prediction accuracy, and their overall performance. The following are key steps in creating an efficient random forest:
Specify the maximum depth of trees: Instead of allowing the random forest to continue until all nodes are pure, it is better to cut it off at a certain point in order to further decrease chances of overfitting.
Prune the dataset: Using an extremely large dataset may create results that are less indicative of the data provided than a smaller set that more accurately represents what is being focused on.
Continue pruning the data at each node split rather than just in the original bagging process.
Decide on accuracy or speed: Depending on the desired results, increasing or decreasing the number of trees within the forest can help. Increasing the number of trees generally provides more accurate results while decreasing the number of trees will provide quicker results.
Algorithm (classification)
For classification, use a training set D, an inducer I, and the number of bootstrap samples m as input. Generate a classifier C* as output
Create m new training sets D_i by sampling from D with replacement
The classifier C_i is built from each set D_i using I to determine the classification of set D_i
Finally, the classifier C* is generated by using the previously created set of classifiers C_i on the original dataset D; the classification predicted most often by the sub-classifiers is the final classification
for i = 1 to m {
D' = bootstrap sample from D (sample with replacement)
Ci = I(D')
}
C*(x) = argmax #{i:Ci(x)=y} (most often predicted label y)
y∈Y
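A runnable Python sketch of this procedure (ours; a hand-rolled illustration — in practice a library implementation such as scikit-learn's BaggingClassifier would normally be used):

import random
from collections import Counter

def bagging(D, I, m):
    """D: list of (x, y) training pairs; I: inducer mapping a dataset to a
    classifier (a function x -> label); m: number of bootstrap samples."""
    classifiers = []
    for _ in range(m):
        D_prime = [random.choice(D) for _ in range(len(D))]  # bootstrap sample
        classifiers.append(I(D_prime))
    def C_star(x):
        votes = Counter(C(x) for C in classifiers)
        return votes.most_common(1)[0][0]  # label predicted most often wins
    return C_star

# Toy inducer: ignore x and always predict the sample's majority label.
I = lambda D_prime: (lambda x: Counter(y for _, y in D_prime).most_common(1)[0][0])
C = bagging([(0, "a"), (1, "a"), (2, "b")], I, m=25)
print(C(0))  # almost always "a"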
Example: ozone data
To illustrate the basic principles of bagging, below is an analysis on the relationship between ozone and temperature (data from Rousseeuw and Leroy (1986), analysis done in R).
The relationship between temperature and ozone appears to be nonlinear in this dataset, based on the scatter plot. To mathematically describe this relationship, LOESS smoothers (with bandwidth 0.5) are used. Rather than building a single smoother for the complete dataset, 100 bootstrap samples were drawn. Each sample is drawn with replacement from the original data and maintains a semblance of the master set's distribution and variability. For each bootstrap sample, a LOESS smoother was fit. Predictions from these 100 smoothers were then made across the range of the data. The black lines represent these initial predictions. The lines lack agreement in their predictions and tend to overfit their data points, as is evident from their wobbly flow.
By taking the average of 100 smoothers, each corresponding to a subset of the original dataset, we arrive at one bagged predictor (red line). The red line's flow is stable and does not overly conform to any data point(s).
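A minimal Python analog of this analysis (ours; the original was done in R, and we assume statsmodels' lowess smoother plus synthetic data in place of the Rousseeuw–Leroy ozone dataset):

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
temp = rng.uniform(50, 100, 111)                               # stand-in predictor
ozone = 0.02 * (temp - 50) ** 2 + rng.normal(0, 4, temp.size)  # nonlinear trend + noise

grid = np.linspace(temp.min(), temp.max(), 200)
fits = []
for _ in range(100):                               # 100 bootstrap smoothers
    idx = rng.integers(0, temp.size, temp.size)    # resample with replacement
    sm = lowess(ozone[idx], temp[idx], frac=0.5)   # LOESS, bandwidth 0.5
    fits.append(np.interp(grid, sm[:, 0], sm[:, 1]))
bagged = np.mean(fits, axis=0)  # the stable bagged predictor (the "red line")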
Advantages and disadvantages
Advantages:
Many weak learners aggregated typically outperform a single learner over the entire set, and have less overfit
Reduces variance in high-variance low-bias weak learners, which can improve statistical efficiency
Can be performed in parallel, as each separate bootstrap can be processed on its own before aggregation.
Disadvantages:
For a weak learner with high bias, bagging will also carry high bias into its aggregate
Loss of interpretability of a model.
Can be computationally expensive depending on the dataset.
History
The concept of bootstrap aggregating is derived from the concept of bootstrapping which was developed by Bradley Efron.
Bootstrap aggregating was proposed by Leo Breiman who also coined the abbreviated term "bagging" (bootstrap aggregating). Breiman developed the concept of bagging in 1994 to improve classification by combining classifications of randomly generated training sets. He argued, "If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy".
See also
Boosting (machine learning)
Bootstrapping (statistics)
Cross-validation (statistics)
Out-of-bag error
Random forest
Random subspace method (attribute bagging)
Resampled efficient frontier
Predictive analysis: Classification and regression trees
References
Further reading
Ensemble learning
Machine learning algorithms
Computational statistics | Bootstrap aggregating | [
"Mathematics"
] | 2,488 | [
"Computational statistics",
"Computational mathematics"
] |
1,309,860 | https://en.wikipedia.org/wiki/Carbonatation | Carbonatation is a chemical reaction in which calcium hydroxide reacts with carbon dioxide and forms insoluble calcium carbonate:
Ca(OH)2 + CO2 → CaCO3 + H2O
The process of forming a carbonate is sometimes referred to as "carbonation", although this term usually refers to the process of dissolving carbon dioxide in water.
Concrete
Carbonatation is a slow process that occurs in concrete where lime (CaO, or Ca(OH)2(aq)) in the cement reacts with carbon dioxide (CO2) from the air and forms calcium carbonate.
The water in the pores of Portland cement concrete is normally alkaline with a pH in the range of 12.5 to 13.5. This highly alkaline environment is one in which the steel rebar is passivated and is protected from corrosion. According to the Pourbaix diagram for iron, the metal is passive when the pH is above 9.5.
The carbon dioxide in the air reacts with the alkali in the cement and makes the pore water more acidic, thus lowering the pH. Carbon dioxide will start to carbonatate the cement in the concrete from the moment the object is made. The carbonatation process starts at the surface, then slowly moves deeper and deeper into the concrete. The rate of carbonatation is dependent on the relative humidity of the concrete - a 50% relative humidity being optimal. If the object is cracked, the carbon dioxide in the air will be better able to penetrate into the concrete.
Eventually this may lead to corrosion of the rebar and structural damage or failure.
Sugar refining
The carbonatation process is used in the production of sugar from sugar beets.
It involves the introduction of limewater (milk of lime - calcium hydroxide suspension) and carbon dioxide enriched gas into the "raw juice" (the sugar rich liquid prepared from the diffusion stage of the process) to form calcium carbonate and precipitate impurities that are then removed. The whole process takes place in "carbonatation tanks" and processing time varies from 20 minutes to an hour.
Carbonatation involves the following effects:
The increase in alkalinity coagulates proteins in the juice.
Calcium carbonate absorbs colourants
Alkalinity destroys some monosaccharide sugars, mostly glucose and fructose
The target is a large particle that naturally settles rapidly to leave a clear juice. The juice at the end is approximately 15 °Bx and 90% sucrose. The pH of the thin juice produced is a balance between removing as much calcium as possible from the solution and the expected pH drop across later processing. If the juice goes acidic in the crystallisation stages, sucrose rapidly breaks down to glucose and fructose; not only do glucose and fructose affect crystallisation, but they are 'molassagenic', carrying equivalent amounts of sucrose on to the molasses stage.
The carbon dioxide gas bubbled through the mixture forms calcium carbonate. The non-sugar solids are incorporated into the calcium carbonate particles and removed by natural (or assisted) sedimentation in tanks or clarifiers.
There are several systems of carbonatation, named from the companies that first developed them. They differ in how the lime is introduced, the temperature and duration of each stage, and the separation of the solids from the liquid.
Dorr (also Dorr-Oliver) - a continuous process using two tanks with recycling ("1st carbonatation") to build up particle size for natural flocculation. The recycling ratio is about 7:1. The particles are separated under gravity in a thickening stage in a clarifier. The clear juice is then gassed further in another tank ("2nd carbonatation") and filtered. The concentrated mud (underflow) from the clarifier is filtered and/or pressed to recover more liquid. The Dorr process is low in maintenance and man-power but susceptible to filtration problems when frost damaged beets are processed. It is favoured in the UK and the USA.
DDS (Det Danske Sukkerfabrik - "The Danish Sugarfactory") -- multistage process involving pre-liming where the pH of the juice is gradually increased to start precipitation of proteins, followed by addition of further lime and CO2 gas. The particles are removed at each stage by filtration.
RT (Raffinerie Tirlemontoise - "Sugar refinery of Tienen") - another multistage process with a pre-liming stage. Particles also removed by filtration.
Both DDS and RT processes are favoured by European factories. The carbonatation system is generally matched to the diffusion scheme; juice from RT diffusers being processed by the RT carbonatation.
The clear juice from carbonatation is generally known as "thin juice". It may undergo pH adjustment with soda ash and addition of sulfur ("sulfitation") prior to the next stage, which is concentration by multiple-effect evaporation.
Water softening
The carbonatation reaction takes place during lime softening (Clark's process) in water softening.
See also
Alkali–silica reaction
Concrete degradation
Phosphatation — a similar process used in sugarcane processing.
References
External links
Sugar Process at the Crystal Sugar website
How Beet Sugar is Made
Processing technology
Chemistry of construction methods
Food industry
Inorganic reactions
Concrete | Carbonatation | [
"Chemistry",
"Engineering"
] | 1,114 | [
"Structural engineering",
"Concrete",
"Inorganic reactions"
] |
138,677 | https://en.wikipedia.org/wiki/Landau%27s%20function | In mathematics, Landau's function g(n), named after Edmund Landau, is defined for every natural number n to be the largest order of an element of the symmetric group Sn. Equivalently, g(n) is the largest least common multiple (lcm) of any partition of n, or the maximum number of times a permutation of n elements can be recursively applied to itself before it returns to its starting sequence.
For instance, 5 = 2 + 3 and lcm(2,3) = 6. No other partition of 5 yields a bigger lcm, so g(5) = 6. An element of order 6 in the group S5 can be written in cycle notation as (1 2) (3 4 5). Note that the same argument applies to the number 6, that is, g(6) = 6. There are arbitrarily long sequences of consecutive numbers n, n + 1, ..., n + m on which the function g is constant.
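A brute-force Python sketch (ours, not part of the article) computes g(n) directly from the definition as the largest lcm over partitions; it is exponential in n and only practical for small n:

from math import lcm

def landau_g(n):
    """Largest lcm of any partition of n (Landau's function)."""
    best = 1
    def search(remaining, min_part, current_lcm):
        nonlocal best
        best = max(best, current_lcm)   # leftover parts of size 1 do not change the lcm
        for part in range(min_part, remaining + 1):
            search(remaining - part, part, lcm(current_lcm, part))
    search(n, 2, 1)
    return best

print([landau_g(n) for n in range(9)])  # [1, 1, 2, 3, 4, 6, 6, 12, 15]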
The integer sequence g(0) = 1, g(1) = 1, g(2) = 2, g(3) = 3, g(4) = 4, g(5) = 6, g(6) = 6, g(7) = 12, g(8) = 15, ... is named after Edmund Landau, who proved in 1902 that
\ln g(n) \sim \sqrt{n \ln n}
(where ln denotes the natural logarithm). Equivalently (using little-o notation), g(n) = e^{(1+o(1))\sqrt{n \ln n}}.
More precisely, if \pi(x) = \operatorname{Li}(x) + O(E(x)), where \pi denotes the prime counting function and \operatorname{Li} the logarithmic integral function with inverse \operatorname{Li}^{-1}, and we may take E(x) = x\,e^{-c(\ln x)^{3/5}(\ln\ln x)^{-1/5}} for some constant c > 0 by Ford, then
\ln g(n) = \sqrt{\operatorname{Li}^{-1}(n)} + O\left(E(\sqrt{n \ln n})\,\ln n\right).
The statement that
\ln g(n) < \sqrt{\operatorname{Li}^{-1}(n)}
for all sufficiently large n is equivalent to the Riemann hypothesis.
It can be shown that
g(n) \le e^{n/e},
with the only equality between the functions at n = 0, and indeed
g(n) \le \exp\left(1.05314\sqrt{n \ln n}\right).
Notes
References
E. Landau, "Über die Maximalordnung der Permutationen gegebenen Grades [On the maximal order of permutations of given degree]", Arch. Math. Phys. Ser. 3, vol. 5, 1903.
W. Miller, "The maximum order of an element of a finite symmetric group", American Mathematical Monthly, vol. 94, 1987, pp. 497–506.
J.-L. Nicolas, "On Landau's function g(n)", in The Mathematics of Paul Erdős, vol. 1, Springer-Verlag, 1997, pp. 228–240.
External links
Group theory
Permutations
Arithmetic functions | Landau's function | [
"Mathematics"
] | 529 | [
"Functions and mappings",
"Permutations",
"Arithmetic functions",
"Mathematical objects",
"Combinatorics",
"Group theory",
"Fields of abstract algebra",
"Mathematical relations",
"Number theory"
] |
139,229 | https://en.wikipedia.org/wiki/Gauss%E2%80%93Bonnet%20theorem | In the mathematical field of differential geometry, the Gauss–Bonnet theorem (or Gauss–Bonnet formula) is a fundamental formula which links the curvature of a surface to its underlying topology.
In the simplest application, the case of a triangle on a plane, the sum of its angles is 180 degrees. The Gauss–Bonnet theorem extends this to more complicated shapes and curved surfaces, connecting the local and global geometries.
The theorem is named after Carl Friedrich Gauss, who developed a version but never published it, and Pierre Ossian Bonnet, who published a special case in 1848.
Statement
Suppose M is a compact two-dimensional Riemannian manifold with boundary ∂M. Let K be the Gaussian curvature of M, and let k_g be the geodesic curvature of ∂M. Then
\int_M K\,dA + \int_{\partial M} k_g\,ds = 2\pi\chi(M),
where dA is the element of area of the surface, and ds is the line element along the boundary of M. Here, \chi(M) is the Euler characteristic of M.
If the boundary is piecewise smooth, then we interpret the integral as the sum of the corresponding integrals along the smooth portions of the boundary, plus the sum of the angles by which the smooth portions turn at the corners of the boundary.
Many standard proofs use the theorem of turning tangents, which states roughly that the winding number of a Jordan curve is exactly ±1.
A simple example
Suppose M is the northern hemisphere cut out from a sphere of radius R. Its Euler characteristic is 1. On the left hand side of the theorem, we have K = 1/R^2 and k_g = 0, because the boundary is the equator and the equator is a geodesic of the sphere. Then \int_M K\,dA = (1/R^2)(2\pi R^2) = 2\pi = 2\pi\chi(M).
On the other hand, suppose we flatten the hemisphere to make it into a disk of radius r. This transformation is a homeomorphism, so the Euler characteristic is still 1. However, on the left hand side of the theorem we now have K = 0 and k_g = 1/r, because a circumference is not a geodesic of the plane. Then \int_{\partial M} k_g\,ds = (1/r)(2\pi r) = 2\pi.
Finally, take a sphere octant, also homeomorphic to the previous cases. Then \int_M K\,dA = (1/R^2)(4\pi R^2/8) = \pi/2. Now k_g = 0 almost everywhere along the border, which is a geodesic triangle. But we have three right-angle corners, each contributing a turning angle of \pi/2, so \pi/2 + 3(\pi/2) = 2\pi.
Interpretation and significance
The theorem applies in particular to compact surfaces without boundary, in which case the integral
\int_{\partial M} k_g\,ds
can be omitted. It states that the total Gaussian curvature of such a closed surface is equal to 2\pi times the Euler characteristic of the surface. Note that for orientable compact surfaces without boundary, the Euler characteristic equals 2 - 2g, where g is the genus of the surface: Any orientable compact surface without boundary is topologically equivalent to a sphere with some handles attached, and g counts the number of handles.
If one bends and deforms the surface M, its Euler characteristic, being a topological invariant, will not change, while the curvatures at some points will. The theorem states, somewhat surprisingly, that the total integral of all curvatures will remain the same, no matter how the deforming is done. So for instance if you have a sphere with a "dent", then its total curvature is 4\pi (the Euler characteristic of a sphere being 2), no matter how big or deep the dent.
Compactness of the surface is of crucial importance. Consider for instance the open unit disc, a non-compact Riemann surface without boundary, with curvature 0 and with Euler characteristic 1: the Gauss–Bonnet formula does not work. It holds true however for the compact closed unit disc, which also has Euler characteristic 1, because of the added boundary integral with value 2\pi.
As an application, a torus has Euler characteristic 0, so its total curvature must also be zero. If the torus carries the ordinary Riemannian metric from its embedding in R^3, then the inside has negative Gaussian curvature, the outside has positive Gaussian curvature, and the total curvature is indeed 0. It is also possible to construct a torus by identifying opposite sides of a square, in which case the Riemannian metric on the torus is flat and has constant curvature 0, again resulting in total curvature 0. It is not possible to specify a Riemannian metric on the torus with everywhere positive or everywhere negative Gaussian curvature.
For triangles
Sometimes the Gauss–Bonnet formula is stated as
\int_T K\,dA = 2\pi - \sum_{i=1}^{3} \alpha_i,
where T is a geodesic triangle and the \alpha_i are its turning angles. Here we define a "triangle" on M to be a simply connected region whose boundary consists of three geodesics. We can then apply GB to the surface T formed by the inside of that triangle and the piecewise boundary of the triangle.
The geodesic curvature of the bordering geodesics is 0, and the Euler characteristic of T is 1.
Hence the sum of the turning angles of the geodesic triangle is equal to 2\pi minus the total curvature within the triangle. Since the turning angle at a corner is equal to \pi minus the interior angle, we can rephrase this as follows:
The sum of interior angles of a geodesic triangle is equal to \pi plus the total curvature enclosed by the triangle: \sum_{i=1}^{3} \theta_i = \pi + \int_T K\,dA.
In the case of the plane (where the Gaussian curvature is 0 and geodesics are straight lines), we recover the familiar formula for the sum of angles in an ordinary triangle. On the standard sphere, where the curvature is everywhere 1, we see that the angle sum of geodesic triangles is always bigger than \pi.
Special cases
A number of earlier results in spherical geometry and hyperbolic geometry, discovered over the preceding centuries, were subsumed as special cases of Gauss–Bonnet.
Triangles
In spherical trigonometry and hyperbolic trigonometry, the area of a triangle is proportional to the amount by which its interior angles fail to add up to 180°, or equivalently by the (inverse) amount by which its exterior angles fail to add up to 360°.
The area of a spherical triangle is proportional to its excess, by Girard's theorem – the amount by which its interior angles add up to more than 180°, which is equal to the amount by which its exterior angles add up to less than 360°.
The area of a hyperbolic triangle, conversely is proportional to its defect, as established by Johann Heinrich Lambert.
Polyhedra
Descartes' theorem on total angular defect of a polyhedron is the piecewise-linear analog:
it states that the sum of the defect at all the vertices of a polyhedron which is homeomorphic to the sphere is 4\pi. More generally, if the polyhedron has Euler characteristic \chi = 2 - 2g (where g is the genus, the "number of holes"), then the sum of the defect is 2\pi\chi.
This is the special case of Gauss–Bonnet in which the curvature is concentrated at discrete points (the vertices). Thinking of curvature as a measure rather than a function, Descartes' theorem is Gauss–Bonnet where the curvature is a discrete measure, and Gauss–Bonnet for measures generalizes both Gauss–Bonnet for smooth manifolds and Descartes' theorem.
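A quick numeric check of Descartes' theorem in Python (our illustration): each of a cube's 8 vertices carries three 90° face angles, so its angular defect is 2π − 3·(π/2) = π/2, and the total is 4π:

import math

defect = 2 * math.pi - 3 * (math.pi / 2)   # angular defect at one cube vertex
total = 8 * defect                          # eight identical vertices
print(total, 4 * math.pi)                   # both 12.566..., i.e. 2*pi*chi with chi = 2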
Combinatorial analog
There are several combinatorial analogs of the Gauss–Bonnet theorem. We state the following one. Let M be a finite 2-dimensional pseudo-manifold. Let \chi(v) denote the number of triangles containing the vertex v. Then
\sum_{v \in \operatorname{int} M} (6 - \chi(v)) + \sum_{v \in \partial M} (3 - \chi(v)) = 6\,\chi(M),
where the first sum ranges over the vertices in the interior of M, the second sum is over the boundary vertices, and \chi(M) is the Euler characteristic of M.
Similar formulas can be obtained for a 2-dimensional pseudo-manifold when we replace triangles with higher polygons. For polygons of n vertices, we must replace 3 and 6 in the formula above with n/(n - 2) and 2n/(n - 2), respectively.
For example, for quadrilaterals we must replace 3 and 6 in the formula above with 2 and 4, respectively. More specifically, if M is a closed 2-dimensional digital manifold, the genus turns out to be
g = 1 + (|M_5| + 2|M_6| - |M_3|)/8,
where M_i indicates the set of surface-points each of which has i adjacent points on the surface. This is the simplest formula of the Gauss–Bonnet theorem in three-dimensional digital space.
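As a sanity check of the triangle version above (our illustration, consistent with the reconstructed formula): the boundary surface of a tetrahedron is a closed pseudo-manifold with \chi(M) = 2 and no boundary vertices, and every vertex lies in exactly 3 triangles.

# Tetrahedron surface: 4 vertices, each contained in 3 triangles, no boundary.
triangles_per_vertex = [3, 3, 3, 3]
interior_sum = sum(6 - t for t in triangles_per_vertex)  # boundary sum is empty
print(interior_sum)  # 12
print(6 * 2)         # 6 * chi(sphere) = 12, matching the formula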
Generalizations
The Chern theorem (after Shiing-Shen Chern 1945) is the 2n-dimensional generalization of GB (also see Chern–Weil homomorphism).
The Riemann–Roch theorem can also be seen as a generalization of GB to complex manifolds.
A far-reaching generalization that includes all the abovementioned theorems is the Atiyah–Singer index theorem.
A generalization to 2-manifolds that need not be compact is Cohn-Vossen's inequality.
In popular culture
In Greg Egan's novel Diaspora, two characters discuss the derivation of this theorem.
The theorem can be used directly as a system to control sculpture - for example, in work by Edmund Harriss in the collection of the University of Arkansas Honors College.
See also
Chern–Gauss–Bonnet theorem
Atiyah–Singer index theorem
References
Further reading
External links
Gauss–Bonnet Theorem at Wolfram Mathworld
Theorems in differential geometry
Riemann surfaces | Gauss–Bonnet theorem | [
"Mathematics"
] | 1,805 | [
"Theorems in differential geometry",
"Theorems in geometry"
] |
139,410 | https://en.wikipedia.org/wiki/Homological%20algebra | Homological algebra is the branch of mathematics that studies homology in a general algebraic setting. It is a relatively young discipline, whose origins can be traced to investigations in combinatorial topology (a precursor to algebraic topology) and abstract algebra (theory of modules and syzygies) at the end of the 19th century, chiefly by Henri Poincaré and David Hilbert.
Homological algebra is the study of homological functors and the intricate algebraic structures that they entail; its development was closely intertwined with the emergence of category theory. A central concept is that of chain complexes, which can be studied through their homology and cohomology.
Homological algebra affords the means to extract information contained in these complexes and present it in the form of homological invariants of rings, modules, topological spaces, and other "tangible" mathematical objects. A spectral sequence is a powerful tool for this.
It has played an enormous role in algebraic topology. Its influence has gradually expanded and presently includes commutative algebra, algebraic geometry, algebraic number theory, representation theory, mathematical physics, operator algebras, complex analysis, and the theory of partial differential equations. K-theory is an independent discipline which draws upon methods of homological algebra, as does the noncommutative geometry of Alain Connes.
History
Homological algebra began to be studied in its most basic form in the 1800s as a branch of topology and in the 1940s became an independent subject with the study of objects such as the ext functor and the tor functor, among others.
Chain complexes and homology
The notion of chain complex is central in homological algebra. An abstract chain complex is a sequence of abelian groups and group homomorphisms,
\cdots \to C_{n+1} \xrightarrow{d_{n+1}} C_n \xrightarrow{d_n} C_{n-1} \to \cdots,
with the property that the composition of any two consecutive maps is zero: d_n \circ d_{n+1} = 0 for all n.
The elements of Cn are called n-chains and the homomorphisms dn are called the boundary maps or differentials. The chain groups Cn may be endowed with extra structure; for example, they may be vector spaces or modules over a fixed ring R. The differentials must preserve the extra structure if it exists; for example, they must be linear maps or homomorphisms of R-modules. For notational convenience, restrict attention to abelian groups (more correctly, to the category Ab of abelian groups); a celebrated theorem by Barry Mitchell implies the results will generalize to any abelian category. Every chain complex defines two further sequences of abelian groups, the cycles Zn = Ker dn and the boundaries Bn = Im dn+1, where Ker d and Im d denote the kernel and the image of d. Since the composition of two consecutive boundary maps is zero, these groups are embedded into each other as
B_n \subseteq Z_n \subseteq C_n.
Subgroups of abelian groups are automatically normal; therefore we can define the nth homology group H_n(C) as the factor group of the n-cycles by the n-boundaries,
H_n(C) = Z_n / B_n.
A chain complex is called acyclic or an exact sequence if all its homology groups are zero.
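Over a field, homology reduces to linear algebra: dim H_n = dim ker d_n − rank d_{n+1}. The following minimal Python/numpy sketch (ours, not from the article; float matrix rank stands in for exact rational rank) computes the Betti numbers of a circle triangulated as a triangle:

import numpy as np

def betti_numbers(dims, d):
    """dims: [dim C_0, ..., dim C_N]; d: dict mapping n to the matrix of
    d_n : C_n -> C_{n-1}.  Returns b_n = dim ker d_n - rank d_{n+1}."""
    rank = lambda M: int(np.linalg.matrix_rank(M)) if M.size else 0
    return [dim - (rank(d[n]) if n in d else 0)          # dim ker d_n = dim - rank d_n
                - (rank(d[n + 1]) if n + 1 in d else 0)  # minus dim im d_{n+1}
            for n, dim in enumerate(dims)]

# A circle triangulated as a triangle: 3 vertices, 3 edges, no 2-cells.
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])           # boundary of each edge: head minus tail
print(betti_numbers([3, 3], {1: d1}))   # [1, 1] -> one component, one loop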
Chain complexes arise in abundance in algebra and algebraic topology. For example, if X is a topological space then the singular chains Cn(X) are formal linear combinations of continuous maps from the standard n-simplex into X; if K is a simplicial complex then the simplicial chains Cn(K) are formal linear combinations of the n-simplices of K; if A = F/R is a presentation of an abelian group A by generators and relations, where F is a free abelian group spanned by the generators and R is the subgroup of relations, then letting C1(A) = R, C0(A) = F, and Cn(A) = 0 for all other n defines a sequence of abelian groups. In all these cases, there are natural differentials dn making Cn into a chain complex, whose homology reflects the structure of the topological space X, the simplicial complex K, or the abelian group A. In the case of topological spaces, we arrive at the notion of singular homology, which plays a fundamental role in investigating the properties of such spaces, for example, manifolds.
On a philosophical level, homological algebra teaches us that certain chain complexes associated with algebraic or geometric objects (topological spaces, simplicial complexes, R-modules) contain a lot of valuable algebraic information about them, with the homology being only the most readily available part. On a technical level, homological algebra provides the tools for manipulating complexes and extracting this information. Here are two general illustrations.
Two objects X and Y are connected by a map f between them. Homological algebra studies the relation, induced by the map f, between chain complexes associated with X and Y and their homology. This is generalized to the case of several objects and maps connecting them. Phrased in the language of category theory, homological algebra studies the functorial properties of various constructions of chain complexes and of the homology of these complexes.
An object X admits multiple descriptions (for example, as a topological space and as a simplicial complex) or the complex is constructed using some 'presentation' of X, which involves non-canonical choices. It is important to know the effect of change in the description of X on chain complexes associated with X. Typically, the complex and its homology are functorial with respect to the presentation; and the homology (although not the complex itself) is actually independent of the presentation chosen, thus it is an invariant of X.
Standard tools
Exact sequences
In the context of group theory, a sequence
G_0 \xrightarrow{f_1} G_1 \xrightarrow{f_2} G_2 \xrightarrow{f_3} \cdots \xrightarrow{f_n} G_n
of groups and group homomorphisms is called exact if the image of each homomorphism is equal to the kernel of the next:
\operatorname{im}(f_k) = \ker(f_{k+1}).
Note that the sequence of groups and homomorphisms may be either finite or infinite.
A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms. More generally, the notion of an exact sequence makes sense in any category with kernels and cokernels.
Short
The most common type of exact sequence is the short exact sequence. This is an exact sequence of the form
A \xrightarrow{f} B \xrightarrow{g} C
where ƒ is a monomorphism and g is an epimorphism. In this case, A is a subobject of B, and the corresponding quotient is isomorphic to C:
C \cong B / f(A)
(where f(A) = im(f)).
A short exact sequence of abelian groups may also be written as an exact sequence with five terms:
0 \to A \xrightarrow{f} B \xrightarrow{g} C \to 0
where 0 represents the zero object, such as the trivial group or a zero-dimensional vector space. The placement of the 0's forces ƒ to be a monomorphism and g to be an epimorphism (see below).
Long
A long exact sequence is an exact sequence indexed by the natural numbers.
Five lemma
Consider the following commutative diagram in any abelian category (such as the category of abelian groups or the category of vector spaces over a given field) or in the category of groups.
The five lemma states that, if the rows are exact, m and p are isomorphisms, l is an epimorphism, and q is a monomorphism, then n is also an isomorphism.
Snake lemma
In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), consider a commutative diagram:
where the rows are exact sequences and 0 is the zero object.
Then there is an exact sequence relating the kernels and cokernels of a, b, and c:
\ker a \to \ker b \to \ker c \xrightarrow{d} \operatorname{coker} a \to \operatorname{coker} b \to \operatorname{coker} c
Furthermore, if the morphism f is a monomorphism, then so is the morphism ker a → ker b, and if g is an epimorphism, then so is coker b → coker c.
Abelian categories
In mathematics, an abelian category is a category in which morphisms and objects can be added and in which kernels and cokernels exist and have desirable properties. The motivating prototype example of an abelian category is the category of abelian groups, Ab. The theory originated in a tentative attempt to unify several cohomology theories by Alexander Grothendieck. Abelian categories are very stable categories, for example they are regular and they satisfy the snake lemma. The class of Abelian categories is closed under several categorical constructions, for example, the category of chain complexes of an Abelian category, or the category of functors from a small category to an Abelian category are Abelian as well. These stability properties make them inevitable in homological algebra and beyond; the theory has major applications in algebraic geometry, cohomology and pure category theory. Abelian categories are named after Niels Henrik Abel.
More concretely, a category is abelian if
it has a zero object,
it has all binary products and binary coproducts, and
it has all kernels and cokernels, and
all monomorphisms and epimorphisms are normal.
Derived functor
Suppose we are given a covariant left exact functor F : A → B between two abelian categories A and B. If 0 → A → B → C → 0 is a short exact sequence in A, then applying F yields the exact sequence 0 → F(A) → F(B) → F(C) and one could ask how to continue this sequence to the right to form a long exact sequence. Strictly speaking, this question is ill-posed, since there are always numerous different ways to continue a given exact sequence to the right. But it turns out that (if A is "nice" enough) there is one canonical way of doing so, given by the right derived functors of F. For every i≥1, there is a functor RiF: A → B, and the above sequence continues like so: 0 → F(A) → F(B) → F(C) → R1F(A) → R1F(B) → R1F(C) → R2F(A) → R2F(B) → ... . From this we see that F is an exact functor if and only if R1F = 0; so in a sense the right derived functors of F measure "how far" F is from being exact.
Ext functor
Let R be a ring and let ModR be the category of modules over R. Let B be in ModR and set T(B) = HomR(A,B), for fixed A in ModR. This is a left exact functor and thus has right derived functors RnT. The Ext functor is defined by
\operatorname{Ext}_R^n(A, B) = (R^n T)(B).
This can be calculated by taking any injective resolution
0 \to B \to I^0 \to I^1 \to \cdots,
and computing
0 \to \operatorname{Hom}_R(A, I^0) \to \operatorname{Hom}_R(A, I^1) \to \cdots.
Then (RnT)(B) is the cohomology of this complex. Note that HomR(A,B) is excluded from the complex.
An alternative definition is given using the functor G(A) = HomR(A,B). For a fixed module B, this is a contravariant left exact functor, and thus we also have right derived functors RnG, and can define
\operatorname{Ext}_R^n(A, B) = (R^n G)(A).
This can be calculated by choosing any projective resolution
\cdots \to P^1 \to P^0 \to A \to 0,
and proceeding dually by computing
0 \to \operatorname{Hom}_R(P^0, B) \to \operatorname{Hom}_R(P^1, B) \to \cdots.
Then (RnG)(A) is the cohomology of this complex. Again note that HomR(A,B) is excluded.
These two constructions turn out to yield isomorphic results, and so both may be used to calculate the Ext functor.
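A worked example (ours, to make the recipe concrete): to compute \operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z}), take the injective resolution
0 \to \mathbb{Z} \to \mathbb{Q} \to \mathbb{Q}/\mathbb{Z} \to 0.
Applying \operatorname{Hom}_{\mathbb{Z}}(\mathbb{Z}/2, -) and dropping the first term gives the complex 0 \to \operatorname{Hom}(\mathbb{Z}/2, \mathbb{Q}) \to \operatorname{Hom}(\mathbb{Z}/2, \mathbb{Q}/\mathbb{Z}) \to 0, i.e. 0 \to 0 \to \mathbb{Z}/2 \to 0, since \mathbb{Q} is torsion-free and the elements of order dividing 2 in \mathbb{Q}/\mathbb{Z} form a copy of \mathbb{Z}/2. Its cohomology yields \operatorname{Ext}^0 = \operatorname{Hom}(\mathbb{Z}/2, \mathbb{Z}) = 0 and \operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z}) \cong \mathbb{Z}/2.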
Tor functor
Suppose R is a ring, and denote by R-Mod the category of left R-modules and by Mod-R the category of right R-modules (if R is commutative, the two categories coincide). Fix a module B in R-Mod. For A in Mod-R, set T(A) = A⊗RB. Then T is a right exact functor from Mod-R to the category of abelian groups Ab (in the case when R is commutative, it is a right exact functor from Mod-R to Mod-R) and its left derived functors LnT are defined. We set
\operatorname{Tor}_n^R(A, B) = (L_n T)(A),
i.e., we take a projective resolution
\cdots \to P_2 \to P_1 \to P_0 \to A \to 0,
then remove the A term and tensor the projective resolution with B to get the complex
\cdots \to P_2 \otimes_R B \to P_1 \otimes_R B \to P_0 \otimes_R B \to 0
(note that A⊗RB does not appear and the last arrow is just the zero map) and take the homology of this complex.
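A worked example (ours): for \operatorname{Tor}_1^{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z}/2), take the projective resolution
0 \to \mathbb{Z} \xrightarrow{\times 2} \mathbb{Z} \to \mathbb{Z}/2 \to 0.
Removing the \mathbb{Z}/2 term and tensoring with \mathbb{Z}/2 gives the complex 0 \to \mathbb{Z}/2 \xrightarrow{0} \mathbb{Z}/2 \to 0, because multiplication by 2 is the zero map on \mathbb{Z}/2. Its homology yields \operatorname{Tor}_0 = \mathbb{Z}/2 \otimes \mathbb{Z}/2 = \mathbb{Z}/2 and \operatorname{Tor}_1^{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z}/2) = \ker(0) \cong \mathbb{Z}/2.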
Spectral sequence
Fix an abelian category, such as a category of modules over a ring. A spectral sequence is a choice of a nonnegative integer r0 and a collection of three sequences:
For all integers r ≥ r0, an object Er, called a sheet (as in a sheet of paper), or sometimes a page or a term,
Endomorphisms dr : Er → Er satisfying dr o dr = 0, called boundary maps or differentials,
Isomorphisms of Er+1 with H(Er), the homology of Er with respect to dr.
A doubly graded spectral sequence has a tremendous amount of data to keep track of, but there is a common visualization technique which makes the structure of the spectral sequence clearer. We have three indices, r, p, and q. For each r, imagine that we have a sheet of graph paper. On this sheet, we will take p to be the horizontal direction and q to be the vertical direction. At each lattice point we have the object E_r^{p,q}.
It is very common for n = p + q to be another natural index in the spectral sequence. n runs diagonally, northwest to southeast, across each sheet. In the homological case, the differentials have bidegree (−r, r − 1), so they decrease n by one. In the cohomological case, n is increased by one. When r is zero, the differential moves objects one space down or up. This is similar to the differential on a chain complex. When r is one, the differential moves objects one space to the left or right. When r is two, the differential moves objects just like a knight's move in chess. For higher r, the differential acts like a generalized knight's move.
Functoriality
A continuous map of topological spaces gives rise to a homomorphism between their nth homology groups for all n. This basic fact of algebraic topology finds a natural explanation through certain properties of chain complexes. Since it is very common to study
several topological spaces simultaneously, in homological algebra one is led to simultaneous consideration of multiple chain complexes.
A morphism F between two chain complexes C and D is a family of homomorphisms of abelian groups F_n : C_n \to D_n that commute with the differentials, in the sense that F_{n-1} \circ d_n^C = d_n^D \circ F_n for all n. A morphism of chain complexes induces a morphism of their homology groups, consisting of the homomorphisms H_n(F) : H_n(C) \to H_n(D) for all n. A morphism F is called a quasi-isomorphism if it induces an isomorphism on the nth homology for all n.
Many constructions of chain complexes arising in algebra and geometry, including singular homology, have the following functoriality property: if two objects X and Y are connected by a map f, then the associated chain complexes are connected by a morphism F(f) : C(X) \to C(Y), and moreover, the composition of maps f : X → Y and g : Y → Z induces the morphism F(g \circ f) : C(X) \to C(Z) that coincides with the composition F(g) \circ F(f). It follows that the homology groups are functorial as well, so that morphisms between algebraic or topological objects give rise to compatible maps between their homology.
The following definition arises from a typical situation in algebra and topology. A triple consisting of three chain complexes L, M, N and two morphisms between them, f : L \to M, g : M \to N, is called an exact triple, or a short exact sequence of complexes, and written as
0 \to L \xrightarrow{f} M \xrightarrow{g} N \to 0,
if for any n, the sequence
0 \to L_n \xrightarrow{f_n} M_n \xrightarrow{g_n} N_n \to 0
is a short exact sequence of abelian groups. By definition, this means that fn is an injection, gn is a surjection, and Im fn = Ker gn. One of the most basic theorems of homological algebra, sometimes known as the zig-zag lemma, states that, in this case, there is a long exact sequence in homology
\cdots \to H_n(L) \xrightarrow{H_n(f)} H_n(M) \xrightarrow{H_n(g)} H_n(N) \xrightarrow{\delta_n} H_{n-1}(L) \to \cdots,
where the homology groups of L, M, and N cyclically follow each other, and \delta_n are certain homomorphisms determined by f and g, called the connecting homomorphisms. Topological manifestations of this theorem include the Mayer–Vietoris sequence and the long exact sequence for relative homology.
Foundational aspects
Cohomology theories have been defined for many different objects such as topological spaces, sheaves, groups, rings, Lie algebras, and C*-algebras. The study of modern algebraic geometry would be almost unthinkable without sheaf cohomology.
Central to homological algebra is the notion of exact sequence; these can be used to perform actual calculations. A classical tool of homological algebra is that of derived functor; the most basic examples are functors Ext and Tor.
With a diverse set of applications in mind, it was natural to try to put the whole subject on a uniform basis. There were several attempts before the subject settled down. An approximate history can be stated as follows:
Cartan-Eilenberg: In their 1956 book "Homological Algebra", these authors used projective and injective module resolutions.
'Tohoku': The approach in a celebrated paper by Alexander Grothendieck which appeared in the Second Series of the Tohoku Mathematical Journal in 1957, using the abelian category concept (to include sheaves of abelian groups).
The derived category of Grothendieck and Verdier. Derived categories date back to Verdier's 1967 thesis. They are examples of triangulated categories used in a number of modern theories.
These move from computability to generality.
The computational sledgehammer par excellence is the spectral sequence; these are essential in the Cartan-Eilenberg and Tohoku approaches where they are needed, for instance, to compute the derived functors of a composition of two functors. Spectral sequences are less essential in the derived category approach, but still play a role whenever concrete computations are necessary.
There have been attempts at 'non-commutative' theories which extend first cohomology as torsors (important in Galois cohomology).
See also
Abstract nonsense, a term for homological algebra and category theory
Derivator
Homotopical algebra
List of homological algebra topics
References
Henri Cartan, Samuel Eilenberg, Homological Algebra. With an appendix by David A. Buchsbaum. Reprint of the 1956 original. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1999. xvi+390 pp.
Saunders Mac Lane, Homology. Reprint of the 1975 edition. Classics in Mathematics. Springer-Verlag, Berlin, 1995. x+422 pp.
Peter Hilton; Stammbach, U. A Course in Homological Algebra. Second edition. Graduate Texts in Mathematics, 4. Springer-Verlag, New York, 1997. xii+364 pp.
Gelfand, Sergei I.; Yuri Manin, Methods of Homological Algebra. Translated from Russian 1988 edition. Second edition. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2003. xx+372 pp.
Gelfand, Sergei I.; Yuri Manin, Homological Algebra. Translated from the 1989 Russian original by the authors. Reprint of the original English edition from the series Encyclopaedia of Mathematical Sciences (Algebra'', V, Encyclopaedia Math. Sci., 38, Springer, Berlin, 1994). Springer-Verlag, Berlin, 1999. iv+222 pp. | Homological algebra | [
"Mathematics"
] | 4,066 | [
"Fields of abstract algebra",
"Mathematical structures",
"Category theory",
"Homological algebra"
] |
13,079,232 | https://en.wikipedia.org/wiki/Lithium%20iron%20phosphate | Lithium iron phosphate or lithium ferro-phosphate (LFP) is an inorganic compound with the formula LiFePO4. It is a gray, red-grey, brown or black solid that is insoluble in water. The material has attracted attention as a component of lithium iron phosphate batteries, a type of Li-ion battery. This battery chemistry is targeted for use in power tools, electric vehicles, solar energy installations and more recently large grid-scale energy storage.
Most lithium batteries (Li-ion) used in consumer electronics products use cathodes made of lithium compounds such as lithium cobalt oxide (LiCoO2), lithium manganese oxide (LiMn2O4), and lithium nickel oxide (LiNiO2). The anodes are generally made of graphite.
Lithium iron phosphate exists naturally in the form of the mineral triphylite, but this material has insufficient purity for use in batteries.
With the general chemical formula LiMPO4, compounds in the LiFePO4 family adopt the olivine structure. M includes not only Fe but also Co, Mn and Ti. As the first commercial LiMPO4 was C/LiFePO4, the whole group of LiMPO4 compounds is informally called "lithium iron phosphate" or "LiFePO4". However, more than one olivine-type phase may be used as a battery's cathode material. Olivine compounds of this family share the crystal structure of LiFePO4 and may replace it in a cathode. All may be referred to as "LFP".
Manganese, phosphate, iron, and lithium also form an olivine structure. This structure is a useful contributor to the cathode of lithium rechargeable batteries. This is due to the olivine structure created when lithium is combined with manganese, iron, and phosphate (as described above). The olivine structures of lithium rechargeable batteries are significant, for they are affordable, stable, and can be safely used to store energy.
History and production
Arumugam Manthiram and John B. Goodenough first identified the polyanion class of cathode materials for lithium ion batteries. LiFePO4 was then identified as a cathode material belonging to the polyanion class for use in batteries in 1996 by Padhi et al. Reversible extraction of lithium from LiFePO4 and insertion of lithium into FePO4 was demonstrated. Neutron diffraction confirmed that LFP can safely sustain the large input/output currents of lithium batteries.
The material can be produced by heating a variety of iron and lithium salts with phosphates or phosphoric acid. Many related routes have been described including those that use hydrothermal synthesis.
Physical and chemical properties
In LiFePO4, lithium has a +1 charge, iron a +2 charge, balancing the −3 charge for phosphate. Upon removal of Li, the material converts to the ferric form FePO4.
The iron atom and 6 oxygen atoms form an octahedral coordination sphere, described as FeO6, with the Fe ion at the center. The phosphate groups, PO4, are tetrahedral. The three-dimensional framework is formed by the FeO6 octahedra sharing O corners. Lithium ions reside within the octahedral channels in a zigzag manner. In crystallography, this structure is thought to belong to the Pmnb space group of the orthorhombic crystal system. The lattice constants are: a = 6.008 Å, b = 10.334 Å, and c = 4.693 Å. The volume of the unit cell is 291.4 Å3.
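Since the cell is orthorhombic, the quoted volume is simply the product of the lattice constants, V = a·b·c; a one-line Python check (ours):

a, b, c = 6.008, 10.334, 4.693  # lattice constants in angstroms
print(a * b * c)                # ~291.4 cubic angstroms, matching the quoted value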
In contrast to two traditional cathode materials, LiMn2O4 and LiCoO2, lithium ions of LiFePO4 migrate in the lattice's one-dimensional free volume. During charge/discharge, the lithium ions are extracted concomitant with oxidation of Fe:
LiFe(II)PO4 ⇌ Fe(III)PO4 + Li+ + e−
Extraction of lithium from LiFePO4 produces FePO4 with a similar structure. FePO4 adopts a Pmnb space group with a unit cell volume of 272.4 Å3, only slightly smaller than that of its lithiated precursor. Extraction of lithium ions reduces the lattice volume, as is the case with lithium oxides. The corner-shared FeO6 octahedra of FePO4 are separated by the oxygen atoms of the PO4 tetrahedra and cannot form a continuous network, reducing conductivity.
A nearly close-packed hexagonal array of oxide centers provides relatively little free volume for Li+ ions to migrate within. For this reason, the ionic conductivity of Li+ is relatively low at ambient temperature. The details of the lithiation of FePO4 and the delithiation of LiFePO4 have been examined. Two phases of the lithiated material are implicated.
Applications
LFP cells have an operating voltage of 3.3 V, charge density of 170 mAh/g, high power density, long cycle life and stability at high temperatures.
LFP's major commercial advantages are that it poses few safety concerns such as overheating and explosion, has long cycle lifetimes and high power density, and offers a wider operating temperature range. Power plants and automobiles use LFP.
BAE has announced that their HybriDrive Orion 7 hybrid bus uses about 180 kW LFP battery cells. AES has developed large-scale battery systems capable of providing ancillary services to the power network, including spare capacity and frequency regulation. In China, BAK and Tianjin Lishen are active in the area.
Safety is a crucial property for certain applications. For example, in 2016 an LFP-based energy storage system was installed in Paiyun Lodge on Mt. Jade (Yushan), the highest alpine lodge in Taiwan. As of 2024, the system is still operating safely.
Comparison
Although LFP has 25% less specific energy (Wh/g) than lithium batteries with oxide cathode materials (e.g. nickel-cobalt-manganese, NCM), primarily due to its lower operational voltage (3.2 volts vs 3.7 for NCM-type cathode chemistries), it has 70% more than nickel–hydrogen batteries.
The major differences between LFP batteries and other lithium-ion battery types is that LFP batteries contain no cobalt (removing ethical and economic questions about cobalt's availability) and have a flat discharge curve.
LFP batteries have drawbacks originating from the high electronic resistivity of LFP, as well as a lower maximum charge/discharge voltage. The energy density is significantly lower than that of LiCoO2 (although higher than that of the nickel–metal hydride battery).
Lithium cobalt oxide based battery chemistries are more prone to thermal runaway if overcharged and cobalt is both expensive and not widely geographically available. Other chemistries such as nickel-manganese-cobalt (NMC) have supplanted LiCo chemistry cells in most applications. The original ratio of Ni to Mn to Co was 3:3:3, whereas today, cells are being made with ratios of 8:1:1 or 6:2:2, whereby the Co content has been drastically reduced.
LiFePO4 batteries are comparable to sealed lead acid batteries and are often touted as a drop-in replacement for lead acid applications. The most notable difference between lithium iron phosphate and lead acid is that the lithium battery capacity shows only a small dependence on the discharge rate. With very high discharge rates, for instance 0.8C, the capacity of the lead acid battery is only 60% of the rated capacity. Therefore, in cyclic applications where the discharge rate is often greater than 0.1C, a lower-rated lithium battery will often have a higher actual capacity than the comparable lead acid battery. This means that at the same capacity rating, the lithium battery will cost more, but a lower-capacity lithium battery can be used for the same application at a lower price. The cost of ownership over the lifecycle further increases the value of the lithium battery compared to a lead acid battery. However, LiFePO4 batteries have much poorer performance at low temperatures, as covered in the section on effects of temperature.
Intellectual property
There are 4 groups of patents on LFP battery materials:
The University of Texas at Austin (UT) patented the materials with the crystalline structure of LiFePO4 and their use in batteries.
Hydro-Québec, Université de Montréal and the French National Center for Scientific Research (CNRS) own patents that claim improvements of the original LiFePO4 by carbon coating that enhances its conductivity.
The key feature of LiFePO4 from A123 Systems is the nano-LFP, which modifies its physical properties and adds noble metals in the anode, as well as the use of special graphite as the cathode.
The main feature of LiFePO4 from Phostech is increased capacitance and conductivity by an appropriate carbon coating. The special feature of LiFePO4 • zM from Aleees is a high capacitance and low impedance obtained by the stable control of the ferrites and crystal growth. This improved control is realized by applying strong mechanical stirring forces to the precursors in high oversaturation states, which induces crystallization of the metal oxides and LFP.
These patents underlie mature mass production technologies. The largest production capacity is up to 250 tons per month.
In patent lawsuits in the US in 2005 and 2006, UT and Hydro-Québec claimed that the use of LiFePO4 as the cathode infringed their patents. The patent claims involved a unique crystal structure and a chemical formula of the battery cathode material.
On April 7, 2006, A123 filed an action seeking a declaration of non-infringement and invalidity of UT's patents. A123 separately filed two ex parte reexamination proceedings before the United States Patent and Trademark Office (USPTO), in which it sought to invalidate the patents based upon prior art.
In a parallel court proceeding, UT sued Valence Technology, a company that commercializes LFP products, alleging infringement.
The USPTO issued a Reexamination Certificate for the '382 patent on April 15, 2008, and for the '640 patent on May 12, 2009, by which the claims of these patents were amended. This allowed the patent infringement suits then filed by Hydro-Quebec against Valence and A123 to proceed. After a Markman hearing, on April 27, 2011, the Western District Court of Texas held that the claims of the reexamined patents had a narrower scope than as originally granted. The key question was whether the earlier Goodenough patents from UT (licensed to Hydro-Quebec) were infringed by A123, which had its own improved versions of LiFePO4 patents containing a cobalt dopant. The end result was the licensing of Goodenough's patents by A123 under undisclosed terms.
On December 9, 2008, the European Patent Office revoked Dr. Goodenough's patent numbered 0904607. This decision substantially reduced the patent risk of using LFP in European automobile applications. The decision is believed to be based on lack of novelty.
The first major settlement was the lawsuit between NTT and UT. In October 2008, NTT announced that it would settle the case in the Japan Supreme Civil Court for $30 million. As part of the agreement, UT agreed that NTT did not steal the information and that NTT would share its LFP patents with UT. NTT's patent is also for an olivine LFP, with a general chemical formula in which A stands for an alkali metal and M for a combination of Co and Fe, now used by BYD Company. Although chemically the materials are nearly the same, from the viewpoint of patents NTT's material is different from the materials covered by UT, and has a higher capacity. At the heart of the case was that NTT engineer Okada Shigeto, who had worked in the UT labs developing the material, was accused of stealing UT's intellectual property.
As of 2020, an organization named LifePO+C claims to own the key IP and offers licenses. It is a consortium between Johnson Matthey, the CNRS, University of Montreal, and Hydro Quebec.
Research
Power density
LFP has two shortcomings: low conductivity (high overpotential) and a low lithium diffusion constant, both of which limit the charge/discharge rate. Adding conducting particles to delithiated FePO4 raises its electron conductivity. For example, adding conducting particles with good diffusion capability like graphite and carbon to LiFePO4 powders significantly improves conductivity between particles, increases the efficiency of LiFePO4 and raises its reversible capacity up to 95% of the theoretical value. However, addition of conductive additives also increases the "dead mass" present in the cell that does not contribute to energy storage. LiFePO4 shows good cycling performance even under charge/discharge currents as large as 5C.
Stability
Coating LFP with inorganic oxides can make its structure more stable and increase conductivity. Traditional LiCoO2 with an oxide coating shows improved cycling performance; the coating also inhibits the dissolution of Co and slows the decay of capacity. Similarly, LFP with an inorganic coating such as ZnO has a better cycling lifetime, larger capacity and better characteristics under rapid discharge. The addition of conductive carbon increases efficiency. Mitsui Zosen and Aleees reported that the addition of conducting metal particles such as copper and silver also increased efficiency. LFP with 1 wt% of metal additives has a reversible capacity of up to 140 mAh/g and better efficiency under high discharge current.
Metal substitution
Substituting other materials for the iron or lithium in LFP can also raise efficiency. Substituting zinc for iron increases the crystallinity of LiFePO4 because zinc and iron have similar ionic radii. Cyclic voltammetry confirms that metal-substituted LiFePO4 has higher reversibility of lithium-ion insertion and extraction. During lithium extraction, Fe(II) is oxidized to Fe(III) and the lattice volume shrinks; the shrinking volume changes the return paths of the lithium ions.
Synthesis processes
Mass production with stability and high quality still faces many challenges.
Similar to lithium oxides, LiFePO4 may be synthesized by a variety of methods, including solid-phase synthesis, emulsion drying, the sol-gel process, solution coprecipitation, vapor-phase deposition, electrochemical synthesis, electron-beam irradiation, the microwave process, hydrothermal synthesis, ultrasonic pyrolysis and spray pyrolysis.
In the emulsion drying process, the emulsifier is first mixed with kerosene; next, the solutions of lithium salts and iron salts are added to this mixture. This process produces nanocarbon particles. Hydrothermal synthesis produces LiFePO4 with good crystallinity, with conductive carbon obtained by adding polyethylene glycol to the solution followed by thermal processing. Vapor-phase deposition produces a thin LiFePO4 film. In flame spray pyrolysis, FePO4 is mixed with lithium carbonate and glucose and charged with electrolytes; the mixture is then injected inside a flame and filtered to collect the synthesized LiFePO4.
Effects of temperature
The effects of temperature on lithium iron phosphate batteries can be divided into the effects of high temperature and low temperature.
Generally, LFP chemistry batteries are less susceptible to thermal runaway reactions like those that occur in lithium cobalt batteries, and LFP batteries exhibit better performance at elevated temperature. Research has shown that at room temperature (23 °C) the initial capacity loss approximates 40-50 mAh/g, whereas at 40 °C and 60 °C the capacity losses approximate 25 and 15 mAh/g respectively; however, those losses were spread over 20 cycles rather than occurring as a bulk loss, as in the room-temperature case.
However, this holds only over short cycling timeframes. A later year-long study showed that, although LFP batteries endure roughly double the number of equivalent full cycles, their capacity fade rate increases with increasing temperature, whereas increasing temperature does not impact NCA cells and has only a negligible impact on the aging of NMC cells. This capacity fade is primarily due to the solid electrolyte interface (SEI) formation reaction being accelerated by increasing temperature.
LFP batteries are especially affected by decreasing temperature, which may hamper their application in high-latitude areas. The initial discharge capacities for LFP/C samples at temperatures of 23, 0, -10, and -20 °C are 141.8, 92.7, 57.9 and 46.7 mAh/g, with coulombic efficiencies of 91.2%, 74.5%, 63.6% and 61.3%, respectively. These losses are accounted for by the slow diffusion of lithium ions within the electrodes and by the formation of SEI at lower temperatures, which increases the charge-transfer resistance at the electrolyte-electrode interfaces. Another possible cause of the lowered capacity is lithium plating. As mentioned above, low temperature lowers the diffusion rate of lithium ions within the electrodes, allowing the lithium plating rate to compete with the intercalation rate. Colder conditions lead to higher plating growth rates and shift the onset of plating to a lower state of charge, which means the plating process starts earlier. Plating consumes lithium that would otherwise intercalate into graphite, decreasing the capacity of the battery. The aggregated lithium is deposited on the surface of the electrodes in the form of "plates" or even dendrites, which may penetrate the separator and short-circuit the battery completely.
See also
Lithium iron phosphate battery
A123 Systems
Valence Technology
References
Lithium compounds
Iron(II) compounds
Phosphates
Rechargeable batteries | Lithium iron phosphate | [
"Chemistry"
] | 3,560 | [
"Phosphates",
"Salts"
] |
13,084,071 | https://en.wikipedia.org/wiki/Magnetic%20anisotropy | In condensed matter physics, magnetic anisotropy describes how an object's magnetic properties can be different depending on direction. In the simplest case, there is no preferential direction for an object's magnetic moment. It will respond to an applied magnetic field in the same way, regardless of which direction the field is applied. This is known as magnetic isotropy. In contrast, magnetically anisotropic materials will be easier or harder to magnetize depending on which way the object is rotated.
For most magnetically anisotropic materials, there are two easiest directions to magnetize the material, which are a 180° rotation apart. The line parallel to these directions is called the easy axis. In other words, the easy axis is an energetically favorable direction of spontaneous magnetization. Because the two opposite directions along an easy axis are usually equivalently easy to magnetize along, the actual direction of magnetization can just as easily settle into either direction, which is an example of spontaneous symmetry breaking.
Magnetic anisotropy is a prerequisite for hysteresis in ferromagnets: without it, a ferromagnet is superparamagnetic.
Sources
The observed magnetic anisotropy in an object can happen for several different reasons. Rather than having a single cause, the overall magnetic anisotropy of a given object is often explained by a combination of these different factors:
Magnetocrystalline anisotropy The atomic structure of a crystal introduces preferential directions for the magnetization.
Shape anisotropy When a particle is not perfectly spherical, the demagnetizing field will not be equal for all directions, creating one or more easy axes.
Magnetoelastic anisotropy Tension may alter magnetic behaviour, leading to magnetic anisotropy.
Exchange anisotropy Occurs when antiferromagnetic and ferromagnetic materials interact.
At the molecular level
The magnetic anisotropy of a benzene ring (A), alkene (B), carbonyl (C), alkyne (D), and a more complex molecule (E) are shown in the figure. Each of these unsaturated functional groups (A-D) create a tiny magnetic field and hence some local anisotropic regions (shown as cones) in which the shielding effects and the chemical shifts are unusual. The bisazo compound (E) shows that the designated proton {H} can appear at different chemical shifts depending on the photoisomerization state of the azo groups. The trans isomer holds proton {H} far from the cone of the benzene ring thus the magnetic anisotropy is not present. While the cis form holds proton {H} in the vicinity of the cone, shields it and decreases its chemical shift. This phenomenon enables a new set of nuclear Overhauser effect (NOE) interactions (shown in red) that come to existence in addition to the previously existing ones (shown in blue).
Single-domain magnet
Suppose that a ferromagnet is single-domain in the strictest sense: the magnetization is uniform and rotates in unison. If the magnetic moment is $\boldsymbol{\mu}$ and the volume of the particle is $V$, the magnetization is $\mathbf{M} = \boldsymbol{\mu}/V = M_s(\alpha, \beta, \gamma)$, where $M_s$ is the saturation magnetization and $\alpha, \beta, \gamma$ are direction cosines (components of a unit vector) so $\alpha^2 + \beta^2 + \gamma^2 = 1$. The energy associated with magnetic anisotropy can depend on the direction cosines in various ways, the most common of which are discussed below.
Uniaxial
A magnetic particle with uniaxial anisotropy has one easy axis. If the easy axis is in the $z$ direction, the anisotropy energy can be expressed as one of the forms:
$E = K V (1 - \gamma^2) = K V \sin^2\theta,$
where $V$ is the volume, $K$ the anisotropy constant, and $\theta$ the angle between the easy axis and the particle's magnetization. When shape anisotropy is explicitly considered, a separate symbol is often used to indicate the anisotropy constant, instead of $K$. In the widely used Stoner–Wohlfarth model, the anisotropy is uniaxial.
Triaxial
A magnetic particle with triaxial anisotropy still has a single easy axis, but it also has a hard axis (direction of maximum energy) and an intermediate axis (direction associated with a saddle point in the energy). The coordinates can be chosen so the energy has the form
$E = K_a V \alpha^2 + K_b V \beta^2.$
If $K_a > K_b > 0$, the easy axis is the $z$ direction, the intermediate axis is the $y$ direction and the hard axis is the $x$ direction.
Cubic
A magnetic particle with cubic anisotropy has three or four easy axes, depending on the anisotropy parameters. The energy has the form
$E = K V \left( \alpha^2 \beta^2 + \beta^2 \gamma^2 + \gamma^2 \alpha^2 \right).$
If $K > 0$, the easy axes are the $x$, $y$ and $z$ axes. If $K < 0$, there are four easy axes, the $\langle 111 \rangle$ body diagonals characterized by $|\alpha| = |\beta| = |\gamma|$.
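As an illustration of how the sign of the anisotropy constant selects the easy axes, the short Python sketch below (not from any source; the values of K and V are arbitrary) evaluates the cubic anisotropy energy along a cube edge and a body diagonal for both signs of K:

```python
import numpy as np

def cubic_anisotropy_energy(K, V, direction):
    """E = K V (a^2 b^2 + b^2 c^2 + c^2 a^2) for a unit direction (a, b, c)."""
    a, b, c = np.asarray(direction, dtype=float) / np.linalg.norm(direction)
    return K * V * (a**2 * b**2 + b**2 * c**2 + c**2 * a**2)

V = 1.0                     # particle volume (arbitrary units)
edge = [1.0, 0.0, 0.0]      # a <100> cube edge
diagonal = [1.0, 1.0, 1.0]  # a <111> body diagonal

for K in (+1.0, -1.0):      # illustrative anisotropy constants
    e_edge = cubic_anisotropy_energy(K, V, edge)
    e_diag = cubic_anisotropy_energy(K, V, diagonal)
    easy = "<100>" if e_edge < e_diag else "<111>"
    print(f"K = {K:+.0f}: E<100> = {e_edge:+.3f}, E<111> = {e_diag:+.3f} -> easy axes {easy}")
```

For K > 0 the edge directions minimise the energy (three easy axes), while for K < 0 the body diagonals do (four easy axes), matching the statement above.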
See also
Fluorescence anisotropy
References
Further reading
Magnetic ordering
Orientation (geometry) | Magnetic anisotropy | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 992 | [
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Topology",
"Space",
"Condensed matter physics",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
13,085,107 | https://en.wikipedia.org/wiki/HMGA | HMGA is a family of high mobility group proteins characterized by an AT-hook. They code for a "small, nonhistone, chromatin-associated protein that has no intrinsic transcriptional activity but can modulate transcription by altering the chromatin architecture". Mammals have two orthologs: HMGA1 and HMGA2.
Genomic distribution
In mouse embryonic stem cells it has been demonstrated that both HMGA proteins bind uniformly to the DNA through their AT-hook domains, with a slight preference for AT-rich regions. Such regions tend to lack coding genes, an observation that argues against a direct role in transcriptional control and, in agreement with previous studies, suggests that these proteins play a structural role in chromatin, similar to histones.
Function
Normally, when cells are subjected to increased DNA damage (such as the formation of 6-O-methylguanine) this causes an increase in apoptosis (programmed cell death). However, cells with diminished activity for either proteins HMGA1 or HMGA2 (or both together) are more tolerant of such DNA damage than cells in which these proteins are not diminished. Thus a normal function of the HMGA1 and HMGA2 proteins appears to be to signal the presence of DNA damage leading to induction of apoptosis.
Association with human traits
Variations in HMGA2 have been found to have a moderate association with adult height.
See also
HMGA1
HMGA2
References
External links
Transcription factors | HMGA | [
"Chemistry",
"Biology"
] | 300 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,085,795 | https://en.wikipedia.org/wiki/SN1CB%20mechanism | In coordination chemistry, the SN1cB (conjugate base) mechanism describes the pathway by which many metal amine complexes undergo substitution, that is, ligand exchange. Typically, the reaction entails reaction of a polyamino metal halide with aqueous base to give the corresponding polyamine metal hydroxide:
[Co(NH3)5Cl]^2+ + OH- -> [Co(NH3)5OH]^2+ + Cl-
The rate law for the reaction is:
rate = k[[Co(NH3)5Cl]^2+][OH-]
The rate law is deceptive: hydroxide serves not as a nucleophile but as a base to deprotonate the coordinated ammonia. Simultaneously with deprotonation, the halide dissociates. Water binds to the coordinatively unsaturated complex, followed by proton transfer to give the hydroxy complex. The conjugate base resulting from deprotonation of the amine is rarely observed.
References
Nucleophilic substitution reactions
Coordination chemistry
Reaction mechanisms | SN1CB mechanism | [
"Chemistry"
] | 209 | [
"Reaction mechanisms",
"Chemical kinetics",
"Coordination chemistry",
"Physical organic chemistry"
] |
18,784,020 | https://en.wikipedia.org/wiki/Cystoderma%20amianthinum | Cystoderma amianthinum, commonly called the common powdercap, saffron parasol, the saffron powder-cap, or the earthy powder-cap, is a small orange-ochre, or yellowish-brown, gilled mushroom. It grows in damp mossy grassland, in coniferous forest clearings, or on wooded heaths. It is probably the most common of the small genus Cystoderma. It is not recommended for consumption due to its resemblance to poisonous species.
Taxonomy
Cystoderma amianthinum was first noted by the Italian-Austrian naturalist Giovanni Antonio Scopoli, who called it Agaricus amianthinus in 1772. The present generic name Cystoderma was erected by Swiss mycologist Victor Fayod in 1889, and is roughly translated as 'blistered skin', and is probably a reference to the appearance of the pellicle (cap skin).
Description
The cap is usually between in diameter, convex to bell-shaped, and later flat with a slight depression around a low umbo (central boss). It is dry and powdery, often with a shaggy or fringed margin (appendiculate), and is saffron-yellow or orange-ochre. The stem is cylindrical, and has a flaky-granular sheath beneath a fleeting, powdery ring. The gills are white initially, and become creamy later. They are adnexed (narrowly attached to the stem), and initially quite crowded. The spore print is white. The flesh is thin and yellowish, with an odor that is unpleasant or resembles husked corn.
A very similar form with a markedly radially wrinkled cap has been separated by some authors and given the binomial Cystoderma rugoso-reticulatum. Some forms have a whitish-yellow cap.
Cystodermella granulosa and Cystodermella cinnabarina are both redder as a rule, and have adnate gills (broadly attached to the stem).
Distribution and habitat
Cystoderma amianthinum is widespread in Europe and North America, and common in northern temperate zones. It occurs in mossy woodland, on heaths, amongst grass or bracken, and sometimes with willow. It is often found on acidic soils.
Edibility
Eating is not advised as the deadly toxic Lepiota castanea is a lookalike.
References
Agaricaceae
Fungi described in 1772
Fungi of Europe
Fungi of North America
Fungus species | Cystoderma amianthinum | [
"Biology"
] | 520 | [
"Fungi",
"Fungus species"
] |
18,788,208 | https://en.wikipedia.org/wiki/Joint%20source%20and%20channel%20coding | In information theory, joint source–channel coding is the encoding of a redundant information source for transmission over a noisy channel, and the corresponding decoding, using a single code instead of the more conventional steps of source coding followed by channel coding.
Joint source–channel coding has been proposed and implemented for a variety of situations, including speech and video transmission.
References
Information theory
Fault tolerance | Joint source and channel coding | [
"Mathematics",
"Technology",
"Engineering"
] | 79 | [
"Telecommunications engineering",
"Reliability engineering",
"Applied mathematics",
"Fault tolerance",
"Computer science",
"Information theory"
] |
21,033,863 | https://en.wikipedia.org/wiki/Flow%20assurance | Flow assurance is a relatively new term in oil and gas industry. It refers to ensuring successful and economical flow of hydrocarbon stream from reservoir to the point of sale. The term was coined by Petrobras in the early 1990s ahead of a DeepStar Program meeting, in Portuguese as Garantia do Escoamento (:pt::Garantia do Escoamento), meaning literally “Guarantee of Flow”, or Flow Assurance.
Flow assurance is extremely diverse, encompassing many discrete and specialized subjects and bridging across the full gamut of engineering disciplines. Besides network modeling and transient multiphase simulation, flow assurance involves effectively handling many solid deposits, such as gas hydrates, asphaltene, wax, scale, and naphthenates. Flow assurance is the most critical task during deep-water energy production because of the high pressures and low temperatures (around 4 °C) involved. The financial loss from production interruption or asset damage due to a flow assurance mishap can be astronomical. What compounds the flow assurance task even further is that these solid deposits can interact with each other and can cause catastrophic blockage formation in pipelines, resulting in flow assurance failure.
Flow assurance includes thermal investigation of pipelines, making sure the temperature is above the hydrate's formation temperature. Other important aspects of flow assurance are the estimation of stable production limits, and evaluation of erosion due to sand and corrosion in pipelines and equipment.
References
Petroleum technology
Natural gas technology
Oilfield terminology | Flow assurance | [
"Chemistry",
"Engineering"
] | 305 | [
"Petroleum engineering",
"Petroleum technology",
"Natural gas technology"
] |
21,038,459 | https://en.wikipedia.org/wiki/Goldbeter%E2%80%93Koshland%20kinetics | The Goldbeter–Koshland kinetics describe a steady-state solution for a 2-state biological system. In this system, the interconversion between these two states is performed by two enzymes with opposing effect. One example would be a protein Z that exists in a phosphorylated form ZP and in an unphosphorylated form Z; the corresponding kinase Y and phosphatase X interconvert the two forms. In this case we would be interested in the equilibrium concentration of the protein Z (Goldbeter–Koshland kinetics only describe equilibrium properties, thus no dynamics can be modeled). It has many applications in the description of biological systems.
The Goldbeter–Koshland kinetics is described by the Goldbeter–Koshland function:
$z = G(v_1, v_2, J_1, J_2) = \frac{2\, v_1 J_2}{B + \sqrt{B^2 - 4\,(v_2 - v_1)\, v_1 J_2}}$
with the constant
$B = v_2 - v_1 + J_1 v_2 + J_2 v_1,$
where $v_1$ and $v_2$ are the rates of the kinase and phosphatase reactions and $J_1$ and $J_2$ are their Michaelis–Menten constants normalised by the total protein concentration (all defined precisely in the derivation below).
Graphically the function takes values between 0 and 1 and has a sigmoid behavior. The smaller the parameters J1 and J2 the steeper the function gets and the more of a switch-like behavior is observed. Goldbeter–Koshland kinetics is an example of ultrasensitivity.
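The switch-like behavior is easy to reproduce numerically. The following Python sketch (illustrative only; the rate and J values are arbitrary, not from the source) implements the function G as written above and evaluates it for a 10% change in the kinase/phosphatase rate ratio:

```python
import math

def goldbeter_koshland(v1, v2, J1, J2):
    """Steady-state phosphorylated fraction z = G(v1, v2, J1, J2)."""
    B = v2 - v1 + J1 * v2 + J2 * v1
    return 2.0 * v1 * J2 / (B + math.sqrt(B**2 - 4.0 * (v2 - v1) * v1 * J2))

# Compare a graded regime (large J) with a switch-like regime (small J).
for J in (1.0, 0.01):
    zs = [goldbeter_koshland(v1, 1.0, J, J) for v1 in (0.9, 1.0, 1.1)]
    print(f"J1 = J2 = {J}: z at v1/v2 = 0.9, 1.0, 1.1 ->",
          ", ".join(f"{z:.2f}" for z in zs))
```

With J1 = J2 = 1 the output moves only from about 0.46 to 0.54 over this range, while with J1 = J2 = 0.01 it jumps from about 0.08 to 0.91, the switch-like (ultrasensitive) behavior described above.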
Derivation
Since equilibrium properties are sought, one can write
$\frac{d[Z_P]}{dt} = 0.$
From Michaelis–Menten kinetics, the rate at which ZP is dephosphorylated is known to be $\frac{k_2 [X][Z_P]}{K_{M2} + [Z_P]}$ and the rate at which Z is phosphorylated is $\frac{k_1 [Y][Z]}{K_{M1} + [Z]}$. Here the KM stand for the Michaelis–Menten constants, which describe how well the enzymes X and Y bind and catalyze the conversion, whereas the kinetic parameters k1 and k2 denote the rate constants for the catalyzed reactions. Assuming that the total concentration of Z is constant, one can additionally write that [Z]0 = [ZP] + [Z]. Equating the two rates and writing $z = [Z_P]/[Z]_0$, one thus gets:
$(v_2 - v_1)\, z^2 - B\, z + v_1 J_2 = 0 \qquad (1)$
with the constants
$v_1 = k_1 [Y], \quad v_2 = k_2 [X], \quad J_1 = \frac{K_{M1}}{[Z]_0}, \quad J_2 = \frac{K_{M2}}{[Z]_0}, \quad B = v_2 - v_1 + J_1 v_2 + J_2 v_1. \qquad (2)$
If we thus solve the quadratic equation (1) for z we get:
$z = \frac{2\, v_1 J_2}{B + \sqrt{B^2 - 4\,(v_2 - v_1)\, v_1 J_2}}. \qquad (3)$
Thus (3) is a solution to the initial equilibrium problem and describes the equilibrium concentrations [ZP] = [Z]0 z and [Z] = [Z]0 (1 − z) as a function of the kinetic parameters of the phosphorylation and dephosphorylation reactions and the concentrations of the kinase and phosphatase. The solution is the Goldbeter–Koshland function with the constants from (2).
Ultrasensitivity of Goldbeter–Koshland modules
The ultrasensitivity (sigmoidality) of a Goldbeter–Koshland module can be measured by its Hill coefficient:
$n_H = \frac{\ln 81}{\ln \left( EC_{90} / EC_{10} \right)},$
where EC90 and EC10 are the input values needed to produce 90% and 10% of the maximal response, respectively.
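As a worked example (not from the source): a module whose output rises from 10% to 90% of maximum over a 3-fold increase in input has $EC_{90}/EC_{10} = 3$ and therefore $n_H = \ln 81 / \ln 3 = 4$; by contrast, a hyperbolic Michaelis–Menten response needs an 81-fold increase in input to make the same transition, giving $n_H = 1$.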
In a living cell, Goldbeter–Koshland modules are embedded in a larger network with upstream and downstream components. These components may constrain the range of inputs that the module receives, as well as the range of the module's outputs that the network is able to detect. Altszyler et al. (2014) studied how the effective ultrasensitivity of a modular system is affected by these restrictions. They found that Goldbeter–Koshland modules are highly sensitive to dynamic range limitations imposed by downstream components. However, in the case of asymmetric Goldbeter–Koshland modules, a moderate downstream constraint can produce effective sensitivities much larger than that of the original module considered in isolation.
References
Enzyme kinetics
Chemical kinetics
Ordinary differential equations
Catalysis | Goldbeter–Koshland kinetics | [
"Chemistry"
] | 699 | [
"Catalysis",
"Chemical kinetics",
"Chemical reaction engineering",
"Enzyme kinetics"
] |
21,039,403 | https://en.wikipedia.org/wiki/CPK-MB%20test | The CPK-MB test (creatine phosphokinase-MB), also known as CK-MB test, is a cardiac marker used to assist diagnoses of an acute myocardial infarction, myocardial ischemia, or myocarditis. It measures the blood level of CK-MB (creatine kinase myocardial band), the bound combination of two variants (isoenzymes CKM and CKB) of the enzyme phosphocreatine kinase.
In some locations, the test has been superseded by the troponin test. However, recently, there have been improvements to the test that involve measuring the ratio of the CK-MB1 and CK-MB2 isoforms.
The newer test detects different isoforms of the B subunit specific to the myocardium whereas the older test detected the presence of cardiac-related isoenzyme dimers.
Many cases of CK-MB levels exceeding the blood level of total CK have been reported, especially in newborns with cardiac malformations, particularly ventricular septal defects. This reversal of ratios favors a diagnosis of pulmonary emboli or vasculitis. An autoimmune reaction creating a complex molecule of CK and IgG should be taken into consideration.
See also
Troponin
References
Blood tests
Cardiology | CPK-MB test | [
"Chemistry"
] | 281 | [
"Blood tests",
"Chemical pathology"
] |
21,042,117 | https://en.wikipedia.org/wiki/Brooks%27%20theorem | In graph theory, Brooks' theorem states a relationship between the maximum degree of a graph and its chromatic number. According to the theorem, in a connected graph in which every vertex has at most Δ neighbors, the vertices can be colored with only Δ colors, except for two cases, complete graphs and cycle graphs of odd length, which require Δ + 1 colors.
The theorem is named after R. Leonard Brooks, who published a proof of it in 1941. A coloring with the number of colors described by Brooks' theorem is sometimes called a Brooks coloring or a Δ-coloring.
Formal statement
For any connected undirected graph G with maximum degree Δ,
the chromatic number of G is at most Δ, unless G is a complete graph or an odd cycle, in which case the chromatic number is Δ + 1.
Proof
László Lovász gives a simplified proof of Brooks' theorem. If the graph is not biconnected, its biconnected components may be colored separately and then the colorings combined. If the graph has a vertex v with degree less than Δ, then a greedy coloring algorithm that colors vertices farther from v before closer ones uses at most Δ colors. This is because at the time that each vertex other than v is colored, at least one of its neighbors (the one on a shortest path to v) is uncolored, so it has fewer than Δ colored neighbors and has a free color. When the algorithm reaches v, its small number of neighbors allows it to be colored. Therefore, the most difficult case of the proof concerns biconnected Δ-regular graphs with Δ ≥ 3. In this case, Lovász shows that one can find a spanning tree such that two nonadjacent neighbors u and w of the root v are leaves in the tree. A greedy coloring starting from u and w and processing the remaining vertices of the spanning tree in bottom-up order, ending at v, uses at most Δ colors. For, when every vertex other than v is colored, it has an uncolored parent, so its already-colored neighbors cannot use up all the free colors, while at v the two neighbors u and w have equal colors so again a free color remains for v itself.
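The greedy step of this proof is easy to make concrete. The Python sketch below (an illustration, not Brooks' or Lovász's own construction; the example graph is arbitrary) colors vertices in order of decreasing BFS distance from a chosen low-degree vertex v, so each vertex other than v still has an uncolored neighbor when its color is chosen:

```python
from collections import deque

def greedy_coloring_from(adj, v):
    """Colour an undirected graph (dict: node -> set of neighbours), processing
    vertices in order of decreasing BFS distance from v, so that every vertex
    except possibly v still has an uncoloured neighbour when it is coloured."""
    order, seen, queue = [], {v}, deque([v])
    while queue:                       # BFS from v records visiting order
        u = queue.popleft()
        order.append(u)
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    colour = {}
    for u in reversed(order):          # farthest vertices first, v coloured last
        used = {colour[w] for w in adj[u] if w in colour}
        colour[u] = next(c for c in range(len(adj)) if c not in used)
    return colour

# A 5-cycle with a chord: maximum degree 3, neither complete nor an odd cycle.
adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
colours = greedy_coloring_from(adj, 3)     # vertex 3 has degree 2 < 3
print(colours, "->", len(set(colours.values())), "colours used")
```

Because every vertex except the start has an uncoloured neighbour (its BFS parent) when its turn comes, and the start vertex has degree less than Δ, the sketch never needs more than Δ colours, exactly as the proof argues.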
Extensions
A more general version of the theorem applies to list coloring: given any connected undirected graph with maximum degree Δ that is neither a clique nor an odd cycle, and a list of Δ colors for each vertex, it is possible to choose a color for each vertex from its list so that no two adjacent vertices have the same color. In other words, the list chromatic number of a connected undirected graph G never exceeds Δ, unless G is a clique or an odd cycle.
For certain graphs, even fewer than Δ colors may be needed. Δ − 1 colors suffice if and only if the given graph has no Δ-clique, provided Δ is large enough. For triangle-free graphs, or more generally graphs in which the neighborhood of every vertex is sufficiently sparse, O(Δ/log Δ) colors suffice.
The degree of a graph also appears in upper bounds for other types of coloring; for edge coloring, the result that the chromatic index is at most Δ + 1 is Vizing's theorem. An extension of Brooks' theorem to total coloring, stating that the total chromatic number is at most Δ + 2, has been conjectured by Mehdi Behzad and Vizing. The Hajnal–Szemerédi theorem on equitable coloring states that any graph has a (Δ + 1)-coloring in which the sizes of any two color classes differ by at most one.
Algorithms
A Δ-coloring, or even a Δ-list-coloring, of a degree-Δ graph may be found in linear time. Efficient algorithms are also known for finding Brooks colorings in parallel and distributed models of computation.
Notes
References
External links
Graph coloring
Theorems in graph theory | Brooks' theorem | [
"Mathematics"
] | 810 | [
"Graph coloring",
"Graph theory",
"Theorems in discrete mathematics",
"Mathematical relations",
"Theorems in graph theory"
] |
21,042,541 | https://en.wikipedia.org/wiki/Parameter%20validation | In computer software, the term parameter validation is the automated processing, in a module, to validate the spelling or accuracy of parameters passed to that module. The term has been in common use for over 30 years. Specific best practices have been developed, for decades, to improve the handling of such parameters.
Parameter validation can be used to defend against cross-site scripting attacks.
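As a hedged illustration of the practice (the function, parameter names, and rules here are hypothetical, not from the cited reference), a module boundary might validate its parameters before doing any work:

```python
def transfer(amount, currency, memo=""):
    """Hypothetical module entry point that validates its parameters first."""
    if isinstance(amount, bool) or not isinstance(amount, (int, float)):
        raise TypeError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if currency not in {"USD", "EUR", "GBP"}:       # whitelist, not blacklist
        raise ValueError(f"unsupported currency: {currency!r}")
    if "<" in memo or ">" in memo:                  # crude markup-injection guard
        raise ValueError("memo must not contain markup characters")
    return f"transferring {amount:.2f} {currency}"

print(transfer(25, "EUR"))   # ok; transfer(-5, "USD") would raise ValueError
```

Rejecting malformed input at the boundary, as in the whitelist and markup checks above, is one way such validation can defend against injection attacks like cross-site scripting.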
See also
Data validation
Strong typing
Error handling
Sanity check
Notes
References
"Parameter validation for software reliability", G.B. Alleman, 1978, webpage: ACM-517: paper presents a method for increasing software reliability through parameter validation.
Software testing | Parameter validation | [
"Technology",
"Engineering"
] | 130 | [
"Software testing",
"Computer science stubs",
"Software engineering",
"Computer science",
"Computing stubs"
] |
19,933,497 | https://en.wikipedia.org/wiki/Infusion%20therapy | In medicine, infusion therapy deals with all aspects of fluid and medication infusion, via intravenous or subcutaneous application. A special infusion pump can be used for this purpose.
A fenestrated catheter is frequently inserted into the localized area to be treated.
There are a range of delivery methods for infusion of drugs via catheter:
Electronic Pump: Drugs are often pre-mixed from vials and stored in infusion bags to be delivered by electronic pump.
Elastomeric pump
Pre-Filled Infusion Therapy: with this latest technology, a unit dose can be metered to the location from a pre-filled container.
Infusion therapy has a range of medical applications including sedation, anesthesia, post-operative analgesic pain management, chemotherapy, and treatment of infectious diseases
Advantages of infusion therapy over other non-site-specific delivery methodologies are primarily efficacy through precision of medication delivery.
New standards for infusible pharmaceuticals have been achieved in recent years with the advent of pre-filled, ready-to-use, dose-specific products. Advanced aseptic presentation, with hermetically sealed containers, allows predictable sterility, ease of use, improved control, and lower total costs, essentially systematizing the delivery mechanism and standardizing the delivery container.
Treatments
Infusion therapy involves the administration of medication through a needle or catheter. Typically, "infusion therapy" means that a drug is administered intravenously or subcutaneously. The term may also apply where drugs are provided through other non-oral routes of administration, such as intramuscular injection and epidural administration (into the membranes surrounding the spinal cord).
Until the 1980s, patients receiving infusion therapy often had to remain in an inpatient setting for the duration of their therapy. New technologies and heightened emphasis on cost containment in health care, as well as developments in the clinical administration of the therapy, led to strategies to administer infusion therapy in alternate settings (at clinics and at home) in an effort to reduce hospital readmissions.
See also
Intravenous therapy
References
Routes of administration | Infusion therapy | [
"Chemistry"
] | 434 | [
"Pharmacology",
"Routes of administration"
] |
19,936,884 | https://en.wikipedia.org/wiki/Speed%20record | A speed record is a world record for speed by a person, animal, or vehicle. The function of speed record is to record the speed of moving animate objects such as humans, animals or vehicles.
Overall speed record
The overall speed record is the record for the highest average speed regardless of any criteria, categories or classes that the more specific records belong to, provided that the route was completed. It allows the comparison of performances that differ in the type of craft, vessel or vehicle, the departure and arrival points (provided that the distances are comparable), the number, age and gender of the crew members, the departure date, and so on. The distance used for calculating the overall speed record is usually the distance in a straight line. In the case of human-powered races, the overall speed record does not always reflect the best performance, as it is highly dependent on the technological advantages that generate the speed of the craft, vessel or vehicle.
The term overall speed record is also used to compare the highest momentary speed achieved by a vehicle, vessel or craft in land speed, water speed or air speed record contests.
Vehicle speed records
In the air
Flight airspeed record
Cross-America flight air speed record
Manned spacecraft speed record
On land
Land speed record
Land speed record for rail vehicles
List of fastest production cars
Motorcycle land-speed record
Fastest speed on a bicycle
British land speed record
In the water
Water speed record
Speed sailing record
Underwater speed record
Natural speed records
List of world records in athletics
List of speed skating records
Fastest known time
Fastest animals
See also
Orders of magnitude (speed)
Transport
Transportation engineering
Energy efficiency in transport
References
Record
World records | Speed record | [
"Physics"
] | 322 | [
"Physical phenomena",
"Physical quantities",
"Motion (physics)",
"Vector physical quantities",
"Velocity",
"Wikipedia categories named after physical quantities"
] |
19,937,035 | https://en.wikipedia.org/wiki/Borda%E2%80%93Carnot%20equation | In fluid dynamics the Borda–Carnot equation is an empirical description of the mechanical energy losses of the fluid due to a (sudden) flow expansion. It describes how the total head reduces due to the losses. This is in contrast with Bernoulli's principle for dissipationless flow (without irreversible losses), where the total head is a constant along a streamline. The equation is named after Jean-Charles de Borda (1733–1799) and Lazare Carnot (1753–1823).
This equation is used both for open channel flow as well as in pipe flows. In parts of the flow where the irreversible energy losses are negligible, Bernoulli's principle can be used.
Formulation
The Borda–Carnot equation is
$\Delta E = \xi\, \tfrac{1}{2}\, \rho\, \left( v_1 - v_2 \right)^2,$
where
ΔE is the fluid's mechanical energy loss,
ξ is an empirical loss coefficient, which is dimensionless and has a value between zero and one, 0 ≤ ξ ≤ 1,
ρ is the fluid density,
v1 and v2 are the mean flow velocities before and after the expansion.
In case of an abrupt and wide expansion, the loss coefficient is equal to one. In other instances, the loss coefficient has to be determined by other means, most often from empirical formulae (based on data obtained by experiments). The Borda–Carnot loss equation is only valid for decreasing velocity, v1 > v2; otherwise the loss ΔE is zero, since without mechanical work by additional external forces there cannot be a gain in the mechanical energy of the fluid.
The loss coefficient ξ can be influenced by streamlining. For example, in case of a pipe expansion, the use of a gradual expanding diffuser can reduce the mechanical energy losses.
Relation to the total head and Bernoulli's principle
The Borda–Carnot equation gives the decrease in the constant of the Bernoulli equation. For an incompressible flow the result is, for two locations labelled 1 and 2, with location 2 downstream of 1 along a streamline:
$p_1 + \tfrac{1}{2}\rho v_1^2 + \rho g z_1 = p_2 + \tfrac{1}{2}\rho v_2^2 + \rho g z_2 + \Delta E,$
with
p1 and p2 the pressure at location 1 and 2,
z1 and z2 the vertical elevation (above some reference level) of the fluid particle,
g the gravitational acceleration.
The first three terms on either side of the equal sign are respectively the pressure, the kinetic energy density of the fluid and the potential energy density due to gravity. As can be seen, pressure acts effectively as a form of potential energy.
In case of high-pressure pipe flows, when gravitational effects can be neglected, ΔE is equal to the loss Δ(p + ρv²/2):
$\Delta E = \left( p_1 + \tfrac{1}{2}\rho v_1^2 \right) - \left( p_2 + \tfrac{1}{2}\rho v_2^2 \right).$
For open-channel flows, ΔE is related to the total head loss ΔH as
$\Delta H = \frac{\Delta E}{\rho g},$
with H the total head:
$H = h + \frac{v^2}{2g} = z + \frac{p}{\rho g} + \frac{v^2}{2g},$
where h is the hydraulic head, i.e. the free surface elevation above a reference datum: h = z + p/(ρg).
Examples
Sudden expansion of a pipe
The Borda–Carnot equation is applied to the flow through a sudden expansion of a horizontal pipe. At cross section 1, the mean flow velocity is equal to v1, the pressure is p1 and the cross-sectional area is A1. The corresponding flow quantities at cross section 2 – well behind the expansion (and regions of separated flow) – are v2, p2 and A2, respectively. At the expansion, the flow separates and there are turbulent recirculating flow zones with mechanical energy losses. The loss coefficient ξ for this sudden expansion is approximately equal to one: ξ ≈ 1.0. Due to mass conservation, assuming a constant fluid density ρ, the volumetric flow rate through both cross sections 1 and 2 has to be equal:
$A_1 v_1 = A_2 v_2,$
so
$v_2 = \frac{A_1}{A_2}\, v_1.$
Consequently – according to the Borda–Carnot equation – the mechanical energy loss in this sudden expansion is:
$\Delta E = \tfrac{1}{2}\, \rho \left( 1 - \frac{A_1}{A_2} \right)^2 v_1^2.$
The corresponding loss of total head ΔH is:
$\Delta H = \frac{\Delta E}{\rho g} = \frac{1}{2g} \left( 1 - \frac{A_1}{A_2} \right)^2 v_1^2.$
For this case with ξ = 1, the total change in kinetic energy between the two cross sections is dissipated. As a result, the pressure change between both cross sections is (for this horizontal pipe without gravity effects):
$\Delta p = p_1 - p_2 = -\rho\, v_2 \left( v_1 - v_2 \right),$
and the change in hydraulic head h = z + p/(ρg):
$\Delta h = h_1 - h_2 = -\frac{v_2 \left( v_1 - v_2 \right)}{g}.$
The minus signs, in front of the right-hand sides, mean that the pressure (and hydraulic head) are larger after the pipe expansion.
That this change in the pressures (and hydraulic heads), just before and after the pipe expansion, corresponds with an energy loss becomes clear when comparing with the results of Bernoulli's principle. According to this dissipationless principle, a reduction in flow speed is associated with a much larger increase in pressure than found in the present case with mechanical energy losses.
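A short numerical sketch makes the comparison concrete. In the following Python fragment (all values illustrative, not from the source), a 1:4 area expansion dissipates exactly the difference between the lossless (Bernoulli) pressure recovery and the actual recovery:

```python
rho = 1000.0          # water density, kg/m^3 (illustrative)
g = 9.81              # gravitational acceleration, m/s^2
A1, A2 = 0.01, 0.04   # pipe cross sections, m^2 (a 1:4 expansion, illustrative)
v1 = 4.0              # upstream mean velocity, m/s

v2 = v1 * A1 / A2                             # mass conservation
dE = 0.5 * rho * (v1 - v2) ** 2               # Borda-Carnot loss with xi = 1
dp_actual = rho * v2 * (v1 - v2)              # actual pressure recovery p2 - p1
dp_lossless = 0.5 * rho * (v1**2 - v2**2)     # Bernoulli (dissipationless) recovery

print(f"v2 = {v2:.1f} m/s")
print(f"energy loss dE = {dE:.0f} J/m^3, head loss dH = {dE / (rho * g):.3f} m")
print(f"pressure rise: {dp_actual:.0f} Pa actual vs {dp_lossless:.0f} Pa lossless")
print(f"difference     = {dp_lossless - dp_actual:.0f} Pa  (equals dE)")
```

Here the lossless recovery would be 7500 Pa while the actual recovery is 3000 Pa; the 4500 Pa difference is precisely the Borda–Carnot loss.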
Sudden contraction of a pipe
In case of a sudden reduction of pipe diameter, without streamlining, the flow is not able to follow the sharp bend into the narrower pipe. As a result, there is flow separation, creating recirculating separation zones at the entrance of the narrower pipe. The main flow is contracted between the separated flow areas, and later on expands again to cover the full pipe area.
There is not much head loss between cross section 1, before the contraction, and cross section 3, the vena contracta at which the main flow is contracted most. But there are substantial losses in the flow expansion from cross section 3 to 2. These head losses can be expressed by using the Borda–Carnot equation, through the use of the coefficient of contraction μ:
$\mu = \frac{A_3}{A_2},$
with A3 the cross-sectional area at the location of strongest main flow contraction 3, and A2 the cross-sectional area of the narrower part of the pipe. Since A3 ≤ A2, the coefficient of contraction is less than one: μ ≤ 1. Again there is conservation of mass, so the volume fluxes in the three cross sections are a constant (for constant fluid density ρ):
$Q = A_1 v_1 = A_2 v_2 = A_3 v_3,$
with v1, v2 and v3 the mean flow velocity in the associated cross sections. Then, according to the Borda–Carnot equation (with loss coefficient ξ = 1), the energy loss ΔE per unit of fluid volume and due to the pipe contraction is:
$\Delta E = \tfrac{1}{2}\, \rho \left( v_3 - v_2 \right)^2 = \tfrac{1}{2}\, \rho \left( \frac{1}{\mu} - 1 \right)^2 v_2^2.$
The corresponding loss of total head ΔH can be computed as ΔH = ΔE/(ρg).
According to measurements by Weisbach, the contraction coefficient for a sharp-edged contraction is approximately:
$\mu = 0.63 + 0.37 \left( \frac{A_2}{A_1} \right)^3.$
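The contraction case can be sketched numerically as well. The Python fragment below (all values illustrative assumptions) combines Weisbach's contraction coefficient with the Borda–Carnot loss of the re-expansion from the vena contracta:

```python
rho = 1000.0           # kg/m^3, water (illustrative)
A1, A2 = 0.04, 0.01    # wide and narrow cross sections, m^2 (illustrative)
v2 = 4.0               # mean velocity in the narrow pipe, m/s

mu = 0.63 + 0.37 * (A2 / A1) ** 3   # Weisbach's contraction coefficient
v3 = v2 / mu                        # velocity at the vena contracta (A3 = mu * A2)
dE = 0.5 * rho * (v3 - v2) ** 2     # Borda-Carnot loss of the re-expansion 3 -> 2

print(f"mu = {mu:.4f}, v3 = {v3:.2f} m/s, energy loss = {dE:.0f} J/m^3")
```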
Derivation from the momentum balance for a sudden expansion
For a sudden expansion in a pipe, see the figure above, the Borda–Carnot equation can be derived from mass- and momentum conservation of the flow. The momentum flux S (i.e. for the fluid momentum component parallel to the pipe axis) through a cross section of area A is – according to the Euler equations:
$S = \left( \rho v^2 + p \right) A.$
Consider the conservation of mass and momentum for a control volume bounded by cross section 1 just upstream of the expansion, cross section 2 downstream of where the flow re-attaches again to the pipe wall (after the flow separation at the expansion), and the pipe wall. There is the control volume's gain of momentum S1 at the inflow and loss S2 at the outflow. Besides, there is also the contribution of the force F by the pressure on the fluid exerted by the expansion's wall (perpendicular to the pipe axis):
$F = p_1 \left( A_2 - A_1 \right),$
where it has been assumed that the pressure is equal to the close-by upstream pressure p1.
Adding contributions, the momentum balance for the control volume between cross sections 1 and 2 gives:
$S_1 + F = S_2: \qquad \left( \rho v_1^2 + p_1 \right) A_1 + p_1 \left( A_2 - A_1 \right) = \left( \rho v_2^2 + p_2 \right) A_2.$
Consequently, since by mass conservation $\rho A_1 v_1 = \rho A_2 v_2$:
$p_1 - p_2 = -\rho\, v_2 \left( v_1 - v_2 \right),$
in agreement with the pressure drop Δp in the example above.
The mechanical energy loss ΔE is:
$\Delta E = \left( p_1 + \tfrac{1}{2}\rho v_1^2 \right) - \left( p_2 + \tfrac{1}{2}\rho v_2^2 \right) = \tfrac{1}{2}\, \rho \left( v_1 - v_2 \right)^2,$
which is the Borda–Carnot equation (with ξ = 1).
See also
Darcy–Weisbach equation
Prony equation
Notes
References
Equations of fluid dynamics
Fluid dynamics
Hydraulics
Piping | Borda–Carnot equation | [
"Physics",
"Chemistry",
"Engineering"
] | 1,593 | [
"Equations of fluid dynamics",
"Equations of physics",
"Building engineering",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
19,937,075 | https://en.wikipedia.org/wiki/Heterophile%20antigen | Heterophile antigens are antigens of similar nature, if not identical, that are present in different tissues in different biological species, classes, or kingdoms. Usually different species have different antigen sets, but the hetereophile antigen is shared by different species. Other heterophile antigens are responsible for some diagnostic serological tests such as:
Weil-Felix reaction for typhus fever
Paul Bunnell test for infectious mononucleosis
Cold agglutinin test in primary atypical pneumonia
Chemically, heterophile antigens are composed of lipoprotein-polysaccharide complexes. There is a possibility of identical chemical groupings occurring in the structures of mucopolysaccharides and lipids.
An example is the Forssman antigen, a cross-reacting microbial antigen: antibodies to these antigens produced by one species cross-react with antigens of other species. It is widely present in some plants, bacteria, animals and birds. However, it is not present in the rabbit; therefore, antibodies to it (anti-Forssman antibodies) can be produced in rabbit serum by injecting the antigen.
References
Antibodies
Immunology
Antigens | Heterophile antigen | [
"Chemistry",
"Biology"
] | 233 | [
"Antigens",
"Immunology",
"Biomolecules"
] |
19,937,477 | https://en.wikipedia.org/wiki/Deck%20prism | A deck prism, or bullseye, is a prism inserted into the deck of a ship to provide light down below.
For centuries, sailing ships used deck prisms to provide a safe source of natural sunlight to illuminate areas below decks. Before electricity, light below a vessel's deck was provided by candles, oil and kerosene lamps—all dangerous aboard a wooden ship. Laid flush into the deck, the glass prism refracted and dispersed natural light into the space below from a small deck opening without weakening the planks or becoming a fire hazard.
In normal usage, the prism hangs below the overhead and disperses the light sideways; the top is flat and installed flush with the deck, becoming part of the deck. The lens shapes were naturally derived from the process of handmaking the glass on an 'iron' and would have predated the ability to manufacture flat glass. (A plain flat glass window would just form a single bright spot below—not very useful for general illumination—hence the prismatic shape.)
To maximize light output, the glass used was originally made colorless with the addition of manganese dioxide; the purple hue of some specimens is caused by decades of exposure to UV.
Aboard colliers (coal ships), prisms were also used to keep check on the cargo hold: light from a fire would be collected by the prism and be made visible on the deck even in daylight.
The names "deck light", "dead light" or "deadlight" are sometimes used, though the latter is uncommon as a reference to prisms, as it more often refers to non-opening plain-glass panels. Deadlights were commonplace for lighting underground vaults in the 19th century, in which application they were also called "pavement lights" (UK) or "vault lights" (US).
See also
Prism lighting
Prism glass
Porthole
Daylighting
Liter of Light
References
Prism
Prisms (optics)
Energy-saving lighting | Deck prism | [
"Engineering"
] | 397 | [
"Shipbuilding",
"Marine engineering"
] |
26,908,549 | https://en.wikipedia.org/wiki/NetworkX | NetworkX is a Python library for studying graphs and networks. NetworkX is free software released under the BSD-new license.
History
NetworkX began development in 2002 by Aric A. Hagberg, Daniel A. Schult, and Pieter J. Swart. It is supported by the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory.
The package was crafted with the aim of creating tools to analyze data and intervention strategies for controlling the epidemic spread of disease, while also exploring the structure and dynamics of more general social, biological, and infrastructural systems.
Inspired by Guido van Rossum's 1998 essay on Python graph representation, NetworkX made its public debut at the 2004 SciPy annual conference. In April 2005, NetworkX was made available as open source software.
Several Python packages focusing on graph theory, including igraph, graph-tool, and numerous others, are available. As of April 2024, NetworkX had over 50 million downloads, surpassing the download count of the second most popular package, igraph, by more than 50-fold. This substantial adoption rate could potentially be attributed to NetworkX's early release and its continued evolution within the SciPy ecosystem.
In 2008, SageMath, an open source mathematics system, incorporated NetworkX into its package and added support for more graphing algorithms and functions.
Features
Classes for graphs and digraphs.
Conversion of graphs to and from several formats.
Ability to construct random graphs or construct them incrementally.
Ability to find subgraphs, cliques, k-cores.
Explore adjacency, degree, diameter, radius, center, betweenness, etc.
Draw networks in 2D and 3D.
Supported Graph Types
Overview
Graphs, in this context, represent collections of vertices (nodes) and edges (connections) between them. NetworkX provides support for several types of graphs, each suited for different applications and scenarios.
Directed Graphs (DiGraph)
Directed graphs, or DiGraphs, consist of nodes connected by directed edges. In a directed graph, edges have a direction indicating the flow or relationship between nodes.
Undirected Graphs (Graph)
Undirected graphs, simply referred to as graphs in NetworkX, are graphs where edges have no inherent direction. The connections between nodes are symmetrical, meaning if node A is connected to node B, then node B is also connected to node A.
MultiGraphs
MultiGraphs allow multiple edges between the same pair of nodes. In other words, MultiGraphs permit parallel edges, where more than one edge can exist between two nodes.
MultiDiGraphs
MultiDiGraphs are directed graphs that allow multiple directed edges between the same pair of nodes. Similar to MultiGraphs, MultiDiGraphs enable the modeling of scenarios where multiple directed relationships exist between nodes.
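The four classes can be compared directly with a minimal sketch using the public NetworkX API (the node labels are arbitrary):

```python
import networkx as nx

G = nx.Graph()          # undirected: edges are symmetric
G.add_edge("A", "B")

D = nx.DiGraph()        # directed: edge direction matters
D.add_edge("A", "B")    # A -> B only

M = nx.MultiGraph()     # undirected, parallel edges allowed
M.add_edge("A", "B")
M.add_edge("A", "B")    # a second, distinct A-B edge

MD = nx.MultiDiGraph()  # directed with parallel edges
MD.add_edge("A", "B")
MD.add_edge("A", "B")

print(G.number_of_edges(), D.has_edge("B", "A"), M.number_of_edges())
# -> 1 False 2
```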
Challenges in Visualization
While NetworkX provides powerful tools for graph creation and analysis, producing visualizations of complex graphs can be challenging. Visualizing large or densely connected graphs may require specialized techniques and external libraries beyond the capabilities of NetworkX alone.
Graph Layouts
NetworkX provides various layout algorithms for visualizing graphs in two-dimensional space. These layout algorithms determine the positions of nodes and edges in a graph visualization, aiming to reveal its structure and relationships effectively.
Spring Layout
The Spring layout is a force-directed layout algorithm inspired by physical systems. It simulates the attraction and repulsion forces between nodes, treating edges as springs and nodes as charged particles. This results in a layout where nodes with strong connections are placed closer together, while nodes with weaker connections are pushed farther apart.
Spectral Layout
The Spectral layout is based on the spectral properties of the graph's adjacency matrix. It uses the eigenvalues and eigenvectors of the adjacency matrix to position nodes in a low-dimensional space. Spectral layout tends to emphasize the global structure of the graph, making it useful for identifying clusters and communities.
Circular Layout
The Circular layout arranges nodes evenly around a circle, with edges drawn as straight lines connecting them. This layout is particularly suitable for visualizing cyclic or symmetric graphs, where the arrangement of nodes along the circle reflects the underlying topology of the graph.
Shell Layout
The Shell layout organizes nodes into concentric circles or shells based on their distance from a specified center. Nodes within the same shell have the same distance from the center, while edges are drawn radially between nodes in adjacent shells. Shell layout is often used for visualizing hierarchical or tree structures.
Kamada-Kawai Layout
The Kamada-Kawai layout algorithm positions nodes based on their pairwise distances, aiming to minimize the total energy of the system. It takes into account both the graph's topology and edge lengths, resulting in a layout that emphasizes geometric accuracy and readability.
Usage
NetworkX provides functions for applying different layout algorithms to graphs and visualizing the results using Matplotlib or other plotting libraries. Users can specify the desired layout algorithm when calling the drawing functions, allowing for flexible and customizable graph visualizations.
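For example, the following sketch (illustrative; any small graph would do, and kamada_kawai_layout additionally requires SciPy) computes several layouts for the same graph and draws them side by side with Matplotlib:

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.petersen_graph()  # a small built-in test graph

# Each layout function returns a dict mapping node -> (x, y) position.
layouts = {
    "spring": nx.spring_layout(G, seed=42),
    "circular": nx.circular_layout(G),
    "shell": nx.shell_layout(G),
    "kamada_kawai": nx.kamada_kawai_layout(G),
}

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, (name, pos) in zip(axes, layouts.items()):
    nx.draw(G, pos, ax=ax, node_size=100, with_labels=False)
    ax.set_title(name)
plt.show()
```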
Suitability
NetworkX is suitable for operation on large real-world graphs: e.g., graphs in excess of 10 million nodes and 100 million edges. Due to its dependence on a pure-Python "dictionary of dictionary" data structure, NetworkX is a reasonably efficient, very scalable, highly portable framework for network and social network analysis.
Applications
NetworkX was designed to be easy to use and learn, as well as a powerful and sophisticated tool for network analysis. It is used widely on many levels, ranging from computer science and data analysis education to large-scale scientific studies.
NetworkX has applications in any field that studies data as graphs or networks, such as mathematics, physics, biology, computer science and social science. The nodes in a NetworkX graph can be specialized to hold any data, and the data stored in edges is arbitrary, further making it widely applicable to different fields. It is able to read in networks from data and randomly generate networks with specified qualities. This allows it to be used to explore changes across wide amounts of networks. The figure below demonstrates a simple example of the software's ability to create and modify variations across large amounts of networks.
NetworkX has many network and graph analysis algorithms, aiding in a wide array of data analysis purposes. One important example is its range of shortest-path algorithms. The following algorithms are included in NetworkX, with time complexities given in terms of the number of vertices (V) and edges (E) in the graph (a brief usage sketch follows the list):
Dijkstra: O((V+E) log V)
Bellman-Ford: O(V * E)
Goldberg-Radzik: O(V * E)
Johnson: O(V^2 log(V) + VE)
Floyd Warshall: O(V^3)
A*: O((V+E) log V)
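A minimal usage sketch (the graph and weights are arbitrary):

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([      # (u, v, weight)
    ("a", "b", 2), ("b", "c", 1), ("a", "c", 9), ("c", "d", 3),
])

path = nx.shortest_path(G, "a", "d", weight="weight")   # Dijkstra by default
length = nx.shortest_path_length(G, "a", "d", weight="weight")
print(path, length)   # ['a', 'b', 'c', 'd'] 6

# Bellman-Ford can be requested explicitly, e.g. when negative weights occur:
print(nx.shortest_path(G, "a", "d", weight="weight", method="bellman-ford"))
```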
An example of the use of NetworkX graph algorithms can be seen in a 2018 study, in which it was used to analyze the resilience of livestock production networks to the spread of epidemics. The study used a computer model to predict and study trends in epidemics throughout American hog production networks, taking into account all livestock industry roles. In the study, NetworkX was used to find information on degree, shortest paths, clustering, and k-cores as the model introduced infections and simulated their spread. This was then used to determine which networks are most susceptible to epidemics.
In addition to network creation and analysis, NetworkX also has many visualization capabilities. It provides hooks into Matplotlib and GraphViz for 2D visuals, and VTK and UbiGraph for 3D visuals. This makes the package useful in easily demonstrating and reporting network analysis and data, and allows for the simplification of networks for visual processing.
Integration
NetworkX is integrated into SageMath.
See also
Social network analysis software
JGraph
References
External links
Official website:
NetworkX discussion group
Survey of existing graph theory software
NetworkX on StackOverflow
Free mathematics software
Free software programmed in Python
Graph drawing software
Numerical software
Software using the BSD license
Python (programming language) scientific libraries | NetworkX | [
"Mathematics"
] | 1,653 | [
"Numerical software",
"Free mathematics software",
"Mathematical software"
] |
26,909,767 | https://en.wikipedia.org/wiki/False%20neurotransmitter | A false neurotransmitter is a chemical compound which closely imitates the action of a neurotransmitter in the nervous system. Examples include 5-MeO-αMT (mimicking serotonin) and α-methyldopa. Another compound that has been discussed as a possible false neurotransmitter is octopamine.
These chemicals can be accumulated by a neuron or secretory cell, are then packaged in secretory / synaptic vesicles, and then released with other neurotransmitters when an action potential provides the necessary stimulus for release.
The concept of a false transmitter is credited to Irwin Kopin of the National Institute of Neurological Disorders and Stroke, who determined that the drug tyramine increased blood pressure by being loaded into and then released from secretory vesicles of the adrenal chromaffin cells. Tyramine can also be converted into octopamine by dopamine β-hydroxylase (DBH); octopamine itself acts as a false transmitter by displacing noradrenaline from its vesicle while not activating the postsynaptic α-adrenergic receptor.
There is growing evidence that a large number of well known exogenous chemicals work as substitute neurotransmitters, though the distinction between the classical model and the substitute neurotransmitter model only becomes apparent with neurotransmitters central to the signaling in the conscious brain, like dopamine and serotonin (as mentioned above). By extension, drugs that affect the uptake affinity of neurotransmitter transporters directly affect the efficacy of these substitute neurotransmitters, as shown by the interference that selective serotonin reuptake inhibitors have on serotonergic psychedelic drugs.
A family of fluorescent false neurotransmitters have been developed by Dalibor Sames and David Sulzer at Columbia University that act as analogs for dopamine and other monoamines and enable an optical means for video analysis of neurotransmitter uptake and release.
See also
α-Methyltryptophan
α-Methylserotonin
References
Neurotransmitters | False neurotransmitter | [
"Chemistry"
] | 458 | [
"Neurochemistry",
"Neurotransmitters"
] |
26,910,524 | https://en.wikipedia.org/wiki/Computational%20transportation%20science | Computational Transportation Science (CTS) is an emerging discipline that combines computer science and engineering with the modeling, planning, and economic aspects of transport. The discipline studies how to improve the safety, mobility, and sustainability of the transport system by taking advantage of information technologies and ubiquitous computing. A list of subjects encompassed by CTS can be found at include.
Computational Transportation Science is an emerging discipline going beyond vehicular technology, addressing pedestrian systems on hand-held devices but also issues such as transport data mining (or movement analysis), as well as data management aspects. CTS allows for an increasing flexibility of the system as local and autonomous negotiations between transport peers, partners and supporting infrastructure are allowed. Thus, CTS provides means to study localized computing, self-organization, cooperation and simulation of transport systems.
Several academic conferences on CTS have been held up to date:
The Fourth ACM SIGSPATIAL International Workshop on Computational Transportation Science
The Third ACM SIGSPATIAL International Workshop on Computational Transportation Science
Dagstuhl Seminar 10121 on Computational Transportation Science
The Second International Workshop on Computational Transportation Science
The First International Workshop on Computational Transportation Science
There is also an IGERT PhD program in Computational Transportation Science at the University of Illinois at Chicago.
References
External links
Computational Transportation Science
Transportation engineering
Computational science
Computational fields of study | Computational transportation science | [
"Mathematics",
"Technology",
"Engineering"
] | 266 | [
"Computational fields of study",
"Applied mathematics",
"Industrial engineering",
"Computational science",
"Civil engineering",
"Computing and society",
"Transportation engineering"
] |
26,914,142 | https://en.wikipedia.org/wiki/Commensurate%20line%20circuit | Commensurate line circuits are electrical circuits composed of transmission lines that are all the same length; commonly one-eighth of a wavelength. Lumped element circuits can be directly converted to distributed-element circuits of this form by the use of Richards' transformation. This transformation has a particularly simple result; inductors are replaced with transmission lines terminated in short-circuits and capacitors are replaced with lines terminated in open-circuits. Commensurate line theory is particularly useful for designing distributed-element filters for use at microwave frequencies.
It is usually necessary to carry out a further transformation of the circuit using Kuroda's identities. There are several reasons for applying one of the Kuroda transformations; the principal reason is usually to eliminate series connected components. In some technologies, including the widely used microstrip, series connections are difficult or impossible to implement.
The frequency response of commensurate line circuits, like all distributed-element circuits, will periodically repeat, limiting the frequency range over which they are effective. Circuits designed by the methods of Richards and Kuroda are not the most compact. Refinements to the methods of coupling elements together can produce more compact designs. Nevertheless, the commensurate line theory remains the basis for many of these more advanced filter designs.
Commensurate lines
Commensurate lines are transmission lines that are all the same electrical length, but not necessarily the same characteristic impedance (Z0). A commensurate line circuit is an electrical circuit composed only of commensurate lines terminated with resistors or short- and open-circuits. In 1948, Paul I. Richards published a theory of commensurate line circuits by which a passive lumped element circuit could be transformed into a distributed element circuit with precisely the same characteristics over a certain frequency range.
Lengths of lines in distributed-element circuits, for generality, are usually expressed in terms of the circuit's nominal operational wavelength, λ. Lines of the prescribed length in a commensurate line circuit are called unit elements (UEs). A particularly simple relationship pertains if the UEs are λ/8. Each element in the lumped circuit is transformed into a corresponding UE. However, the Z0 of the lines must be set according to the component value in the analogous lumped circuit, and this may result in values of Z0 that are not practical to implement. This is particularly a problem with printed technologies, such as microstrip, when implementing high characteristic impedances: high impedance requires narrow lines, and there is a minimum size that can be printed. Very wide lines, on the other hand, allow undesirable transverse resonant modes to form. A different length of UE, with a different Z0, may be chosen to overcome these problems.
Electrical length can also be expressed as the phase change between the start and the end of the line. Phase is measured in angle units. θ, the mathematical symbol for an angle variable, is used as the symbol for electrical length when expressed as an angle. In this convention λ represents 360°, or 2π radians.
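As a quick illustration of this convention, here is a minimal Python sketch (not from the source; the function name is illustrative) converting a line length, given as a fraction of λ, into electrical length:

```python
import math

def electrical_length(fraction_of_wavelength):
    """Return (degrees, radians) for a line whose length is the given fraction of lambda."""
    degrees = 360.0 * fraction_of_wavelength       # lambda corresponds to 360 degrees
    radians = 2.0 * math.pi * fraction_of_wavelength
    return degrees, radians

print(electrical_length(1 / 8))   # a lambda/8 UE -> (45.0, 0.785...), i.e. theta = pi/4
```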
The advantage of using commensurate lines is that the commensurate line theory allows circuits to be synthesised from a prescribed frequency function. While any circuit using arbitrary transmission line lengths can be analysed to determine its frequency function, that circuit cannot necessarily be easily synthesised starting from the frequency function. The fundamental problem is that using more than one length generally requires more than one frequency variable. Using commensurate lines requires only one frequency variable. A well-developed theory exists for synthesising lumped-element circuits from a given frequency function. Any circuit so synthesised can be converted to a commensurate line circuit using Richards' transformation and a new frequency variable.
Richards' transformation
Richards' transformation transforms the angular frequency variable, ω, according to,

$$\omega \to \tan(k\omega)$$

or, more usefully for further analysis, in terms of the complex frequency variable, s,

$$s \to \tanh(ks)$$

where k is an arbitrary constant related to the UE length, θ, and some designer-chosen reference frequency, ωc, by

$$k = \frac{\theta}{\omega_\mathrm{c}}$$

k has units of time and is, in fact, the phase delay inserted by a UE.
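The following is a minimal sketch of this transformation in Python; the reference frequency (1 GHz) and the names `richards_omega` and `wc` are assumptions for illustration, not from the source:

```python
import math

theta = math.pi / 4        # electrical length of each UE at the reference frequency
wc = 2 * math.pi * 1e9     # designer-chosen reference angular frequency (assumed 1 GHz)
k = theta / wc             # phase delay of one UE, in seconds

def richards_omega(w):
    """Richards' transformed frequency variable, Omega = tan(k*w)."""
    return math.tan(k * w)

print(richards_omega(wc))  # 1.0, since tan(pi/4) = 1
```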
Comparing this transform with the expressions for the driving-point impedance of stubs terminated, respectively, with a short circuit and an open circuit,

$$Z_\mathrm{SC} = jZ_0\tan(k\omega)\,, \qquad Z_\mathrm{OC} = -jZ_0\cot(k\omega)$$

it can be seen that (for θ < π/2) a short-circuit stub has the impedance of a lumped inductance and an open-circuit stub has the impedance of a lumped capacitance. Richards' transformation substitutes inductors with short-circuited UEs and capacitors with open-circuited UEs.
When the length is λ/8 (or θ = π/4), so that k = π/4ωc, this simplifies to,

$$Z_\mathrm{SC} = jZ_0\Omega\,, \qquad Z_\mathrm{OC} = \frac{Z_0}{j\Omega}\,, \qquad \Omega = \tan\left(\frac{\pi\omega}{4\omega_\mathrm{c}}\right)$$

This is frequently written as,

$$Z_\mathrm{SC} = j\Omega L\,, \qquad Y_\mathrm{OC} = j\Omega C$$

L and C are conventionally the symbols for inductance and capacitance, but here they represent, respectively, the characteristic impedance of an inductive stub and the characteristic admittance of a capacitive stub. This convention is used by numerous authors, and later in this article.
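A short numerical check of these relations, with an assumed stub impedance of 70 Ω, is sketched below; it confirms that a short-circuit stub presents jΩL and an open-circuit stub presents 1/(jΩC) at several frequencies:

```python
import cmath
import math

Z0 = 70.0                    # characteristic impedance of the stub (assumed value), ohms
wc = 2 * math.pi * 1e9       # reference angular frequency (assumed 1 GHz)
k = (math.pi / 4) / wc       # lambda/8 UEs: theta = pi/4 at wc

for w in (0.3 * wc, wc, 1.7 * wc):
    omega = math.tan(k * w)               # Richards' frequency variable, Omega
    z_sc = 1j * Z0 * math.tan(k * w)      # impedance of a short-circuited stub
    z_oc = -1j * Z0 / math.tan(k * w)     # impedance of an open-circuited stub
    assert cmath.isclose(z_sc, 1j * omega * Z0)          # j*Omega*L with L = Z0
    assert cmath.isclose(z_oc, 1 / (1j * omega / Z0))    # 1/(j*Omega*C) with C = 1/Z0
print("stub impedances match their lumped-element forms in Omega")
```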
Omega-domain
Richards' transformation can be viewed as transforming from an s-domain representation to a new domain called the Ω-domain where,

$$j\Omega = \tanh(ks)$$

so that for real frequencies (s = jω), Ω = tan(kω). If Ω is normalised so that Ω = 1 when ω = ωc, then it is required that,

$$k = \frac{\pi}{4\omega_\mathrm{c}}$$

and the length in distance units becomes,

$$l = \frac{\lambda_\mathrm{c}}{8}$$

where λc is the wavelength corresponding to ωc.
Any circuit composed of discrete, linear, lumped components will have a transfer function H(s) that is a rational function in s. A circuit composed of transmission line UEs derived from the lumped circuit by Richards' transformation will have a transfer function H(jΩ) that is a rational function of precisely the same form as H(s). That is, the shape of the frequency response of the lumped circuit against the s frequency variable will be precisely the same as the shape of the frequency response of the transmission line circuit against the jΩ frequency variable and the circuit will be functionally the same.
However, infinity in the Ω-domain is transformed to ω = π/2k in the s-domain (that is, to 2ωc for λ/8 lines, the frequency at which the stubs become resonant λ/4 lines). The entire frequency response is squeezed down into this finite interval. Above this frequency, the same response is repeated over intervals of the same width, alternately in reverse. This is a consequence of the periodic nature of the tangent function. This multiple-passband result is a general feature of all distributed-element circuits, not just those arrived at through Richards' transformation.
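The mapping and its periodic repetition can be seen numerically in the sketch below, which assumes a normalised first-order low-pass prototype H(S) = 1/(1 + S) purely for illustration; any rational prototype behaves the same way:

```python
import math

wc = 2 * math.pi * 1e9            # reference angular frequency (assumed 1 GHz)
k = (math.pi / 4) / wc            # lambda/8 UEs

def h_lumped(omega):
    """Magnitude response of the first-order prototype, cutoff at Omega = 1."""
    return 1.0 / math.sqrt(1.0 + omega ** 2)

def h_distributed(w):
    """Response of the commensurate-line version: evaluate at Omega = tan(k*w)."""
    return h_lumped(math.tan(k * w))

print(h_distributed(wc))          # ~0.707: the prototype's -3 dB point lands at wc
# Periodicity: tan(k*w) repeats with period pi/k = 4*wc in w, so the response repeats.
print(h_distributed(0.5 * wc), h_distributed(0.5 * wc + math.pi / k))   # equal values
```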
Cascade element
A UE connected in cascade is a two-port network that has no exactly corresponding circuit in lumped elements. It is functionally a fixed delay. There are lumped-element circuits that can approximate a fixed delay such as the Bessel filter, but they only work within a prescribed passband, even with ideal components. Alternatively, lumped-element all-pass filters can be constructed that pass all frequencies (with ideal components), but they have constant delay only within a narrow band of frequencies. Examples are the lattice phase equaliser and bridged T delay equaliser.
There is consequently no lumped circuit that Richards' transformation can transform into a cascade-connected line, and there is no reverse transformation for this element. Commensurate line theory thus introduces a new element of delay, or length.
Two or more UEs connected in cascade with the same Z0 are equivalent to a single, longer, transmission line. Thus, lines of length nθ for integer n are allowable in commensurate circuits. Some circuits can be implemented entirely as a cascade of UEs: impedance matching networks, for instance, can be done this way, as can most filters.
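This equivalence is easy to verify with ABCD (transmission) matrices; the sketch below, with an assumed 50 Ω line, checks that two cascaded λ/8 UEs equal one λ/4 line:

```python
import numpy as np

def ue_abcd(z0, theta):
    """ABCD matrix of a lossless line: characteristic impedance z0, electrical length theta."""
    return np.array([[np.cos(theta), 1j * z0 * np.sin(theta)],
                     [1j * np.sin(theta) / z0, np.cos(theta)]])

z0, theta = 50.0, np.pi / 4                        # lambda/8 UE on an assumed 50-ohm line
two_ues = ue_abcd(z0, theta) @ ue_abcd(z0, theta)  # cascade = matrix product
one_line = ue_abcd(z0, 2 * theta)                  # a single lambda/4 line
assert np.allclose(two_ues, one_line)
print("two cascaded lambda/8 UEs are one lambda/4 line")
```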
Kuroda's identities
Kuroda's identities are a set of four equivalent circuits that overcome certain difficulties with applying Richards' transformations directly. The four basic transformations are shown in the figure. Here the symbols for capacitors and inductors are used to represent open-circuit and short-circuit stubs. Likewise, the symbols C and L here represent respectively the susceptance of an open-circuit stub and the reactance of a short-circuit stub, which, for θ = π/4 (that is, λ/8 lines), are respectively equal to the characteristic admittance and characteristic impedance of the stub line. The boxes with thick lines represent cascade-connected commensurate lengths of line with the marked characteristic impedance.
The first difficulty solved is that all the UEs are required to be connected together at the same point. This arises because the lumped-element model assumes that all the elements take up zero space (or no significant space) and that there is no delay in signals between the elements. Applying Richards' transformation to convert the lumped circuit into a distributed circuit allows the element to now occupy a finite space (its length) but does not remove the requirement for zero distance between the interconnections. By repeatedly applying the first two Kuroda identities, UE lengths of the lines feeding into the ports of the circuit can be moved between the circuit components to physically separate them.
A second difficulty that Kuroda's identities can overcome is that series-connected lines are not always practical. While series connection of lines can easily be done in, for instance, coaxial technology, it is not possible in the widely used microstrip technology and other planar technologies. Filter circuits frequently use a ladder topology with alternating series and shunt elements. Such circuits can be converted to all-shunt form with the first two identities, in the same step used to space out the components; a numerical check of one of these identities is sketched after the next paragraph.
The third and fourth identities allow characteristic impedances to be scaled down or up respectively. These can be useful for transforming impedances that are impractical to implement. However, they have the disadvantage of requiring the addition of an ideal transformer with a turns ratio equal to the scaling factor.
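As a concrete check, the sketch below verifies one commonly quoted form of the first identity numerically with ABCD matrices (element values are assumed; C and L follow the stub convention above). With n² = 1 + CZ₁, a shunt open-circuit stub of characteristic admittance C followed by a UE of impedance Z₁ is equivalent to a UE of impedance Z₁/n² followed by a series short-circuit stub of impedance CZ₁²/n²; reading the equivalence right-to-left eliminates the series element:

```python
import numpy as np

def ue(z0, t):
    """ABCD of a unit element of impedance z0 and electrical length t."""
    return np.array([[np.cos(t), 1j * z0 * np.sin(t)],
                     [1j * np.sin(t) / z0, np.cos(t)]])

def shunt_open(c, t):
    """Shunt open-circuit stub, characteristic admittance c."""
    return np.array([[1, 0], [1j * c * np.tan(t), 1]])

def series_short(l, t):
    """Series short-circuit stub, characteristic impedance l."""
    return np.array([[1, 1j * l * np.tan(t)], [0, 1]])

Z1, C = 50.0, 0.01               # assumed element values
n2 = 1 + C * Z1
for t in (0.3, np.pi / 4, 1.2):  # the identity holds at every electrical length
    left = shunt_open(C, t) @ ue(Z1, t)
    right = ue(Z1 / n2, t) @ series_short(C * Z1 ** 2 / n2, t)
    assert np.allclose(left, right)
print("Kuroda identity verified")
```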
History
In the decade after Richards' publication, advances in the theory of distributed circuits took place mostly in Japan. K. Kuroda published these identities in 1955 in his Ph.D. thesis. However, they did not appear in English until 1958, in a paper by Ozaki and Ishii on stripline filters.
Further refinements
One of the major applications of commensurate line theory is to design distributed-element filters. Such filters constructed directly by Richards' and Kuroda's methods are not very compact. This can be an important design consideration, especially in mobile devices. The stubs stick out to the side of the main line, and the space between them is not doing anything useful. Ideally, the stubs would project on alternate sides to prevent them coupling with each other, but this takes up further space and is not always done. More than that, the cascade-connected elements that couple the stubs together contribute nothing to the frequency function; they are only there to transform the stubs to the required impedance. Putting it another way, the order of the frequency function is determined solely by the number of stubs, not by the total number of UEs (generally speaking, the higher the order, the better the filter). More complex synthesis techniques can produce filters in which all elements contribute.
The cascade-connected λ/8 sections of the Kuroda circuits are an example of impedance transformers; the archetypal example of such circuits is the λ/4 impedance transformer. Although this is double the length of the λ/8 line, it has the useful property that it can be transformed from a low-pass filter to a high-pass filter by replacing the open-circuit stubs with short-circuit stubs. The two filters are exactly matched, with the same cut-off frequency and mirror-symmetrical responses, making the λ/4 transformer ideal for use in diplexers. It is invariant under a low-pass to high-pass transformation because it is not just an impedance transformer, but a special case of transformer: an impedance inverter. That is, it transforms any impedance network at one port to the inverse impedance, or dual impedance, at the other port. However, a single length of transmission line can only be precisely λ/4 long at its resonant frequency, and there is consequently a limit to the bandwidth over which it will work. More complex kinds of inverter circuit invert impedances more accurately. There are two classes of inverter: the J-inverter, which transforms a shunt admittance into a series impedance, and the K-inverter, which does the reverse transformation. The coefficients J and K are respectively the scaling admittance and impedance of the inverter.
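A minimal worked example of the λ/4 line as an inverter (values assumed for illustration): at its resonant frequency the input impedance is Zin = Z0²/ZL, so matching a 100 Ω load to a 50 Ω system takes Z0 = √(50 × 100) ≈ 70.7 Ω:

```python
import math

def quarter_wave_zin(z0, zl):
    """Input impedance of a lambda/4 line of impedance z0 terminated in zl, at resonance."""
    return z0 ** 2 / zl

z0 = math.sqrt(50.0 * 100.0)        # ~70.7 ohms, chosen to match the load
print(quarter_wave_zin(z0, 100.0))  # 50.0 ohms: the 100-ohm load appears matched
```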
Stubs may be lengthened in order to change from an open-circuit to a short-circuit stub and vice versa. Low-pass filters usually consist of series inductors and shunt capacitors. Applying Kuroda's identities will convert these to all shunt capacitors, which are open-circuit stubs. Open-circuit stubs are preferred in printed technologies because they are easier to implement, and this is the technology likely to be found in consumer products. However, this is not the case in other technologies such as coaxial line or twin-lead, where the short circuit may actually be helpful for mechanical support of the structure. Short circuits also have a small advantage in that their position is generally more precisely defined than that of open circuits. If the circuit is to be further transformed into the waveguide medium, open circuits are out of the question because there would be radiation from the aperture so formed. For a high-pass filter the converse applies: applying Kuroda's identities will naturally result in short-circuit stubs, and it may be desirable for a printed design to convert these to open circuits. As an example, a λ/8 open-circuit stub can be replaced with a 3λ/8 short-circuit stub of the same characteristic impedance without changing the circuit function at the design frequency.
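The stub-lengthening example can be checked numerically; the sketch below (with an assumed 50 Ω stub impedance) confirms that at the design frequency a λ/8 open-circuit stub and a 3λ/8 short-circuit stub present the same impedance, −jZ0:

```python
import cmath
import math

Z0 = 50.0                                          # assumed stub impedance, ohms
z_open_l8 = -1j * Z0 / math.tan(math.pi / 4)       # -j*Z0*cot(theta) at theta = pi/4
z_short_3l8 = 1j * Z0 * math.tan(3 * math.pi / 4)  # j*Z0*tan(3*theta)
assert cmath.isclose(z_open_l8, z_short_3l8)       # both equal -50j ohms
print(z_open_l8, z_short_3l8)
```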
Coupling elements together with impedance-transformer lines is not the most compact design. Other methods of coupling have been developed, especially for band-pass filters, that are far more compact. These include parallel-line filters, interdigital filters, hairpin filters, and semi-lumped designs such as combline filters.
Bibliography
Besser, Les; Gilmore, Rowan, Practical RF Circuit Design for Modern Wireless Systems: Volume 1: Passive Circuits and Systems, Artech House, 2002.
Bhat, Bharathi; Koul, Shiban K., Stripline-like Transmission Lines for Microwave Integrated Circuits, New Age International, 1989.
Du, Ke-Lin; Swamy, M. N. S., Wireless Communication Systems, Cambridge University Press, 2010.
Gardner, Mark A.; Wickert, David W., "Microwave filter design using radial line stubs", 1988 IEEE Region 5 Conference: Spanning the Peaks of Electrotechnology, pp. 68–72, IEEE, March 1988.
Helszajn, Joseph, Synthesis of Lumped Element, Distributed and Planar Filters, McGraw-Hill, 1990.
Hunter, Ian C., Theory and Design of Microwave Filters, IET, 2001.
Kumar, Narendra; Grebennikov, Andrei, Distributed Power Amplifiers for RF and Microwave Communications, Artech House, 2015.
Lee, Thomas H., Planar Microwave Engineering, vol. 1, Cambridge University Press, 2004.
Levy, Ralph; Cohn, Seymour B., "A History of Microwave Filter Research, Design, and Development", IEEE Transactions on Microwave Theory and Techniques, vol. 32, iss. 9, pp. 1055–1067, September 1984.
Maloratsky, Leo, Passive RF & Microwave Integrated Circuits, Elsevier, 2003.
Matthaei, George L.; Young, Leo; Jones, E. M. T., Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill, 1964.
Ozaki, H.; Ishii, J., "Synthesis of a class of strip-line filters", IRE Transactions on Circuit Theory, vol. 5, iss. 2, pp. 104–109, June 1958.
Richards, Paul I., "Resistor-transmission-line circuits", Proceedings of the IRE, vol. 36, iss. 2, pp. 217–220, 1948.
Sisodia, M. L., Microwaves: Introduction to Circuits, Devices and Antennas, New Age International, 2007.
Wen, Geyi, Foundations for Radio Frequency Engineering, World Scientific, 2015.
Wiek, Martin, Fiber Optics Standard Dictionary, Springer, 1997.
Filter theory
Distributed element circuits
Microwave technology
Circuit theorems
Analog circuits
Linear filters
Electronic design | Commensurate line circuit | [
"Physics",
"Engineering"
] | 3,412 | [
"Telecommunications engineering",
"Equations of physics",
"Electronic design",
"Analog circuits",
"Filter theory",
"Electronic engineering",
"Distributed element circuits",
"Circuit theorems",
"Design",
"Physics theorems"
] |
26,915,887 | https://en.wikipedia.org/wiki/Voigt%E2%80%93Thomson%20law | The Voigt–Thomson law describes the anisotropic magnetoresistance effect in a thin-film strip as a relationship between the electric resistivity and the direction of the electric current:

$$\rho(\theta) = \rho_0 + \Delta\rho\,\cos^2\theta$$

where:
θ is the angle of the direction of current in relation to the direction of the magnetic field
ρ0 is the initial resistivity
Δρ is the change of resistivity (proportional to the MR ratio)

The equation can also be expressed as:

$$\rho(\theta) = \rho_\perp + \left(\rho_\parallel - \rho_\perp\right)\cos^2\theta$$

where:
ρ∥ is the parallel component of the resistivity
ρ⊥ is the perpendicular component
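A minimal sketch of the law in Python (the resistivity values are illustrative assumptions, loosely permalloy-like, not from the source):

```python
import math

def amr_resistivity(theta, rho_perp, rho_par):
    """Voigt-Thomson law: rho(theta) = rho_perp + (rho_par - rho_perp) * cos(theta)**2."""
    return rho_perp + (rho_par - rho_perp) * math.cos(theta) ** 2

rho_perp, rho_par = 1.00e-7, 1.02e-7                     # ohm-metres (assumed values)
print(amr_resistivity(0.0, rho_perp, rho_par))           # current parallel: rho_par
print(amr_resistivity(math.pi / 2, rho_perp, rho_par))   # perpendicular: rho_perp
```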
Spintronics | Voigt–Thomson law | [
"Physics",
"Materials_science"
] | 97 | [
"Spintronics",
"Condensed matter physics"
] |