id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
27,149,689 | https://en.wikipedia.org/wiki/SNOX%20process | The SNOX process removes sulfur dioxide, nitrogen oxides and particulates from flue gases. The sulfur is recovered as concentrated sulfuric acid and the nitrogen oxides are reduced to free nitrogen. The process is based on the well-known wet sulfuric acid process (WSA), a process for recovering sulfur from various process gases in the form of commercial-quality sulfuric acid (H2SO4).
The SNOX process is based on catalytic reactions and does not consume water or absorbents. Neither does it produce any waste, except for the separated dust.
In addition, the process can handle other sulfurous waste streams. This is particularly attractive in refineries, where, for example, hydrogen sulfide (H2S) gas, sour water stripper gas and Claus tail gas can be routed to the SNOX plant, avoiding investment in separate waste gas handling facilities.
Process
The SNOX process includes the following steps:
Dust removal
Catalytic reduction of NOx by adding NH3 to the gas upstream of the SCR DeNOx reactor
Catalytic oxidation of SO2 to SO3 in the oxidation reactor
Cooling of the gas to about 100 °C whereby the H2SO4 condenses and can be withdrawn as concentrated sulfuric acid
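In outline, the chemistry behind the last three steps is standard SCR and wet-sulfuric-acid chemistry. The following summary reactions are a sketch consistent with the steps above, not a statement of the exact plant chemistry:

```latex
% Summary reactions for the SNOX steps (standard SCR / WSA chemistry)
\begin{align*}
4\,\mathrm{NO} + 4\,\mathrm{NH_3} + \mathrm{O_2} &\longrightarrow 4\,\mathrm{N_2} + 6\,\mathrm{H_2O} && \text{(SCR DeNOx)}\\
2\,\mathrm{SO_2} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{SO_3} && \text{(catalytic oxidation)}\\
\mathrm{SO_3} + \mathrm{H_2O} &\longrightarrow \mathrm{H_2SO_4}\,\mathrm{(g)} && \text{(hydration)}\\
\mathrm{H_2SO_4}\,\mathrm{(g)} &\longrightarrow \mathrm{H_2SO_4}\,\mathrm{(l)} && \text{(condensation at about } 100\,^{\circ}\mathrm{C})
\end{align*}
```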
Applications
The SNOX process developed by Haldor Topsoe has been specifically designed for power and steam generation plants to remove sulfur and nitrogen oxides from combustion of heavy residuals, petroleum coke, sour gases, or other waste products from refineries.
Today, refineries are struggling to find ways to dispose of their increasing amounts of sulfurous streams and waste products. Large amounts of high-sulfur residuals, particularly heavy oil and petroleum coke, are being produced and sold as fuel to the marine market or the cement industry. These off-take markets are, however, changing due to environmental constraints, and new markets have to be identified. One attractive option would be to use these residual fuels to produce power and steam, leaving the issue of emissions to be addressed. The SNOX technology is especially suitable for cleaning flue gases from combustion of high-sulfur fuels in refineries. The SNOX process is a very energy-efficient way to convert the NOx in the flue gas into nitrogen and the SOx into concentrated sulfuric acid of commercial quality, without using any absorbents and without producing waste products or waste water. Along with the flue gases, other sulfurous waste streams from a refinery can be treated, such as H2S gas, sour water stripping (SWS) gas, Claus tail gas and elemental sulfur, potentially turning this technology into a complete sulfur management system.
Possible configurations:
Flue-gas desulfurization
The SNOX process can be applied for treatment of flue gases from combustion of primarily high-sulfur fuels in power stations, refinery and other industrial boilers and for treatment of other waste gases containing sulfur compounds and nitrogen oxides.
The first full scale plant treating 1,000,000 Nm³/h flue gas from a 300 MW coal-fired power plant in Denmark was started up in 1991.
The largest SNOX plant in operation treats 1,200,000 Nm³/h flue gas from four petroleum coke fired boilers at a refinery in Sicily, Italy.
The process removes more than 95% of both the SO2 and the NOx in flue gases, and with integration of the heat recovered from the WSA condenser it is reported to have lower operating costs than conventional technologies.
Recycling of hot combustion air from the SNOX plants to the boilers, in combination with high-pressure steam production in the SNOX plants, increases the thermal efficiency and output of the boilers, resulting in a proportional reduction in CO2 emissions.
Enhanced sulfuric acid production
In several places there is a need for both electric power and sulfuric acid. A cheap high-sulfur fuel such as petroleum coke can be used for power generation, while the flue gas is cleaned in an SNOX plant producing sulfuric acid. Elemental sulfur is fired in the SNOX plant in order to produce the desired amount of sulfuric acid.
See also
Flue-gas desulfurization
References
Oil refining
Chemical processes
Desulfurization | SNOX process | [
"Chemistry"
] | 866 | [
"Desulfurization",
"Separation processes",
"Petroleum technology",
"Chemical processes",
"Oil refining",
"nan",
"Chemical process engineering"
] |
27,149,732 | https://en.wikipedia.org/wiki/SOXS | Solar X-Ray Spectrometer, or SOXS, was an experimental payload launched onboard Indian geostationary satellite GSAT-2 by the Indian Space Research Organisation, ISRO. SOXS collected data about X-ray emissions from solar flares with high energy and temporal resolutions.
Features
The Solar X-ray Spectrometer (SOXS) was flown onboard GSAT-2, launched on 8 May 2003.
SOXS employs Si and CZT semiconductor devices, which are extremely high resolution and low noise detectors.
Detector package is mounted on a sun pointing mechanism with tracking accuracy better than 0.1 degree.
Pulse height (PHA) measurements in 256 channels
System dead time - 16 microseconds for Si Pin and 13 microseconds for CZT
Energy window counters
On board calibration using Cd109 radio isotope
System health parameters monitoring
Onboard selection for background rejection (LLD/threshold)
In view of the temperature sensitivity of the detectors, the observational interval is < 3 hrs per day, from 04:00 to 06:45 UT.
Block schematics of SLD Payload (SLED, SFE, SLE and SCE)
SSTM daily tracking (0 to 189 degrees)
References
Astronomical spectroscopy | SOXS | [
"Physics",
"Chemistry",
"Astronomy"
] | 245 | [
"Spectrum (physical sciences)",
"Outer space",
"Astronomy stubs",
"Astrophysics",
"Astronomical spectroscopy",
"Spectroscopy",
"Outer space stubs"
] |
27,150,875 | https://en.wikipedia.org/wiki/ADITYA%20%28tokamak%29 | ADITYA is a medium size tokamak installed at the Institute for Plasma Research in India. Its construction began in 1982, and it was commissioned in 1989. It was the first tokamak in India.
It has a major radius of 0.75 metres and a minor radius of 0.25 metres. The maximum field strength is 1.2 tesla produced by 20 toroidal field coils spaced symmetrically in the toroidal direction. It is operated by two power supplies, a capacitor bank and the APPS (ADITYA pulse power supply).
The typical plasma parameters during capacitor bank discharges are: Ip ~30 kA, shot duration ~25 ms, central electron temperature ~100 eV and core plasma density ~10¹⁹ m⁻³; the typical parameters of APPS operation are ~100 kA plasma current, ~100 ms duration, ~300 eV central electron temperature and ~3×10¹⁹ m⁻³ core plasma density.
Various diagnostics used in ADITYA include electric and magnetic probes, microwave interferometry, Thomson scattering and charge exchange spectroscopy.
ADITYA has been upgraded to ADITYA-U, with first plasma obtained in 2016.
References
External links
Institute for Plasma Research Projects
The ADITYA Tokamak homepage
Brazilian Journal of Physics SST and ADITYA Tokamak Research in India
Tokamaks
Atomic and nuclear energy research in India | ADITYA (tokamak) | [
"Physics"
] | 280 | [
"Plasma physics stubs",
"Plasma physics"
] |
27,151,056 | https://en.wikipedia.org/wiki/%C5%81o%C5%9B%E2%80%93Tarski%20preservation%20theorem | The Łoś–Tarski theorem is a theorem in model theory, a branch of mathematics, that states that the set of formulas preserved under taking substructures is exactly the set of universal formulas. The theorem was discovered by Jerzy Łoś and Alfred Tarski.
Statement
Let T be a theory in a first-order language L and Φ(x̄) a set of formulas of L. (The sequence of variables x̄ need not be finite.) Then the following are equivalent:
If M and N are models of T, M is a substructure of N, and ā is a sequence of elements of M, then N ⊨ Φ(ā) implies M ⊨ Φ(ā). (Φ is preserved in substructures for models of T.)
Φ is equivalent modulo T to a set Ψ(x̄) of ∀₁ formulas of L.
A formula is ∀₁ if and only if it is of the form ∀ȳ ψ(x̄, ȳ), where ψ is quantifier-free.
In more common terms, this states that every first-order formula is preserved under induced substructures if and only if it is ∀₁, i.e. logically equivalent to a first-order universal formula.
As substructures and embeddings are dual notions, this theorem is sometimes stated in its dual form: every first-order formula is preserved under embeddings on all structures if and only if it is ∃₁, i.e. logically equivalent to a first-order existential formula.
Note that this property fails for finite models.
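A standard illustrative example (ours, not from the article), in the language of orders:

```latex
% A universal sentence, preserved under substructures:
%   every suborder of a linear order is again linear.
\varphi \;=\; \forall x\,\forall y\,(x \le y \,\lor\, y \le x)

% A non-universal sentence, not preserved under substructures:
%   "there is a least element" holds in ([0,1], \le) but fails in the
%   substructure ((0,1), \le); by the theorem it is therefore not
%   equivalent to any universal sentence.
\psi \;=\; \exists x\,\forall y\,(x \le y)
```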
Citations
References
Model theory
Metalogic | Łoś–Tarski preservation theorem | [
"Mathematics"
] | 280 | [
"Mathematical logic stubs",
"Mathematical logic",
"Model theory"
] |
27,157,933 | https://en.wikipedia.org/wiki/Nucleic%20acid%20secondary%20structure | Nucleic acid secondary structure is the basepairing interactions within a single nucleic acid polymer or between two polymers. It can be represented as a list of bases which are paired in a nucleic acid molecule.
The secondary structures of biological DNAs and RNAs tend to be different: biological DNA mostly exists as fully base paired double helices, while biological RNA is single stranded and often forms complex and intricate base-pairing interactions due to its increased ability to form hydrogen bonds stemming from the extra hydroxyl group in the ribose sugar.
In a non-biological context, secondary structure is a vital consideration in the nucleic acid design of nucleic acid structures for DNA nanotechnology and DNA computing, since the pattern of basepairing ultimately determines the overall structure of the molecules.
Fundamental concepts
Base pairing
In molecular biology, two nucleotides on opposite complementary DNA or RNA strands that are connected via hydrogen bonds are called a base pair (often abbreviated bp). In the canonical Watson-Crick base pairing, adenine (A) forms a base pair with thymine (T) and guanine (G) forms one with cytosine (C) in DNA. In RNA, thymine is replaced by uracil (U). Alternate hydrogen bonding patterns, such as the wobble base pair and Hoogsteen base pair, also occur—particularly in RNA—giving rise to complex and functional tertiary structures. Importantly, pairing is the mechanism by which codons on messenger RNA molecules are recognized by anticodons on transfer RNA during protein translation. Some DNA- or RNA-binding enzymes can recognize specific base pairing patterns that identify particular regulatory regions of genes.
Hydrogen bonding is the chemical mechanism that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably. DNA with high GC-content is more stable than DNA with low GC-content, but contrary to popular belief, the hydrogen bonds do not stabilize the DNA significantly and stabilization is mainly due to stacking interactions.
The larger nucleobases, adenine and guanine, are members of a class of doubly ringed chemical structures called purines; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of singly ringed chemical structures called pyrimidines. Purines are only complementary with pyrimidines: pyrimidine-pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine-purine pairings are energetically unfavorable because the molecules are too close, leading to overlap repulsion. The only other possible pairings are GT and AC; these pairings are mismatches because the pattern of hydrogen donors and acceptors does not correspond. The GU wobble base pair, with two hydrogen bonds, does occur fairly often in RNA.
Nucleic acid hybridization
Hybridization is the process of complementary base pairs binding to form a double helix. Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These bonds are weak, easily separated by gentle heating, enzymes, or physical force. Melting occurs preferentially at certain points in the nucleic acid. T and A rich sequences are more easily melted than C and G rich regions. Particular base steps are also susceptible to DNA melting, especially the TA and TG base steps. These mechanical features are reflected by the use of sequences such as TATAA at the start of many genes to assist RNA polymerase in melting the DNA for transcription.
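As a rough, quantitative illustration of why A/T-rich DNA melts more easily, the empirical Wallace rule estimates the melting temperature of a short oligonucleotide from its base counts. The sketch below is a minimal illustration (the function name is ours) and is valid only as a back-of-the-envelope estimate for oligos of roughly 14 to 20 nucleotides:

```python
def wallace_tm(seq: str) -> float:
    """Estimate the melting temperature (deg C) of a short DNA oligo
    (~14-20 nt) with the empirical Wallace rule: Tm = 2*(A+T) + 4*(G+C).
    G/C pairs contribute more than A/T pairs to duplex stability."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

# A/T-rich sequences melt at lower temperatures than G/C-rich ones:
print(wallace_tm("TATATATATATATATA"))  # 32.0
print(wallace_tm("GCGCGCGCGCGCGCGC"))  # 64.0
```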
Strand separation by gentle heating, as used in PCR, is simple provided the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. The cell avoids this problem by allowing its DNA-melting enzymes (helicases) to work concurrently with topoisomerases, which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. Helicases unwind the strands to facilitate the advance of sequence-reading enzymes such as DNA polymerase.
Secondary structure motifs
Nucleic acid secondary structure is generally divided into helices (contiguous base pairs), and various kinds of loops (unpaired nucleotides surrounded by helices). Frequently these elements, or combinations of them, are further classified into additional categories including, for example, tetraloops, pseudoknots, and stem-loops. Topological approaches can be used to categorize and compare complex structures that arise from combining these elements in various arrangements.
Double helix
The double helix is an important tertiary structure in nucleic acid molecules which is intimately connected with the molecule's secondary structure. A double helix is formed by regions of many consecutive base pairs.
The nucleic acid double helix is a spiral polymer, usually right-handed, containing two nucleotide strands which base pair together. A single turn of the helix constitutes about ten nucleotides, and contains a major groove and minor groove, the major groove being wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to DNA do so through the wider major groove. Many double-helical forms are possible; for DNA the three biologically relevant forms are A-DNA, B-DNA, and Z-DNA, while RNA double helices have structures similar to the A form of DNA.
Stem-loop structures
The secondary structure of nucleic acid molecules can often be uniquely decomposed into stems and loops. The stem-loop structure (also often referred to as a "hairpin"), in which a base-paired helix ends in a short unpaired loop, is extremely common and is a building block for larger structural motifs such as cloverleaf structures, which are four-helix junctions such as those found in transfer RNA. Internal loops (a short series of unpaired bases in a longer paired helix) and bulges (regions in which one strand of a helix has "extra" inserted bases with no counterparts in the opposite strand) are also frequent.
There are many secondary structure elements of functional importance to biological RNAs; some famous examples are the Rho-independent terminator stem-loops and the tRNA cloverleaf. Active research is on-going to determine the secondary structure of RNA molecules, with approaches including both experimental and computational methods (see also the List of RNA structure prediction software).
Pseudoknots
A pseudoknot is a nucleic acid secondary structure containing at least two stem-loop structures in which half of one stem is intercalated between the two halves of another stem. Pseudoknots fold into knot-shaped three-dimensional conformations but are not true topological knots. The base pairing in pseudoknots is not well nested; that is, base pairs occur that "overlap" one another in sequence position. This makes general pseudoknots impossible to predict by the standard method of dynamic programming, which uses a recursive scoring system to identify paired stems and consequently cannot detect non-nested base pairs. However, limited subclasses of pseudoknots can be predicted using modified dynamic programs.
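To make "not well nested" concrete, the following minimal sketch (our own illustration; the function name is hypothetical) tests whether a set of base pairs, given as (i, j) sequence positions, is nested, i.e. free of the crossing pairs that characterize pseudoknots:

```python
def is_nested(pairs):
    """True if no two base pairs cross, i.e. interleave as i < k < j < l.
    Crossing pairs are the signature of a pseudoknot and are exactly what
    the standard nested dynamic-programming recursions cannot represent."""
    norm = sorted(tuple(sorted(p)) for p in pairs)
    for a in range(len(norm)):
        i, j = norm[a]
        for k, l in norm[a + 1:]:
            if i < k < j < l:  # (k, l) opens inside (i, j) but closes outside
                return False
    return True

# A hairpin nested inside an enclosing helix: fine.
print(is_nested([(0, 20), (1, 19), (5, 12)]))  # True
# Half of one stem intercalated between the halves of another: pseudoknot.
print(is_nested([(0, 10), (5, 15)]))           # False
```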
Newer structure prediction techniques such as stochastic context-free grammars are also unable to consider pseudoknots.
Pseudoknots can form a variety of structures with catalytic activity and several important biological processes rely on RNA molecules that form pseudoknots. For example, the RNA component of the human telomerase contains a pseudoknot that is critical for its activity. The hepatitis delta virus ribozyme is a well known example of a catalytic RNA with a pseudoknot in its active site. Though DNA can also form pseudoknots, they are generally not present in standard physiological conditions.
Secondary structure prediction
Most methods for nucleic acid secondary structure prediction rely on a nearest neighbor thermodynamic model. A common method to determine the most probable structures given a sequence of nucleotides makes use of a dynamic programming algorithm that seeks to find structures with low free energy. Dynamic programming algorithms often forbid pseudoknots, or other cases in which base pairs are not fully nested, as considering these structures becomes computationally very expensive for even small nucleic acid molecules. Other methods, such as stochastic context-free grammars can also be used to predict nucleic acid secondary structure.
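The classic dynamic program of this kind is the Nussinov algorithm. As a simplification it maximizes the number of nested base pairs instead of minimizing a nearest-neighbor free energy, but it shows the recursive scoring structure described above and why non-nested pseudoknots fall outside it. The following is a minimal sketch, not a production predictor:

```python
def nussinov(seq: str, min_loop: int = 3) -> int:
    """Maximum number of nested base pairs (Nussinov-style dynamic program).
    Real predictors minimize nearest-neighbor free energy instead, but they
    share this recursion, whose nested structure cannot represent pseudoknots."""
    valid = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # widen the subsequence [i, j]
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case 1: j is unpaired
            for k in range(i, j - min_loop):     # case 2: j pairs with some k
                if (seq[k], seq[j]) in valid:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCCC"))  # 3: a three-pair stem closing a four-base loop
```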
For many RNA molecules, the secondary structure is highly important to the correct function of the RNA — often more so than the actual sequence. This fact aids in the analysis of non-coding RNA sometimes termed "RNA genes". One application of bioinformatics uses predicted RNA secondary structures in searching a genome for noncoding but functional forms of RNA. For example, microRNAs have canonical long stem-loop structures interrupted by small internal loops.
RNA secondary structure applies in RNA splicing in certain species. In humans and other tetrapods, it has been shown that without the U2AF2 protein, the splicing process is inhibited. However, in zebrafish and other teleosts the RNA splicing process can still occur on certain genes in the absence of U2AF2. This may be because 10% of genes in zebrafish have alternating TG and AC base pairs at the 3' splice site (3'ss) and 5' splice site (5'ss) respectively on each intron, which alters the secondary structure of the RNA. This suggests that secondary structure of RNA can influence splicing, potentially without the use of proteins like U2AF2 that have been thought to be required for splicing to occur.
Secondary structure determination
RNA secondary structure can be determined from atomic coordinates (tertiary structure) obtained by X-ray crystallography, often deposited in the Protein Data Bank. Current methods include 3DNA/DSSR and MC-annotate.
See also
DNA nanotechnology
Molecular models of DNA
DiProDB, a database designed to collect and analyse thermodynamic, structural and other dinucleotide properties.
RNA CoSSMos
References
External links
MDDNA: Structural Bioinformatics of DNA
Abalone — Commercial software for DNA modeling
DNAlive: a web interface to compute DNA physical properties. Also allows cross-linking of the results with the UCSC Genome browser and DNA dynamics.
DNA
Biophysics
Molecular structure
RNA | Nucleic acid secondary structure | [
"Physics",
"Chemistry",
"Biology"
] | 2,188 | [
"Applied and interdisciplinary physics",
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Biophysics",
"Matter"
] |
27,157,939 | https://en.wikipedia.org/wiki/Asphalt%20volcano | Asphalt volcanoes are a rare variety of submarine volcano (seamount). They were unknown before 2003. Several examples have been found along the coasts of the United States and Mexico and elsewhere, some still showing activity. Asphalt volcanoes resemble other seamounts; however, they are made entirely of asphalt. The structures are thought to form above geologic faults through which petroleum seeps from deeper in the Earth's crust.
Formation and distribution
Asphalt volcanoes are vents on the ocean floor through which asphalt erupts rather than lava. They were discovered in the Gulf of Mexico during an expedition of the research vessel SONNE, led by Gerhard Bohrmann of the DFG Research Center Ocean Margins. These volcanoes host a previously unknown and highly diverse ecosystem at water depths of more than 3,000 meters.
The first asphalt volcanoes were discovered in 2003 by a research expedition to the Gulf of Mexico. They are located on a seafloor hill named "Chapopote," Nahuatl for "tar." The site is located in a field of salt domes known as the Campeche Knolls, a series of steep hills formed from salt bodies that rise from underlying rock, a common feature in the gulf. The research team documented tar flows as wide as across. Also discovered alongside the asphalt were areas soaked with petroleum and methane hydrate, also spewed from the volcano. This kind of an environment proves attractive to chemical-loving bacteria and tubeworms, although the exact biogeochemical relationship is not yet known.
One hypothesis is that the tar is relatively hot when it comes out of the seafloor, but just like undersea lava flows, it is quickly cooled by the much colder seawater around it. This produces forms similar to the distinctive A'a and pahoehoe types of basalt lava flow seen in places like Hawaii. Another similarity is that the tar heats methane hydrate and causes it to explode into a free gas, similar to the action hot lava has on groundwater in phreatomagmatic eruptions.
The team proposed an asphalt volcano formation theory in a paper published in Eos. The article suggested that water heated past the critical point underneath the seafloor found a passageway to the surface, most likely a salt dome, and carried with it a heavy load of hydrocarbons and dissolved minerals. A special property of such critically heated water is that it can mix with oils, whereas normal water cannot. The same process is attributed to the formation of black smokers. Once the water reaches the surface, it cools, and its carrying capacity drops. The lighter compounds in the mixture escape to the surface, while the tar and other heavier materials remain on the seafloor, eventually building up the asphalt volcano's structure.
The role of temperature in asphalt volcanism is debated, with evidence suggesting asphalt does not erupt in a hot state. Instead, the pāhoehoe-like textures might result from gradients in viscosity, driven by the loss of volatile components, which create a contrast between the flow's outer crust and its inner core.
In 2007, seven more such structures were discovered off the coast of Santa Barbara, California. The largest of these domes lies at a depth of . The structures were larger than a football field and about as tall as a six-story building, all made completely out of asphalt. The unusual features were first noted by Ed Keller on bathymetric surveys conducted in the 1990s, and first viewed by a team led by David Valentine in 2007, utilizing DSV Alvin. Samples were brought up for testing at the university campus and the Woods Hole Oceanographic Institution.
Two further dives with DSV Alvin in 2009 and a detailed photographic survey of the area by the autonomous underwater vehicle Sentry showed many similarities to volcanic flows, including flow texture and cracking of the asphalt layers. Carbon dating puts the structures at between 30 and 40 thousand years old. They had at one time been a prolific source of methane. The two largest structures, less than apart, are pocked by pits and depressions, a sign of methane gas bubbling up long ago. Although the structures are still emitting residual gas, at present the amounts are too small to have any effect. The amount of crude oil in the largest of the structures alone is "enough to fuel my Honda Civic for about half a billion miles. [However] the quality of the material is very poor...It's not worth something like light sweet crude," said Valentine. The petroleum in the structure is more viscous than that which is usually found in underground wells. This is because it has had less time to "bake" under the Earth's heat before being released. In addition, as much as 20% of its mass is made of "junk"—microscopic organisms, sand, and miscellaneous materials that gradually accumulated in the oil.
Analysis of the samples collected from the mounds suggest that they required several decades, even centuries, to build up their current bulk, and that the volcanoes last erupted around 35,000 years ago. In addition they may account for a mysterious spike in oceanic methane concentrations around 35,000 years ago. Methane forms naturally alongside the petroleum underneath the structure, and while petroleum flows have long abated, some residual methane continues to bubble up. This burst of methane would have caused a rapid increase in the population of methane-eating bacteria, which in turn caused a decrease in oxygen in the water, possibly causing a dead zone, in addition to the large amounts of crude oil released into the environment.
The presence of these structures provides a hard surface on which life can grow, as the surrounding ocean floor is generally muddy. This is similar to what happens on seamounts, resulting in their place as an ecological "hub."
Onshore "tar volcanoes"
Onshore "tar volcanoes" have also been observed, for instance in Carpinteria, California, in an asphalt mine. Asphalt exuded from joint cracks in the upturned Monterey shale forming the floor of the mine. Similar structures, the Carpinteria Tar Pits, still form on the beach below Carpinteria.
See also
Mud volcano
Petroleum seep
Pitch Lake
Pockmark (geology)
References
External links
What is an asphalt volcano?, Mother Nature Network, 2014. Good photos.
The Asphalt Ecosystem of the Gulf of Mexico, NOAA Ocean Explorer, April 24, 2014
Seamounts
Petroleum geology | Asphalt volcano | [
"Chemistry"
] | 1,292 | [
"Petroleum",
"Petroleum geology"
] |
275,712 | https://en.wikipedia.org/wiki/Cleanroom | A cleanroom or clean room is an engineered space that maintains a very low concentration of airborne particulates. It is well isolated, well controlled from contamination, and actively cleansed. Such rooms are commonly needed for scientific research and in industrial production for all nanoscale processes, such as semiconductor device manufacturing. A cleanroom is designed to keep everything from dust to airborne organisms or vaporised particles away from it, and so from whatever material is being handled inside it.
A cleanroom can also prevent the escape of materials. This is often the primary aim in hazardous biology, nuclear work, pharmaceutics and virology.
Cleanrooms typically come with a cleanliness level quantified by the number of particles per cubic meter at a predetermined particle size. The ambient outdoor air in a typical urban area contains 35,000,000 particles for each cubic meter in the size range 0.5 μm and bigger, equivalent to an ISO 9 certified cleanroom. By comparison, an ISO 14644-1 level 1 certified cleanroom permits no particles in that size range, and just 12 particles for each cubic meter of 0.3 μm and smaller. Semiconductor facilities often get by with level 7 or 5, while level 1 facilities are exceedingly rare.
History
The modern cleanroom was invented by American physicist Willis Whitfield. As an employee of the Sandia National Laboratories, Whitfield created the initial plans for the cleanroom in 1960. Prior to Whitfield's invention, earlier cleanrooms often had problems with particles and unpredictable airflows. Whitfield designed his cleanroom with a constant, highly filtered airflow to flush out impurities. Within a few years of its invention in the 1960s, Whitfield's modern cleanroom had generated more than US$50 billion in sales worldwide (approximately $ billion today).
By mid-1963, more than 200 U.S. industrial plants had such specially constructed facilities—then using the terminology “White Rooms,” “Clean Rooms,” or “Dust-Free Rooms”—including the Radio Corporation of America, McDonnell Aircraft, Hughes Aircraft, Sperry Rand, Sylvania Electric, Western Electric, Boeing, and North American Aviation. RCA began such a conversion of part of its Cambridge, Ohio facilities in February 1961. Totalling 70,000 square feet, it was used to prepare control equipment for the Minuteman ICBM missiles.
The majority of the integrated circuit manufacturing facilities in Silicon Valley were made by three companies: MicroAire, PureAire, and Key Plastics. These competitors made laminar flow units, glove boxes, cleanrooms and air showers, along with the chemical tanks and benches used in the "wet process" building of integrated circuits. These three companies were the pioneers of the use of Teflon for airguns, chemical pumps, scrubbers, water guns, and other devices needed for the production of integrated circuits. William (Bill) C. McElroy Jr. worked as an engineering manager, drafting room supervisor, QA/QC, and designer for all three companies, and his designs added 45 original patents to the technology of the time. McElroy also wrote a four-page article for MicroContamination Journal, wet processing training manuals, and equipment manuals for wet processing and cleanrooms.
Overview
A cleanroom is a necessity in the manufacturing of semiconductors, rechargeable batteries, pharmaceutical products, and any other field that is highly sensitive to environmental contamination.
Cleanrooms can range from the very small to the very large. On the one hand, a single-user laboratory can be built to cleanroom standards within several square meters, and on the other, entire manufacturing facilities can be contained within a cleanroom with factory floors covering thousands of square meters. Between the large and the small, there are also modular cleanrooms. They have been argued to lower costs of scaling the technology, and to be less susceptible to catastrophic failure.
With such a wide area of application, not every cleanroom is the same. For example, the rooms utilized in semiconductor manufacturing need not be sterile (i.e., free of uncontrolled microbes), while the ones used in biotechnology usually must be. Conversely, operating rooms need not be absolutely pure of nanoscale inorganic particles, such as rust, while nanotechnology absolutely requires it. What then is common to all cleanrooms is strict control of airborne particulates, possibly with secondary decontamination of air, surfaces, workers entering the room, implements, chemicals, and machinery.
Sometimes particulates exiting the compartment are also of concern, such as in research into dangerous viruses, or where radioactive materials are being handled.
Basic construction
First, outside air entering a cleanroom is filtered and cooled by several outdoor air handlers using progressively finer filters to exclude dust.
Within, air is constantly recirculated through fan units containing high-efficiency particulate air (HEPA) and/or ultra-low particulate air (ULPA) filters to remove internally generated contaminants. Special lighting fixtures, walls, equipment and other materials are used to minimize the generation of airborne particles. Plastic sheets can be used to restrict air turbulence if the cleanroom design is of the laminar airflow type.
Air temperature and humidity levels inside a cleanroom are tightly controlled, because they affect the efficiency and means of air filtration. If a particular room requires low enough humidity to make static electricity a concern, it too will be controlled by, e.g., introducing controlled amounts of charged ions into the air using a corona discharge. Static discharge is of particular concern in the electronics industry, where it can instantly destroy components and circuitry.
Equipment inside any cleanroom is designed to generate minimal air contamination. The materials selected for the construction of a cleanroom should not generate any particulates; hence, monolithic epoxy or polyurethane floor coatings are preferred. Buffed stainless steel or powder-coated mild steel sandwich partition panels and ceiling panels are used instead of iron alloys prone to rusting and then flaking. Corners, such as wall-to-wall, wall-to-floor and wall-to-ceiling junctions, are avoided by providing coved surfaces, and all joints need to be sealed with epoxy sealant to avoid any deposition or generation of particles at the joints through vibration and friction. Many cleanrooms have a "tunnel" design in which there are spaces called "service chases" that serve as air plenums carrying the air from the bottom of the room to the top so that it can be recirculated and filtered at the top of the cleanroom.
Airflow principles
Cleanrooms maintain particulate-free air through the use of either HEPA or ULPA filters employing laminar or turbulent airflow principles. Laminar, or unidirectional, airflow systems direct filtered air downward or in horizontal direction in a constant stream towards filters located on walls near the cleanroom floor or through raised perforated floor panels to be recirculated. Laminar airflow systems are typically employed across 80% of a cleanroom ceiling to maintain constant air processing. Stainless steel or other non shedding materials are used to construct laminar airflow filters and hoods to prevent excess particles entering the air. Turbulent, or non-unidirectional, airflow uses both laminar airflow hoods and nonspecific velocity filters to keep air in a cleanroom in constant motion, although not all in the same direction. The rough air seeks to trap particles that may be in the air and drive them towards the floor, where they enter filters and leave the cleanroom environment. US FDA and EU have laid down stringent guidelines and limits to ensure freedom from microbial contamination in pharmaceutical products. Plenums between air handlers and fan filter units, along with sticky mats, may also be used.
In addition to air filters, cleanrooms can also use ultraviolet light to disinfect the air. UV devices can be fitted into ceiling light fixtures and irradiate air, killing potentially infectious particulates, including 99.99 percent of airborne microbial and fungal contaminants. UV light has previously been used to clean surface contaminants in sterile environments such as hospital operating rooms. Their use in other cleanrooms may increase as equipment becomes more affordable. Potential advantages of UV-based decontamination includes a reduced reliance on chemical disinfectants and the extension of HVAC filter life.
Cleanrooms of different kinds
Some cleanrooms are kept at a positive pressure so that if any leaks occur, air leaks out of the chamber instead of unfiltered air coming in. This is most typically the case in semiconductor manufacturing, where even minute amounts of particulates leaking in could contaminate the whole process, while anything leaking out is of little concern. The opposite is done, e.g., in the case of high-level bio-laboratories that handle dangerous bacteria or viruses; those are always held at negative pressure, with the exhaust being passed through high-efficiency filters and further sterilizing procedures. Both are still cleanrooms because the particulate level inside is maintained within very low limits.
Some cleanroom HVAC systems control the humidity to such low levels that extra equipment like air ionizers are required to prevent electrostatic discharge problems. This is a particular concern within the semiconductor business, because static discharge can easily damage modern circuit designs. On the other hand, active ions in the air can harm exposed components as well. Because of this, most workers in high-end electronics and semiconductor facilities have to wear conductive boots while working. Low-level cleanrooms may only require special shoes, with completely smooth soles that do not track in dust or dirt. However, for safety reasons, shoe soles must not create slipping hazards. Access to a cleanroom is usually restricted to those wearing a cleanroom suit, including the necessary machinery.
In cleanrooms in which the standards of air contamination are less rigorous, the entrance to the cleanroom may not have an air shower. An anteroom (known as a "gray room") is used to put on cleanroom clothing. This practice is common in many nuclear power plants, which operate as low-grade inverse pressure cleanrooms, as a whole.
Recirculating vs. one pass cleanrooms
Recirculating cleanrooms return air to the negative-pressure plenum via low wall air returns. The air is then pulled by HEPA fan filter units back into the cleanroom. The air constantly recirculates, and by continuously passing through HEPA filtration, particles are removed from the air each time. Another advantage of this design is that air conditioning can be incorporated.
One pass cleanrooms draw air from outside and pass it through HEPA fan filter units into the cleanroom. The air then leaves through exhaust grills. The advantage of this approach is the lower cost. The disadvantages are comparatively shorter HEPA fan filter life, worse particle counts than a recirculating cleanroom, and that it cannot accommodate air conditioning.
Aseptic Practices/Processing
Aseptic practices are critical in environments where contamination control is paramount, particularly in the pharmaceutical, biotechnology, and medical device industries. Aseptic processing involves maintaining a sterile environment to prevent the introduction of contaminants during the manufacturing of products, such as sterile injectable medications and sterile medical equipment. This requires stringent control over personnel behavior, equipment sterilization, and the cleanroom environment.
There are different classifications for aseptic or sterile processing cleanrooms. The Pharmaceutical Inspection Co-operation Scheme (PIC/S) classifies cleanrooms into four grades (A, B, C, and D) based on their cleanliness level, particularly the concentration of airborne particles and viable microorganisms.
Operating procedure
In order to minimize the carrying of particulate by a person moving into the cleanroom, staff enter and leave through airlocks (sometimes including an air shower stage) and wear protective clothing such as hoods, face masks, gloves, boots, and coveralls.
Common materials such as paper, pencils, and fabrics made from natural fibers are often excluded because they shed particulates in use.
Particle levels are usually tested using a particle counter, and microorganisms are detected and counted through environmental monitoring methods. Polymer tools used in cleanrooms must be carefully determined to be chemically compatible with cleanroom processing fluids, and must be ensured to generate a low level of particles.
When cleaning, only special mops and buckets are used. Cleaning chemicals used tend to involve sticky elements to trap dust, and may need a second step with light molecular weight solvents to clear. Cleanroom furniture is designed to produce a minimum of particles and is easy to clean.
A cleanroom is as much a process and a meticulous culture to maintain, as it is a space as such.
Personnel contamination of cleanrooms
The greatest threat to cleanroom contamination comes from the users themselves. In the healthcare and pharmaceutical sectors, control of microorganisms is important, especially microorganisms likely to be deposited into the air stream from skin shedding. Studying cleanroom microflora is of importance for microbiologists and quality control personnel to assess changes in trends. Shifts in the types of microflora may indicate deviations from the "norm" such as resistant strains or problems with cleaning practices.
In assessing cleanroom microorganisms, the typical flora are primarily those associated with human skin (Gram-positive cocci), although microorganisms from other sources such as the environment (Gram-positive rods) and water (Gram-negative rods) are also detected, although in lower number. Common bacterial genera include Micrococcus, Staphylococcus, Corynebacterium, and Bacillus, and fungal genera include Aspergillus and Penicillium.
Cleanroom classification and standardization
Cleanrooms are classified according to the number and size of particles permitted per volume of air. Large numbers like "class 100" or "class 1000" refer to FED-STD-209E, and denote the number of particles of size 0.5 μm or larger permitted per cubic foot of air. The standard also allows interpolation; for example SNOLAB is maintained as a class 2000 cleanroom.
A discrete, light-scattering airborne particle counter is used to determine the concentration of airborne particles, equal to and larger than the specified sizes, at designated sampling locations.
Small numbers refer to ISO 14644-1 standards, which specify the decimal logarithm of the number of particles 0.1 μm or larger permitted per m³ of air. So, for example, an ISO class 5 cleanroom has at most 10⁵ particles/m³.
Both FS 209E and ISO 14644-1 assume log-log relationships between particle size and particle concentration. For that reason, zero particle concentration does not exist. Some classes do not require testing some particle sizes, because the concentration is too low or too high to be practical to test for, but such blanks should not be read as zero.
Because 1 m³ is about 35 ft³, the two standards are mostly equivalent when measuring 0.5 μm particles, although the testing standards differ. Ordinary room air is around class 1,000,000 or ISO 9.
ISO 14644-1 and ISO 14698
ISO 14644-1 and ISO 14698 are non-governmental standards developed by the International Organization for Standardization (ISO). The former applies to cleanrooms in general (see table below), the latter to cleanrooms where biocontamination may be an issue. Since the strictest standards have been achieved only for space applications, it is sometimes difficult to know whether they were achieved in vacuum or standard conditions.
ISO 14644-1 defines the maximum concentration of particles per class and per particle size with the following formula:

C_N = 10^N × (0.1/D)^2.08

where C_N is the maximum permitted concentration, in particles per cubic metre of air, of airborne particles that are equal to or larger than the considered particle size, rounded to the nearest whole number using no more than three significant figures; N is the ISO class number; D is the particle size in μm; and 0.1 is a constant expressed in μm. The result for standard particle sizes is expressed in the following table.
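As a quick numeric check of this formula, a minimal sketch (function name ours) that evaluates the class limit for a given class number and particle size:

```python
def iso_limit(n_class: float, particle_size_um: float) -> int:
    """Maximum particles per m^3 at or above the given size for an
    ISO 14644-1 class: C = 10**N * (0.1 / D)**2.08, rounded to the
    nearest whole number (the standard tabulates <= 3 significant figures)."""
    return round(10 ** n_class * (0.1 / particle_size_um) ** 2.08)

# ISO 5 at 0.5 um: ~3,517 particles/m^3, tabulated as 3,520 in the standard;
# at ~35 ft^3 per m^3 this is roughly 100 particles/ft^3, the old "class 100".
print(iso_limit(5, 0.5))
```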
US FED STD 209E
US FED-STD-209E was a United States federal standard. It was officially cancelled by the General Services Administration on November 29, 2001, but is still widely used.
Current regulating bodies include ISO, USP 800, US FED STD 209E (previous standard, still used).
Drug Quality and Security Act (DQSA) created in Nov. 2013 in response to drug compounding deaths and serious adverse events.
The Federal Food, Drug, and Cosmetic Act (FD&C Act) created specific guidelines and policies for human compounding.
503A addresses compounding by a state- or federally-licensed facility by licensed personnel (pharmacists/physicians).
503B pertains to outsourcing facilities, which need direct supervision from a licensed pharmacist but do not need to be a licensed pharmacy. The facility is licensed through the Food and Drug Administration (FDA).
EU GMP classification
EU GMP guidelines are more stringent than others, requiring cleanrooms to meet particle counts in operation (while the manufacturing process is carried out) and at rest (when the manufacturing process is not carried out, but the room AHU is running).
BS 5295
BS 5295 is a British Standard.
BS 5295 Class 1 also requires that the greatest particle present in any sample cannot exceed 5 μm. BS 5295 has been superseded, withdrawn in 2007 and replaced with "BS EN ISO 14644-6:2007".
USP <800> standards
USP 800 is a United States standard developed by the United States Pharmacopeial Convention (USP) with an effective date of December 1, 2019.
Ramifications and further applications
In hospitals, operating theatres are similar to cleanrooms: they limit airborne contamination during surgical operations involving incisions, in order to prevent infections in the patient.
In another case, severely immunocompromised patients sometimes have to be held in prolonged isolation from their surroundings, for fear of infection. At the extreme, this necessitates a cleanroom environment. The same is the case for patients carrying airborne infectious diseases, only they are handled at negative, not positive pressure.
In exobiology, when contact with other planets is sought, there is a biological hazard both ways: sample return missions from other stellar bodies must not be contaminated with terrestrial microbes, and possible ecosystems existing on other planets must not be contaminated by us. Thus, even by international law, any probes sent to outer space must be sterile, and so have to be handled in cleanroom conditions.
Since larger cleanrooms are very sensitive controlled environments upon which multibillion-dollar industries depend, sometimes they are even fitted with numerous seismic base isolation systems to prevent costly equipment malfunction.
See also
Air quality index
Contamination control
Data recovery lab
Indoor air quality
Microfiltration
Pressure room (disambiguation)
Pneumatic filter
Secure environment
References
External links
Cleanroom Wiki--The Global Society For Contamination Control (GSFCC)
Rooms
Semiconductor device fabrication
Filters
Telecommunications engineering | Cleanroom | [
"Chemistry",
"Materials_science",
"Engineering"
] | 3,936 | [
"Telecommunications engineering",
"Microtechnology",
"Chemical equipment",
"Filters",
"Cleanroom technology",
"Rooms",
"Semiconductor device fabrication",
"Filtration",
"Electrical engineering",
"Architecture"
] |
275,935 | https://en.wikipedia.org/wiki/Flexible%20AC%20transmission%20system | A flexible alternating current transmission system (FACTS) is a family of power-electronics-based devices designed for use on an alternating current (AC) transmission system to improve and control power flow and support voltage. FACTS devices are alternatives to traditional electric grid solutions and improvements where building additional transmission lines or substations is not economically or logistically viable.
In general, FACTS devices improve power flow and voltage in three different ways: shunt compensation of voltage (replacing the function of capacitors or inductors), series compensation of impedance (replacing series capacitors), or phase-angle compensation (replacing generator droop control or phase-shifting transformers). While other traditional equipment can accomplish all of this, FACTS devices utilize power electronics that are fast enough to switch sub-cycle, as opposed to seconds or minutes. Most FACTS devices are also dynamic, able to support voltage across a range rather than just on and off, and multi-quadrant, i.e. they can both supply and consume reactive power, and sometimes even real power. All of this gives them their "flexible" nature and makes them well suited for applications with unknown or changing requirements.
The FACTS family initially grew out of the development of high-voltage direct current (HVDC) conversion and transmission, which used power electronics to convert AC to DC to enable large, controllable power transfers. While HVDC focused on conversion to DC, FACTS devices used the developed technology to control power and voltage on the AC system. The most common type of FACTS device is the static VAR compensator (SVC), which uses thyristors to switch and control shunt capacitors and reactors.
History
When AC won the War of Currents in the late 19th century and electric grids began expanding and connecting cities and states, the need for reactive compensation became apparent. While AC offered benefits with transformation and reduced current, the alternating nature of voltage and current led to additional challenges from the natural capacitance and inductance of transmission lines. Heavily loaded lines consumed reactive power due to the line's inductance, and as transmission voltage increased throughout the 20th century, the higher voltage supplied capacitive reactive power. As operating a transmission line only at its surge impedance loading (SIL) was not feasible, other means to manage the reactive power were needed.
Synchronous machines, commonly used at the time as generators, could provide some reactive power support; however, they were limited due to the increased losses this caused. They also became less effective as higher-voltage transmission lines moved loads further from sources. Fixed shunt capacitor and reactor banks filled this need by being deployed where needed. In particular, shunt capacitors switched by circuit breakers provided an effective means of managing varying reactive power requirements due to changing loads. However, this was not without limitations.
Shunt capacitors and reactors are fixed devices, only able to be switched on and off. This required either a careful study of the exact size needed, or accepting less-than-ideal effects on the voltage of a transmission line. The need for a more dynamic and flexible solution was realized with the mercury-arc valve in the early 20th century. Similar to a vacuum tube, the mercury-arc valve was a high-powered rectifier, capable of converting high AC voltages to DC. As the technology improved, inversion became possible as well, and mercury valves found use in power systems and HVDC ties. When connected to a reactor, different switching patterns could be used to vary the effective inductance, allowing for more dynamic control. Arc valves continued to dominate power electronics until the rise of solid-state semiconductors in the mid 20th century.
As semiconductors replaced vacuum tubes, the thyristor made possible the first modern FACTS device, the static VAR compensator (SVC). Effectively working as a circuit breaker that could switch on in milliseconds, it allowed for quickly switching capacitor banks; connected to a reactor and switched sub-cycle, it allowed the effective inductance to be varied. The thyristor also greatly improved the control system, allowing an SVC to detect and react to faults to better support the system. The thyristor dominated the FACTS and HVDC world until the late 20th century, when the IGBT began to match its power ratings.
Theory
The basic theory for how FACTS devices affect the AC system is based on analyzing how power transfers between two points in an AC system. This is particularly relevant to how an AC electrical grid functions, as the grid has numerous nodes (substations) that lack sources (generators) or loads. Power flow must be calculated and controlled at each node (substation bus) to ensure the grid design and topology itself does not prevent generated electricity from reaching loads: when transmission lines reach dozens to hundreds of miles in length, they add significant impedance and voltage drop to the system.
Given two buses, each with its own voltage magnitude and phase angle, and connected by a transmission line with an impedance Z, the current flowing between them is given by

I = (V_S∠δ_S − V_R∠δ_R) / Z

Apparent power flow, and thus real and reactive power, is then given by

S = V_S I* = P + jQ

Combining these two equations gives the real and reactive power flow as a function of voltages and impedance. This can be done relatively easily, and is done in load-flow and power analysis programs, but results in equations that are not intuitive to understand. Two approximations can be made to simplify things: assume a lossless transmission line (a decent assumption, as very low-resistance conductor is typically used) and neglect any capacitance on the line (a fair assumption for 200 kV lines and lower). This reduces the line impedance to just a reactance, and results in the real and reactive power being

P = (V_S V_R / X) sin δ
Q = (V_S² − V_S V_R cos δ) / X

where
V_S is the magnitude of the sending-end voltage, at the first bus
V_R is the magnitude of the receiving-end voltage, at the second bus
X is the reactance of the transmission line between the buses
δ is the phase angle difference between the sending-end and receiving-end voltages
From the above equations, it can be seen that there are three variables that affect real and reactive power flow on a transmission line: the voltage magnitudes at either bus, the line reactance between the buses, and the voltage phase-angle difference between the buses. All FACTS devices operate on the fundamental principle that changing one or more of these variables will change the real and reactive power flow on the transmission line. Some FACTS devices change just a single variable, while others control all three.
It should be noted, and will be made more explicit below, that FACTS devices do not create or add real power to the system; they simply affect the circuit parameters between two points to affect how and when power flows.
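As a quick numerical sketch of the simplified lossless-line equations above (per-unit values; sending-end sign convention; all numbers are illustrative assumptions, not data from the article):

```python
import math

def line_flow(vs: float, vr: float, x: float, delta_deg: float):
    """Sending-end real and reactive power of a lossless line (per unit):
    P = Vs*Vr*sin(delta)/X,  Q = (Vs**2 - Vs*Vr*cos(delta))/X."""
    d = math.radians(delta_deg)
    p = vs * vr * math.sin(d) / x
    q = (vs ** 2 - vs * vr * math.cos(d)) / x
    return round(p, 3), round(q, 3)

# Each FACTS family adjusts one of the three variables:
print(line_flow(1.00, 1.00, 0.10, 10))  # baseline
print(line_flow(1.05, 1.00, 0.10, 10))  # shunt compensation raises Vs
print(line_flow(1.00, 1.00, 0.07, 10))  # series compensation lowers X
print(line_flow(1.00, 1.00, 0.10, 15))  # phase-angle compensation raises delta
```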
Types of FACTs devices
Given that FACTS devices can change up to three parameters to affect power flow (voltage, impedance, and/or phase angle), they are often categorized by which parameter they control. As the conventional devices for controlling voltage (shunt capacitors and shunt inductors) and impedance (series capacitors and load-flow reactors) are so common, FACTS devices targeting the voltage and impedance parameters are categorized as shunt and series devices, respectively.
Shunt Compensation Devices
The goal of shunt compensation is to connect a device in parallel with the system that will improve voltage and enable larger power flow. This is traditionally done using shunt capacitors and inductors (reactors), much like power factor correction.
The most common shunt compensation device is the static VAR compensator (SVC). SVCs use power electronics, generally thyristors, to switch fixed capacitors and reactors. These are referred to as a thyristor-switched capacitor (TSC) and thyristor-switched reactor (TSR), respectively. Thyristors are fast enough that they can be switched sub-cycle, and can switch a reactor at different points in each cycle to control the vars the reactor produces. When arranged to do this, the TSR is referred to as a thyristor-controlled reactor (TCR); the standard expression for its effective susceptance is sketched below. TCRs produce large amounts of harmonics and require filter banks to prevent adverse effects on the system.
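A minimal sketch of the standard textbook expression for the TCR's fundamental-frequency susceptance versus thyristor firing angle α (function name ours; α is measured from the voltage zero crossing, so 90° is full conduction and 180° is fully off):

```python
import math

def tcr_susceptance(alpha_deg: float, xl: float) -> float:
    """Effective fundamental-frequency susceptance of a thyristor-controlled
    reactor: B(alpha) = (2*(pi - alpha) + sin(2*alpha)) / (pi * XL)."""
    a = math.radians(alpha_deg)
    return (2 * (math.pi - a) + math.sin(2 * a)) / (math.pi * xl)

for alpha in (90, 120, 150, 180):
    print(alpha, round(tcr_susceptance(alpha, xl=1.0), 3))
# 90 -> 1.0 (reactor fully in), 180 -> 0.0 (reactor fully out, bar
# floating-point noise); intermediate angles give a continuously
# variable effective inductance.
```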
Another type of shunt compensation is the static synchronous compensator, or STATCOM. Power electronics are combined in series with a reactor to form a voltage-sourced converter (VSC), which, when connected to an AC system, forms a STATCOM. A VSC uses the same principle of power flow on a transmission line: it measures the voltage of the system it is connected to and varies the voltage of its power electronics to cause reactive power to flow into or out of the VSC. Early STATCOMs used thyristors as their power electronics and pulse-width modulation (PWM) to control reactive power, but with advances in semiconductor technologies, insulated-gate bipolar transistors (IGBTs) have replaced them.
Series Compensation Devices
Series Compensation devices change the impedance of the Transmission Line to increase or decrease power flow. Power flow is increased by adding a series capacitor to offset line inductance or decreased by adding a series load-flow reactor to add to the line inductance.
One type of series compensation is the Thyristor-Controlled Series Capacitor (TCSC), which combines the TCR from an SVC in parallel with a traditional fixed series capacitor. As using power electronics to switch capacitors sub-cycle is not feasible due to concerns of stored charge, a TCR is used to create a variable inductance to offset the capacitor. A TCSC can be used to dynamically vary the power flow on a transmission line.
A VSC can also be used as a series compensation device if it is connected across the secondary winding of a series-connected transformer. This arrangement is referred to as a static synchronous series compensator (SSSC), and offers the benefits of a smaller reactor than in a TCSC, and the lower harmonic production of a VSC (or voltage-source inverter, VSI, when used in an SSSC) compared to a TCR.
Phase Angle Compensation
Power will only flow between two points on an AC system if there is a phase angle difference between the buses. Traditionally this is controlled by generators, however in large grids this becomes ineffective for managing power flow between distant buses. Phase-Shifting Transformers (PST) are generally used for these applications and can just be a Phase-Angle Regulator (PAR) or control both phase-angle and voltage.
The most straightforward phase angle compensation device would be to replace the tap changer on PAR with thyristors to switch portions of the winding in and out, forming a Thyristor-Controlled Phase-Shifting Transformer (TCPST). However, this is generally not done as a TCPST would be considerably more expensive than a PAR. Instead, this idea is expanded to replace a Quadrature Booster with a device referred to as a Thyristor-Controlled Phase-Angle Regulator (TCPAR), also known as a Static Phase-Shifter (SPS). From the schematic, it can be seen that a TCPAR is just a Quadrature Booster with the mechanical portions of the excitor and booster transformers replaced with Power Electronics, typically Thyristors.
Another way to form a TCPAR is to separate the exciter and booster transformers and control their secondaries with separate sets of power electronics. By linking the two sets of power electronics through a DC bus, typically by using GTO thyristors or IGBTs, a TCPAR can be formed. While doing this may initially seem unnecessary, looking at the shunt and series transformers and their electronics separately makes it apparent that the shunt portion is a STATCOM and the series portion is an SSSC. With the DC bus providing power from the shunt portion to the series portion, the device functions as a phase-angle regulator; with the DC bus isolating the two sides, however, the STATCOM can control the shunt voltage or the SSSC can control line impedance. This gives the device its name, the Unified Power Flow Controller (UPFC), as it can control all three parameters that affect power flow.
See also
Static VAR Compensator (SVC)
Static Synchronous Compensator (STATCOM)
Thyristor-Controlled Series Capacitor (TCSC)
Static Synchronous Series Compensator (SSSC)
Thyristor-Controlled Phase Angle Regulator (TCPAR)
Unified Power Flow Controller (UPFC)
High-Voltage DC (HVDC)
References
Electric power transmission
Power engineering | Flexible AC transmission system | [
"Engineering"
] | 2,603 | [
"Power engineering",
"Electrical engineering",
"Energy engineering"
] |
276,106 | https://en.wikipedia.org/wiki/Molar%20concentration | Molar concentration (also called molarity, amount concentration or substance concentration) is a measure of the concentration of a chemical species, in particular, of a solute in a solution, in terms of amount of substance per unit volume of solution. In chemistry, the most commonly used unit for molarity is the number of moles per liter, having the unit symbol mol/L or mol/dm³ in SI units. A solution with a concentration of 1 mol/L is said to be 1 molar, commonly designated as 1 M. Molarity is often depicted with square brackets around the substance of interest; for example, the molarity of the hydrogen ion is depicted as [H+].
Definition
Molar concentration or molarity is most commonly expressed in units of moles of solute per litre of solution. For use in broader applications, it is defined as amount of substance of solute per unit volume of solution, or per unit volume available to the species, represented by lowercase c:

c = n/V = N/(N_A·V)

Here, n is the amount of the solute in moles, N is the number of constituent particles present in volume V (in litres) of the solution, and N_A is the Avogadro constant, since 2019 defined as exactly 6.02214076×10²³ mol⁻¹. The ratio N/V is the number density C.
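As a small worked illustration of the defining relation c = n/V (the function name and the sample numbers below are illustrative, not from the article):

```python
def molarity(mass_g, molar_mass_g_per_mol, volume_mL):
    """Molar concentration in mol/L of a solute dissolved to a final volume."""
    moles = mass_g / molar_mass_g_per_mol   # n = m / M
    litres = volume_mL / 1000.0             # V in litres
    return moles / litres                   # c = n / V

# 5.844 g of NaCl (M = 58.44 g/mol) made up to 100 mL gives a 1 mol/L solution:
print(molarity(5.844, 58.44, 100.0))  # 1.0
```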
In thermodynamics, the use of molar concentration is often not convenient because the volume of most solutions slightly depends on temperature due to thermal expansion. This problem is usually resolved by introducing temperature correction factors, or by using a temperature-independent measure of concentration such as molality.
The reciprocal quantity 1/c represents the dilution (volume), which can appear in Ostwald's law of dilution.
Formality or analytical concentration
If a molecule or salt dissociates in solution, the concentration that refers to the original chemical formula in solution is sometimes called the formal concentration or formality (F_A) or the analytical concentration (c_A). For example, if a sodium carbonate solution (Na₂CO₃) has a formal concentration of c(Na₂CO₃) = 1 mol/L, the molar concentrations are c(Na⁺) = 2 mol/L and c(CO₃²⁻) = 1 mol/L because the salt dissociates into these ions.
Units
In the International System of Units (SI), the coherent unit for molar concentration is mol/m³. However, most chemical literature traditionally uses mol/dm³, which is the same as mol/L. This traditional unit is often called a molar and denoted by the letter M, for example:

1 mol/m³ = 10⁻³ mol/dm³ = 10⁻³ mol/L = 10⁻³ M = 1 mM = 1 mmol/L.
The SI prefix "mega" (symbol M) has the same symbol. However, the prefix is never used alone, so "M" unambiguously denotes molar.
Sub-multiples, such as "millimolar" (mM) and "nanomolar" (nM), consist of the unit preceded by an SI prefix:
Related quantities
Number concentration
The conversion to number concentration C_i is given by

C_i = c_i·N_A,

where N_A is the Avogadro constant.
Mass concentration
The conversion to mass concentration ρ_i is given by

ρ_i = c_i·M_i,

where M_i is the molar mass of constituent i.
Mole fraction
The conversion to mole fraction x_i is given by

x_i = c_i·M̄/ρ,

where M̄ is the average molar mass of the solution and ρ is the density of the solution.
A simpler relation can be obtained by considering the total molar concentration, namely, the sum of the molar concentrations of all the components of the mixture:

x_i = c_i/c, with c = Σ_j c_j.
Mass fraction
The conversion to mass fraction w_i is given by

w_i = c_i·M_i/ρ.
Molality
For binary mixtures, the conversion to molality is

b₂ = c₂/(ρ − c₂·M₂),

where the solvent is substance 1, and the solute is substance 2.
For solutions with more than one solute, the conversion is

b_i = c_i/(ρ − Σ_j c_j·M_j), with the sum taken over all solutes.
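A sketch of the binary conversion (units chosen so that ρ and c·M are both in g/L; the sample density is an illustrative value for roughly 1 M aqueous NaCl, not a figure from the article):

```python
def molality_from_molarity(c, M, rho):
    """b2 = c2/(rho - c2*M2): c in mol/L, M in g/mol, rho in g/L.
    The ratio is mol per gram of solvent; x1000 gives mol/kg."""
    return 1000.0 * c / (rho - c * M)

# 1 mol/L NaCl (M = 58.44 g/mol) in a solution of density 1038 g/L:
print(molality_from_molarity(1.0, 58.44, 1038.0))  # ~1.021 mol/kg
```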
Properties
Sum of molar concentrations – normalizing relations
The sum of molar concentrations gives the total molar concentration, namely the density of the mixture divided by the molar mass of the mixture, or, by another name, the reciprocal of the molar volume of the mixture: c = Σ_i c_i = ρ/M̄ = 1/V_m. In an ionic solution, ionic strength is proportional to the sum of the molar concentrations of salts.
Sum of products of molar concentrations and partial molar volumes
The sum of the products between these quantities equals one:

Σ_i c_i·V̄_i = 1,

where V̄_i is the partial molar volume of component i.
Dependence on volume
The molar concentration depends on the variation of the volume of the solution, due mainly to thermal expansion. On small intervals of temperature, the dependence is

c = c_{T₀} / (1 + α·ΔT),

where c_{T₀} is the molar concentration at a reference temperature T₀ and α is the thermal expansion coefficient of the mixture.
Examples
See also
Molality
Orders of magnitude (molar concentration)
References
External links
Molar Solution Concentration Calculator
Experiment to determine the molar concentration of vinegar by titration
Concentration
Molar quantities | Molar concentration | [
"Physics",
"Chemistry"
] | 972 | [
"Stoichiometry",
"Chemical reaction engineering",
"Physical quantities",
"Quantity",
"Intensive quantities",
"Chemical quantities",
"Concentration",
"nan",
"Molar quantities"
] |
276,582 | https://en.wikipedia.org/wiki/Ricci%20curvature | In differential geometry, the Ricci curvature tensor, named after Gregorio Ricci-Curbastro, is a geometric object which is determined by a choice of Riemannian or pseudo-Riemannian metric on a manifold. It can be considered, broadly, as a measure of the degree to which the geometry of a given metric tensor differs locally from that of ordinary Euclidean space or pseudo-Euclidean space.
The Ricci tensor can be characterized by measurement of how a shape is deformed as one moves along geodesics in the space. In general relativity, which involves the pseudo-Riemannian setting, this is reflected by the presence of the Ricci tensor in the Raychaudhuri equation. Partly for this reason, the Einstein field equations propose that spacetime can be described by a pseudo-Riemannian metric, with a strikingly simple relationship between the Ricci tensor and the matter content of the universe.
Like the metric tensor, the Ricci tensor assigns to each tangent space of the manifold a symmetric bilinear form. Broadly, one could analogize the role of the Ricci curvature in Riemannian geometry to that of the Laplacian in the analysis of functions; in this analogy, the Riemann curvature tensor, of which the Ricci curvature is a natural by-product, would correspond to the full matrix of second derivatives of a function. However, there are other ways to draw the same analogy.
For three-dimensional manifolds, the Ricci tensor contains all of the information which in higher dimensions is encoded by the more complicated Riemann curvature tensor. In part, this simplicity allows for the application of many geometric and analytic tools, which led to the solution of the Poincaré conjecture through the work of Richard S. Hamilton and Grigori Perelman.
In differential geometry, the determination of lower bounds on the Ricci tensor on a Riemannian manifold would allow one to extract global geometric and topological information by comparison (cf. comparison theorem) with the geometry of a constant curvature space form. This is since lower bounds on the Ricci tensor can be successfully used in studying the length functional in Riemannian geometry, as first shown in 1941 via Myers's theorem.
One common source of the Ricci tensor is that it arises whenever one commutes the covariant derivative with the tensor Laplacian. This, for instance, explains its presence in the Bochner formula, which is used ubiquitously in Riemannian geometry. For example, this formula explains why the gradient estimates due to Shing-Tung Yau (and their developments such as the Cheng-Yau and Li-Yau inequalities) nearly always depend on a lower bound for the Ricci curvature.
In 2007, John Lott, Karl-Theodor Sturm, and Cedric Villani demonstrated decisively that lower bounds on Ricci curvature can be understood entirely in terms of the metric space structure of a Riemannian manifold, together with its volume form. This established a deep link between Ricci curvature and Wasserstein geometry and optimal transport, which is presently the subject of much research.
Definition
Suppose that (M, g) is an n-dimensional Riemannian or pseudo-Riemannian manifold, equipped with its Levi-Civita connection ∇. The Riemann curvature of M is a map which takes smooth vector fields X, Y, and Z, and returns the vector field

R(X, Y)Z = ∇_X ∇_Y Z − ∇_Y ∇_X Z − ∇_{[X,Y]} Z

on vector fields X, Y, Z. Since R is a tensor field, for each point p ∈ M, it gives rise to a (multilinear) map

R_p : T_pM × T_pM × T_pM → T_pM.

Define for each point p ∈ M the map Ric_p : T_pM × T_pM → ℝ by

Ric_p(Y, Z) = tr(X ↦ R_p(X, Y)Z).

That is, having fixed Y and Z, then for any orthonormal basis v₁, …, v_n of the vector space T_pM, one has

Ric_p(Y, Z) = Σ_{i=1}^{n} ⟨R_p(v_i, Y)Z, v_i⟩.

It is a standard exercise of (multi)linear algebra to verify that this definition does not depend on the choice of the basis v₁, …, v_n.
In abstract index notation, Ric_{ab} = R^c_{acb}.
Sign conventions. Note that some sources define R(X, Y)Z to be what would here be called −R(X, Y)Z; they would then define Ric_p as the negative of the trace above, so that the two conventions agree. Although sign conventions differ about the Riemann tensor, they do not differ about the Ricci tensor.
Definition via local coordinates on a smooth manifold
Let (M, g) be a smooth Riemannian or pseudo-Riemannian n-manifold. Given a smooth chart (U, φ), one then has functions g_{ij} : φ(U) → ℝ and g^{ij} : φ(U) → ℝ for each i and j between 1 and n which satisfy

Σ_{k=1}^{n} g^{ik} g_{kj} = δ^i_j

for all i, j. The latter shows that, expressed as matrices, [g^{ij}] = [g_{ij}]⁻¹. The functions g_{ij} are defined by evaluating g on coordinate vector fields, while the functions g^{ij} are defined so that, as a matrix-valued function, they provide an inverse to the matrix-valued function [g_{ij}].
Now define, for each a, b, c, i, and j between 1 and n, the functions

Γ^c_{ab} = ½ Σ_d g^{cd} (∂_a g_{bd} + ∂_b g_{ad} − ∂_d g_{ab}),
R_{ij} = Σ_a (∂_a Γ^a_{ij} − ∂_j Γ^a_{ai}) + Σ_{a,b} (Γ^a_{ab} Γ^b_{ij} − Γ^a_{jb} Γ^b_{ia}),

as maps φ(U) → ℝ.
Now let (U, φ) and (V, ψ) be two smooth charts with U ∩ V ≠ ∅. Let R_{ij} : φ(U ∩ V) → ℝ be the functions computed as above via the chart (U, φ) and let r_{kl} : ψ(U ∩ V) → ℝ be the functions computed as above via the chart (V, ψ). Then one can check by a calculation with the chain rule and the product rule that

R_{ij} = Σ_{k,l} D_i(ψ ∘ φ⁻¹)^k · D_j(ψ ∘ φ⁻¹)^l · (r_{kl} ∘ ψ ∘ φ⁻¹),

where D_i is the first partial derivative along the i-th direction of φ(U). This shows that the following definition does not depend on the choice of (U, φ).
For any p ∈ U, define a bilinear map Ric_p : T_pM × T_pM → ℝ by

Ric_p(X, Y) = Σ_{i,j} R_{ij}|_{φ(p)} X^i Y^j,

where X^i and Y^j are the components of the tangent vectors X and Y at p in T_pM relative to the coordinate vector fields of (U, φ).
It is common to abbreviate the above formal presentation in the following style:

Let M be a smooth manifold, and let g be a Riemannian or pseudo-Riemannian metric. In local smooth coordinates, define the Christoffel symbols and the Ricci tensor by

Γ^k_{ij} = ½ g^{kl} (∂_i g_{jl} + ∂_j g_{il} − ∂_l g_{ij}),
R_{ij} = ∂_k Γ^k_{ij} − ∂_j Γ^k_{ki} + Γ^k_{kl} Γ^l_{ij} − Γ^k_{jl} Γ^l_{ki},

with the summation convention understood; one then checks directly that R_{ij} transforms as a (0,2)-tensor field on M.
The final line includes the demonstration that the bilinear map Ric is well-defined,
which is much easier to write out with the informal notation.
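As a concrete check of the coordinate formulas above, the following sketch (an added illustration, not part of the article; it assumes sympy and uses the index conventions just given) computes the Christoffel symbols and the Ricci tensor of the round unit 2-sphere and recovers Ric = g:

```python
import sympy as sp

# Unit 2-sphere in coordinates (theta, phi): g = diag(1, sin(theta)^2)
theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()
n = 2

# Christoffel symbols: Gamma^k_ij = (1/2) g^{kl} (d_i g_jl + d_j g_il - d_l g_ij)
Gamma = [[[sp.simplify(sum(ginv[k, l]*(sp.diff(g[j, l], x[i])
                                       + sp.diff(g[i, l], x[j])
                                       - sp.diff(g[i, j], x[l]))/2
                           for l in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

# Ricci tensor: R_ij = d_k Gamma^k_ij - d_j Gamma^k_ki
#                      + Gamma^k_kl Gamma^l_ij - Gamma^k_jl Gamma^l_ki
def ricci(i, j):
    expr = 0
    for k in range(n):
        expr += sp.diff(Gamma[k][i][j], x[k]) - sp.diff(Gamma[k][k][i], x[j])
        for l in range(n):
            expr += Gamma[k][k][l]*Gamma[l][i][j] - Gamma[k][j][l]*Gamma[l][k][i]
    return sp.simplify(expr)

print(sp.Matrix(n, n, ricci))  # Matrix([[1, 0], [0, sin(theta)**2]]) = g
```

So the unit sphere satisfies Ric = (n − 1)g with n = 2, as the sectional-curvature description below predicts.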
Comparison of the definitions
The two above definitions are identical. The formulas defining the Christoffel symbols and the Riemann curvature in the coordinate approach have an exact parallel in the formulas defining the Levi-Civita connection, and the Riemann curvature via the Levi-Civita connection. Arguably, the definitions directly using local coordinates are preferable, since the "crucial property" of the Riemann tensor mentioned above requires M to be Hausdorff in order to hold. By contrast, the local coordinate approach only requires a smooth atlas. It is also somewhat easier to connect the "invariance" philosophy underlying the local approach with the methods of constructing more exotic geometric objects, such as spinor fields.
The complicated formula defining R_{ij} in the introductory section is the same as that in the following section. The only difference is that terms have been grouped so that it is easy to see that Ric(X, Y) = Ric(Y, X).
Properties
As can be seen from the symmetries of the Riemann curvature tensor, the Ricci tensor of a Riemannian manifold is symmetric, in the sense that

Ric(X, Y) = Ric(Y, X)

for all X, Y ∈ T_pM. It thus follows linear-algebraically that the Ricci tensor is completely determined by knowing the quantity Ric(X, X) for all vectors X of unit length. This function on the set of unit tangent vectors is often also called the Ricci curvature, since knowing it is equivalent to knowing the Ricci curvature tensor.
The Ricci curvature is determined by the sectional curvatures of a Riemannian manifold, but generally contains less information. Indeed, if ξ is a vector of unit length on a Riemannian n-manifold, then Ric(ξ, ξ) is precisely (n − 1) times the average value of the sectional curvature, taken over all the 2-planes containing ξ. There is an (n − 2)-dimensional family of such 2-planes, and so only in dimensions 2 and 3 does the Ricci tensor determine the full curvature tensor. A notable exception is when the manifold is given a priori as a hypersurface of Euclidean space. The second fundamental form, which determines the full curvature via the Gauss–Codazzi equation, is itself determined by the Ricci tensor, and the principal directions of the hypersurface are also the eigendirections of the Ricci tensor. The tensor was introduced by Ricci for this reason.
As can be seen from the second Bianchi identity, one has

div Ric = ½ dS,

where S is the scalar curvature, defined in local coordinates as S = g^{ij} R_{ij}. This is often called the contracted second Bianchi identity.
Direct geometric meaning
Near any point p in a Riemannian manifold (M, g), one can define preferred local coordinates, called geodesic normal coordinates. These are adapted to the metric so that geodesics through p correspond to straight lines through the origin, in such a manner that the geodesic distance from p corresponds to the Euclidean distance from the origin. In these coordinates, the metric tensor is well-approximated by the Euclidean metric, in the precise sense that

g_{ij} = δ_{ij} + O(|x|²).

In fact, by taking the Taylor expansion of the metric applied to a Jacobi field along a radial geodesic in the normal coordinate system, one has

g_{ij} = δ_{ij} − ⅓ R_{ikjl} x^k x^l + O(|x|³).

In these coordinates, the metric volume element then has the following expansion at p:

dμ_g = [1 − ⅙ R_{jk} x^j x^k + O(|x|³)] dμ_{Euclidean},

which follows by expanding the square root of the determinant of the metric.
Thus, if the Ricci curvature Ric(ξ, ξ) is positive in the direction of a vector ξ, the conical region in M swept out by a tightly focused family of geodesic segments of length ε emanating from p, with initial velocity inside a small cone about ξ, will have smaller volume than the corresponding conical region in Euclidean space, at least provided that ε is sufficiently small. Similarly, if the Ricci curvature is negative in the direction of a given vector ξ, such a conical region in the manifold will instead have larger volume than it would in Euclidean space.
The Ricci curvature is essentially an average of curvatures in the planes including ξ. Thus if a cone emitted with an initially circular (or spherical) cross-section becomes distorted into an ellipse (ellipsoid), it is possible for the volume distortion to vanish if the distortions along the principal axes counteract one another. The Ricci curvature would then vanish along ξ. In physical applications, the presence of a nonvanishing sectional curvature does not necessarily indicate the presence of any mass locally; if an initially circular cross-section of a cone of worldlines later becomes elliptical, without changing its volume, then this is due to tidal effects from a mass at some other location.
Applications
Ricci curvature plays an important role in general relativity, where it is
the key term in the Einstein field equations.
Ricci curvature also appears in the Ricci flow equation, first
introduced by Richard S. Hamilton in 1982, where certain
one-parameter families of Riemannian metrics are singled out as solutions of a
geometrically-defined partial differential equation.
In harmonic local coordinates the Ricci tensor can be expressed as

R_{ij} = −½ Δ g_{ij} + lower-order terms,

where the g_{ij} are the components of the metric tensor and Δ is the Laplace–Beltrami operator.
This fact motivates the introduction of the Ricci flow equation

∂_t g_{ij} = −2 R_{ij}

as a natural extension of the heat equation for the metric.
Since heat tends to spread through
a solid until the body reaches an equilibrium state of constant temperature, if
one is given a manifold, the Ricci flow may be hoped to produce an 'equilibrium'
Riemannian metric which is Einstein or of constant curvature.
However, such a clean "convergence" picture cannot be achieved since many manifolds
cannot support such metrics. A detailed study of the nature of solutions of the
Ricci flow, due principally to Hamilton and Grigori Perelman, shows that the
types of "singularities" that occur along a Ricci flow, corresponding to the
failure of convergence, encode deep information about 3-dimensional topology.
The culmination of this work was a proof of the geometrization conjecture
first proposed by William Thurston in the 1970s, which can be thought of as
a classification of compact 3-manifolds.
On a Kähler manifold, the Ricci curvature determines the first Chern class
of the manifold (mod torsion). However, the Ricci curvature has no analogous
topological interpretation on a generic Riemannian manifold.
Global geometry and topology
Here is a short list of global results concerning manifolds with positive Ricci curvature; see also classical theorems of Riemannian geometry. Briefly, positive Ricci curvature of a Riemannian manifold has strong topological consequences, while (for dimension at least 3), negative Ricci curvature has no topological implications. (The Ricci curvature is said to be positive if the Ricci curvature function Ric(ξ, ξ) is positive on the set of non-zero tangent vectors ξ.) Some results are also known for pseudo-Riemannian manifolds.
Myers' theorem (1941) states that if the Ricci curvature is bounded from below on a complete Riemannian n-manifold by (n − 1)k > 0, then the manifold has diameter at most π/√k. By a covering-space argument, it follows that any compact manifold of positive Ricci curvature must have finite fundamental group. Cheng (1975) showed that, in this setting, equality in the diameter inequality occurs if and only if the manifold is isometric to a sphere of constant curvature k.
The Bishop–Gromov inequality states that if a complete n-dimensional Riemannian manifold has non-negative Ricci curvature, then the volume of a geodesic ball is less than or equal to the volume of a geodesic ball of the same radius in Euclidean n-space. Moreover, if V(p, R) denotes the volume of the ball with center p and radius R in the manifold and V_E(R) denotes the volume of the ball of radius R in Euclidean n-space, then the function V(p, R)/V_E(R) is nonincreasing. This can be generalized to any lower bound on the Ricci curvature (not just nonnegativity), and is the key point in the proof of Gromov's compactness theorem.
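A minimal numerical illustration (added here; the closed-form area of a geodesic disc on the unit 2-sphere, A(r) = 2π(1 − cos r), is standard and not from the article) of both the volume comparison and the monotonicity of the ratio:

```python
import numpy as np

# The unit 2-sphere has Ric = g > 0; Bishop-Gromov then says the area of a
# geodesic disc never exceeds the Euclidean pi*r^2, and that the ratio of
# the two is nonincreasing in the radius r.
r = np.linspace(1e-6, np.pi, 200)
ratio = 2*np.pi*(1 - np.cos(r)) / (np.pi * r**2)
print(ratio.max() <= 1.0, np.all(np.diff(ratio) < 0))  # True True
```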
The Cheeger–Gromoll splitting theorem states that if a complete Riemannian manifold (M, g) with Ric ≥ 0 contains a line, meaning a geodesic γ such that d(γ(u), γ(v)) = |u − v| for all u, v ∈ ℝ, then it is isometric to a product space ℝ × L. Consequently, a complete manifold of positive Ricci curvature can have at most one topological end. The theorem is also true under some additional hypotheses for complete Lorentzian manifolds (of metric signature (+ − − −)) with non-negative Ricci tensor.
Hamilton's first convergence theorem for Ricci flow has, as a corollary, that the only compact 3-manifolds which have Riemannian metrics of positive Ricci curvature are the quotients of the 3-sphere by discrete subgroups of SO(4) which act properly discontinuously. He later extended this to allow for nonnegative Ricci curvature. In particular, the only simply-connected possibility is the 3-sphere itself.
These results, particularly Myers' and Hamilton's, show that positive Ricci curvature has strong topological consequences. By contrast, excluding the case of surfaces, negative Ricci curvature is now known to have no topological implications; Lohkamp (1994) has shown that any manifold of dimension greater than two admits a complete Riemannian metric of negative Ricci curvature. In the case of two-dimensional manifolds, negativity of the Ricci curvature is synonymous with negativity of the Gaussian curvature, which has very clear topological implications. There are very few two-dimensional manifolds which fail to admit Riemannian metrics of negative Gaussian curvature.
Behavior under conformal rescaling
If the metric g is changed by multiplying it by a conformal factor e^{2f}, the Ricci tensor of the new, conformally-related metric g̃ = e^{2f} g is given by

Ric̃ = Ric + (2 − n)(∇df − df ⊗ df) + (Δf − (n − 2)|df|²) g,

where Δ is the (positive spectrum) Hodge Laplacian, i.e., the opposite of the usual trace of the Hessian.
In particular, given a point p in a Riemannian manifold, it is always possible to find metrics conformal to the given metric g for which the Ricci tensor vanishes at p. Note, however, that this is only a pointwise assertion; it is usually impossible to make the Ricci curvature vanish identically on the entire manifold by a conformal rescaling.
For two-dimensional manifolds, the above formula shows that if f is a harmonic function, then the conformal scaling g ↦ e^{2f} g does not change the Ricci tensor (although it still changes its trace with respect to the metric unless f = 0).
Trace-free Ricci tensor
In Riemannian geometry and pseudo-Riemannian geometry, the trace-free Ricci tensor (also called traceless Ricci tensor) of a Riemannian or pseudo-Riemannian n-manifold (M, g) is the tensor defined by

Z = Ric − (S/n) g,

where Ric and S denote the Ricci curvature and scalar curvature of g. The name of this object reflects the fact that its trace automatically vanishes: g^{ij} Z_{ij} = 0. However, it is quite an important tensor since it reflects an "orthogonal decomposition" of the Ricci tensor.
The orthogonal decomposition of the Ricci tensor
The following, not so trivial, property is

Ric = Z + (S/n) g.

It is less immediately obvious that the two terms on the right hand side are orthogonal to each other:

⟨Z, (S/n) g⟩_g = 0.

An identity which is intimately connected with this (but which could be proved directly) is that

|Ric|²_g = |Z|²_g + S²/n.
The trace-free Ricci tensor and Einstein metrics
By taking a divergence, and using the contracted Bianchi identity, one sees that Z = 0 implies dS = 0. So, provided that n ≥ 3 and (M, g) is connected, the vanishing of Z implies that the scalar curvature is constant. One can then see that the following are equivalent:

Z = 0
Ric = (S/n) g
Ric = λg for some number λ

In the Riemannian setting, the above orthogonal decomposition shows that S² = n|Ric|² is also equivalent to these conditions. In the pseudo-Riemannian setting, by contrast, the condition S² = n|Ric|² does not necessarily imply Z = 0, so the most that one can say is that these conditions imply S² = n|Ric|².
In particular, the vanishing of the trace-free Ricci tensor characterizes Einstein manifolds, as defined by the condition

Ric = λg

for a number λ. In general relativity, this equation states that (M, g) is a solution of Einstein's vacuum field equations with cosmological constant.
Kähler manifolds
On a Kähler manifold X, the Ricci curvature determines the curvature form of the canonical line bundle. The canonical line bundle is the top exterior power of the bundle of holomorphic Kähler differentials:

κ = Λⁿ Ω_X.

The Levi-Civita connection corresponding to the metric on X gives rise to a connection on κ. The curvature of this connection is the 2-form defined by

ρ(u, v) = Ric(Ju, v),

where J is the complex structure map on the tangent bundle determined by the structure of the Kähler manifold. The Ricci form ρ is a closed 2-form. Its cohomology class is, up to a real constant factor, the first Chern class of the canonical bundle, and is therefore a topological invariant of X (for compact X) in the sense that it depends only on the topology of X and the homotopy class of the complex structure.
Conversely, the Ricci form determines the Ricci tensor by

Ric(u, v) = ρ(u, Jv).

In local holomorphic coordinates z^α, the Ricci form is given by

ρ = −i ∂∂̄ log det(g_{αβ̄}),

where ∂ is the Dolbeault operator and g_{αβ̄} = g(∂/∂z^α, ∂/∂z̄^β).
If the Ricci tensor vanishes, then the canonical bundle is flat, so the structure group can be locally reduced to a subgroup of the special linear group SL(n, ℂ). However, Kähler manifolds already possess holonomy in U(n), and so the (restricted) holonomy of a Ricci-flat Kähler manifold is contained in SU(n). Conversely, if the (restricted) holonomy of a 2n-dimensional Riemannian manifold is contained in SU(n), then the manifold is a Ricci-flat Kähler manifold.
Generalization to affine connections
The Ricci tensor can also be generalized to arbitrary affine connections, where it is an invariant that plays an especially important role in the study of projective geometry (the geometry associated to unparameterized geodesics). If ∇ denotes an affine connection, then the curvature tensor R is the (1,3)-tensor defined by

R(X, Y)Z = ∇_X ∇_Y Z − ∇_Y ∇_X Z − ∇_{[X,Y]} Z

for any vector fields X, Y, Z. The Ricci tensor is defined to be the trace:

Ric(Y, Z) = tr(X ↦ R(X, Y)Z).

In this more general situation, the Ricci tensor is symmetric if and only if there exists locally a parallel volume form for the connection.
Discrete Ricci curvature
Notions of Ricci curvature on discrete manifolds have been defined on graphs and
networks, where they quantify local divergence properties of edges. Ollivier's
Ricci curvature is defined using optimal transport theory.
A different (and earlier) notion, Forman's Ricci curvature, is based on
topological arguments.
See also
Curvature of Riemannian manifolds
Scalar curvature
Ricci calculus
Ricci decomposition
Ricci-flat manifold
Christoffel symbols
Introduction to the mathematics of general relativity
Footnotes
References
Forman, Robin (2003). "Bochner's Method for Cell Complexes and Combinatorial Ricci Curvature". Discrete & Computational Geometry. 29 (3): 323–374. doi:10.1007/s00454-002-0743-x. ISSN 1432-0444.
Ollivier, Yann (2009). "Ricci curvature of Markov chains on metric spaces". Journal of Functional Analysis. 256 (3): 810–864. doi:10.1016/j.jfa.2008.11.001. ISSN 0022-1236.
Najman, Laurent; Romon, Pascal, eds. (2017). Modern Approaches to Discrete Curvature. Lecture Notes in Mathematics. Cham: Springer.
External links
Z. Shen, C. Sormani "The Topology of Open Manifolds with Nonnegative Ricci Curvature" (a survey)
G. Wei, "Manifolds with A Lower Ricci Curvature Bound" (a survey)
Curvature (mathematics)
Differential geometry
Riemannian geometry
Riemannian manifolds
Tensors in general relativity | Ricci curvature | [
"Physics",
"Mathematics",
"Engineering"
] | 4,203 | [
"Geometric measurement",
"Tensors",
"Physical quantities",
"Tensor physical quantities",
"Space (mathematics)",
"Metric spaces",
"Riemannian manifolds",
"Tensors in general relativity",
"Curvature (mathematics)"
] |
277,085 | https://en.wikipedia.org/wiki/Fock%20state | In quantum mechanics, a Fock state or number state is a quantum state that is an element of a Fock space with a well-defined number of particles (or quanta). These states are named after the Soviet physicist Vladimir Fock. Fock states play an important role in the second quantization formulation of quantum mechanics.
The particle representation was first treated in detail by Paul Dirac for bosons and by Pascual Jordan and Eugene Wigner for fermions. The Fock states of bosons and fermions obey useful relations with respect to the Fock space creation and annihilation operators.
Definition
One specifies a multiparticle state of N non-interacting identical particles by writing the state as a sum of tensor products of N one-particle states. Additionally, depending on the integrality of the particles' spin, the tensor products must be alternating (anti-symmetric) or symmetric products of the underlying one-particle Hilbert spaces. Specifically:
Fermions, having half-integer spin and obeying the Pauli exclusion principle, correspond to antisymmetric tensor products.
Bosons, possessing integer spin (and not governed by the exclusion principle) correspond to symmetric tensor products.
If the number of particles is variable, one constructs the Fock space as the direct sum of the tensor product Hilbert spaces for each particle number. In the Fock space, it is possible to specify the same state in a new notation, the occupancy number notation, by specifying the number of particles in each possible one-particle state.
Let be an orthonormal basis of states in the underlying one-particle Hilbert space. This induces a corresponding basis of the Fock space called the "occupancy number basis". A quantum state in the Fock space is called a Fock state if it is an element of the occupancy number basis.
A Fock state satisfies an important criterion: for each i, the state is an eigenstate of the particle number operator corresponding to the i-th elementary state ki. The corresponding eigenvalue gives the number of particles in the state. This criterion nearly defines the Fock states (one must in addition select a phase factor).
A given Fock state is denoted by |n_{k1}, n_{k2}, …, n_{ki}, …⟩. In this expression, n_{ki} denotes the number of particles in the i-th state ki, and the particle number operator for the i-th state,

N̂_{ki} = a†_{ki} a_{ki},

acts on the Fock state in the following way:

N̂_{ki} |n_{k1}, n_{k2}, …, n_{ki}, …⟩ = n_{ki} |n_{k1}, n_{k2}, …, n_{ki}, …⟩.

Hence the Fock state is an eigenstate of the number operator with eigenvalue n_{ki}.
Fock states often form the most convenient basis of a Fock space. Elements of a Fock space that are superpositions of states of differing particle number (and thus not eigenstates of the number operator) are not Fock states. For this reason, not all elements of a Fock space are referred to as "Fock states".
If we define the aggregate particle number operator as

N̂ = Σ_i N̂_{ki},

the definition of a Fock state ensures that the variance of the measurement vanishes, Var(N̂) = 0; i.e., measuring the number of particles in a Fock state always returns a definite value with no fluctuation.
Example using two particles
For any final state |f⟩, any Fock state of two identical particles given by |1_{k1}, 1_{k2}⟩, and any operator Ô, we have the following condition for indistinguishability:

|⟨f|Ô|1_{k1}, 1_{k2}⟩|² = |⟨f|Ô|1_{k2}, 1_{k1}⟩|².

So, we must have

⟨f|Ô|1_{k1}, 1_{k2}⟩ = e^{iδ} ⟨f|Ô|1_{k2}, 1_{k1}⟩,

where e^{iδ} = +1 for bosons and −1 for fermions. Since |f⟩ and Ô are arbitrary, we can say

|1_{k1}, 1_{k2}⟩ = +|1_{k2}, 1_{k1}⟩

for bosons and

|1_{k1}, 1_{k2}⟩ = −|1_{k2}, 1_{k1}⟩

for fermions.
Note that the number operator does not distinguish bosons from fermions; indeed, it just counts particles without regard to their symmetry type. To perceive any difference between them, we need other operators, namely the creation and annihilation operators.
Bosonic Fock state
Bosons, which are particles with integer spin, follow a simple rule: their composite eigenstate is symmetric under operation by an exchange operator. For example, in a two-particle system in the tensor product representation we have |ψ⟩ = (1/√2)(|x₁⟩ ⊗ |x₂⟩ + |x₂⟩ ⊗ |x₁⟩).
Boson creation and annihilation operators
We should be able to express the same symmetric property in this new Fock space representation. For this we introduce non-Hermitian bosonic creation and annihilation operators, denoted by a† and a respectively. The action of these operators on a Fock state is given by the following two equations:

Creation operator a†_{ki}:  a†_{ki} |…, n_{ki}, …⟩ = √(n_{ki} + 1) |…, n_{ki} + 1, …⟩

Annihilation operator a_{ki}:  a_{ki} |…, n_{ki}, …⟩ = √(n_{ki}) |…, n_{ki} − 1, …⟩
Non-Hermiticity of creation and annihilation operators
The bosonic Fock state creation and annihilation operators are not Hermitian operators.
Operator identities
The commutation relations of the creation and annihilation operators in a bosonic system are

[a_{ki}, a†_{kj}] = δ_{ij},  [a_{ki}, a_{kj}] = [a†_{ki}, a†_{kj}] = 0,

where [A, B] = AB − BA is the commutator and δ_{ij} is the Kronecker delta.
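These relations are easy to verify numerically in a truncated Fock basis (a sketch; the truncation size is an arbitrary choice, and truncation necessarily spoils the commutator in the last diagonal entry):

```python
import numpy as np

N = 10  # Fock-space truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # a|n> = sqrt(n)|n-1>
adag = a.conj().T                           # adag|n> = sqrt(n+1)|n+1>

comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True: [a, adag] = 1
print(comm[-1, -1])                                # -(N-1): truncation artifact
```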
N bosonic basis states
Action on some specific Fock states
Action of number operators
The number operators for a bosonic system are given by N̂_{ki} = a†_{ki} a_{ki}, where a†_{ki} and a_{ki} are the creation and annihilation operators for the i-th state.
Number operators are Hermitian operators.
Symmetric behaviour of bosonic Fock states
The commutation relations of the creation and annihilation operators ensure that the bosonic Fock states have the appropriate symmetric behaviour under particle exchange. Here, exchange of particles between two states (say, l and m) is done by annihilating a particle in state l and creating one in state m. If we start with a Fock state |…, n_{kl}, …, n_{km}, …⟩ and want to shift a particle from state k_l to state k_m, then we operate on the Fock state with a†_{km} a_{kl}.
Using the commutation relation [a†_{km}, a_{kl}] = 0 for l ≠ m, the same result is obtained in either order of operation.
So, the bosonic Fock state is symmetric under operation by the exchange operator.
Fermionic Fock state
Fermion creation and annihilation operators
To be able to retain the antisymmetric behaviour of fermions, for fermionic Fock states we introduce non-Hermitian fermion creation and annihilation operators, defined on a fermionic Fock state as follows.
The creation operator acts as:

c†_{ki} |n_{k1}, …, n_{ki}, …⟩ = (−1)^{Σ_{j<i} n_{kj}} (1 − n_{ki}) |n_{k1}, …, n_{ki} + 1, …⟩.

The annihilation operator acts as:

c_{ki} |n_{k1}, …, n_{ki}, …⟩ = (−1)^{Σ_{j<i} n_{kj}} n_{ki} |n_{k1}, …, n_{ki} − 1, …⟩.

These two actions are done antisymmetrically, which we shall discuss later.
Operator identities
The anticommutation relations of the creation and annihilation operators in a fermionic system are

{c_{ki}, c†_{kj}} = δ_{ij},  {c_{ki}, c_{kj}} = {c†_{ki}, c†_{kj}} = 0,

where {A, B} = AB + BA is the anticommutator and δ_{ij} is the Kronecker delta. These anticommutation relations can be used to show the antisymmetric behaviour of fermionic Fock states.
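For a single fermionic mode the operators are just 2×2 matrices in the occupation basis {|0⟩, |1⟩}, and both the anticommutation relations and Pauli exclusion can be checked directly (a minimal sketch, with that basis ordering assumed):

```python
import numpy as np

c = np.array([[0., 1.],   # annihilation: c|1> = |0>, c|0> = 0
              [0., 0.]])
cdag = c.T                # creation

print(np.allclose(cdag @ cdag, 0))                  # (c†)^2 = 0: Pauli exclusion
print(np.allclose(c @ cdag + cdag @ c, np.eye(2)))  # {c, c†} = 1
```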
Action of number operators
Number operators for fermions are given by N̂_{ki} = c†_{ki} c_{ki}.
Maximum occupation number
The action of the number operator as well as the creation and annihilation operators might seem the same as the bosonic ones, but the real twist comes from the maximum occupation number of each state in the fermionic Fock state. Extending the 2-particle fermionic example above, we first must convince ourselves that a fermionic Fock state is obtained by applying a certain sum of permutation operators to the tensor product of eigenkets, which can be written as the determinant

Ψ = (1/√N!) det[ |k_i⟩_j ],

whose (i, j) entry is the i-th single-particle state occupied by the j-th particle.
This determinant is called the Slater determinant. If any of the single-particle states are the same, two rows of the Slater determinant would be the same and hence the determinant would be zero. Hence, two identical fermions must not occupy the same state (a statement of the Pauli exclusion principle). Therefore, the occupation number of any single state is either 0 or 1. The eigenvalue associated to the fermionic number operator must be either 0 or 1.
N fermionic basis states
Action on some specific Fock states
Antisymmetric behaviour of Fermionic Fock state
The antisymmetric behaviour of fermionic states under the exchange operator is taken care of by the anticommutation relations. Here, exchange of particles between two states is done by annihilating one particle in one state and creating one in the other. If we start with a Fock state |…, n_{kl}, …, n_{km}, …⟩ and want to shift a particle from state k_l to state k_m, then we operate on the Fock state with c†_{km} c_{kl}.
Using the anticommutation relation {c†_{km}, c_{kl}} = 0 for l ≠ m, we have

c†_{km} c_{kl} = −c_{kl} c†_{km},

so the two orderings of the operation differ by a sign.
Thus, fermionic Fock states are antisymmetric under operation by particle exchange operators.
Fock states are not energy eigenstates in general
In second quantization theory, the Hamiltonian density function is given by ℋ(x). The total Hamiltonian is given by

H = ∫ d³x ℋ(x).

In free Schrödinger theory,

ℋ(x) = (ħ²/2m) ∇ψ†(x)·∇ψ(x)

and

H = ∫ d³x ψ†(x) (−ħ²∇²/2m) ψ(x),

where ψ(x) is the annihilation operator.
Only for non-interacting particles do ℋ and N̂ commute; in general they do not commute. For non-interacting particles,

[H, N̂] = 0,

and Fock states are energy eigenstates of H.
If they do not commute, the Hamiltonian will not have the above expression. Therefore, in general, Fock states are not energy eigenstates of a system.
Vacuum fluctuations
The vacuum state |0⟩ is the state of lowest energy, and the expectation values of a and a† vanish in this state:

a|0⟩ = 0 = ⟨0|a†.

The electric and magnetic fields and the vector potential have mode expansions of the same general form:

F(r, t) = ε a e^{i(k·r − ωt)} + h.c.

Thus it is easy to see that the expectation values of these field operators vanish in the vacuum state:

⟨0|F|0⟩ = 0.

However, it can be shown that the expectation values of the squares of these field operators are non-zero. Thus there are fluctuations in the field about the zero ensemble average. These vacuum fluctuations are responsible for many interesting phenomena, including the Lamb shift in quantum optics.
Multi-mode Fock states
In a multi-mode field each creation and annihilation operator operates on its own mode. So a_{kl} and a†_{kl} will operate only on |n_{kl}⟩. Since operators corresponding to different modes operate in different sub-spaces of the Hilbert space, the entire field is a direct product of |n_{ki}⟩ over all the modes:

|n_{k1}⟩ |n_{k2}⟩ |n_{k3}⟩ … ≡ |n_{k1}, n_{k2}, n_{k3}, …⟩.

The creation and annihilation operators operate on the multi-mode state by only raising or lowering the number state of their own mode:

a_{ki} |n_{k1}, …, n_{ki}, …⟩ = √(n_{ki}) |n_{k1}, …, n_{ki} − 1, …⟩,
a†_{ki} |n_{k1}, …, n_{ki}, …⟩ = √(n_{ki} + 1) |n_{k1}, …, n_{ki} + 1, …⟩.

We also define the total number operator for the field, which is a sum of the number operators of each mode:

N̂ = Σ_i N̂_{ki}.

The multi-mode Fock state is an eigenvector of the total number operator whose eigenvalue is the total occupation number of all the modes:

N̂ |n_{k1}, n_{k2}, …⟩ = (Σ_i n_{ki}) |n_{k1}, n_{k2}, …⟩.

In the case of non-interacting particles, the number operator and the Hamiltonian commute with each other, and hence multi-mode Fock states become eigenstates of the multi-mode Hamiltonian.
Source of single photon state
Single photons are routinely generated using single emitters (atoms, ions, molecules, nitrogen-vacancy centers, quantum dots). However, these sources are not always very efficient, often presenting a low probability of actually getting a single photon on demand, and are often complex and unsuitable outside a laboratory environment.
Other sources are commonly used that overcome these issues at the expense of nondeterministic behavior. Heralded single-photon sources are probabilistic two-photon sources from which the pair is split and the detection of one photon heralds the presence of the remaining one. These sources usually rely on the optical non-linearity of some materials, like periodically poled lithium niobate (spontaneous parametric down-conversion) or silicon (spontaneous four-wave mixing), for example.
Non-classical behaviour
The Glauber–Sudarshan P-representation of Fock states shows that these states are purely quantum mechanical and have no classical counterpart. The P function of these states in the representation is a 2n-th derivative of the Dirac delta function and is therefore not a classical probability distribution.
See also
Coherent states
Heisenberg limit
Nonclassical light
References
External links
Vladan Vuletic of MIT has used an ensemble of atoms to produce a Fock state (a.k.a. single photon) source (PDF)
Produce and measure a single photon state (Fock state) with an interactive experiment QuantumLab
Quantum optics
Quantum field theory | Fock state | [
"Physics"
] | 2,348 | [
"Quantum field theory",
"Quantum optics",
"Quantum mechanics"
] |
277,213 | https://en.wikipedia.org/wiki/Coherent%20state | In physics, specifically in quantum mechanics, a coherent state is the specific quantum state of the quantum harmonic oscillator, often described as a state that has dynamics most closely resembling the oscillatory behavior of a classical harmonic oscillator. It was the first example of quantum dynamics when Erwin Schrödinger derived it in 1926, while searching for solutions of the Schrödinger equation that satisfy the correspondence principle. The quantum harmonic oscillator (and hence the coherent states) arise in the quantum theory of a wide range of physical systems. For instance, a coherent state describes the oscillating motion of a particle confined in a quadratic potential well (for an early reference, see e.g. Schiff's textbook). The coherent state describes a state in a system for which the ground-state wavepacket is displaced from the origin of the system. This state can be related to classical solutions by a particle oscillating with an amplitude equivalent to the displacement.
These states, expressed as eigenvectors of the lowering operator and forming an overcomplete family, were introduced in the early papers of John R. Klauder.
In the quantum theory of light (quantum electrodynamics) and other bosonic quantum field theories, coherent states were introduced by the work of Roy J. Glauber in 1963 and are also known as Glauber states.
The concept of coherent states has been considerably abstracted; it has become a major topic in mathematical physics and in applied mathematics, with applications ranging from quantization to signal processing and image processing (see Coherent states in mathematical physics). For this reason, the coherent states associated to the quantum harmonic oscillator are sometimes referred to as canonical coherent states (CCS), standard coherent states, Gaussian states, or oscillator states.
Coherent states in quantum optics
In quantum optics the coherent state refers to a state of the quantized electromagnetic field, etc. that describes a maximal kind of coherence and a classical kind of behavior. Erwin Schrödinger derived it as a "minimum uncertainty" Gaussian wavepacket in 1926, searching for solutions of the Schrödinger equation that satisfy the correspondence principle. It is a minimum uncertainty state, with the single free parameter chosen to make the relative dispersion (standard deviation in natural dimensionless units) equal for position and momentum, each being equally small at high energy.
Further, in contrast to the energy eigenstates of the system, the time evolution of a coherent state is concentrated along the classical trajectories. The quantum linear harmonic oscillator, and hence coherent states, arise in the quantum theory of a wide range of physical systems. They occur in the quantum theory of light (quantum electrodynamics) and other bosonic quantum field theories.
While minimum uncertainty Gaussian wave-packets had been well-known, they did not attract full attention until Roy J. Glauber, in 1963, provided a complete quantum-theoretic description of coherence in the electromagnetic field. In this respect, the concurrent contribution of E.C.G. Sudarshan should not be omitted (there is, however, a note in Glauber's paper that reads: "Uses of these states as generating functions for the n-quantum states have, however, been made by J. Schwinger").
Glauber was prompted to do this to provide a description of the Hanbury-Brown & Twiss experiment, which generated very wide baseline (hundreds or thousands of miles) interference patterns that could be used to determine stellar diameters. This opened the door to a much more comprehensive understanding of coherence. (For more, see Quantum mechanical description.)
In classical optics, light is thought of as electromagnetic waves radiating from a source. Often, coherent laser light is thought of as light that is emitted by many such sources that are in phase. Actually, the picture of one photon being in-phase with another is not valid in quantum theory. Laser radiation is produced in a resonant cavity where the resonant frequency of the cavity is the same as the frequency associated with the atomic electron transitions providing energy flow into the field. As energy in the resonant mode builds up, the probability for stimulated emission, in that mode only, increases. That is a positive feedback loop in which the amplitude in the resonant mode increases exponentially until some nonlinear effects limit it. As a counter-example, a light bulb radiates light into a continuum of modes, and there is nothing that selects any one mode over the other. The emission process is highly random in space and time (see thermal light). In a laser, however, light is emitted into a resonant mode, and that mode is highly coherent. Thus, laser light is idealized as a coherent state. (Classically we describe such a state by an electric field oscillating as a stable wave. See Fig.1)
Besides describing lasers, coherent states also behave in a convenient manner when describing the quantum action of beam splitters: two coherent-state input beams will simply convert to two coherent-state beams at the output with new amplitudes given by classical electromagnetic wave formulas; such a simple behaviour does not occur for other input states, including number states. Likewise if a coherent-state light beam is partially absorbed, then the remainder is a pure coherent state with a smaller amplitude, whereas partial absorption of non-coherent-state light produces a more complicated statistical mixed state. Thermal light can be described as a statistical mixture of coherent states, and the typical way of defining nonclassical light is that it cannot be described as a simple statistical mixture of coherent states.
The energy eigenstates of the linear harmonic oscillator (e.g., masses on springs, lattice vibrations in a solid, vibrational motions of nuclei in molecules, or oscillations in the electromagnetic field) are fixed-number quantum states. The Fock state (e.g. a single photon) is the most particle-like state; it has a fixed number of particles, and phase is indeterminate. A coherent state distributes its quantum-mechanical uncertainty equally between the canonically conjugate coordinates, position and momentum, and the relative uncertainty in phase [defined heuristically] and amplitude are roughly equal—and small at high amplitude.
Quantum mechanical definition
Mathematically, a coherent state |α⟩ is defined to be the (unique) eigenstate of the annihilation operator â with corresponding eigenvalue α. Formally, this reads

â|α⟩ = α|α⟩.

Since â is not Hermitian, α is, in general, a complex number. Writing α = |α|e^{iθ}, |α| and θ are called the amplitude and phase of the state |α⟩.
The state |α⟩ is called a canonical coherent state in the literature, since there are many other types of coherent states, as can be seen in the companion article Coherent states in mathematical physics.
Physically, this formula means that a coherent state remains unchanged by the annihilation of field excitation or, say, a charged particle. An eigenstate of the annihilation operator has a Poissonian number distribution when expressed in a basis of energy eigenstates, as shown below. A Poisson distribution is a necessary and sufficient condition that all detections are statistically independent. Contrast this to a single-particle state (the |1⟩ Fock state): once one particle is detected, there is zero probability of detecting another.
The derivation of this will make use of (unconventionally normalized) dimensionless operators, X and P, normally called field quadratures in quantum optics.
(See Nondimensionalization.) These operators are related to the position and momentum operators x̂ and p̂ of a mass m on a spring with constant k:

X = √(mω/2ħ) x̂,  P = p̂/√(2ħmω),  with ω = √(k/m).

For an optical field, the quadratures E_R ∝ X and E_I ∝ P
are the real and imaginary components of the mode of the electric field inside a cavity of volume V.
With these (dimensionless) operators, the Hamiltonian of either system becomes

H = ħω(X² + P²).
Erwin Schrödinger was searching for the most classical-like states when he first introduced minimum uncertainty Gaussian wave-packets. The quantum state of the harmonic oscillator that minimizes the uncertainty relation with uncertainty equally distributed between X and P satisfies the equation

(X − ⟨X⟩)|α⟩ = −i(P − ⟨P⟩)|α⟩,

or, equivalently,

(X + iP)|α⟩ = ⟨X + iP⟩|α⟩,

and hence

⟨α|(X − ⟨X⟩)² + (P − ⟨P⟩)²|α⟩ = 1/2.

Thus, given ΔX = ΔP, Schrödinger found that the minimum uncertainty states for the linear harmonic oscillator are the eigenstates of (X + iP).
Since â is X + iP, this is recognizable as a coherent state in the sense of the above definition.
Using the notation |n⟩ for multi-photon states, Glauber characterized the state of complete coherence to all orders in the electromagnetic field to be the eigenstate of the annihilation operator—formally, in a mathematical sense, the same state as found by Schrödinger. The name coherent state took hold after Glauber's work.
If the uncertainty is minimized, but not necessarily equally balanced between and , the state is called a squeezed coherent state.
The coherent state's location in the complex plane (phase space) is centered at the position and momentum of a classical oscillator of the phase θ and amplitude |α| given by the eigenvalue α (or the same complex electric field value for an electromagnetic wave). As shown in Figure 5, the uncertainty, equally spread in all directions, is represented by a disk with diameter 1/2. As the phase varies, the coherent state circles around the origin and the disk neither distorts nor spreads. This is the most similar a quantum state can be to a single point in phase space.
Since the uncertainty (and hence measurement noise) stays constant at 1/2 as the amplitude of the oscillation increases, the state behaves increasingly like a sinusoidal wave, as shown in Figure 1. Moreover, since the vacuum state |0⟩ is just the coherent state with α = 0, all coherent states have the same uncertainty as the vacuum. Therefore, one may interpret the quantum noise of a coherent state as being due to vacuum fluctuations.
The notation |α⟩ does not refer to a Fock state. For example, when α = 1, one should not mistake |α = 1⟩ for the single-photon Fock state, which is also denoted |1⟩ in its own notation. The expression |α⟩ with α = 1 represents a Poisson distribution of number states with a mean photon number of unity.
The formal solution of the eigenvalue equation is the vacuum state displaced to a location α in phase space, i.e., it is obtained by letting the unitary displacement operator D(α) operate on the vacuum,

|α⟩ = e^{αâ† − α*â} |0⟩ = D(α) |0⟩,

where â = X + iP and â† = X − iP.
This can be easily seen, as can virtually all results involving coherent states, using the representation of the coherent state in the basis of Fock states,

|α⟩ = e^{−|α|²/2} Σ_{n=0}^{∞} (αⁿ/√(n!)) |n⟩ = e^{−|α|²/2} e^{αâ†} |0⟩,

where |n⟩ are energy (number) eigenvectors of the Hamiltonian

H = ħω(â†â + ½),

and the final equality derives from the Baker–Campbell–Hausdorff formula. For the corresponding Poissonian distribution, the probability of detecting n photons is

P(n) = |⟨n|α⟩|² = e^{−⟨n̂⟩} ⟨n̂⟩ⁿ / n!.
Similarly, the average photon number in a coherent state is

⟨n̂⟩ = ⟨â†â⟩ = |α|²,

and the variance is

(Δn)² = Var(â†â) = |α|².
That is, the standard deviation of the number detected goes like the square root of the number detected. So in the limit of large |α|, these detection statistics are equivalent to those of a classical stable wave.
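A short numerical check of these photon-counting moments (the amplitude and the truncation below are arbitrary choices made for illustration):

```python
import numpy as np
from math import factorial

alpha = 1.5
N = 40  # Fock-basis truncation, ample for |alpha|^2 = 2.25

# Poissonian photon-number distribution P(n) = e^{-|a|^2} |a|^{2n} / n!
n = np.arange(N)
probs = np.exp(-abs(alpha)**2) * abs(alpha)**(2*n) \
        / np.array([float(factorial(k)) for k in n])

mean = (n * probs).sum()
var = ((n - mean)**2 * probs).sum()
print(mean, var)  # both ~= |alpha|^2 = 2.25
```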
These results apply to detection results at a single detector and thus relate to first order coherence (see degree of coherence). However, for measurements correlating detections at multiple detectors, higher-order coherence is involved (e.g., intensity correlations, second order coherence, at two detectors). Glauber's definition of quantum coherence involves nth-order correlation functions (n-th order coherence) for all . The perfect coherent state has all n-orders of correlation equal to 1 (coherent). It is perfectly coherent to all orders.
The second-order correlation coefficient g²(0) gives a direct measure of the degree of coherence of photon states in terms of the variance of the photon statistics in the beam under study.
In Glauber's development, it is seen that the coherent states are distributed according to a Poisson distribution. In the case of a Poisson distribution, the variance is equal to the mean, i.e.

(Δn)² = ⟨n̂⟩.

A second-order correlation coefficient of 1 means that photons in coherent states are uncorrelated.
Hanbury Brown and Twiss studied the correlation behavior of photons emitted from a thermal, incoherent source described by Bose–Einstein statistics. The variance of the Bose–Einstein distribution is

(Δn)² = ⟨n̂⟩ + ⟨n̂⟩².

This corresponds to the correlation measurements of Hanbury Brown and Twiss, and illustrates that photons in incoherent Bose–Einstein states are correlated or bunched.
Quanta that obey Fermi–Dirac statistics are anti-correlated. In this case the variance is

(Δn)² = ⟨n̂⟩ − ⟨n̂⟩².

Anti-correlation is characterized by a second-order correlation coefficient g²(0) = 0.
Roy J. Glauber's work was prompted by the results of Hanbury-Brown and Twiss that produced long-range (hundreds or thousands of miles) first-order interference patterns through the use of intensity fluctuations (lack of second order coherence), with narrow band filters (partial first order coherence) at each detector. (One can imagine, over very short durations, a near-instantaneous interference pattern from the two detectors, due to the narrow band filters, that dances around randomly due to the shifting relative phase difference. With a coincidence counter, the dancing interference pattern would be stronger at times of increased intensity [common to both beams], and that pattern would be stronger than the background noise.) Almost all of optics had been concerned with first order coherence. The Hanbury-Brown and Twiss results prompted Glauber to look at higher order coherence, and he came up with a complete quantum-theoretic description of coherence to all orders in the electromagnetic field (and a quantum-theoretic description of signal-plus-noise). He coined the term coherent state and showed that they are produced when a classical electric current interacts with the electromagnetic field.
At large |α|, from Figure 5, simple geometry gives Δθ·|α| = 1/2.
From this, it appears that there is a tradeoff between number uncertainty and phase uncertainty, Δθ·Δn = 1/2, which is sometimes interpreted as a number-phase uncertainty relation; but this is not a formal strict uncertainty relation: there is no uniquely defined phase operator in quantum mechanics.
The wavefunction of a coherent state
To find the wavefunction of the coherent state, the minimal uncertainty Schrödinger wave packet, it is easiest to start with the Heisenberg picture of the quantum harmonic oscillator for the coherent state |α⟩. Note that

â(t) = â(0) e^{−iωt}.

The coherent state is an eigenstate of the annihilation operator in the Heisenberg picture.
It is easy to see that, in the Schrödinger picture, the same eigenvalue

α(t) = α(0) e^{−iωt}

occurs,

â |α(t)⟩ = α(t) |α(t)⟩.
In the coordinate representation resulting from operating by ⟨x|, this amounts to the differential equation

(√(mω/2ħ) x + √(ħ/2mω) ∂/∂x) ψ^{(α)}(x, t) = α(t) ψ^{(α)}(x, t),

which is easily solved to yield

ψ^{(α)}(x, t) = (mω/πħ)^{1/4} exp(−(mω/2ħ)(x − ⟨x̂⟩(t))² + i⟨p̂⟩(t)x/ħ + iθ(t)),

where θ(t) is a yet undetermined phase, to be fixed by demanding that the wavefunction satisfies the Schrödinger equation.
It follows that

⟨x̂⟩(t) = √(2ħ/mω) |α| cos(σ − ωt),  ⟨p̂⟩(t) = √(2mħω) |α| sin(σ − ωt),

so that σ is the initial phase of the eigenvalue, α(0) = |α| e^{iσ}.
The mean position and momentum of this "minimal Schrödinger wave packet" are thus oscillating just like a classical system.
The probability density remains a Gaussian centered on this oscillating mean,

|ψ^{(α)}(x, t)|² = √(mω/πħ) exp(−(mω/ħ)(x − ⟨x̂⟩(t))²).
Mathematical features of the canonical coherent states
The canonical coherent states described so far have three properties that are mutually equivalent, since each of them completely specifies the state |α⟩, namely,

They are eigenvectors of the annihilation operator: â|α⟩ = α|α⟩.
They are obtained from the vacuum by application of a unitary displacement operator: |α⟩ = e^{αâ† − α*â}|0⟩ = D(α)|0⟩.
They are states of (balanced) minimal uncertainty: ΔX = ΔP = 1/2.
Each of these properties may lead to generalizations, in general different from each other (see the article "Coherent states in mathematical physics" for some of these). We emphasize that coherent states have mathematical features that are very different from those of a Fock state; for instance, two different coherent states are not orthogonal,

⟨β|α⟩ = e^{−½(|β|² + |α|² − 2β*α)} ≠ 0

(linked to the fact that they are eigenvectors of the non-self-adjoint annihilation operator â).
Thus, if the oscillator is in the quantum state |α⟩, it is also with nonzero probability in the other quantum state |β⟩, with |⟨β|α⟩|² = e^{−|α−β|²}
(but the farther apart the states are situated in phase space, the lower the probability is). However, since they obey a closure relation, any state can be decomposed on the set of coherent states. They hence form an overcomplete basis, in which one can diagonally decompose any state. This is the premise for the Glauber–Sudarshan P representation.
This closure relation can be expressed by the resolution of the identity operator in the vector space of quantum states,

1 = (1/π) ∫ |α⟩⟨α| d²α,  d²α ≡ d(Re α) d(Im α).

This resolution of the identity is intimately connected to the Segal–Bargmann transform.
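The closure relation can also be checked numerically in a truncated Fock basis; in the sketch below (grid size, integration radius, and truncation are all arbitrary choices) the integral (1/π)∫|α⟩⟨α| d²α is approximated by a Riemann sum and reproduces the identity on the lowest number states:

```python
import numpy as np
from math import factorial

N = 6            # verify the identity on the lowest N Fock states
R, M = 6.0, 200  # integration radius and grid points per axis (R >> sqrt(N))

xs = np.linspace(-R, R, M)
dA = (xs[1] - xs[0])**2
norm = np.array([np.sqrt(float(factorial(k))) for k in range(N)])

I = np.zeros((N, N), dtype=complex)
for x in xs:
    for y in xs:
        a = x + 1j*y
        ket = np.exp(-abs(a)**2/2) * a**np.arange(N) / norm  # <n|alpha>
        I += np.outer(ket, ket.conj()) * dA / np.pi

print(np.abs(I - np.eye(N)).max())  # small, limited only by the grid resolution
```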
Another peculiarity is that â† has no eigenket (while â has no eigenbra). The following equality is the closest formal substitute, and turns out to be useful for technical computations,

â† |α⟩ = (∂/∂α + α*/2) |α⟩.

This last state is known as an "Agarwal state" or photon-added coherent state and denoted as

|α, 1⟩ := â† |α⟩.

Normalized Agarwal states of order n can be expressed as

|α, n⟩ := (â†)ⁿ |α⟩ / ‖(â†)ⁿ |α⟩‖.
The above resolution of the identity may be derived (restricting to one spatial dimension for simplicity) by taking matrix elements between eigenstates of position, ⟨x| ⋯ |y⟩, on both sides of the equation. On the right-hand side, this immediately gives δ(x − y). On the left-hand side, the same is obtained by inserting

ψ^{(α)}(x, t) = ⟨x|α(t)⟩

from the previous section (time is arbitrary), then integrating over Im α using the Fourier representation of the delta function, and then performing a Gaussian integral over Re α.
In particular, the Gaussian Schrödinger wave-packet state follows from the explicit value

α = √(mω/2ħ) x + i p/√(2mħω).
The resolution of the identity may also be expressed in terms of particle position and momentum. For each coordinate dimension (using an adapted notation with new meaning for x and p),

|α⟩ ≡ |x, p⟩,  x ≡ ⟨x̂⟩,  p ≡ ⟨p̂⟩,

the closure relation of coherent states reads

1 = ∫ |x, p⟩⟨x, p| (dx dp / 2πħ).

This can be inserted in any quantum-mechanical expectation value, relating it to some quasi-classical phase-space integral and explaining, in particular, the origin of the normalisation factors 1/(2πħ) for classical partition functions, consistent with quantum mechanics.
In addition to being an exact eigenstate of annihilation operators, a coherent state is an approximate common eigenstate of particle position and momentum. Restricting to one dimension again,

x̂ |x, p⟩ ≈ x |x, p⟩,  p̂ |x, p⟩ ≈ p |x, p⟩.

The error in these approximations is measured by the uncertainties of position and momentum,

Δx = √(ħ/2mω),  Δp = √(mħω/2).
Thermal coherent state
A single mode thermal coherent state is produced by displacing a thermal mixed state in phase space, in direct analogy to the displacement of the vacuum state in view of generating a coherent state. The density matrix of a coherent thermal state in operator representation reads
where is the displacement operator, which generates the coherent state with complex amplitude , and . The partition function is equal to
Using the expansion of the identity operator in Fock states, , the density operator definition can be expressed in the following form
where stands for the displaced Fock state. We remark that if temperature goes to zero we have
which is the density matrix for a coherent state. The average number of photons in that state can be calculated as below
where for the last term we can write

$$D^{\dagger}(\alpha)\, a^{\dagger}a\, D(\alpha) = (a^{\dagger} + \alpha^{*})(a + \alpha), \qquad \langle n|\,(a^{\dagger} + \alpha^{*})(a + \alpha)\,|n\rangle = n + |\alpha|^{2}.$$

As a result, we find

$$\langle n\rangle = |\alpha|^{2} + \langle \hat n\rangle_{T},$$

where $\langle \hat n\rangle_{T}$ is the average of the photon number calculated with respect to the thermal state. Here we have defined, for ease of notation,

$$\langle \hat n\rangle_{T} = \frac{1}{Z}\sum_{n=0}^{\infty} n\, e^{-\hbar\omega\beta n},$$

and we write explicitly

$$\langle \hat n\rangle_{T} = \frac{1}{e^{\hbar\omega\beta} - 1}.$$

In the limit $T \to 0$ we obtain $\langle n\rangle = |\alpha|^{2}$, which is consistent with the expression for the density matrix operator at zero temperature. Likewise, the photon number variance can be evaluated as

$$\sigma^{2} = \langle \hat n^{2}\rangle - \langle \hat n\rangle^{2} = \sigma_{T}^{2} + |\alpha|^{2}\left(1 + 2\langle \hat n\rangle_{T}\right),$$

with $\sigma_{T}^{2} = \langle \hat n\rangle_{T}^{2} + \langle \hat n\rangle_{T}$. We deduce that the second moment cannot be uncoupled from the thermal and the quantum distribution moments, unlike the average value (first moment). In that sense, the photon statistics of the displaced thermal state is not described by the sum of the Poisson statistics and the Boltzmann statistics. The distribution of the initial thermal state in phase space broadens as a result of the coherent displacement.
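The first and second moments quoted above can be checked numerically. The following sketch is illustrative only; the thermal occupation, the displacement amplitude, and the Fock cutoff are arbitrary example values. It constructs the displaced thermal density matrix directly and compares its photon-number mean and variance with the closed-form expressions:

```python
import numpy as np
from scipy.linalg import expm

N = 80                    # Fock cutoff, assumed large enough for these parameters
nbar, alpha = 0.5, 1.3    # thermal occupation <n>_T and displacement amplitude

n = np.arange(N)
adag = np.diag(np.sqrt(n[1:]), -1)
a = adag.T

p_th = (nbar / (1 + nbar))**n / (1 + nbar)    # Bose-Einstein number distribution
D = expm(alpha * adag - np.conj(alpha) * a)   # displacement operator D(alpha)
rho = D @ np.diag(p_th) @ D.conj().T          # displaced thermal state

num = adag @ a
mean = np.trace(rho @ num).real
var = np.trace(rho @ num @ num).real - mean**2
print(mean, abs(alpha)**2 + nbar)                            # first moment: agree
print(var, nbar*(nbar + 1) + abs(alpha)**2*(1 + 2*nbar))     # second moment: agree
```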
Coherent states of Bose–Einstein condensates
A Bose–Einstein condensate (BEC) is a collection of boson atoms that are all in the same quantum state. In a thermodynamic system, the ground state becomes macroscopically occupied below a critical temperature — roughly when the thermal de Broglie wavelength is longer than the interatomic spacing. Superfluidity in liquid Helium-4 is believed to be associated with the Bose–Einstein condensation in an ideal gas. But 4He has strong interactions, and the liquid structure factor (a 2nd-order statistic) plays an important role. The use of a coherent state to represent the superfluid component of 4He provided a good estimate of the condensate / non-condensate fractions in superfluidity, consistent with results of slow neutron scattering. Most of the special superfluid properties follow directly from the use of a coherent state to represent the superfluid component — that acts as a macroscopically occupied single-body state with well-defined amplitude and phase over the entire volume. (The superfluid component of 4He goes from zero at the transition temperature to 100% at absolute zero. But the condensate fraction is about 6% at absolute zero temperature, T=0K.)
Early in the study of superfluidity, Penrose and Onsager proposed a metric ("order parameter") for superfluidity. It was represented by a macroscopic factored component (a macroscopic eigenvalue) in the first-order reduced density matrix. Later, C. N. Yang proposed a more generalized measure of macroscopic quantum coherence, called "Off-Diagonal Long-Range Order" (ODLRO), that included fermion as well as boson systems. ODLRO exists whenever there is a macroscopically large factored component (eigenvalue) in a reduced density matrix of any order. Superfluidity corresponds to a large factored component in the first-order reduced density matrix. (And, all higher order reduced density matrices behave similarly.) Superconductivity involves a large factored component in the 2nd-order ("Cooper electron-pair") reduced density matrix.
The reduced density matrices used to describe macroscopic quantum coherence in superfluids are formally the same as the correlation functions used to describe orders of coherence in radiation. Both are examples of macroscopic quantum coherence. The macroscopically large coherent component, plus noise, in the electromagnetic field, as given by Glauber's description of signal-plus-noise, is formally the same as the macroscopically large superfluid component plus normal fluid component in the two-fluid model of superfluidity.
Every-day electromagnetic radiation, such as radio and TV waves, is also an example of near coherent states (macroscopic quantum coherence). That should "give one pause" regarding the conventional demarcation between quantum and classical.
The coherence in superfluidity should not be attributed to any subset of helium atoms; it is a kind of collective phenomenon in which all the atoms are involved (similar to Cooper-pairing in superconductivity, as indicated in the next section).
Coherent electron states in superconductivity
Electrons are fermions, but when they pair up into Cooper pairs they act as bosons, and so can collectively form a coherent state at low temperatures. This pairing is not actually between electrons, but in the states available to the electrons moving in and out of those states. Cooper pairing refers to the first model for superconductivity.
These coherent states are part of the explanation of effects such as the Quantum Hall effect in low-temperature superconducting semiconductors.
Generalizations
According to Gilmore and Perelomov, who showed it independently, the construction of coherent states may be seen as a problem in group theory, and thus coherent states may be associated to groups different from the Heisenberg group, which leads to the canonical coherent states discussed above. Moreover, these coherent states may be generalized to quantum groups. These topics, with references to original work, are discussed in detail in Coherent states in mathematical physics.
In quantum field theory and string theory, a generalization of coherent states to the case of infinitely many degrees of freedom is used to define a vacuum state with a different vacuum expectation value from the original vacuum.
In one-dimensional many-body quantum systems with fermionic degrees of freedom, low energy excited states can be approximated as coherent states of a bosonic field operator that creates particle-hole excitations. This approach is called bosonization.
The Gaussian coherent states of nonrelativistic quantum mechanics can be generalized to relativistic coherent states of Klein-Gordon and Dirac particles.
Coherent states have also appeared in works on loop quantum gravity or for the construction of (semi)classical canonical quantum general relativity.
See also
Coherent states in mathematical physics
Quantum field theory
Quantum optics
Quantum amplifier
Electromagnetic field
Degree of coherence
Glauber multiple scattering theory
External links
Quantum states of the light field
Glauber States: Coherent states of Quantum Harmonic Oscillator
Measure a coherent state with photon statistics interactive
References
Quantum states | Coherent state | [
"Physics"
] | 5,192 | [
"Quantum states",
"Quantum mechanics"
] |
277,641 | https://en.wikipedia.org/wiki/Strength%20of%20materials | The field of strength of materials (also called mechanics of materials) typically refers to various methods of calculating the stresses and strains in structural members, such as beams, columns, and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes take into account the properties of the materials such as yield strength, ultimate strength, Young's modulus, and Poisson's ratio. In addition, the mechanical element's macroscopic properties (geometric properties) such as its length, width, thickness, boundary constraints and abrupt changes in geometry such as holes are considered.
The theory began with the consideration of the behavior of one and two dimensional members of structures, whose states of stress can be approximated as two dimensional, and was then generalized to three dimensions to develop a more complete theory of the elastic and plastic behavior of materials. An important founding pioneer in mechanics of materials was Stephen Timoshenko.
Definition
In the mechanics of materials, the strength of a material is its ability to withstand an applied load without failure or plastic deformation. The field of strength of materials deals with forces and deformations that result from their acting on a material. A load applied to a mechanical member will induce internal forces within the member called stresses when those forces are expressed on a unit basis. The stresses acting on the material cause deformation of the material in various manners including breaking them completely. Deformation of the material is called strain when those deformations too are placed on a unit basis.
The stresses and strains that develop within a mechanical member must be calculated in order to assess the load capacity of that member. This requires a complete description of the geometry of the member, its constraints, the loads applied to the member and the properties of the material of which the member is composed. The applied loads may be axial (tensile or compressive), or rotational (shear). With a complete description of the loading and the geometry of the member, the state of stress and state of strain at any point within the member can be calculated. Once the state of stress and strain within the member is known, the strength (load carrying capacity) of that member, its deformations (stiffness qualities), and its stability (ability to maintain its original configuration) can be calculated.
The calculated stresses may then be compared to some measure of the strength of the member such as its material yield or ultimate strength. The calculated deflection of the member may be compared to deflection criteria that are based on the member's use. The calculated buckling load of the member may be compared to the applied load. The calculated stiffness and mass distribution of the member may be used to calculate the member's dynamic response and then compared to the acoustic environment in which it will be used.
Material strength refers to the point on the engineering stress–strain curve (yield stress) beyond which the material experiences deformations that will not be completely reversed upon removal of the loading and as a result, the member will have a permanent deflection. The ultimate strength of the material refers to the maximum value of stress reached. The fracture strength is the stress value at fracture (the last stress value recorded).
Types of loadings
Transverse loadings – Forces applied perpendicular to the longitudinal axis of a member. Transverse loading causes the member to bend and deflect from its original position, with internal tensile and compressive strains accompanying the change in curvature of the member. Transverse loading also induces shear forces that cause shear deformation of the material and increase the transverse deflection of the member.
Axial loading – The applied forces are collinear with the longitudinal axis of the member. The forces cause the member to either stretch or shorten.
Torsional loading – Twisting action caused by a pair of externally applied equal and oppositely directed force couples acting on parallel planes or by a single external couple applied to a member that has one end fixed against rotation.
Stress terms
Uniaxial stress is expressed by

$$\sigma = \frac{F}{A},$$

where F is the force acting on an area A. The area can be the undeformed area or the deformed area, depending on whether engineering stress or true stress is of interest.
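To illustrate the distinction between engineering and true stress, the short sketch below computes both for a hypothetical tensile test; the specimen dimensions, load, and strain are made-up example values, and plastic incompressibility (constant volume) is assumed for the deformed area.

```python
import math

F = 20_000.0        # applied tensile force in newtons (example value)
d0 = 0.010          # original specimen diameter in metres (example value)
A0 = math.pi * (d0 / 2)**2

eng_strain = 0.05   # 5% elongation (example value)
A = A0 / (1 + eng_strain)   # deformed area, assuming constant volume

print(f"engineering stress: {F / A0 / 1e6:.1f} MPa")  # force over undeformed area
print(f"true stress:        {F / A  / 1e6:.1f} MPa")  # force over current area
```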
Compressive stress (or compression) is the stress state caused by an applied load that acts to reduce the length of the material (compression member) along the axis of the applied load; it is, in other words, a stress state that causes a squeezing of the material. A simple case of compression is the uniaxial compression induced by the action of opposite, pushing forces. Compressive strength for materials is generally higher than their tensile strength. However, structures loaded in compression are subject to additional failure modes, such as buckling, that are dependent on the member's geometry.
Tensile stress is the stress state caused by an applied load that tends to elongate the material along the axis of the applied load, in other words, the stress caused by pulling the material. The strength of structures of equal cross-sectional area loaded in tension is independent of shape of the cross-section. Materials loaded in tension are susceptible to stress concentrations such as material defects or abrupt changes in geometry. However, materials exhibiting ductile behaviour (many metals for example) can tolerate some defects while brittle materials (such as ceramics and some steels) can fail well below their ultimate material strength.
Shear stress is the stress state caused by the combined energy of a pair of opposing forces acting along parallel lines of action through the material, in other words, the stress caused by faces of the material sliding relative to one another. An example is cutting paper with scissors or stresses due to torsional loading.
Stress parameters for resistance
Material resistance can be expressed in several mechanical stress parameters. The term material strength is used when referring to mechanical stress parameters. These are physical quantities with dimensions homogeneous to pressure and force per unit surface. The traditional units of strength are therefore the megapascal (MPa) in the International System of Units and the pound per square inch (psi) in United States customary units. Strength parameters include: yield strength, tensile strength, fatigue strength, crack resistance, and other parameters.
Yield strength is the lowest stress that produces a permanent deformation in a material. In some materials, like aluminium alloys, the point of yielding is difficult to identify, thus it is usually defined as the stress required to cause 0.2% plastic strain. This is called a 0.2% proof stress.
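The 0.2% proof-stress construction can be made concrete with a few lines of code. The stress-strain data below are synthetic (a crude bilinear elastic/hardening model, not a real material); the offset method simply intersects the elastic line, shifted by 0.002 strain, with the curve.

```python
import numpy as np

# Synthetic stress-strain data: E = 200 GPa elastic branch, crude linear hardening.
E = 200e9
strain = np.linspace(0.0, 0.01, 500)
stress = np.minimum(E * strain, 350e6 + 1.5e9 * strain)

# 0.2% offset method: shift the elastic line by 0.002 strain, find the intersection
offset_line = E * (strain - 0.002)
i = np.argmin(np.abs(stress - offset_line))
print(f"0.2% proof stress ~ {stress[i] / 1e6:.0f} MPa")
```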
Compressive strength is a limit state of compressive stress that leads to failure in a material in the manner of ductile failure (infinite theoretical yield) or brittle failure (rupture as the result of crack propagation, or sliding along a weak plane – see shear strength).
Tensile strength or ultimate tensile strength is a limit state of tensile stress that leads to tensile failure in the manner of ductile failure (yield as the first stage of that failure, some hardening in the second stage and breakage after a possible "neck" formation) or brittle failure (sudden breaking in two or more pieces at a low-stress state). The tensile strength can be quoted as either true stress or engineering stress, but engineering stress is the most commonly used.
Fatigue strength is a more complex measure of the strength of a material that considers several loading episodes in the service period of an object, and is usually more difficult to assess than the static strength measures. Fatigue strength is quoted here as a simple stress range ($\Delta\sigma = \sigma_{\max} - \sigma_{\min}$). In the case of cyclic loading it can be appropriately expressed as an amplitude, usually at zero mean stress, along with the number of cycles to failure under that condition of stress.
Impact strength is the capability of the material to withstand a suddenly applied load and is expressed in terms of energy. Often measured with the Izod impact strength test or Charpy impact test, both of which measure the impact energy required to fracture a sample. Volume, modulus of elasticity, distribution of forces, and yield strength affect the impact strength of a material. In order for a material or object to have a high impact strength, the stresses must be distributed evenly throughout the object. It also must have a large volume with a low modulus of elasticity and a high material yield strength.
Strain parameters for resistance
Deformation of the material is the change in geometry created when stress is applied (as a result of applied forces, gravitational fields, accelerations, thermal expansion, etc.). Deformation is expressed by the displacement field of the material.
Strain, or reduced deformation, is a mathematical term that expresses the trend of the deformation change among the material field. Strain is the deformation per unit length. In the case of uniaxial loading, the displacement of a specimen (for example, a bar element) leads to a calculation of strain expressed as the quotient of the displacement and the original length of the specimen, $\varepsilon = \Delta L / L_{0}$. For 3D displacement fields it is expressed as derivatives of displacement functions in terms of a second-order tensor (with 6 independent elements).
Deflection is a term to describe the magnitude to which a structural element is displaced when subject to an applied load.
Stress–strain relations
Elasticity is the ability of a material to return to its previous shape after stress is released. In many materials, the applied stress is directly proportional to the resulting strain (up to a certain limit), and a graph representing those two quantities is a straight line.
The slope of this line is known as Young's modulus, or the "modulus of elasticity". The modulus of elasticity can be used to determine the stress–strain relationship in the linear-elastic portion of the stress–strain curve. The linear-elastic region is either below the yield point, or if a yield point is not easily identified on the stress–strain plot it is defined to be between 0 and 0.2% strain, and is defined as the region of strain in which no yielding (permanent deformation) occurs.
Plasticity or plastic deformation is the opposite of elastic deformation and is defined as unrecoverable strain. Plastic deformation is retained after the release of the applied stress. Most materials in the linear-elastic category are usually capable of plastic deformation. Brittle materials, like ceramics, do not experience any plastic deformation and will fracture under relatively low strain, while ductile materials such as metallics, lead, or polymers will plastically deform much more before a fracture initiation.
Consider the difference between a carrot and chewed bubble gum. The carrot will stretch very little before breaking. The chewed bubble gum, on the other hand, will plastically deform enormously before finally breaking.
Design terms
Ultimate strength is an attribute related to a material, rather than just a specific specimen made of the material, and as such it is quoted as the force per unit of cross section area (N/m2). The ultimate strength is the maximum stress that a material can withstand before it breaks or weakens. For example, the ultimate tensile strength (UTS) of AISI 1018 Steel is 440 MPa. In Imperial units, the unit of stress is given as lbf/in2 or pounds-force per square inch. This unit is often abbreviated as psi. One thousand psi is abbreviated ksi.
A factor of safety is a design criterion that an engineered component or structure must achieve,

$$FS = \frac{F}{R_{f}},$$

where FS is the factor of safety, $R_{f}$ the applied stress, and F the ultimate allowable stress (psi or MPa).
Margin of safety is another common method for design criteria. It is defined as MS = Pu/P − 1, where Pu is the ultimate (failure) load and P is the applied load.
For example, to achieve a factor of safety of 4, the allowable stress in an AISI 1018 steel component can be calculated to be = 440/4 = 110 MPa, or = 110×106 N/m2. Such allowable stresses are also known as "design stresses" or "working stresses".
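The allowable-stress arithmetic above is easy to script. This minimal sketch recomputes the AISI 1018 example from the text and checks a hypothetical applied stress against it; the applied-stress value is an assumption made purely for illustration.

```python
UTS = 440e6   # ultimate tensile strength of AISI 1018 steel, Pa (from the text)
FS = 4.0      # required factor of safety

allowable = UTS / FS   # design ("working") stress, = 110 MPa
applied = 95e6         # hypothetical applied stress, Pa (example value)

print(f"allowable stress: {allowable / 1e6:.0f} MPa")
print(f"margin of safety: {allowable / applied - 1:.2f}")  # > 0 means acceptable
```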
Design stresses that have been determined from the ultimate or yield point values of the materials give safe and reliable results only for the case of static loading. Many machine parts fail when subjected to non-steady and continuously varying loads even though the developed stresses are below the yield point. Such failures are called fatigue failures. The failure is by a fracture that appears to be brittle with little or no visible evidence of yielding. However, when the stress is kept below "fatigue stress" or "endurance limit stress", the part will endure indefinitely. A purely reversing or cyclic stress is one that alternates between equal positive and negative peak stresses during each cycle of operation. In a purely cyclic stress, the average stress is zero. When a part is subjected to a cyclic stress, also known as stress range (Sr), it has been observed that the failure of the part occurs after a number of stress reversals (N) even if the magnitude of the stress range is below the material's yield strength. Generally, the higher the stress range, the fewer the number of reversals needed for failure.
Failure theories
There are four failure theories: maximum shear stress theory, maximum normal stress theory, maximum strain energy theory, and maximum distortion energy theory (von Mises criterion of failure). Out of these four theories of failure, the maximum normal stress theory is only applicable for brittle materials, and the remaining three theories are applicable for ductile materials.
Of the latter three, the distortion energy theory provides the most accurate results in a majority of the stress conditions. The strain energy theory needs the value of Poisson's ratio of the part material, which is often not readily available. The maximum shear stress theory is conservative. For simple unidirectional normal stresses all theories are equivalent, which means all theories will give the same result.
Maximum shear stress theory postulates that failure will occur if the magnitude of the maximum shear stress in the part exceeds the shear strength of the material determined from uniaxial testing.
Maximum normal stress theory postulates that failure will occur if the maximum normal stress in the part exceeds the ultimate tensile stress of the material as determined from uniaxial testing. This theory deals with brittle materials only. The maximum tensile stress should be less than or equal to ultimate tensile stress divided by factor of safety. The magnitude of the maximum compressive stress should be less than ultimate compressive stress divided by factor of safety.
Maximum strain energy theory postulates that failure will occur when the strain energy per unit volume due to the applied stresses in a part equals the strain energy per unit volume at the yield point in uniaxial testing.
Maximum distortion energy theory, also known as maximum distortion energy theory of failure or von Mises–Hencky theory. This theory postulates that failure will occur when the distortion energy per unit volume due to the applied stresses in a part equals the distortion energy per unit volume at the yield point in uniaxial testing. The total elastic energy due to strain can be divided into two parts: one part causes change in volume, and the other part causes a change in shape. Distortion energy is the amount of energy that is needed to change the shape.
Fracture mechanics was established by Alan Arnold Griffith and George Rankine Irwin. This important theory provides a quantitative measure of the toughness of a material in the presence of a crack.
A material's strength depends on its microstructure. The engineering processes to which a material is subjected can alter its microstructure. Strengthening mechanisms that alter the strength of a material include work hardening, solid solution strengthening, precipitation hardening, and grain boundary strengthening.
Strengthening mechanisms are accompanied by the caveat that some other mechanical properties of the material may degenerate in an attempt to make a material stronger. For example, in grain boundary strengthening, although yield strength is maximized with decreasing grain size, ultimately, very small grain sizes make the material brittle. In general, the yield strength of a material is an adequate indicator of the material's mechanical strength. Considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect. Strength is expressed in terms of the limiting values of the compressive stress, tensile stress, and shear stresses that would cause failure. The effects of dynamic loading are probably the most important practical consideration of the theory of elasticity, especially the problem of fatigue. Repeated loading often initiates cracks, which grow until failure occurs at the corresponding residual strength of the structure. Cracks always start at stress concentrations, especially at changes in cross-section of the product, at manufacturing defects, and near holes and corners, at nominal stress levels far lower than those quoted for the strength of the material.
See also
References
Further reading
Cheng, Fa-Hwa. (1997). Strength of Material. Ohio: McGraw-Hill
Mechanics of Materials, E.J. Hearn
Alfirević, Ivo. Strength of Materials I. Tehnička knjiga, 1995.
Alfirević, Ivo. Strength of Materials II. Tehnička knjiga, 1999.
Ashby, M.F. Materials Selection in Design. Pergamon, 1992.
Beer, F.P., E.R. Johnston, et al. Mechanics of Materials, 3rd edition. McGraw-Hill, 2001.
Cottrell, A.H. Mechanical Properties of Matter. Wiley, New York, 1964.
Den Hartog, Jacob P. Strength of Materials. Dover Publications, Inc., 1961.
Drucker, D.C. Introduction to Mechanics of Deformable Solids. McGraw-Hill, 1967.
Gordon, J.E. The New Science of Strong Materials. Princeton, 1984.
Groover, Mikell P. Fundamentals of Modern Manufacturing, 2nd edition. John Wiley & Sons, Inc., 2002.
Hashemi, Javad and William F. Smith. Foundations of Materials Science and Engineering, 4th edition. McGraw-Hill, 2006.
Hibbeler, R.C. Statics and Mechanics of Materials, SI Edition. Prentice-Hall, 2004.
Lebedev, Leonid P. and Michael J. Cloud. Approximating Perfection: A Mathematician's Journey into the World of Mechanics. Princeton University Press, 2004.
Chapter 10 – Strength of Elastomers, A.N. Gent, W.V. Mars, In: James E. Mark, Burak Erman and Mike Roland, Editor(s), The Science and Technology of Rubber (Fourth Edition), Academic Press, Boston, 2013, Pages 473–516, 10.1016/B978-0-12-394584-6.00010-8
Mott, Robert L. Applied Strength of Materials, 4th edition. Prentice-Hall, 2002.
Popov, Egor P. Engineering Mechanics of Solids. Prentice Hall, Englewood Cliffs, N. J., 1990.
Ramamrutham, S. Strength of Materials.
Shames, I.H. and F.A. Cozzarelli. Elastic and Inelastic Stress Analysis. Prentice-Hall, 1991.
Timoshenko, S. Strength of Materials, 3rd edition. Krieger Publishing Company, 1976.
Timoshenko, S.P. and D.H. Young. Elements of Strength of Materials, 5th edition. (MKS System)
Davidge, R.W., Mechanical Behavior of Ceramics, Cambridge Solid State Science Series, (1979)
Lawn, B.R., Fracture of Brittle Solids, Cambridge Solid State Science Series, 2nd Edn. (1993)
Green, D., An Introduction to the Mechanical Properties of Ceramics, Cambridge Solid State Science Series, Eds. Clarke, D.R., Suresh, S., Ward, I.M. (1998)
External links
Failure theories
Case studies in structural failure
Solid mechanics
Materials science
Building engineering
Deformation (mechanics)
Condensed matter physics | Strength of materials | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,058 | [
"Solid mechanics",
"Applied and interdisciplinary physics",
"Deformation (mechanics)",
"Building engineering",
"Phases of matter",
"Materials science",
"Civil engineering",
"Mechanics",
"Condensed matter physics",
"nan",
"Matter",
"Architecture"
] |
277,664 | https://en.wikipedia.org/wiki/Aharonov%E2%80%93Bohm%20effect | The Aharonov–Bohm effect, sometimes called the Ehrenberg–Siday–Aharonov–Bohm effect, is a quantum-mechanical phenomenon in which an electrically charged particle is affected by an electromagnetic potential (Φ, A), despite being confined to a region in which both the magnetic field and electric field are zero. The underlying mechanism is the coupling of the electromagnetic potential with the complex phase of a charged particle's wave function, and the Aharonov–Bohm effect is accordingly illustrated by interference experiments.
The most commonly described case, sometimes called the Aharonov–Bohm solenoid effect, takes place when the wave function of a charged particle passing around a long solenoid experiences a phase shift as a result of the enclosed magnetic field, despite the magnetic field being negligible in the region through which the particle passes and the particle's wavefunction being negligible inside the solenoid. This phase shift has been observed experimentally. There are also magnetic Aharonov–Bohm effects on bound energies and scattering cross sections, but these cases have not been experimentally tested. An electric Aharonov–Bohm phenomenon was also predicted, in which a charged particle is affected by regions with different electrical potentials but zero electric field, but this has no experimental confirmation yet. A separate "molecular" Aharonov–Bohm effect was proposed for nuclear motion in multiply connected regions, but this has been argued to be a different kind of geometric phase as it is "neither nonlocal nor topological", depending only on local quantities along the nuclear path.
Werner Ehrenberg (1901–1975) and Raymond E. Siday first predicted the effect in 1949. Yakir Aharonov and David Bohm published their analysis in 1959. After publication of the 1959 paper, Bohm was informed of Ehrenberg and Siday's work, which was acknowledged and credited in Bohm and Aharonov's subsequent 1961 paper. The effect was confirmed experimentally, with a very large error, while Bohm was still alive. By the time the error was down to a respectable value, Bohm had died.
Significance
In the 18th and 19th centuries, physics was dominated by Newtonian dynamics, with its emphasis on forces. Electromagnetic phenomena were elucidated by a series of experiments involving the measurement of forces between charges, currents and magnets in various configurations. Eventually, a description arose according to which charges, currents and magnets acted as local sources of propagating force fields, which then acted on other charges and currents locally through the Lorentz force law. In this framework, because one of the observed properties of the electric field was that it was irrotational, and one of the observed properties of the magnetic field was that it was divergenceless, it was possible to express an electrostatic field as the gradient of a scalar potential (e.g. Coulomb's electrostatic potential, which is mathematically analogous to the classical gravitational potential) and a stationary magnetic field as the curl of a vector potential (then a new concept – the idea of a scalar potential was already well accepted by analogy with gravitational potential). The language of potentials generalised seamlessly to the fully dynamic case but, since all physical effects were describable in terms of the fields which were the derivatives of the potentials, potentials (unlike fields) were not uniquely determined by physical effects: potentials were only defined up to an arbitrary additive constant electrostatic potential and an irrotational stationary magnetic vector potential.
The Aharonov–Bohm effect is important conceptually because it bears on three issues apparent in the recasting of (Maxwell's) classical electromagnetic theory as a gauge theory, which before the advent of quantum mechanics could be argued to be a mathematical reformulation with no physical consequences. The Aharonov–Bohm thought experiments and their experimental realization imply that the issues were not just philosophical.
The three issues are:
whether potentials are "physical" or just a convenient tool for calculating force fields;
whether action principles are fundamental;
the principle of locality.
Because of reasons like these, the Aharonov–Bohm effect was chosen by the New Scientist magazine as one of the "seven wonders of the quantum world".
Chen-Ning Yang considered the Aharonov–Bohm effect to be the only direct experimental proof of the gauge principle. The philosophical importance is that the magnetic four-potential overdescribes the physics, as all observable phenomena remain unchanged after a gauge transformation. Conversely, the Maxwell fields underdescribe the physics, as they do not predict the Aharonov–Bohm effect. Moreover, as predicted by the gauge principle, the quantities that remain invariant under gauge transforms are precisely the physically observable phenomena.
Potentials vs. fields
It is generally argued that the Aharonov–Bohm effect illustrates the physicality of electromagnetic potentials, Φ and A, in quantum mechanics. Classically it was possible to argue that only the electromagnetic fields are physical, while the electromagnetic potentials are purely mathematical constructs, that due to gauge freedom are not even unique for a given electromagnetic field.
However, Vaidman has challenged this interpretation by showing that the Aharonov–Bohm effect can be explained without the use of potentials so long as one gives a full quantum mechanical treatment to the source charges that produce the electromagnetic field. According to this view, the potential in quantum mechanics is just as physical (or non-physical) as it was classically. Aharonov, Cohen, and Rohrlich responded that the effect may be due to a local gauge potential or due to non-local gauge-invariant fields.
Two papers published in the journal Physical Review A in 2017 have demonstrated a quantum mechanical solution for the system. Their analysis shows that the phase shift can be viewed as generated by a solenoid's vector potential acting on the electron or the electron's vector potential acting on the solenoid or the electron and solenoid currents acting on the quantized vector potential.
Global action vs. local forces
Similarly, the Aharonov–Bohm effect illustrates that the Lagrangian approach to dynamics, based on energies, is not just a computational aid to the Newtonian approach, based on forces. Thus the Aharonov–Bohm effect validates the view that forces are an incomplete way to formulate physics, and potential energies must be used instead. In fact Richard Feynman complained that he had been taught electromagnetism from the perspective of electromagnetic fields, and he wished later in life he had been taught to think in terms of the electromagnetic potential instead, as this would be more fundamental. In Feynman's path-integral view of dynamics, the potential field directly changes the phase of an electron wave function, and it is these changes in phase that lead to measurable quantities.
Locality of electromagnetic effects
The Aharonov–Bohm effect shows that the local E and B fields do not contain full information about the electromagnetic field, and the electromagnetic four-potential, (Φ, A), must be used instead. By Stokes' theorem, the magnitude of the Aharonov–Bohm effect can be calculated using the electromagnetic fields alone, or using the four-potential alone. But when using just the electromagnetic fields, the effect depends on the field values in a region from which the test particle is excluded. In contrast, when using just the four-potential, the effect only depends on the potential in the region where the test particle is allowed. Therefore, one must either abandon the principle of locality, which most physicists are reluctant to do, or accept that the electromagnetic four-potential offers a more complete description of electromagnetism than the electric and magnetic fields can. On the other hand, the Aharonov–Bohm effect is crucially quantum mechanical; quantum mechanics is well known to feature non-local effects (albeit still disallowing superluminal communication), and Vaidman has argued that this is just a non-local quantum effect in a different form.
In classical electromagnetism the two descriptions were equivalent. With the addition of quantum theory, though, the electromagnetic potentials Φ and A are seen as being more fundamental. Despite this, all observable effects end up being expressible in terms of the electromagnetic fields, E and B. This is interesting because, while you can calculate the electromagnetic field from the four-potential, due to gauge freedom the reverse is not true.
Magnetic solenoid effect
The magnetic Aharonov–Bohm effect can be seen as a result of the requirement that quantum physics must be invariant with respect to the gauge choice for the electromagnetic potential, of which the magnetic vector potential forms part.
Electromagnetic theory implies that a particle with electric charge $q$ traveling along some path $P$ in a region with zero magnetic field $\mathbf{B}$, but non-zero $\mathbf{A}$ (by $\mathbf{B} = 0 = \nabla\times\mathbf{A}$), acquires a phase shift $\varphi$, given in SI units by

$$\varphi = \frac{q}{\hbar}\int_{P} \mathbf{A}\cdot d\mathbf{x}.$$

Therefore, particles with the same start and end points, but traveling along two different routes, will acquire a phase difference $\Delta\varphi$ determined by the magnetic flux $\Phi_{B}$ through the area between the paths (via Stokes' theorem and $\nabla\times\mathbf{A} = \mathbf{B}$), and given by:

$$\Delta\varphi = \frac{q\,\Phi_{B}}{\hbar}.$$
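For a sense of scale, the sketch below (added for illustration; the enclosed-flux value is an arbitrary example) evaluates this phase difference for an electron and the flux values that shift the interference pattern by a full fringe:

```python
import scipy.constants as const

e, hbar, h = const.e, const.hbar, const.h

flux = 2.0e-15   # enclosed magnetic flux in webers (arbitrary example value)
print(f"phase difference: {e * flux / hbar:.3f} rad")

# A flux of h/e shifts the two-path phase difference by exactly 2*pi:
print(f"h/e  = {h / e:.3e} Wb")
print(f"h/2e = {h / (2 * e):.3e} Wb (superconducting flux quantum, q = 2e)")
```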
In quantum mechanics the same particle can travel between two points by a variety of paths. Therefore, this phase difference can be observed by placing a solenoid between the slits of a double-slit experiment (or equivalent). An ideal solenoid (i.e. infinitely long and with a perfectly uniform current distribution) encloses a magnetic field , but does not produce any magnetic field outside of its cylinder, and thus the charged particle (e.g. an electron) passing outside experiences no magnetic field . (This idealization simplifies the analysis but it's important to realize that the Aharonov-Bohm effect does not rely on it, provided the magnetic flux returns outside the electron paths, for example if one path goes through a toroidal solenoid and the other around it, and the solenoid is shielded so that it produces no external magnetic field.) However, there is a (curl-free) vector potential outside the solenoid with an enclosed flux, and so the relative phase of particles passing through one slit or the other is altered by whether the solenoid current is turned on or off. This corresponds to an observable shift of the interference fringes on the observation plane.
The same phase effect is responsible for the quantized-flux requirement in superconducting loops. This quantization occurs because the superconducting wave function must be single valued: its phase difference around a closed loop must be an integer multiple of $2\pi$ (with the charge $q = 2e$ for the electron Cooper pairs), and thus the flux must be a multiple of $h/2e$. The superconducting flux quantum was actually predicted prior to Aharonov and Bohm, by F. London in 1948 using a phenomenological model.
The first claimed experimental confirmation was by Robert G. Chambers in 1960, in an electron interferometer with a magnetic field produced by a thin iron whisker, and other early work is summarized in Olariu and Popèscu (1984). However, subsequent authors questioned the validity of several of these early results because the electrons may not have been completely shielded from the magnetic fields. An early experiment in which an unambiguous Aharonov–Bohm effect was observed by completely excluding the magnetic field from the electron path (with the help of a superconducting film) was performed by Tonomura et al. in 1986. The effect's scope and application continues to expand. Webb et al. (1985) demonstrated Aharonov–Bohm oscillations in ordinary, non-superconducting metallic rings; for a discussion, see Schwarzschild (1986) and Imry & Webb (1989). Bachtold et al. (1999) detected the effect in carbon nanotubes; for a discussion, see Kong et al. (2004).
Monopoles and Dirac strings
The magnetic Aharonov–Bohm effect is also closely related to Dirac's argument that the existence of a magnetic monopole can be accommodated by the existing magnetic source-free Maxwell's equations if both electric and magnetic charges are quantized.
A magnetic monopole implies a mathematical singularity in the vector potential, which can be expressed as a Dirac string of infinitesimal diameter that contains the equivalent of all of the 4πg flux from a monopole "charge" g. The Dirac string starts from, and terminates on, a magnetic monopole. Thus, assuming the absence of an infinite-range scattering effect by this arbitrary choice of singularity, the requirement of single-valued wave functions (as above) necessitates charge-quantization. That is, $2 q_{e} q_{m}/(\hbar c)$ must be an integer (in cgs units) for any electric charge $q_{e}$ and magnetic charge $q_{m}$.
Like the electromagnetic potential A the Dirac string is not gauge invariant (it moves around with fixed endpoints under a gauge transformation) and so is also not directly measurable.
Electric effect
Just as the phase of the wave function depends upon the magnetic vector potential, it also depends upon the scalar electric potential. By constructing a situation in which the electrostatic potential varies for two paths of a particle, through regions of zero electric field, an observable Aharonov–Bohm interference phenomenon from the phase shift has been predicted; again, the absence of an electric field means that, classically, there would be no effect.
From the Schrödinger equation, the phase of an eigenfunction with energy $E$ goes as $e^{-iEt/\hbar}$. The energy, however, will depend upon the electrostatic potential $V$ for a particle with charge $q$. In particular, for a region with constant potential $V$ (zero field), the electric potential energy $qV$ is simply added to $E$, resulting in a phase shift:

$$\Delta\phi = -\frac{qVt}{\hbar},$$
where t is the time spent in the potential.
For example, we may have a pair of large flat conductors, connected to a battery of voltage $V$. Then, we can run a single electron double-slit experiment, with the two slits on the two sides of the pair of conductors. If the electron takes time $t$ to hit the screen, then we should observe a phase shift $\Delta\phi = qVt/\hbar$. By adjusting the battery voltage, we can horizontally shift the interference pattern on the screen.
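Plugging in numbers shows how sensitive the predicted electric phase shift is. This is an illustrative sketch; the voltage and transit time are made-up example values:

```python
import scipy.constants as const

V = 1e-6   # potential difference along the path, volts (example value)
t = 1e-9   # time spent in the constant-potential region, seconds (example value)

dphi = const.e * V * t / const.hbar
print(f"predicted phase shift: {dphi:.2f} rad")  # ~1.5 rad for these values
```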
The initial theoretical proposal for this effect suggested an experiment where charges pass through conducting cylinders along two paths, which shield the particles from external electric fields in the regions where they travel, but still allow a time dependent potential to be applied by charging the cylinders. This proved difficult to realize, however. Instead, a different experiment was proposed involving a ring geometry interrupted by tunnel barriers, with a constant bias voltage V relating the potentials of the two halves of the ring. This situation results in an Aharonov–Bohm phase shift as above, and was observed experimentally in 1998, albeit in a setup where the charges do traverse the electric field generated by the bias voltage. The original time dependent electric Aharonov–Bohm effect has not yet found experimental verification.
Gravitational effect
The Aharonov–Bohm phase shift due to the gravitational potential should also be possible to observe in theory, and in early 2022 an experiment was carried out to observe it based on an experimental design from 2012. In the experiment, ultra-cold rubidium atoms in superposition were launched vertically inside a vacuum tube and split with a laser so that one part would go higher than the other and then recombined back. Outside of the chamber at the top sits an axially symmetric mass that changes the gravitational potential. Thus, the part that goes higher should experience a greater change which manifests as an interference pattern when the wave packets recombine resulting in a measurable phase shift. Evidence of a match between the measurements and the predictions was found by the team. Multiple other tests have been proposed.
Non-abelian effect
In 1975 Tai-Tsun Wu and Chen-Ning Yang formulated the non-abelian Aharonov–Bohm effect, and in 2019 this was experimentally reported in a system with light waves rather than the electron wave function. The effect was produced in two different ways: in one, light went through a crystal in a strong magnetic field, and in the other, light was modulated using time-varying electrical signals. In both cases the phase shift was observed via an interference pattern, which also differed depending on whether the process was run forwards or backwards in time.
Aharonov–Bohm nano rings
Nano rings were created by accident while intending to make quantum dots. They have interesting optical properties associated with excitons and the Aharonov–Bohm effect. Application of these rings used as light capacitors or buffers includes photonic computing and communications technology. Analysis and measurement of geometric phases in mesoscopic rings is ongoing. It is even suggested they could be used to make a form of slow glass.
Several experiments, including some reported in 2012, show Aharonov–Bohm oscillations in charge density wave (CDW) current versus magnetic flux, of dominant period h/2e through CDW rings up to 85 μm in circumference above 77 K. This behavior is similar to that of the superconducting quantum interference devices (see SQUID).
Mathematical interpretation
The Aharonov–Bohm effect can be understood from the fact that one can only measure absolute values of the wave function. While this allows for measurement of phase differences through quantum interference experiments, there is no way to specify a wavefunction with constant absolute phase. In the absence of an electromagnetic field one can come close by declaring the eigenfunction of the momentum operator with zero momentum to be the function "1" (ignoring normalization problems) and specifying wave functions relative to this eigenfunction "1". In this representation the i-momentum operator is (up to a factor $\hbar/i$) the differential operator $\frac{\partial}{\partial x^{i}}$. However, by gauge invariance, it is equally valid to declare the zero momentum eigenfunction to be $e^{i\phi(x)}$ at the cost of representing the i-momentum operator (up to a factor) as $\frac{\partial}{\partial x^{i}} - i\frac{\partial\phi}{\partial x^{i}}$, i.e. with a pure gauge vector potential $A = d\phi$. There is no real asymmetry because representing the former in terms of the latter is just as messy as representing the latter in terms of the former. This means that it is physically more natural to describe wave "functions", in the language of differential geometry, as sections in a complex line bundle with a hermitian metric and a U(1)-connection $\nabla$. The curvature form of the connection, $\nabla\circ\nabla$, is, up to the factor i, the Faraday tensor $\mathbf{F}$ of the electromagnetic field strength. The Aharonov–Bohm effect is then a manifestation of the fact that a connection with zero curvature (i.e. flat), need not be trivial since it can have monodromy along a topologically nontrivial path fully contained in the zero curvature (i.e. field-free) region. By definition this means that sections that are parallelly translated along a topologically non trivial path pick up a phase, so that covariant constant sections cannot be defined over the whole field-free region.
Given a trivialization of the line-bundle, a non-vanishing section, the U(1)-connection is given by the 1-form corresponding to the electromagnetic four-potential A as $\nabla = d + iA$, where d means exterior derivation on the Minkowski space. The monodromy is the holonomy of the flat connection. The holonomy of a connection, flat or non flat, around a closed loop $\gamma$ is $e^{i\oint_{\gamma} A}$ (one can show this does not depend on the trivialization but only on the connection). For a flat connection one can find a gauge transformation in any simply connected field-free region (acting on wave functions and connections) that gauges away the vector potential. However, if the monodromy is nontrivial, there is no such gauge transformation for the whole outside region. In fact as a consequence of Stokes' theorem, the holonomy is determined by the magnetic flux through a surface $\sigma$ bounding the loop $\gamma$, but such a surface may exist only if $\gamma$ passes through a region of non trivial field:

$$e^{i\oint_{\gamma} A} = e^{i\int_{\sigma} \mathbf{F}}, \qquad \partial\sigma = \gamma.$$
The monodromy of the flat connection only depends on the topological type of the loop in the field free region (in fact on the loops homology class). The holonomy description is general, however, and works inside as well as outside the superconductor. Outside of the conducting tube containing the magnetic field, the field strength . In other words, outside the tube the connection is flat, and the monodromy of the loop contained in the field-free region depends only on the winding number around the tube. The monodromy of the connection for a loop going round once (winding number 1) is the phase difference of a particle interfering by propagating left and right of the superconducting tube containing the magnetic field.
If one wants to ignore the physics inside the superconductor and only describe the physics in the outside region, it becomes natural and mathematically convenient to describe the quantum electron by a section in a complex line bundle with an "external" flat connection $\nabla$ with monodromy

$$\alpha = \frac{\text{magnetic flux through the tube}}{\hbar/e},$$

rather than an external EM field $\mathbf{F}$. The Schrödinger equation readily generalizes to this situation by using the Laplacian of the connection for the (free) Hamiltonian

$$H = \frac{1}{2m}\,\nabla^{*}\nabla.$$
Equivalently, one can work in two simply connected regions with cuts that pass from the tube towards or away from the detection screen. In each of these regions the ordinary free Schrödinger equations would have to be solved, but in passing from one region to the other, in only one of the two connected components of the intersection (effectively in only one of the slits) a monodromy factor is picked up, which results in the shift in the interference pattern as one changes the flux.
Effects with similar mathematical interpretation can be found in other fields. For example, in classical statistical physics, quantization of a molecular motor motion in a stochastic environment can be interpreted as an Aharonov–Bohm effect induced by a gauge field acting in the space of control parameters.
See also
Geometric phase
Hannay angle
Wannier function
Berry phase
Wilson loop
Winding number
Byers–Yang theorem
Aharonov–Casher effect
Maxwell–Lodge effect
References
Further reading
External links
The David Bohm Society page about the Aharonov–Bohm effect.
Quantum mechanics
Physical phenomena
Mesoscopic physics | Aharonov–Bohm effect | [
"Physics",
"Materials_science"
] | 4,588 | [
"Physical phenomena",
"Theoretical physics",
"Quantum mechanics",
"Condensed matter physics",
"Mesoscopic physics"
] |
277,702 | https://en.wikipedia.org/wiki/Electron%20diffraction | Electron diffraction is a generic term for phenomena associated with changes in the direction of electron beams due to elastic interactions with atoms. It occurs due to elastic scattering, when there is no change in the energy of the electrons. The negatively charged electrons are scattered due to Coulomb forces when they interact with both the positively charged atomic core and the negatively charged electrons around the atoms. The resulting map of the directions of the electrons far from the sample is called a diffraction pattern, see for instance Figure 1. Beyond patterns showing the directions of electrons, electron diffraction also plays a major role in the contrast of images in electron microscopes.
This article provides an overview of electron diffraction and electron diffraction patterns, collectively referred to by the generic name electron diffraction. This includes aspects of how, in a general way, electrons can act as waves, and diffract and interact with matter. It also involves the extensive history behind modern electron diffraction, how the combination of developments in the 19th century in understanding and controlling electrons in vacuum and the early 20th century developments with electron waves were combined with early instruments, giving birth to electron microscopy and diffraction in 1920–1935. While this was the birth, there have been a large number of further developments since then.
There are many types and techniques of electron diffraction. The most common approach is where the electrons transmit through a thin sample, from 1 nm to 100 nm (10 to 1000 atoms thick), where the results depend upon how the atoms are arranged in the material, for instance a single crystal, many crystals or different types of solids. Other cases such as larger repeats, no periodicity or disorder have their own characteristic patterns. There are many different ways of collecting diffraction information, from parallel illumination to a converging beam of electrons or where the beam is rotated or scanned across the sample, which produces information that is often easier to interpret. There are also many other types of instruments. For instance, in a scanning electron microscope (SEM), electron backscatter diffraction can be used to determine crystal orientation across the sample. Electron diffraction patterns can also be used to characterize molecules using gas electron diffraction, liquids, surfaces using lower energy electrons, a technique called LEED, and by reflecting electrons off surfaces, a technique called RHEED.
There are also many levels of analysis of electron diffraction, including:
The simplest approximation using the de Broglie wavelength for electrons, where only the geometry is considered and often Bragg's law is invoked. This approach only considers the electrons far from the sample, a far-field or Fraunhofer approach.
The first level of more accuracy where it is approximated that the electrons are only scattered once, which is called kinematical diffraction and is also a far-field or Fraunhofer approach.
More complete and accurate explanations where multiple scattering is included, what is called dynamical diffraction (e.g. refs). These involve more general analyses using relativistically corrected Schrödinger equation methods, and track the electrons through the sample, being accurate both near and far from the sample (both Fresnel and Fraunhofer diffraction).
Electron diffraction is similar to x-ray and neutron diffraction. However, unlike x-ray and neutron diffraction where the simplest approximations are quite accurate, with electron diffraction this is not the case. Simple models give the geometry of the intensities in a diffraction pattern, but dynamical diffraction approaches are needed for accurate intensities and the positions of diffraction spots.
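For orientation, the de Broglie wavelengths involved in the simplest, geometry-level description above can be computed from the accelerating voltage with the standard relativistic correction. The sketch below is illustrative, and the listed voltages are just common choices for LEED, SEM and TEM instruments:

```python
import math
import scipy.constants as const

h, m0, e, c = const.h, const.m_e, const.e, const.c

def electron_wavelength(V):
    """De Broglie wavelength (m) of an electron accelerated through V volts,
    including the standard relativistic correction."""
    return h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))

for V in (100, 10_000, 100_000, 300_000):   # typical LEED / SEM / TEM voltages
    print(f"{V:>7} V : {electron_wavelength(V) * 1e12:6.2f} pm")
# ~3.70 pm at 100 kV, far smaller than typical atomic spacings (~200 pm)
```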
A primer on electron diffraction
All matter can be thought of as matter waves, from small particles such as electrons up to macroscopic objects – although it is impossible to measure any of the "wave-like" behavior of macroscopic objects. Waves can move around objects and create interference patterns, and a classic example is the Young's two-slit experiment shown in Figure 2, where a wave impinges upon two slits in the first of the two images (blue waves). After going through the slits there are directions where the wave is stronger, ones where it is weaker – the wave has been diffracted. If instead of two slits there are a number of small points then similar phenomena can occur as shown in the second image where the wave (red and blue) is coming in from the bottom right corner. This is comparable to diffraction of an electron wave where the small dots would be atoms in a small crystal, see also note. Note the strong dependence on the relative orientation of the crystal and the incoming wave.
Close to an aperture or atoms, often called the "sample", the electron wave would be described in terms of near field or Fresnel diffraction. This has relevance for imaging within electron microscopes, whereas electron diffraction patterns are measured far from the sample, which is described as far-field or Fraunhofer diffraction. A map of the directions of the electron waves leaving the sample will show high intensity (white) for favored directions, such as the three prominent ones in the Young's two-slit experiment of Figure 2, while the other directions will be low intensity (dark). Often there will be an array of spots (preferred directions) as in Figure 1 and the other figures shown later.
History
The historical background is divided into several subsections. The first is the general background to electrons in vacuum and the technological developments that led to cathode-ray tubes as well as vacuum tubes that dominated early television and electronics; the second is how these led to the development of electron microscopes; the last is work on the nature of electron beams and the fundamentals of how electrons behave, a key component of quantum mechanics and the explanation of electron diffraction.
Electrons in vacuum
Experiments involving electron beams occurred long before the discovery of the electron; ēlektron (ἤλεκτρον) is the Greek word for amber, which is connected to the recording of electrostatic charging by Thales of Miletus around 585 BCE, and possibly others even earlier.
In 1650, Otto von Guericke invented the vacuum pump allowing for the study of the effects of high voltage electricity passing through rarefied air. In 1838, Michael Faraday applied a high voltage between two metal electrodes at either end of a glass tube that had been partially evacuated of air, and noticed a strange light arc with its beginning at the cathode (negative electrode) and its end at the anode (positive electrode). Building on this, in the 1850s, Heinrich Geissler was able to achieve a pressure of around 10−3 atmospheres, inventing what became known as Geissler tubes. Using these tubes, while studying electrical conductivity in rarefied gases in 1859, Julius Plücker observed that the radiation emitted from the negatively charged cathode caused phosphorescent light to appear on the tube wall near it, and the region of the phosphorescent light could be moved by application of a magnetic field.
In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed between the cathode and the phosphorescence would cast a shadow on the tube wall, e.g. Figure 3. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. In 1876 Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which differentiated them from the incandescent light. He dubbed them cathode rays. By the 1870s William Crookes and others were able to evacuate glass tubes below 10−6 atmospheres, and observed that the glow in the tube disappeared when the pressure was reduced but the glass behind the anode began to glow. Crookes was also able to show that the particles in the cathode rays were negatively charged and could be deflected by an electromagnetic field.
In 1897, Joseph Thomson measured the charge-to-mass ratio of these cathode rays, showing that they were made of particles about 1800 times lighter than the lightest particle known at that time – a hydrogen atom. These were originally called corpuscles and were later named electrons, a term that had been coined by George Johnstone Stoney.
The control of electron beams that this work led to resulted in significant technology advances in electronic amplifiers and television displays.
Waves, diffraction and quantum mechanics
Independent of the developments for electrons in vacuum, at about the same time the components of quantum mechanics were being assembled. In 1924 Louis de Broglie in his PhD thesis Recherches sur la théorie des quanta introduced his theory of electron waves. He suggested that an electron around a nucleus could be thought of as standing waves, and that electrons and all matter could be considered as waves. He merged the idea of thinking about them as particles (or corpuscles), and of thinking of them as waves. He proposed that particles are bundles of waves (wave packets) that move with a group velocity and have an effective mass, see for instance Figure 4. Both of these depend upon the energy, which in turn connects to the wavevector and the relativistic formulation of Albert Einstein a few years before.
This rapidly became part of what was called by Erwin Schrödinger undulatory mechanics, now called the Schrödinger equation or wave mechanics. As stated by Louis de Broglie on September 8, 1927, in the preface to the German translation of his theses (in turn translated into English): "M. Einstein from the beginning has supported my thesis, but it was M. E. Schrödinger who developed the propagation equations of a new theory and who in searching for its solutions has established what has become known as 'Wave Mechanics'."
The Schrödinger equation combines the kinetic energy of waves and the potential energy due to, for electrons, the Coulomb potential. He was able to explain earlier work such as the quantization of the energy of electrons around atoms in the Bohr model, as well as many other phenomena. Electron waves as hypothesized by de Broglie were automatically part of the solutions to his equation, see also introduction to quantum mechanics and matter waves.
Both the wave nature and the undulatory mechanics approach were experimentally confirmed for electron beams by experiments from two groups performed independently, the first the Davisson–Germer experiment, the other by George Paget Thomson and Alexander Reid; see note for more discussion. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned. These experiments were rapidly followed by the first non-relativistic diffraction model for electrons by Hans Bethe based upon the Schrödinger equation, which is very close to how electron diffraction is now described. Significantly, Clinton Davisson and Lester Germer noticed that their results could not be interpreted using a Bragg's law approach as the positions were systematically different; the approach of Hans Bethe which includes the refraction due to the average potential yielded more accurate results. These advances in understanding of electron wave mechanics were important for many developments of electron-based analytical techniques such as Seishi Kikuchi's observations of lines due to combined elastic and inelastic scattering, gas electron diffraction developed by Herman Mark and Raymond Weil, diffraction in liquids by Louis Maxwell, and the first electron microscopes developed by Max Knoll and Ernst Ruska.
Electron microscopes and early electron diffraction
In order to have a practical microscope or diffractometer, just having an electron beam was not enough, it needed to be controlled. Many developments laid the groundwork of electron optics; see the paper by Chester J. Calbick for an overview of the early work. One significant step was the work of Heinrich Hertz in 1883 who made a cathode-ray tube with electrostatic and magnetic deflection, demonstrating manipulation of the direction of an electron beam. Others were focusing of electrons by an axial magnetic field by Emil Wiechert in 1899, improved oxide-coated cathodes which produced more electrons by Arthur Wehnelt in 1905 and the development of the electromagnetic lens in 1926 by Hans Busch.
Building an electron microscope involves combining these elements, similar to an optical microscope but with magnetic or electrostatic lenses instead of glass ones. To this day the issue of who invented the transmission electron microscope is controversial, as discussed by Thomas Mulvey and more recently by Yaping Tao. Extensive additional information can be found in the articles by Martin Freundlich, Reinhold Rüdenberg and Mulvey.
One effort was university based. In 1928, at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), Adolf Matthias (Professor of High Voltage Technology and Electrical Installations) appointed Max Knoll to lead a team of researchers to advance research on electron beams and cathode-ray oscilloscopes. The team consisted of several PhD students including Ernst Ruska. In 1931, Max Knoll and Ernst Ruska successfully generated magnified images of mesh grids placed over an anode aperture. The device, a replica of which is shown in Figure 5, used two magnetic lenses to achieve higher magnifications, making it the first electron microscope. (Max Knoll died in 1969, so did not receive a share of the Nobel Prize in Physics in 1986.)
Apparently independent of this effort was work at Siemens-Schuckert by Reinhold Rüdenberg. According to patent law (U.S. Patent No. 2058914 and 2070318, both filed in 1932), he is the inventor of the electron microscope, but it is not clear when he had a working instrument. He stated in a very brief article in 1932 that Siemens had been working on this for some years before the patents were filed, so his effort was parallel to the university effort. He died in 1961, so, like Max Knoll, he was not eligible for a share of the 1986 Nobel Prize.
These instruments could produce magnified images, but were not particularly useful for electron diffraction; indeed, the wave nature of electrons was not exploited during their development. Key for electron diffraction in microscopes was the advance in 1936 when Hans Boersch showed that they could be used as micro-diffraction cameras with an aperture—the birth of selected area electron diffraction.
Less controversial was the development of LEED—the early experiments of Davisson and Germer used this approach. As early as 1929 Germer investigated gas adsorption, and in 1932 Harrison E. Farnsworth probed single crystals of copper and silver. However, the vacuum systems available at that time were not good enough to properly control the surfaces, and it took almost forty years before these became available. Similarly, it was not until about 1965 that Peter B. Sewell and M. Cohen demonstrated the power of RHEED in a system with a very well controlled vacuum.
Subsequent developments in methods and modelling
Despite early successes such as the determination of the positions of hydrogen atoms in NH4Cl crystals by W. E. Laschkarew and I. D. Usykin in 1933, boric acid by John M. Cowley in 1953 and orthoboric acid by William Houlder Zachariasen in 1954, electron diffraction for many years was a qualitative technique used to check samples within electron microscopes. John M. Cowley explains in a 1968 paper: "Thus was founded the belief, amounting in some cases almost to an article of faith, and persisting even to the present day, that it is impossible to interpret the intensities of electron diffraction patterns to gain structural information." This has changed, in transmission, reflection and for low energies. Some of the key developments (some of which are also described later) from the early days to 2023 have been:
Fast numerical methods based upon the Cowley–Moodie multislice algorithm, which only became possible once the fast Fourier transform (FFT) method was developed. With the FFT and other numerical methods, Fourier transforms are fast, and it became possible to calculate accurate, dynamical diffraction in seconds to minutes on a laptop using widely available multislice programs; a minimal sketch of the multislice idea follows this list.
Developments in the convergent-beam electron diffraction approach. Building on the original work of Walther Kossel and Gottfried Möllenstedt in 1939, it was extended by Peter Goodman and Gunter Lehmpfuhl, then mainly by the groups of John Steeds and Michiyoshi Tanaka who showed how to determine point groups and space groups. It can also be used for higher-level refinements of the electron density; for a brief history see CBED history. In many cases this is the best method to determine symmetry.
The development of new approaches to reduce dynamical effects such as precession electron diffraction and three-dimensional diffraction methods. Averaging over different directions has, empirically, been found to significantly reduce dynamical diffraction effects, e.g., see PED history for further details. Not only is it easier to identify known structures with this approach, it can also be used to solve unknown structures in some cases – see precession electron diffraction for further information.
The development of experimental methods exploiting ultra-high vacuum technologies (e.g. the approach described by Daniel Alpert in 1953) to better control surfaces, making LEED and RHEED more reliable and reproducible techniques. In the early days the surfaces were not well controlled; with these technologies they can both be cleaned and remain clean for hours to days, a key component of surface science.
Fast and accurate methods to calculate intensities for LEED so it could be used to determine atomic positions. These have been extensively exploited to determine the structure of many surfaces, and the arrangement of foreign atoms on surfaces.
Methods to simulate the intensities in RHEED, so it can be used semi-quantitatively to understand surfaces during growth and thereby to control the resulting materials.
The development of advanced detectors for transmission electron microscopy such as charge-coupled device and direct electron detectors, which improve the accuracy and reliability of intensity measurements. These have efficiencies and accuracies that can be a thousand or more times that of the photographic film used in the earliest experiments, with the information available in real time rather than requiring photographic processing after the experiment.
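To give a concrete flavor of the multislice idea mentioned in the first item of this list, the following is a minimal sketch, not a production implementation: it alternates a thin-specimen phase shift with free-space (Fresnel) propagation, each step being a pair of FFTs. The function name, array sizes and toy potential are ours for illustration; real programs add band-limiting to avoid aliasing, proper relativistic interaction constants and sampling checks.

```python
import numpy as np

def multislice(psi, v_slices, wavelength, dz, dx):
    """Propagate a 2D wavefunction psi through a stack of projected-
    potential slices v_slices (already scaled to phase shifts, i.e.
    sigma times the projected potential), slice thickness dz, pixel
    size dx. Units: nm throughout."""
    n = psi.shape[0]
    k = np.fft.fftfreq(n, d=dx)                  # spatial frequencies, 1/nm
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Fresnel free-space propagator for one slice, in reciprocal space
    propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
    for v in v_slices:
        psi = psi * np.exp(1j * v)               # transmit through the slice
        psi = np.fft.ifft2(propagator * np.fft.fft2(psi))  # propagate onward
    return psi

# Usage sketch: a plane wave through 100 identical toy slices at 200 kV
n = 256
psi0 = np.ones((n, n), dtype=complex)
v = 0.05 * np.random.default_rng(0).random((n, n))   # toy phase object
exit_wave = multislice(psi0, [v] * 100, wavelength=0.00251, dz=0.2, dx=0.02)
diffraction_pattern = np.abs(np.fft.fftshift(np.fft.fft2(exit_wave)))**2
```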
Core elements of electron diffraction
Plane waves, wavevectors and reciprocal lattice
What is seen in an electron diffraction pattern depends upon the sample and also the energy of the electrons. The electrons need to be considered as waves, which involves describing the electron via a wavefunction, written in crystallographic notation (see note) as:

$\psi(\mathbf{r}) = \exp(2\pi i \mathbf{k} \cdot \mathbf{r})$

for a position $\mathbf{r}$. This is a quantum mechanics description; one cannot use a classical approach. The vector $\mathbf{k}$ is called the wavevector, has units of inverse nanometers, and the form above is called a plane wave as the term inside the exponential is constant on the surface of a plane. The vector $\mathbf{k}$ is what is used when drawing ray diagrams, and in vacuum is parallel to the direction of travel or, better, the group velocity or probability current of the plane wave. For most cases the electrons are travelling at a respectable fraction of the speed of light, so rigorously need to be considered using relativistic quantum mechanics via the Dirac equation, which, as spin does not normally matter, can be reduced to the Klein–Gordon equation. Fortunately one can side-step many complications and use a non-relativistic approach based around the Schrödinger equation. Following Kunio Fujiwara and Archibald Howie, the relationship between the total energy of the electrons $E$ and the wavevector is written as:

$E = \frac{h^2 k^2}{2 m^*}$

with

$m^* = m_0 \left(1 + \frac{E}{2 m_0 c^2}\right)$

where $h$ is the Planck constant and $m^*$ is a relativistic effective mass used to cancel out the relativistic terms for electrons of energy $E$, with $c$ the speed of light and $m_0$ the rest mass of the electron. The concept of effective mass occurs throughout physics (see for instance Ashcroft and Mermin), and comes up in the behavior of quasiparticles. A common one is the electron hole, which acts as if it is a particle with a positive charge and a mass similar to that of an electron, although it can be several times lighter or heavier. For electron diffraction the electrons behave as if they are non-relativistic particles of mass $m^*$ in terms of how they interact with the atoms.
The wavelength $\lambda$ of the electrons in vacuum is, from the above equations,

$\lambda = \frac{1}{|\mathbf{k}|} = \frac{h}{\sqrt{2 m^* E}}$

and can range from about 0.1 nm, roughly the size of an atom, down to a thousandth of that. Typically the energy of the electrons is written in electronvolts (eV), the voltage used to accelerate the electrons; the actual energy of each electron is this voltage times the electron charge. For context, the typical energy of a chemical bond is a few eV; electron diffraction involves electrons from tens of eV up to several million eV.
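As a worked check of these relations, the short script below (a minimal sketch; the function name is ours) evaluates the relativistic wavelength for a few representative accelerating voltages:

```python
import math

H = 6.62607015e-34          # Planck constant, J s
M0 = 9.1093837015e-31       # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
C = 2.99792458e8            # speed of light, m/s

def electron_wavelength(volts):
    """Relativistic de Broglie wavelength (nm) of an electron
    accelerated through `volts`; E = eV is the kinetic energy."""
    E = volts * E_CHARGE
    # momentum from E_total^2 = (pc)^2 + (m0 c^2)^2, E_total = E + m0 c^2
    p = math.sqrt(2.0 * M0 * E * (1.0 + E / (2.0 * M0 * C**2)))
    return H / p * 1e9

for v in (100, 30e3, 200e3, 1e6):
    print(f"{v:>9.0f} V -> {electron_wavelength(v):.5f} nm")
# 100 V (LEED regime) gives ~0.12 nm, comparable to atomic spacings;
# 200 kV (typical TEM) gives ~0.0025 nm, a hundredth of an atom.
```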
The magnitude of the interaction of the electrons with a material scales as the interaction constant

$\sigma = \frac{2\pi m^* e \lambda}{h^2}$

While the wavevector increases as the energy increases, the change in the effective mass compensates this, so even at the very high energies used in electron diffraction there are still significant interactions.
The high-energy electrons interact with the Coulomb potential, which for a crystal can be considered in terms of a Fourier series (see for instance Ashcroft and Mermin), that is

$V(\mathbf{r}) = \sum_{\mathbf{g}} V_{\mathbf{g}} \exp(2\pi i \mathbf{g} \cdot \mathbf{r})$

with $\mathbf{g}$ a reciprocal lattice vector and $V_{\mathbf{g}}$ the corresponding Fourier coefficient of the potential. The reciprocal lattice vector is often referred to in terms of Miller indices $(h, k, l)$, a sum of the individual reciprocal lattice vectors $\mathbf{a}^*, \mathbf{b}^*, \mathbf{c}^*$ with integers in the form:

$\mathbf{g} = h\,\mathbf{a}^* + k\,\mathbf{b}^* + l\,\mathbf{c}^*$

(Sometimes reciprocal lattice vectors are written as $\mathbf{b}_1$, $\mathbf{b}_2$, $\mathbf{b}_3$; see note.) The contribution from the $V_{\mathbf{g}}$ needs to be combined with what is called the shape function, which is the Fourier transform of the shape of the object. If, for instance, the object is small in one dimension then the shape function extends far in that direction in the Fourier transform—a reciprocal relationship.
Around each reciprocal lattice point one has this shape function. How much intensity there will be in the diffraction pattern depends upon the intersection of the Ewald sphere, that is energy conservation, and the shape function around each reciprocal lattice point—see Figures 6, 20 and 22. The vector from a reciprocal lattice point to the Ewald sphere is called the excitation error $\mathbf{s}_{\mathbf{g}}$.
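For illustration, the excitation error can be computed from simple geometry. The sketch below (our own function and numbers, assuming the incident beam along z and the crystallographic $1/\lambda$ convention for wavevector magnitudes) shows how small it is for a typical TEM reflection:

```python
import numpy as np

def excitation_error(g, k_mag):
    """Excitation error s_g (1/nm): signed distance, measured along the
    beam (z) direction, from a reciprocal-lattice point g = (gx, gy, gz)
    to the Ewald sphere of an incident wavevector (0, 0, k_mag)."""
    gx, gy, gz = g
    return np.sqrt(k_mag**2 - gx**2 - gy**2) - k_mag - gz

# 200 kV electrons: wavelength ~0.00251 nm, so |k| = 1/lambda ~ 398 1/nm.
k = 1.0 / 0.00251
# A silicon (220)-like reflection, |g| = 1/d = 1/0.192 nm = 5.21 1/nm,
# lying in the plane normal to the beam (gz = 0):
print(excitation_error((5.21, 0.0, 0.0), k))   # ~ -0.034 1/nm, very small
```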
For transmission electron diffraction the samples used are thin, so most of the shape function is along the direction of the electron beam. For both LEED and RHEED the shape function is mainly normal to the surface of the sample. In LEED this results, as a simplification, in back-reflection of the electrons leading to spots, see Figures 20 and 21 later, whereas in RHEED the electrons reflect off the surface at a small angle and typically yield diffraction patterns with streaks, see Figures 22 and 23 later. By comparison, with both x-ray and neutron diffraction the scattering is significantly weaker, so typically requires much larger crystals, in which case the shape function shrinks to just around the reciprocal lattice points, leading to simpler Bragg's law diffraction.
For all cases, when the reciprocal lattice points are close to the Ewald sphere (the excitation error is small) the intensity tends to be higher; when they are far away it tends to be smaller. The set of diffraction spots at right angles to the direction of the incident beam are called the zero-order Laue zone (ZOLZ) spots, as shown in Figure 6. One can also have intensities further out from reciprocal lattice points which are in a higher layer. The first of these is called the first order Laue zone (FOLZ); the series is called by the generic name higher order Laue zone (HOLZ).
The result is that the electron wave after it has been diffracted can be written as an integral over different plane waves:

$\psi(\mathbf{r}) = \int \phi(\mathbf{u}) \exp(2\pi i \mathbf{u} \cdot \mathbf{r})\, d\mathbf{u}$

that is a sum of plane waves going in different directions, each with a complex amplitude $\phi(\mathbf{u})$. (This is a three-dimensional integral, which is often written as $d^3\mathbf{u}$ rather than $d\mathbf{u}$.) For a crystalline sample these wavevectors have to be of the same magnitude for elastic scattering (no change in energy), and are related to the incident direction $\mathbf{k}$ by (see Figure 6)

$\mathbf{u} = \mathbf{k} + \mathbf{g} + \mathbf{s}_{\mathbf{g}}$
A diffraction pattern detects the intensities

$I(\mathbf{u}) = |\phi(\mathbf{u})|^2$

For a crystal these will be near the reciprocal lattice points, typically forming a two-dimensional grid. Different samples and modes of diffraction give different results, as do different approximations for the amplitudes $\phi(\mathbf{u})$.
A typical electron diffraction pattern in TEM and LEED is a grid of high intensity spots (white) on a dark background, approximating a projection of the reciprocal lattice vectors, see Figure 1, 9, 10, 11, 14 and 21 later. There are also cases which will be mentioned later where diffraction patterns are not periodic, see Figure 15, have additional diffuse structure as in Figure 16, or have rings as in Figure 12, 13 and 24. With conical illumination as in CBED they can also be a grid of discs, see Figure 7, 9 and 18. RHEED is slightly different, see Figure 22, 23. If the excitation errors were zero for every reciprocal lattice vector, this grid would be at exactly the spacings of the reciprocal lattice vectors. This would be equivalent to a Bragg's law condition for all of them. In TEM the wavelength is small and this is close to correct, but not exact. In practice the deviation of the positions from a simple Bragg's law interpretation is often neglected, particularly if a column approximation is made (see below).
Kinematical diffraction
In kinematical theory an approximation is made that the electrons are only scattered once. For transmission electron diffraction it is common to assume a constant thickness $t$, and also what is called the column approximation (e.g. references and further reading). For a perfect crystal the intensity for each diffraction spot $\mathbf{g}$ is then:

$I_{\mathbf{g}} = |F_{\mathbf{g}}|^2 \left(\frac{\sin(\pi t s_z)}{\pi s_z}\right)^2$

where $s_z$ is the magnitude of the excitation error along z, the distance along the beam direction (z-axis by convention) from the diffraction spot to the Ewald sphere, and $F_{\mathbf{g}}$ is the structure factor:

$F_{\mathbf{g}} = \sum_i f_i \exp(-2\pi i \mathbf{g} \cdot \mathbf{r}_i) \exp(-B_i s^2)$

the sum being over all the atoms in the unit cell, with $f_i$ the form factors, $\mathbf{g}$ the reciprocal lattice vector and $\exp(-B_i s^2)$ a simplified form of the Debye–Waller factor; $\mathbf{k}_{\mathbf{g}}$ is the wavevector for the diffracted beam, which is:

$\mathbf{k}_{\mathbf{g}} = \mathbf{k}_0 + \mathbf{g} + \mathbf{s}_{\mathbf{g}}$

for an incident wavevector of $\mathbf{k}_0$, as in Figure 6 and above. The excitation error comes in as the outgoing wavevector $\mathbf{k}_{\mathbf{g}}$ has to have the same modulus (i.e. energy) as the incoming wavevector $\mathbf{k}_0$. The intensity in transmission electron diffraction oscillates as a function of thickness, which can be confusing; there can similarly be intensity changes due to variations in orientation and also structural defects such as dislocations. If a diffraction spot is strong it could be because it has a larger structure factor, or it could be because the combination of thickness and excitation error is "right". Similarly the observed intensity can be small, even though the structure factor is large. This can complicate interpretation of the intensities. By comparison, these effects are much smaller in x-ray diffraction or neutron diffraction because they interact with matter far less, and often Bragg's law is adequate.
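A minimal numerical sketch of these two formulas follows (our own function names; constant form factors and no Debye–Waller damping, purely for illustration). It also reproduces the familiar fcc selection rule that mixed-parity Miller indices give a vanishing structure factor:

```python
import numpy as np

def structure_factor(positions, form_factors, g):
    """Kinematical structure factor F_g = sum_i f_i exp(-2*pi*i g.r_i),
    positions in fractional coordinates, g as Miller indices (h, k, l).
    The Debye-Waller damping is omitted here for brevity."""
    phases = np.exp(-2j * np.pi * (np.asarray(positions) @ np.asarray(g)))
    return np.sum(np.asarray(form_factors) * phases)

def kinematical_intensity(F_g, thickness, s_z):
    """Kinematical intensity |F_g|^2 times the sinc-squared oscillation
    in thickness t and excitation error s_z (the thickness oscillations
    discussed above)."""
    return abs(F_g)**2 * (np.sin(np.pi * thickness * s_z) / (np.pi * s_z))**2

# Example: an fcc cell (4 identical atoms); (110) is forbidden, (200) is not
fcc = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
print(abs(structure_factor(fcc, [1, 1, 1, 1], (1, 1, 0))))  # ~0
print(abs(structure_factor(fcc, [1, 1, 1, 1], (2, 0, 0))))  # 4
```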
This form is a reasonable first approximation which is qualitatively correct in many cases, but more accurate forms including multiple scattering (dynamical diffraction) of the electrons are needed to properly understand the intensities.
Dynamical diffraction
While kinematical diffraction is adequate to understand the geometry of the diffraction spots, it does not correctly give the intensities and has a number of other limitations. For a more complete approach one has to include multiple scattering of the electrons using methods that date back to the early work of Hans Bethe in 1928. These are based around solutions of the Schrödinger equation using the relativistic effective mass described earlier. Even at very high energies dynamical diffraction is needed as the relativistic mass and wavelength partially cancel, so the role of the potential is larger than might be thought.
The main components of current dynamical diffraction of electrons include:
Taking into account the scattering back into the incident beam both from diffracted beams and between all others, not just single scattering from the incident beam to diffracted beams. This is important even for samples which are only a few atoms thick.
Modelling at least semi-empirically the role of inelastic scattering by an imaginary component of the potential, also called an "optical potential". There is always inelastic scattering, and often it can have a major effect on both the background and sometimes the details, see Figure 7 and 18.
Higher-order numerical approaches to calculate the intensities, such as multislice, matrix methods which are called Bloch-wave approaches, or muffin-tin approaches. With these, diffraction spots which are not present in kinematical theory can appear.
Contributions to the diffraction from elastic strain and crystallographic defects, and also what Jens Lindhard called the string potential.
For transmission electron microscopes effects due to variations in the thickness of the sample and the normal to the surface.
Both in the geometry of scattering and calculations, for both LEED and RHEED, effects due to the presence of surface steps, surface reconstructions and other atoms at the surface. Often these change the diffraction details significantly.
For LEED, more careful analyses of the potential are used because contributions from exchange terms can be important. Without these the calculations may not be accurate enough.
Kikuchi lines
Kikuchi lines, first observed by Seishi Kikuchi in 1928, are linear features created by electrons scattered both inelastically and elastically. As the electron beam interacts with matter, the electrons are diffracted via elastic scattering, and also scattered inelastically losing part of their energy. These occur simultaneously, and cannot be separated – according to the Copenhagen interpretation of quantum mechanics, only the probabilities of electrons at detectors can be measured. These electrons form Kikuchi lines which provide information on the orientation.
Kikuchi lines come in pairs forming Kikuchi bands, and are indexed in terms of the crystallographic planes they are connected to, with the angular width of the band equal to the magnitude of the corresponding diffraction vector . The position of Kikuchi bands is fixed with respect to each other and the orientation of the sample, but not against the diffraction spots or the direction of the incident electron beam. As the crystal is tilted, the bands move on the diffraction pattern. Since the position of Kikuchi bands is quite sensitive to crystal orientation, they can be used to fine-tune a zone-axis orientation or determine crystal orientation. They can also be used for navigation when changing the orientation between zone axes connected by some band, an example of such a map produced by combining many local sets of experimental Kikuchi patterns is in Figure 8; Kikuchi maps are available for many materials.
Types and techniques
In a transmission electron microscope
Electron diffraction in a TEM exploits controlled electron beams using electron optics. Different types of diffraction experiments, for instance Figure 9, provide information such as lattice constants and symmetries, and can sometimes be used to solve an unknown crystal structure.
It is common to combine it with other methods, for instance images using selected diffraction beams, high-resolution images showing the atomic structure, chemical analysis through energy-dispersive x-ray spectroscopy, investigations of electronic structure and bonding through electron energy loss spectroscopy, and studies of the electrostatic potential through electron holography; this list is not exhaustive. Compared to x-ray crystallography, TEM analysis is significantly more localized and can be used to obtain information from tens of thousands of atoms to just a few or even single atoms.
Formation of a diffraction pattern
In TEM, the electron beam passes through a thin film of the material as illustrated in Figure 10. Before and after the sample the beam is manipulated by the electron optics including magnetic lenses, deflectors and apertures; these act on the electrons similar to how glass lenses focus and control light. Optical elements above the sample are used to control the incident beam which can range from a wide and parallel beam to one which is a converging cone and can be smaller than an atom, 0.1 nm. As it interacts with the sample, part of the beam is diffracted and part is transmitted without changing its direction. This occurs simultaneously as electrons are everywhere until they are detected (wavefunction collapse) according to the Copenhagen interpretation.
Below the sample, the beam is controlled by another set of magnetic lenses and apertures. Each set of initially parallel rays (a plane wave) is focused by the first lens (objective) to a point in the back focal plane of this lens, forming a spot on a detector; a map of these directions, often an array of spots, is the diffraction pattern. Alternatively the lenses can form a magnified image of the sample. Herein the focus is on collecting a diffraction pattern; for other information see the pages on TEM and scanning transmission electron microscopy.
Selected area electron diffraction
The simplest diffraction technique in TEM is selected area electron diffraction (SAED) where the incident beam is wide and close to parallel. An aperture is used to select a particular region of interest from which the diffraction is collected. These apertures are part of a thin foil of a heavy metal such as tungsten which has a number of small holes in it. This way diffraction information can be limited to, for instance, individual crystallites. Unfortunately the method is limited by the spherical aberration of the objective lens, so is only accurate for large grains with tens of thousands of atoms or more; for smaller regions a focused probe is needed.
If a parallel beam is used to acquire a diffraction pattern from a single crystal, the result is similar to a two-dimensional projection of the crystal reciprocal lattice. From this one can determine interplanar distances and angles and in some cases crystal symmetry, particularly when the electron beam is down a major zone axis, see for instance the database by Jean-Paul Morniroli. However, projector lens aberrations such as barrel distortion, as well as dynamical diffraction effects, cannot be ignored. For instance, certain diffraction spots which are not present in x-ray diffraction can appear, such as those due to Gjønnes–Moodie extinction conditions.
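In practice, interplanar distances are obtained from the measured distance of a spot from the central beam using the small-angle camera equation $r/L = \lambda/d$, where $L$ is the camera length. A minimal sketch with hypothetical numbers (measurement, camera length and function name are all ours):

```python
def d_spacing(r_mm, camera_length_mm, wavelength_nm):
    """Interplanar spacing d (nm) from the spot distance r on the
    detector, via the small-angle camera equation r/L = lambda/d."""
    return wavelength_nm * camera_length_mm / r_mm

# hypothetical measurement: spot 10.4 mm from the central beam,
# camera length 800 mm, 200 kV electrons (lambda ~ 0.00251 nm)
print(d_spacing(10.4, 800.0, 0.00251))   # ~0.193 nm, close to Si (220)
```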
If the sample is tilted relative to the electron beam, different sets of crystallographic planes contribute to the pattern yielding different types of diffraction patterns, approximately different projections of the reciprocal lattice, see Figure 11. This can be used to determine the crystal orientation, which in turn can be used to set the orientation needed for a particular experiment. Furthermore, a series of diffraction patterns varying in tilt can be acquired and processed using a diffraction tomography approach. There are ways to combine this with direct methods algorithms using electrons and other methods such as charge flipping, or automated diffraction tomography to solve crystal structures.
Polycrystalline pattern
Diffraction patterns depend on whether the beam is diffracted by one single crystal or by a number of differently oriented crystallites, for instance in a polycrystalline material. If there are many contributing crystallites, the diffraction image is a superposition of individual crystal patterns, see Figure 12. With a large number of grains this superposition yields diffraction spots of all possible reciprocal lattice vectors. This results in a pattern of concentric rings as shown in Figure 12 and 13.
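Ring patterns are indexed by comparing ring radii, which scale as $|\mathbf{g}| = 1/d$, with the allowed reflections of candidate phases. A small sketch for fcc aluminium (the lattice parameter and reflection list are standard textbook values; the script itself is ours):

```python
import numpy as np

# Allowed fcc reflections (h, k, l all even or all odd) for aluminium,
# lattice parameter a = 0.405 nm; ring radius scales as |g| = 1/d.
a = 0.405
for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]:
    d = a / np.sqrt(sum(i * i for i in hkl))
    print(hkl, f"d = {d:.4f} nm, ring at |g| = {1/d:.2f} 1/nm")
```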
Textured materials yield a non-uniform distribution of intensity around the ring, which can be used to discriminate between nanocrystalline and amorphous phases. However, diffraction often cannot differentiate between very-small-grain polycrystalline materials and truly amorphous ones with random order. Here high-resolution transmission electron microscopy and fluctuation electron microscopy can be more powerful, although this is still a topic of continuing development.
Multiple materials and double diffraction
In simple cases there is only one grain or one type of material in the area used for collecting a diffraction pattern. However, often there is more than one. If they are in different areas then the diffraction pattern will be a combination. In addition there can be one grain on top of another, in which case the electrons that go through the first are diffracted by the second. Electrons have no memory (like many of us), so after they have gone through the first grain and been diffracted, they traverse the second as if their current direction was that of the incident beam. This leads to diffraction spots which are the vector sum of those of the two (or even more) reciprocal lattices of the crystals, and can lead to complicated results. It can be difficult to know if this is real and due to some novel material, or just a case where multiple crystals and double diffraction are leading to odd results.
Bulk and surface superstructures
Many materials have relatively simple structures based upon small unit cell vectors (see also note). There are many others where the repeat is some larger multiple of the smaller unit cell (subcell) along one or more directions, for instance one which has larger dimensions in two directions. These superstructures can arise for many reasons:
Larger unit cells due to electronic ordering which leads to small displacements of the atoms in the subcell. One example is antiferroelectricity ordering.
Chemical ordering, that is different atom types at different locations of the subcell.
Magnetic order of the spins. These may be in opposite directions on some atoms, leading to what is called antiferromagnetism.
In addition to those which occur in the bulk, superstructures can also occur at surfaces. When half the material is (nominally) removed to create a surface, some of the atoms will be under coordinated. To reduce their energy they can rearrange. Sometimes these rearrangements are relatively small; sometimes they are quite large. Similar to a bulk superstructure there will be additional, weaker diffraction spots. One example is for the silicon (111) surface, where there is a supercell which is seven times larger than the simple bulk cell in two directions. This leads to diffraction patterns with additional spots some of which are marked in Figure 14. Here the (220) are stronger bulk diffraction spots, and the weaker ones due to the surface reconstruction are marked 7 × 7—see note for convention comments.
Aperiodic materials
In an aperiodic crystal the structure can no longer be simply described by three different vectors in real or reciprocal space. In general there is a substructure describable by three vectors, similar to the supercells above, but in addition there are one to three further periodicities which cannot be described as multiples of the three; each is a genuine additional periodicity which is an irrational number relative to the subcell lattice. The diffraction pattern can then only be described by more than three indices.
An extreme example of this is for quasicrystals, which can be described similarly by a higher number of Miller indices in reciprocal space—but not by any translational symmetry in real space. An example of this is shown in Figure 15 for an Al–Cu–Fe–Cr decagonal quasicrystal grown by magnetron sputtering on a sodium chloride substrate and then lifted off by dissolving the substrate with water. In the pattern there are pentagons which are a characteristic of the aperiodic nature of these materials.
Diffuse scattering
A further step beyond superstructures and aperiodic materials is what is called diffuse scattering in electron diffraction patterns due to disorder, which is also known for x-ray or neutron scattering. This can occur from inelastic processes, for instance, in bulk silicon the atomic vibrations (phonons) are more prevalent along specific directions, which leads to streaks in diffraction patterns. Sometimes it is due to arrangements of point defects. Completely disordered substitutional point defects lead to a general background which is called Laue monotonic scattering. Often there is a probability distribution for the distances between point defects or what type of substitutional atom there is, which leads to distinct three-dimensional intensity features in diffraction patterns. An example of this is for a Nb0.83CoSb sample, with the diffraction pattern shown in Figure 16. Because of the vacancies at the niobium sites, there is diffuse intensity with snake-like structure due to correlations of the distances between vacancies and also the relaxation of Co and Sb atoms around these vacancies.
Convergent beam electron diffraction
In convergent beam electron diffraction (CBED), the incident electrons are normally focused in a converging cone-shaped beam with a crossover located at the sample, e.g. Figure 17, although other methods exist. Unlike the parallel beam, the convergent beam is able to carry information from the sample volume, not just a two-dimensional projection available in SAED. With convergent beam there is also no need for the selected area aperture, as it is inherently site-selective since the beam crossover is positioned at the object plane where the sample is located.
A CBED pattern consists of disks arranged similarly to the spots in SAED. Intensity within the disks represents dynamical diffraction effects and symmetries of the sample structure, see Figures 7 and 18. Even though the zone axis and lattice parameter analysis based on disk positions does not significantly differ from SAED, the analysis of the disk contents is more complex and simulations based on dynamical diffraction theory are often required. As illustrated in Figure 18, the details within the disk change with sample thickness, as does the inelastic background. With appropriate analysis CBED patterns can be used for identification of the crystal point group, space group identification, measurement of lattice parameters, thickness or strain.
The disk diameter can be controlled using the microscope optics and apertures. The larger the angle, the broader the disks and the more features they contain. If the angle is increased significantly, the disks begin to overlap. This is avoided in large-angle convergent-beam electron diffraction (LACBED), where the sample is moved upwards or downwards. There are applications, however, where the overlapping disks are beneficial, for instance with a ronchigram. It is a CBED pattern, often but not always of an amorphous material, with many intentionally overlapping disks providing information about the optical aberrations of the electron optical system.
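The condition for neighboring disks to just touch is that the convergence semi-angle equals the Bragg angle of the corresponding reflection, $\theta_B \approx \lambda/2d$ for small angles. A one-line estimate with typical numbers (ours, for illustration):

```python
wavelength = 0.00251        # nm, 200 kV electrons
d = 0.313                   # nm, silicon (111) interplanar spacing
theta_bragg = wavelength / (2 * d)   # small-angle Bragg angle, radians
print(f"disks touch at a semi-angle of ~{theta_bragg * 1e3:.1f} mrad")  # ~4 mrad
```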
Precession electron diffraction
Precession electron diffraction (PED), invented by Roger Vincent and Paul Midgley in 1994, is a method to collect electron diffraction patterns in a transmission electron microscope (TEM). The technique involves rotating (precessing) a tilted incident electron beam around the central axis of the microscope, compensating for the tilt after the sample so a spot diffraction pattern is formed, similar to a SAED pattern. However, a PED pattern is an integration over a collection of diffraction conditions, see Figure 19. This integration produces a quasi-kinematical diffraction pattern that is more suitable as input into direct methods algorithms using electrons to determine the crystal structure of the sample. Because it avoids many dynamical effects it can also be used to better identify crystallographic phases.
4D STEM
4D scanning transmission electron microscopy (4D STEM) is a subset of scanning transmission electron microscopy (STEM) methods which uses a pixelated electron detector to capture a convergent beam electron diffraction (CBED) pattern at each scan location; see the main page for further information. This technique captures a two-dimensional reciprocal-space image associated with each scan point as the beam rasters across a two-dimensional region in real space, hence the name 4D STEM. Its development was enabled by better STEM detectors and improvements in computational power. The technique has applications in diffraction contrast imaging, phase orientation and identification, strain mapping, and atomic resolution imaging among others; it has become very popular and has been evolving rapidly from about 2020 onwards.
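One common processing step, forming a virtual-detector image by integrating each CBED pattern over a chosen detector-plane mask, can be sketched in a few lines (array shapes and names are ours; real datasets come from the instrument rather than a random-number generator):

```python
import numpy as np

def virtual_image(data4d, mask):
    """Virtual-detector image from a 4D STEM dataset: data4d has shape
    (scan_y, scan_x, det_ky, det_kx); mask is a boolean detector-plane
    array selecting, e.g., a bright-field disc or an annular region."""
    return np.sum(data4d * mask, axis=(2, 3))

# toy example: random data and an annular (ADF-like) detector mask
rng = np.random.default_rng(1)
data = rng.random((64, 64, 128, 128))
ky, kx = np.mgrid[-64:64, -64:64]
radius = np.hypot(kx, ky)
adf_image = virtual_image(data, (radius > 30) & (radius < 60))
```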
The name 4D STEM is common in literature, however it is known by other names: 4D STEM EELS, ND STEM (N- since the number of dimensions could be higher than 4), position resolved diffraction (PRD), spatial resolved diffractometry, momentum-resolved STEM, "nanobeam precision electron diffraction", scanning electron nano diffraction, nanobeam electron diffraction, or pixelated STEM. Most of these are the same, although there are instances such as momentum-resolved STEM where the emphasis can be very different.
Low-energy electron diffraction (LEED)
Low-energy electron diffraction (LEED) is a technique for the determination of the surface structure of single-crystalline materials by bombardment with a collimated beam of low-energy electrons (30–200 eV). In this case the Ewald sphere leads to approximately back-reflection, as illustrated in Figure 20, and diffracted electrons as spots on a fluorescent screen as shown in Figure 21; see the main page for more information and references. It has been used to solve a very large number of relatively simple surface structures of metals and semiconductors, plus cases with simple chemisorbants. For more complex cases transmission electron diffraction or surface x-ray diffraction have been used, often combined with scanning tunneling microscopy and density functional theory calculations.
LEED may be used in one of two ways:
Qualitatively, where the diffraction pattern is recorded and analysis of the spot positions gives information on the symmetry of the surface structure. In the presence of an adsorbate the qualitative analysis may reveal information about the size and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell.
Quantitatively, where the intensities of diffracted beams are recorded as a function of incident electron beam energy to generate the so-called I–V curves. By comparison with theoretical curves, these may provide accurate information on atomic positions on the surface.
Reflection high-energy electron diffraction (RHEED)
Reflection high-energy electron diffraction (RHEED) is a technique used to characterize the surface of crystalline materials by reflecting electrons off a surface. As illustrated for the Ewald sphere construction in Figure 22, it uses mainly the higher-order Laue zones which have a reflection component. An experimental diffraction pattern is shown in Figure 23 and shows both rings from the higher-order Laue zones and streaky spots. RHEED systems gather information only from the surface layers of the sample, which distinguishes RHEED from other materials characterization methods that also rely on diffraction of electrons. Transmission electron microscopy samples mainly the bulk of the sample, although in special cases it can provide surface information. Low-energy electron diffraction (LEED) is also surface sensitive, but it achieves surface sensitivity through the use of low-energy electrons. The main uses of RHEED to date have been during thin film growth, as the geometry is amenable to simultaneous collection of the diffraction data and deposition. It can, for instance, be used to monitor surface roughness during growth by looking at both the shapes of the streaks in the diffraction pattern as well as variations in the intensities.
Gas electron diffraction
Gas electron diffraction (GED) can be used to determine the geometry of molecules in gases. A gas carrying the molecules is exposed to the electron beam, which is diffracted by the molecules. Since the molecules are randomly oriented, the resulting diffraction pattern consists of broad concentric rings, see Figure 24. The diffraction intensity is a sum of several components such as background, atomic intensity or molecular intensity.
In GED the diffraction intensity at a particular diffraction angle $\theta$ is described via a scattering variable defined as

$s = \frac{4\pi}{\lambda} \sin(\theta/2)$

The total intensity is then given as a sum of partial contributions:

$I_{tot}(s) = I_a(s) + I_m(s) + I_t(s) + I_b(s)$

where $I_a$ results from scattering by individual atoms, $I_m$ by pairs of atoms and $I_t$ by atom triplets. Intensity $I_b$ corresponds to the background which, unlike the previous contributions, must be determined experimentally. The intensity of atomic scattering is defined as

$I_a(s) = \frac{K^2}{R^2} I_0 \sum_{i=1}^{N} |f_i(s)|^2$

where $K$ is a constant, $R$ is the distance between the scattering object and the detector, $I_0$ is the intensity of the primary electron beam and $f_i(s)$ is the scattering amplitude of the $i$-th atom of the molecular structure in the experiment. $I_a$ is the main contribution and is easily obtained for known gas composition. Note that the vector $s$ used here is not the same as the excitation error used in other areas of diffraction, see earlier.
The most valuable information is carried by the intensity of molecular scattering $I_m$, as it contains information about the distance between all pairs of atoms in the molecule. It is given by

$I_m(s) = \frac{K^2}{R^2} I_0 \sum_{i=1}^{N} \sum_{j \neq i}^{N} |f_i(s)||f_j(s)| \, \frac{\sin[s(r_{ij} - \kappa s^2)]}{s\, r_{ij}} \exp\!\left(-\frac{l_{ij}^2 s^2}{2}\right) \cos[\eta_i(s) - \eta_j(s)]$

where $r_{ij}$ is the distance between two atoms, $l_{ij}$ is the mean square amplitude of vibration between the two atoms, similar to a Debye–Waller factor, $\kappa$ is the anharmonicity constant and $\eta$ a phase factor which is important for atomic pairs with very different nuclear charges. The summation is performed over all atom pairs. Atomic triplet intensity $I_t$ is negligible in most cases. If the molecular intensity is extracted from an experimental pattern by subtracting other contributions, it can be used to match and refine a structural model against the experimental data.
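As an illustration of the damped-sine form of the molecular term, the sketch below evaluates a stripped-down $I_m(s)$ for a single atom pair, dropping the prefactor $K^2 I_0/R^2$, the anharmonicity and the phase term; all numbers and names are ours:

```python
import numpy as np

def molecular_intensity(s, r, l, f1, f2):
    """Simplified molecular scattering for one atom pair: a sine in s*r
    damped by the vibrational amplitude l (the anharmonicity and phase
    terms of the full expression are omitted for brevity)."""
    return f1 * f2 * np.exp(-0.5 * (l * s)**2) * np.sin(s * r) / (s * r)

s = np.linspace(1.0, 30.0, 500)   # scattering variable, 1/angstrom
# hypothetical diatomic: r = 1.13 angstrom, l = 0.04 angstrom,
# constant scattering amplitudes purely for illustration
I_m = molecular_intensity(s, r=1.13, l=0.04, f1=1.0, f2=1.0)
```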
Similar methods of analysis have also been applied to analyze electron diffraction data from liquids.
In a scanning electron microscope
In a scanning electron microscope the region near the surface can be mapped using an electron beam that is scanned in a grid across the sample. A diffraction pattern can be recorded using electron backscatter diffraction (EBSD), as illustrated in Figure 25, captured with a camera inside the microscope. A depth from a few nanometers to a few microns, depending upon the electron energy used, is penetrated by the electrons, some of which are diffracted backwards and out of the sample. As a result of combined inelastic and elastic scattering, typical features in an EBSD image are Kikuchi lines. Since the position of Kikuchi bands is highly sensitive to the crystal orientation, EBSD data can be used to determine the crystal orientation at particular locations of the sample. The data are processed by software yielding two-dimensional orientation maps. As the Kikuchi lines carry information about the interplanar angles and distances and, therefore, about the crystal structure, they can also be used for phase identification or strain analysis.
Notes
References
Further reading
A text with extensive coverage of kinematical and other diffraction.
A broad coverage of many different areas of electron microscopy, with large numbers of references.
A work often called the bible of electron microscopy.
A large coverage of topics related to dynamical diffraction and CBED.
A very extensive coverage of modern dynamical diffraction.
A recent textbook with many images, stronger on experimental aspects.
An older source for experimental details, albeit hard to find.
Applied and interdisciplinary physics
Crystallography
Diffraction
Electron
Electron microscopy
Materials science
Quantum mechanics
Scattering
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 10,623 | [
"Electron",
"Electron microscopy",
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Molecular physics",
"Theoretical physics",
"Materials science",
"Quantum mechanics",
"Scattering",
"Crystallography",
"Diffraction",
"Particle physics",
"Condensed matter physics",
... |
Electric power industry
The electric power industry covers the generation, transmission, distribution and sale of electric power to the general public and industry. The commercial distribution of electric power started in 1882, when electricity was produced for electric lighting. In the 1880s and 1890s, growing economic and safety concerns led to the regulation of the industry. Once an expensive novelty limited to the most densely populated areas, reliable and economical electric power has become an essential aspect of the normal operation of all elements of developed economies.
By the middle of the 20th century, electricity was seen as a "natural monopoly", only efficient if a restricted number of organizations participated in the market; in some areas, vertically-integrated companies provide all stages from generation to retail, and only governmental supervision regulated the rate of return and cost structure.
Since the 1990s, many regions have broken up the generation and distribution of electric power. While such markets can be abusively manipulated with consequent adverse price and reliability impact to consumers, generally competitive production of electrical energy leads to worthwhile improvements in efficiency. However, transmission and distribution are harder problems since returns on investment are not as easy to find.
History
Although electricity had been known to be produced as a result of the chemical reactions that take place in an electrolytic cell since Alessandro Volta developed the voltaic pile in 1800, its production by this means was, and still is, expensive. In 1831, Michael Faraday devised a machine that generated electricity from rotary motion, but it took almost 50 years for the technology to reach a commercially viable stage. In 1878, in the United States, Thomas Edison developed and sold a commercially viable replacement for gas lighting and heating using locally generated and distributed direct current electricity.
Robert Hammond, in December 1881, demonstrated the new electric light in the Sussex town of Brighton in the UK for a trial period. The ensuing success of this installation enabled Hammond to put this venture on both a commercial and legal footing, as a number of shop owners wanted to use the new electric light. Thus the Hammond Electricity Supply Co. was launched.
In early 1882, Edison opened the world's first steam-powered electricity generating station at Holborn Viaduct in London, where he had entered into an agreement with the City Corporation for a period of three months to provide street lighting. In time he had supplied a number of local consumers with electric light. The method of supply was direct current (DC). Whilst the Godalming scheme of 1881 and the 1882 Holborn Viaduct scheme closed after a few years, the Brighton scheme continued on, and supply was made available for 24 hours per day in 1887.
Later in the same year, in September 1882, Edison opened the Pearl Street Power Station in New York City, and again it was a DC supply. It was for this reason that the generation was close to or on the consumer's premises, as Edison had no means of voltage conversion. The voltage chosen for any electrical system is a compromise. For a given amount of power transmitted, increasing the voltage reduces the current and therefore reduces the required wire thickness. Unfortunately it also increases the danger from direct contact and increases the required insulation thickness. Furthermore, some load types were difficult or impossible to make work with higher voltages. The overall effect was that Edison's system required power stations to be within a mile of the consumers. While this could work in city centres, it would be unable to economically supply suburbs with power.
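The trade-off can be made concrete: for a fixed delivered power P the line current is I = P/V, so the resistive loss I²R falls with the square of the voltage. A minimal sketch with illustrative DC figures (all values are ours):

```python
def line_loss(power_w, volts, resistance_ohm):
    """Resistive loss I^2 R in a line delivering power_w at the given
    voltage (simple illustrative DC calculation only)."""
    current = power_w / volts
    return current**2 * resistance_ohm

# 100 kW delivered over a line with 0.5 ohm total resistance:
for v in (110, 1_000, 11_000):
    print(f"{v:>6} V -> {line_loss(100e3, v, 0.5) / 1e3:.3f} kW lost")
# At 110 V the nominal loss (~413 kW) exceeds the power delivered,
# i.e. delivery is impossible; at 11 kV the loss is ~0.041 kW.
```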
The mid to late 1880s saw the introduction of alternating current (AC) systems in Europe and the U.S. AC power had an advantage in that transformers, installed at power stations, could be used to raise the voltage from the generators, and transformers at local substations could reduce voltage to supply loads. Increasing the voltage reduced the current in the transmission and distribution lines and hence the size of conductors and distribution losses. This made it more economical to distribute power over long distances. Generators (such as hydroelectric sites) could be located far from the loads. AC and DC competed for a while, during a period called the war of the currents. The DC system was able to claim slightly greater safety, but this difference was not great enough to overwhelm the enormous technical and economic advantages of alternating current which eventually won out.
The AC power system used today developed rapidly, backed by industrialists such as George Westinghouse; Mikhail Dolivo-Dobrovolsky, Galileo Ferraris, Sebastian Ziani de Ferranti, Lucien Gaulard, John Dixon Gibbs, Carl Wilhelm Siemens, William Stanley Jr., Nikola Tesla and others contributed to this field.
Power electronics is the application of solid-state electronics to the control and conversion of electric power. Power electronics started with the development of the mercury arc rectifier in 1902, used to convert AC into DC. From the 1920s on, research continued on applying thyratrons and grid-controlled mercury arc valves to power transmission. Grading electrodes made them suitable for high voltage direct current (HVDC) power transmission. In 1933, selenium rectifiers were invented. Transistor technology dates back to 1947, with the invention of the point-contact transistor, which was followed by the bipolar junction transistor (BJT) in 1948. By the 1950s, higher power semiconductor diodes became available and started replacing vacuum tubes. In 1956, the silicon controlled rectifier (SCR) was introduced, increasing the range of power electronic applications.
A breakthrough in power electronics came with the invention of the MOSFET (metal-oxide-semiconductor field-effect transistor) in 1959. Generations of MOSFETs enabled power designers to achieve performance and density levels not possible with bipolar transistors. In 1969, Hitachi introduced the first vertical power MOSFET, which would later be known as the VMOS (V-groove MOSFET). The power MOSFET has since become the most common power device in the world, due to its low gate drive power, fast switching speed, easy advanced paralleling capability, wide bandwidth, ruggedness, easy drive, simple biasing, ease of application, and ease of repair.
While HVDC is increasingly being used to transmit large quantities of electricity over long distances or to connect adjacent asynchronous power systems, the bulk of electricity generation, transmission, distribution and retailing takes place using alternating current.
Organization
The electric power industry is commonly split up into four processes. These are electricity generation such as a power station, electric power transmission, electricity distribution and electricity retailing. In many countries, electric power companies own the whole infrastructure from generating stations to transmission and distribution infrastructure. For this reason, electric power is viewed as a natural monopoly. The industry is generally heavily regulated, often with price controls and is frequently government-owned and operated. However, the modern trend has been growing deregulation in at least the latter two processes.
The nature and state of market reform of the electricity market often determines whether electric companies are able to be involved in just some of these processes without having to own the entire infrastructure, or citizens choose which components of infrastructure to patronise. In countries where electricity provision is deregulated, end-users of electricity may opt for more costly green electricity.
Generation
Generation is the conversion of some primary energy source into electric power suitable for commercial use on an electrical grid. Most commercial electric power is produced by rotating electrical machines, "generators", which move conductors through a magnetic field to produce electric current. The generator is rotated by some other prime mover machine; in typical grid-connected generators this is a steam turbine, a gas turbine, or a hydraulic turbine. Primary energy sources for these machines are often fossil fuels (coal, oil, natural gas), nuclear fission, geothermal steam, or falling water. Renewable sources such as wind and solar energy are increasingly of commercial importance.
Since electrical generation must be closely matched with electrical consumption, enough generation capacity must be installed to meet peak demands. At the same time, primary energy sources must be selected to minimize the cost of produced electrical energy. Generally the lowest-incremental-cost source of electrical energy will be the next unit connected to meet rising demand. Electrical generators have automatic controls to regulate the power fed into the electrical transmission system, adjusting generator output moment by moment to balance with electrical demand. For a large grid with scores or hundreds of generators connected and thousands of loads, management of stable generator supply is a problem with significant challenges, to meet economic, environmental and reliability requirements. For example, low-incremental-cost generation sources such as nuclear power plants may be run continually to meet the average "base load" of the connected system, whereas more costly peaking power plants such as natural gas turbines may be run for brief times during the day to meet peak loads. Alternatively, load management strategies may encourage more even demand for electrical power and reduce costly peaks. Designated generator units for a particular electrical grid may be run at partial output only, to provide "spinning reserve" for sudden increases in demand or faults with other generating units.
In addition to electrical power production, electrical generation units may provide other ancillary services to the electrical grid, such as frequency control, reactive power, and black start of a collapsed power grid. These ancillary services may be commercially valuable when the generation, transmission, and distribution electrical companies are separate commercial entities.
Electric power transmission
Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation. The interconnected lines which facilitate this movement are known as a transmission network. This is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The combined transmission and distribution network is known as the "power grid" in North America, or just "the grid". In the United Kingdom, India, Malaysia and New Zealand, the network is known as the National Grid.
A wide area synchronous grid, also known as an "interconnection" in North America, directly connects many generators delivering AC power with the same relative frequency to numerous consumers. For example, there are four major interconnections in North America (the Western Interconnection, the Eastern Interconnection, the Quebec Interconnection and the Electric Reliability Council of Texas (ERCOT) grid). In Europe one large grid connects most of continental Europe.
Historically, transmission and distribution lines were owned by the same company, but starting in the 1990s, many countries have liberalized the regulation of the electricity market in ways that have led to the separation of the electricity transmission business from the distribution business.
Electric power distribution
Electric power distribution is the final stage in the delivery of electric power; it carries electricity from the transmission system to individual consumers. Distribution substations connect to the transmission system and lower the transmission voltage to medium voltage ranging between 2 kV and 35 kV with the use of transformers. Primary distribution lines carry this medium voltage power to distribution transformers located near the customer's premises. Distribution transformers again lower the voltage to the utilization voltage used by lighting, industrial equipment or household appliances. Often several customers are supplied from one transformer through secondary distribution lines. Commercial and residential customers are connected to the secondary distribution lines through service drops. Customers demanding a much larger amount of power may be connected directly to the primary distribution level or the subtransmission level.
Electric retailing
Electricity retailing is the final sale of electricity from generation to the end-use consumer.
World electricity industries
The organization of the electrical sector of a country or region varies depending on the economic system of the country. In some places, all electric power generation, transmission and distribution is provided by a government controlled organization. Other regions have private or investor-owned utility companies, city or municipally owned companies, cooperative companies owned by their own customers, or combinations. Generation, transmission and distribution may be offered by a single company, or different organizations may provide each of these portions of the system.
Not everyone has access to grid electricity. About 840 million people (mostly in Africa) had no access in 2017, down from 1.2 billion in 2010.
Market reform
The business model behind the electric utility has changed over the years, playing a vital role in shaping the electricity industry into what it is today, from generation and transmission through distribution to the final local retailing. This has occurred prominently since the reform of the electricity supply industry in England and Wales in 1990.
United States
In 1996 – 1999 the Federal Energy Regulatory Commission (FERC) made a series of decisions which were intended to open the U.S. wholesale power market to new players, with the hope that spurring competition would save consumers $4 to $5 billion per year and encourage technical innovation in the industry. Steps were taken to give all market participants open access to existing interstate transmission lines.
Order No. 888 ordered vertically integrated electric utilities to functionally separate their transmission, power generation and marketing businesses to prevent self-dealing.
Order No. 889 set up a system to provide all participants with timely access to information about available transmission capacity and prices.
The FERC also endorsed the concept of appointing independent system operators (ISOs) to manage the electric power grid – a function that was traditionally the responsibility of vertically integrated electric utility companies. The concept of an independent system operator evolved into that of regional transmission organizations (RTOs). FERC's intention was that all U.S. companies owning interstate electric transmission lines would place those facilities under the control of an RTO. In its Order No. 2000 (Regional Transmission Organizations), issued in 1999, FERC specified the minimum capabilities that an RTO should possess.
These decisions, which were intended to create a fully interconnected grid and an integrated national power market, resulted in the restructuring of the U.S. electricity industry. That process was soon dealt two setbacks: the California energy crisis of 2000, and the Enron scandal and collapse. Although industry restructuring proceeded, these events made clear that competitive markets could be manipulated and thus must be properly designed and monitored. Furthermore, the Northeast blackout of 2003 highlighted the need for a dual focus on competitive pricing and strong reliability standards.
Other countries
In some countries, wholesale electricity markets operate, with generators and retailers trading electricity in a similar manner to shares and currency. As deregulation continues, utilities are driven to sell their assets as the energy market, following the gas market, adopts futures and spot markets and other financial arrangements. Globalization, in the form of foreign purchases of utilities, is also taking place. One such purchase was when the UK's National Grid, the largest private electric utility in the world, bought several electric utilities in New England for $3.2 billion. Between 1995 and 1997, seven of the 12 Regional Electric Companies (RECs) in England and Wales were bought by U.S. energy companies. Domestically, local electric and gas firms have merged operations as they saw the advantages of joint affiliation, especially the reduced cost of joint metering. Technological advances are expected in the competitive wholesale electric markets; examples already in use include fuel cells used in space flight, aeroderivative gas turbines used in jet aircraft, solar engineering and photovoltaic systems, off-shore wind farms, and the communication advances spawned by the digital world, particularly microprocessing, which aids in monitoring and dispatching.
Outlook
Electricity is expected to see growing demand in the future. The Information Revolution is highly reliant on electric power. Other growth areas include emerging new electricity-exclusive technologies, developments in space conditioning, industrial processes, and transportation (for example hybrid vehicles, locomotives).
See also
AC power
Distributed generation
Emissions & Generation Resource Integrated Database
Meter Point Administration Number, a unique UK supply number
National Grid (disambiguation)
North American Electric Reliability Corporation
Rate Case
Reddy Kilowatt, a U.S. electricity corporate logo
Samuel Insull
References
Further reading
P. Strange, "Early Electricity Supply in Britain: Chesterfield and Godalming", IEEE Proceedings (1979).
D. G. Tucker, "Hydro-Electricity for Public Supply in Britain", Industrial Archaeology Review, (1977).
B. Bowers, A History of Electric Light & Power, Peregrinus (1982).
T. P. Hughes, Networks of Power, Johns Hopkins Press London (1983).
IRENA, Innovation Landscape for a Renewable-Powered Future: Solutions to Integrate Variable Renewables (2019).
Electric power
Energy industry
Industries (economics) | Electric power industry | ["Physics", "Engineering"] | 3,338 | ["Power (physics)", "Electrical engineering", "Electric power", "Physical quantities"] |
277,810 | https://en.wikipedia.org/wiki/Trace%20metal | Trace metals are the metals subset of trace elements; that is, metals normally present in small but measurable amounts in animal and plant cells and tissues. Some of these trace metals are a necessary part of nutrition and physiology. Some biometals are trace metals. Ingestion of, or exposure to, excessive quantities can be toxic. However, insufficient plasma or tissue levels of certain trace metals can cause pathology, as is the case with iron.
Trace metals within the human body include iron, lithium, zinc, copper, chromium, nickel, cobalt, vanadium, molybdenum, manganese and others.
Some of the trace metals are needed by living organisms to function properly and are depleted through the expenditure of energy by various metabolic processes of living organisms. They are replenished in animals through diet as well as environmental exposure, and in plants through the uptake of nutrients from the soil in which the plant grows. Human vitamin pills and plant fertilizers can be a source of trace metals.
Trace metals are sometimes referred to as trace elements, although the latter includes minerals and is a broader category. See also Dietary mineral. Trace elements are required by the body for specific functions. Vitamins, sports drinks, and fresh fruits and vegetables are sources. Taken in excessive amounts, trace elements can cause problems. For example, fluorine is required for the formation of bones and of the enamel on teeth. However, when taken in excessive amounts it can cause a condition called fluorosis, in which bone deformations and yellowing of teeth are seen. Fluorine can occur naturally in ground water in some areas.
Iron
Humans
Roughly 5 grams of iron are present in the human body, making it the most abundant trace metal. It is absorbed in the intestine as heme or non-heme iron, depending on the food source. Heme iron is derived from the digestion of hemoproteins in meat. Non-heme iron is mainly derived from plants and exists as iron(II) or iron(III) ions.
Iron is essential for more than 500 hemoproteins, including hemoglobin and myoglobin, which account for 80% of iron usage. The other 20% is present in ferritin, hemosiderin, iron-sulfur (Fe/S) proteins such as ferrochelatase, and more.
Zinc
Humans
Zinc is relatively non-toxic to humans and is the second most abundant trace metal; the body contains 2–3 grams of it. It can enter the body through inhalation, skin absorption, and ingestion, the last being the most common route. The mucosal cells of the digestive tract contain metallothionein proteins that store zinc ions.
Nearly 90% of zinc is found in the bones, muscles, and vesicles in the brain. Zinc is a cofactor in hundreds of enzyme reactions and a major component of zinc finger proteins.
Copper
Humans
Copper is the third most abundant trace metal in the human body.
It is found in cytochrome c oxidase, a protein necessary for the electron transport chain in mitochondria.
References
Biochemistry
Nutrition
Physiology
Metals
Analytical chemistry
Geochemistry | Trace metal | ["Chemistry", "Biology"] | 663 | ["Biochemistry", "Metals", "nan", "Physiology"] |
278,148 | https://en.wikipedia.org/wiki/Formula%20SAE | Formula SAE is a student design competition organized by SAE International (previously known as the Society of Automotive Engineers, SAE). The competition was started in 1980 by the SAE student branch at the University of Texas at Austin after a prior asphalt racing competition proved to be unsustainable.
Concept
The concept behind Formula SAE is that a fictional manufacturing company has contracted a student design team to develop a small Formula-style race car. The prototype race car is to be evaluated for its potential as a production item. The target marketing group for the race car is the non-professional weekend autocross racer. Each student team designs, builds and tests a prototype based on a series of rules, whose purpose is both ensuring on-track safety (the cars are driven by the students themselves) and promoting clever problem solving. There are combustion and electric divisions of the competition, primarily only differing in their rules for powertrain.
The prototype race car is judged in a number of different events. The points schedule for most Formula SAE events is:
In addition to these events, various sponsors of the competition provide awards for superior design accomplishments. For example, best use of E-85 ethanol fuel, innovative use of electronics, recyclability, crash worthiness, analytical approach to design, and overall dynamic performance are some of the awards available. At the beginning of the competition, the vehicle is checked for rule compliance during the Technical Inspection. Its braking ability, rollover stability and noise levels are checked before the vehicle is allowed to compete in the dynamic events (Skidpad, Autocross, Acceleration, and Endurance).
Large companies, such as General Motors, Ford, and Chrysler, can have staff interact with more than 1000 student engineers. Working in teams of anywhere between two and 30, these students have proven themselves to be capable of producing a functioning prototype vehicle.
The volunteers for the design judging include some of the racing industry's most prominent engineers and consultants including the late Carroll Smith, Bill Mitchell, Doug Milliken, Claude Rouelle, Jack Auld, John LePlante, Ron Tauranac, and Bryan Kubala.
Today, the competition has expanded and includes more than 12 events all over the world. For example, Formula Student is a similar SAE-sanctioned event in the UK, as well as Formula SAE Australasia (Formula SAE-A) taking place in Australia. The Verein Deutscher Ingenieure (VDI) holds the Formula Student Germany competition at the Hockenheimring.
In 2007, an offshoot called Formula Hybrid was inaugurated. It is similar to Formula SAE, except all cars must have gasoline-electric hybrid power plants. The competition takes place at the New Hampshire International Speedway.
In 2010, the Formula Student Electric was inaugurated, which requires the students to build a fully electrically powered racing vehicle.
In 2017, the Formula Student Driverless was inaugurated.
Summary of rules
Student competition
Formula SAE has relatively few performance restrictions. The team must be made up entirely of active college students (including drivers) which places obvious restrictions on available work hours, skill sets, experience, and presents unique challenges that professional race teams do not face with a paid, skilled staff. This restriction means that the rest of the regulations can be much less restrictive than most professional series.
Students are allowed to receive advice and criticism from professional engineers or faculty, but all of the car design must be done by the students themselves. Students are also solely responsible for fundraising, though most successful teams are based on curricular programs and have university-sponsored budgets. Additionally, the points system is organized so that multiple strategies can lead to success. This leads to a great variety among cars, which is a rarity in the world of motorsports.
Engine (IC Competition)
The engine must be a four-stroke, Otto-cycle piston engine with a displacement no greater than 710 cc. An air restrictor of circular cross-section must be fitted downstream of the throttle and upstream of any compressor, with a diameter no greater than 20 mm for gasoline engines (forced-induction or naturally aspirated) or 19 mm for ethanol-fueled engines. The restrictor keeps power levels below 100 hp in the vast majority of FSAE cars. Most commonly, production four-cylinder 600 cc sport-bike engines are used due to their availability and displacement. However, many teams use smaller V-twin and single-cylinder engines, mainly for their weight-saving and packaging benefits. Very rarely do teams build an engine from scratch; a few examples include Western Washington University's 554 cc V8 entry in 2001, the University of Melbourne's "WATTARD" engine in 2003–2004, and the University of Auckland's V-twin.
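The restrictor caps engine power by limiting air mass flow. A back-of-the-envelope upper bound follows from the isentropic choked-flow relation for an ideal gas; the ambient conditions and the discharge coefficient of 1.0 in the sketch below are assumptions, and real intakes flow somewhat less.

```python
import math

# Theoretical choked mass flow through a 20 mm restrictor (isentropic flow,
# ideal gas). Ambient conditions and a discharge coefficient of 1 are assumed.
gamma, R = 1.4, 287.0       # air: heat-capacity ratio, gas constant [J/(kg K)]
p0, T0 = 101_325.0, 298.0   # assumed stagnation pressure [Pa] and temperature [K]
d = 0.020                   # restrictor throat diameter [m]
A = math.pi * (d / 2) ** 2  # throat area [m^2]

m_dot = (A * p0 * math.sqrt(gamma / (R * T0))
         * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1))))
print(f"max air mass flow ~ {m_dot:.3f} kg/s")  # ~0.075 kg/s
```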
Electric Powertrain (EV Competition)
The accumulator must not have a voltage greater than 600 V, but there is no capacity limit. An energy meter is installed at competition to ensure that no more than 80 kW are drawn. Most teams elect to use lithium-ion cells, but lead-acid cells as well as other energy storage devices, such as capacitors, are also permitted; this is why the battery pack is referred to as an accumulator in this competition. Cell voltages and temperatures must be monitored, and individual cells connected via fusible links. These challenges lead many (especially young) teams to use preconfigured cell modules that are connected together in the accumulator enclosure. The competition organizers prepare teams for the EV technical inspection by having them complete an Electrical Safety Form (ESF) prior to competition; this form outlines many of the parts used in the high-voltage system as well as the design decisions the team is making.
Suspension
The suspension is unrestricted save for safety regulations and the requirement of 50 mm of total wheel travel. Most teams opt for four-wheel independent suspension, almost universally double-wishbone. Active suspension is legal.
Aerodynamics
Complex aerodynamic packages, while not required, are common among the fastest teams at competition. With the low speeds of the FSAE competition rarely exceeding , designs must be thoroughly justified in the design judging event through wind-tunnel testing, computational fluid dynamics, and on-track testing. Aerodynamic devices are regulated through maximum size, and powered aerodynamic devices are outlawed.
Weight
There is no weight restriction. The weight of the average competitive Formula SAE car is usually less than in race trim. However, the lack of weight regulation combined with the somewhat fixed power ceiling encourages teams to adopt innovative weight-saving strategies, such as the use of composite materials, elaborate and expensive machining projects, and rapid prototyping. In 2009 the fuel economy portion of the endurance event was assigned 100 of the 400 endurance points, up from 50. This rules change has marked a trend in engine downsizing in an attempt to save weight and increase fuel economy. Several top-running teams have switched from high-powered four-cylinder cars to smaller, one- or two-cylinder engines which, though they usually make much less power, allow weight savings of or more, and also provide much better fuel economy. If a lightweight single-cylinder car can keep a reasonable pace in the endurance race, it can often make up the points lost in overall time to the heavier, high-powered cars by an exceptional fuel economy score.
Example: At the 2009 Formula SAE West endurance event, third-place finishers Rochester Institute of Technology completed the endurance course in 22 minutes, 45 seconds with their four-cylinder car, while fourth-place finishers Oregon State University finished in 22 minutes, 47 seconds with their single-cylinder car; this gave RIT 290.6 of 300 points for the race portion of the event and OSU 289.2 points. However, OSU used the least fuel of any car (, or over the entire endurance race) and received the full 100 points for fuel economy, while RIT used () and was thus only awarded 23.9 of the available points. RIT went on to win the overall competition by only 8.9 points over OSU, having scored slightly better in all of the other dynamic events.
Safety
The majority of the regulations pertain to safety. Cars must have two steel roll hoops of designated thickness and alloy, regardless of the composition of the rest of the chassis. There must be an impact attenuator in the nose, and impact testing data on this attenuator must be submitted prior to competing. Cars must also have two hydraulic brake circuits, full five-point racing harnesses, and must meet geometric templates for driver location in the cockpit for all drivers competing. Tilt-tests ensure that no fluids will spill from the car under heavy cornering, and there must be no line-of-sight between the driver and fuel, coolant, or oil lines. Electric vehicles are also fitted with a Shutdown Circuit, which is the physical electrical path the current takes that closes the main contactors of the vehicle. All safety buttons, switches, and circuits are part of the Shutdown Circuit such that removing any should make it physically impossible for high voltage to be present outside the accumulator.
History
In 1979 the only SAE Mini-Indy was held at the University of Houston. Conceived by Dr. Kurt M. Marshek, the competition was inspired by a how-to article that appeared in Popular Mechanics magazine, for a small, "Indy-style" vehicle made out of wood, and powered by a five-horsepower Briggs and Stratton engine. Using the Mini Baja competitions as a guide, engineering students had to design and build small, "Indy-style" vehicles using the same stock engine used in the Popular Mechanics article. Thirteen schools entered and eleven competed. The University of Texas at El Paso won the overall competition under chief supervision.
Although Dr. William Shapton (who had recently left the University of Cincinnati to join Michigan Technological University) broached the idea of hosting a similar competition in 1980, no one stepped up to organize another Mini-Indy.
In 1980 when the members of the new SAE student branch at the University of Texas (Austin) learned that the Mini-Indy had died, they generated the concept for a new intercollegiate student engineering design competition that would allow students to apply what they were learning in the classroom to a complex, real-world engineering design problem: design and development of a race car. UT SAE student branch members Robert Edwards and John Tellkamp led a discussion among UT SAE members and envisioned a competition that would involve designing and constructing a race car along the lines of the SCCA Formula 440 entry-level racing series that was popular at the time. Prof. Matthews came up with the “Formula SAE” name following the format of Formula A and Formula Vee but emphasizing that this new race car was an engineering competition rather than a driver's competition. Schools would meet after the end of the academic year to compete and determine who had built the best car. Edwards, Tellkamp, and fellow UT SAE students Joe Green, Dick Morton, Mike Best, and Carl Morris drafted a set of safety and competition rules and presented them to the SAE student branch membership and to UT SAE Faculty Advisor Prof. Ron Matthews. Prof. Matthews then contacted Bob Sechler of the SAE Educational Relations Department at SAE headquarters and asked for his permission both to establish the new intercollegiate student engineering design competition and to host the first Formula SAE competition during the summer of 1981, and he agreed. The newly formed UT SAE branch, consisting mostly of automotive and motorcycle enthusiasts pursuing engineering degrees, including several who had left careers in fields for which the job market had virtually disappeared due to the depressed economy in the early 1980s – including some experienced auto mechanics, embraced and adopted the concept with little idea of what they were getting themselves into. SAE student branch officers Mike Best, Carl Morris, and Sylvia Obregon, along with Dr. Matthews began planning and organizing the event to be held the following year.
Here, it is important to note that Formula SAE was NOT a simple renaming of the Mini-Indy competition but was instead an entirely new intercollegiate student engineering design competition. Unlike all previous SAE-sanctioned student racing/design competitions including Mini-Indy, the Formula SAE rules left the selection of the engine to the design team, as long as a 4 stroke engine with a one-inch diameter intake restrictor was used. (The current Formula SAE rules allow the teams to use 4-stroke engines up to 710 cc, with a smaller restrictor.) Also, unlike all previous SAE-sanctioned student racing/design competitions including Mini-Indy, engine modifications were both allowed and encouraged.
The first Formula SAE competition was held in the parking lot of the UT baseball field (Disch-Falk field) on the University of Texas campus on Memorial Day weekend, 1981. Judges included legendary race car engineer/owner/driver and Indy 500 champion Jim Hall. While a sudden Texas rain storm sent everyone scrambling for cover just before the endurance event that day, the weather failed to dampen the spirits of the students, judges, or spectators and Formula SAE was born.
The University of Texas continued to host the event from 1982 to 1984 as the popularity and number of participants grew. In these subsequent years, UT moved the Formula SAE competition to other parking areas that included elevation changes and driveway aprons that forced the use of functioning suspensions. The event became international in 1982 with the entry of Universidad La Salle team from Mexico City. The significant rules changes for 1982 were: 1) a displacement limit of 600 cc (300 cc for Wankels), but the 1 inch diameter restrictor rule was retained, 2) a requirement for 4-wheel independent suspension (Mini-Indy did not have any suspension rules), and 3) the addition of a temporary “B&S” class of vehicles that were originally designed for Mini-Baja, had to retain the 8 hp Briggs & Stratton engine, and did not need to comply with the 4-wheel independent suspension rule. Formula SAE continued to be an international competition when the team from Universidad La Salle returned. With the only engine restrictions being a displacement limit of 600 cc and a 1-inch maximum diameter for the intake, creativity flourished. Also in 1983, the temporary B&S class was eliminated, the University of Texas at Austin entered the first composite Formula SAE vehicle and Marquette University entered the first turbocharged engine. The rules allowed a Formula SAE car to compete for two years in recognition of the effort required to build and test a quality car. This also allowed students the experience of re-engineering and improving on design elements that did not work. The rules for 1984 specifically allowed turbochargers, superchargers, and use of nitrous oxide but the engine had to breathe through a 25.4 mm exit bore of the carburetor casting (1984 was well before electronic fuel injection). Engine intake restrictors were later tightened as cars became faster year over year as knowledge was passed on within and between teams. Also, a 65-100 inch wheelbase rule was promulgated, as was a rule requiring all vehicles to have a “body that resembles a formula car”. The Formula SAE field had grown to eleven cars in 1984, so the University of Texas at Austin decided that the competition had matured sufficiently that it was safe to pass it on to other hosts.
The University of Texas at Austin hosted the competition through 1984. In 1985, the competition was hosted by The University of Texas at Arlington. There, Dr. Robert Woods, with guidance from the SAE student activities committee, changed the concept of the competition from one where students built a pure racing car, to one that mirrored the SAE Mini-Baja competitions, where they were to design and build a vehicle for limited series production.
General Motors hosted the competition in 1991, Ford Motor Co. in 1992, and Chrysler Corp. in 1993. After the 1992 competition, the three formed a consortium to run Formula SAE.
At the end of the 2008 competition, the consortium ceased to exist. The event is now funded by SAE through company sponsorships and donations along with the teams' enrollment fees.
Winners
See also
Baja SAE
Formula Student
References
Bass, Edward A., Larry M. Bendele, and Scott T. McBroom (1990), "The 1989 Formula SAE Student Design Competition", SAE Paper 900840, doi:10.4271/900840.
Beckel, Stephen A., Sylvia Obregon, and Ronald D. Matthews (1982), "The 1982 National Intercollegiate Formula SAE Competition", SAE Paper 821093, doi:10.4271/821093.
Matthews, Ronald D., Richard K. Morton, and Billy H. Wood (1983), "The 1983 Formula SAE Championship Competition", SAE Paper 831390, 1983, doi:10.4271/831390.
Matthews, Ronald D., Dan Worcester, Billy Wood, and Tim Ryan (1984), "The 1984 Formula SAE Intercollegiate Competition", SAE Paper 841163, doi:10.4271/841163.
External links
Formula SAE
Formula SAE Online
FStotal.com - Formula SAE and Formula Student News, Tips, Pictures, Videos, ...
HowStuffWorks.com - How does a Formula SAE Car work?
Ratchet Blog - A blog with the most commonly occurring issues of Formula SAE teams
Sae
Formula Sae
Mechanical engineering competitions
es:Fórmula SAE | Formula SAE | ["Engineering"] | 3,583 | ["Mechanical engineering competitions", "Mechanical engineering"] |
278,366 | https://en.wikipedia.org/wiki/Moment%20%28physics%29 | A moment is a mathematical expression involving the product of a distance and a physical quantity such as a force or electric charge. Moments are usually defined with respect to a fixed reference point and refer to physical quantities located some distance from the reference point. For example, the moment of force, often called torque, is the product of a force on an object and the distance from the reference point to the object. In principle, any physical quantity can be multiplied by a distance to produce a moment. Commonly used quantities include forces, masses, and electric charge distributions; a list of examples is provided later.
Elaboration
In its most basic form, a moment is the product of the distance to a point, raised to a power, and a physical quantity (such as force or electrical charge) at that point:

$\mu_n = r^n\,Q,$

where $Q$ is the physical quantity such as a force applied at a point, or a point charge, or a point mass, etc. If the quantity is not concentrated solely at a single point, the moment is the integral of that quantity's density over space:

$\mu_n = \int r^n \rho(r)\,dr,$

where $\rho$ is the distribution of the density of charge, mass, or whatever quantity is being considered.
More complex forms take into account the angular relationships between the distance and the physical quantity, but the above equations capture the essential feature of a moment, namely the existence of an underlying $r^n \rho(r)$ or equivalent term. This implies that there are multiple moments (one for each value of n) and that the moment generally depends on the reference point from which the distance is measured, although for certain moments (technically, the lowest non-zero moment) this dependence vanishes and the moment becomes independent of the reference point.
Each value of n corresponds to a different moment: the 1st moment corresponds to n = 1; the 2nd moment to n = 2, etc. The 0th moment (n = 0) is sometimes called the monopole moment; the 1st moment (n = 1) is sometimes called the dipole moment, and the 2nd moment (n = 2) is sometimes called the quadrupole moment, especially in the context of electric charge distributions.
Examples
The moment of force, or torque, is a first moment: $\boldsymbol{\tau} = rF$, or, more generally, $\mathbf{r} \times \mathbf{F}$.
Similarly, angular momentum is the 1st moment of momentum: $\mathbf{L} = \mathbf{r} \times \mathbf{p}$. Momentum itself is not a moment.
The electric dipole moment is also a 1st moment: $\mathbf{p} = q\,\mathbf{d}$ for two opposite point charges, or $\int \mathbf{r}\,\rho(\mathbf{r})\,d^3r$ for a distributed charge with charge density $\rho(\mathbf{r})$.
Moments of mass:
The total mass is the zeroth moment of mass.
The center of mass is the 1st moment of mass normalized by total mass: $\mathbf{R} = \frac{1}{M}\sum_i m_i \mathbf{r}_i$ for a collection of point masses, or $\frac{1}{M}\int \mathbf{r}\,\rho(\mathbf{r})\,d^3r$ for an object with mass distribution $\rho(\mathbf{r})$.
The moment of inertia is the 2nd moment of mass: $I = mr^2$ for a point mass, $\sum_i m_i r_i^2$ for a collection of point masses, or $\int r^2\rho(\mathbf{r})\,d^3r$ for an object with mass distribution $\rho(\mathbf{r})$. The center of mass is often (but not always) taken as the reference point.
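A quick numeric check of these definitions for point masses; the masses and positions below are arbitrary example values:

```python
import numpy as np

# Arbitrary example: three point masses [kg] at positions [m].
m = np.array([2.0, 1.0, 3.0])
r = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])

M = m.sum()                            # zeroth moment: total mass
R = (m[:, None] * r).sum(axis=0) / M   # first moment / M: center of mass
d2 = ((r - R) ** 2).sum(axis=1)        # squared distances from the center of mass
I = (m * d2).sum()                     # second moment: moment of inertia about R

print(M, R, I)  # 6.0, [0.1667 1.0 0.0], and I about the center of mass
```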
Multipole moments
Assuming a density function that is finite and localized to a particular region, outside that region a 1/r potential may be expressed as a series of spherical harmonics:

$\Phi(\mathbf{r}) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} \left(\frac{4\pi}{2l+1}\right) q_{lm}\,\frac{Y_{lm}(\theta,\varphi)}{r^{\,l+1}}.$

The coefficients $q_{lm}$ are known as multipole moments, and take the form:

$q_{lm} = \int \rho(\mathbf{r}')\,(r')^{\,l}\,Y^{*}_{lm}(\theta',\varphi')\,d^3r',$

where $\mathbf{r}'$ expressed in spherical coordinates $(r', \theta', \varphi')$ is a variable of integration. A more complete treatment may be found in pages describing multipole expansion or spherical multipole moments. (The convention in the above equations was taken from Jackson – the conventions used in the referenced pages may be slightly different.)

When $\rho$ represents an electric charge density, the $q_{lm}$ are, in a sense, projections of the moments of electric charge: $q_{00}$ is the monopole moment; the $q_{1m}$ are projections of the dipole moment, the $q_{2m}$ are projections of the quadrupole moment, etc.
Applications of multipole moments
The multipole expansion applies to 1/r scalar potentials, examples of which include the electric potential and the gravitational potential. For these potentials, the expression can be used to approximate the strength of a field produced by a localized distribution of charges (or mass) by calculating the first few moments. For sufficiently large r, a reasonable approximation can be obtained from just the monopole and dipole moments. Higher fidelity can be achieved by calculating higher order moments. Extensions of the technique can be used to calculate interaction energies and intermolecular forces.
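As an illustration of approximating a field from its lowest moments, the sketch below compares the exact on-axis potential of two opposite point charges with its dipole-term approximation at increasing distance. The charge, separation, and unit convention (1/(4πε₀) = 1) are arbitrary assumptions; note that the monopole term vanishes here because the total charge is zero.

```python
q, d = 1.0, 0.1   # arbitrary charge and separation (units with 1/(4*pi*eps0) = 1)

def phi_exact(z):
    # Charges +q at z = +d/2 and -q at z = -d/2, potential evaluated on the axis.
    return q / abs(z - d / 2) - q / abs(z + d / 2)

def phi_dipole(z):
    # Dipole term only: moment p = q*d gives potential p / z**2 on the axis.
    return q * d / z**2

for z in (0.5, 1.0, 5.0):
    exact, approx = phi_exact(z), phi_dipole(z)
    print(f"z={z}: exact={exact:.6f} dipole={approx:.6f} "
          f"rel. err={(approx - exact) / exact:.2%}")  # error shrinks with distance
```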
The technique can also be used to determine the properties of an unknown distribution . Measurements pertaining to multipole moments may be taken and used to infer properties of the underlying distribution. This technique applies to small objects such as molecules,
but has also been applied to the universe itself, being for example the technique employed by the WMAP and Planck experiments to analyze the cosmic microwave background radiation.
History
In works believed to stem from Ancient Greece, the concept of a moment is alluded to by the word ῥοπή (rhopḗ, "inclination") and composites like ἰσόρροπα (isorropa, "of equal inclinations"). The context of these works is mechanics and geometry involving the lever. In particular, in extant works attributed to Archimedes, the moment is pointed out in phrasings like:
"Commensurable magnitudes ( ) [A and B] are equally balanced () if their distances [to the center Γ, i.e., ΑΓ and ΓΒ] are inversely proportional () to their weights ()."
Moreover, in extant texts such as The Method of Mechanical Theorems, moments are used to infer the center of gravity, area, and volume of geometric figures.
In 1269, William of Moerbeke translates various works of Archimedes and Eutocious into Latin. The term ῥοπή is transliterated into ropen.
Around 1450, Jacobus Cremonensis translates ῥοπή in similar texts into the Latin term momentum ("movement"). The same term is kept in a 1501 translation by Giorgio Valla, and subsequently by Francesco Maurolico, Federico Commandino, Guidobaldo del Monte, Adriaan van Roomen, Florence Rivault, Francesco Buonamici, Marin Mersenne, and Galileo Galilei. That said, why was the word momentum chosen for the translation? One clue, according to Treccani, is that momento in Medieval Italy, the place the early translators lived, in a transferred sense meant both a "moment of time" and a "moment of weight" (a small amount of weight that turns the scale).
In 1554, Francesco Maurolico clarifies the Latin term momentum in the work Prologi sive sermones. Here is a Latin to English translation as given by Marshall Clagett:
"[...] equal weights at unequal distances do not weigh equally, but unequal weights [at these unequal distances may] weigh equally. For a weight suspended at a greater distance is heavier, as is obvious in a balance. Therefore, there exists a certain third kind of power or third difference of magnitude—one that differs from both body and weight—and this they call moment. Therefore, a body acquires weight from both quantity [i.e., size] and quality [i.e., material], but a weight receives its moment from the distance at which it is suspended. Therefore, when distances are reciprocally proportional to weights, the moments [of the weights] are equal, as Archimedes demonstrated in The Book on Equal Moments. Therefore, weights or [rather] moments like other continuous quantities, are joined at some common terminus, that is, at something common to both of them like the center of weight, or at a point of equilibrium. Now the center of gravity in any weight is that point which, no matter how often or whenever the body is suspended, always inclines perpendicularly toward the universal center.
In addition to body, weight, and moment, there is a certain fourth power, which can be called impetus or force. Aristotle investigates it in On Mechanical Questions, and it is completely different from [the] three aforesaid [powers or magnitudes]. [...]"
In 1586, Simon Stevin uses the Dutch term staltwicht ("parked weight") for momentum in De Beghinselen Der Weeghconst.
In 1632, Galileo Galilei publishes Dialogue Concerning the Two Chief World Systems and uses the Italian momento with many meanings, including the one of his predecessors.
In 1643, Thomas Salusbury translates some of Galilei's works into English. Salusbury translates Latin momentum and Italian momento into the English term moment.
In 1765, the Latin term momentum inertiae (English: moment of inertia) is used by Leonhard Euler to refer to one of Christiaan Huygens's quantities in Horologium Oscillatorium. Huygens's 1673 work on finding the center of oscillation had been stimulated by Marin Mersenne, who suggested it to him in 1646.
In 1811, the French term moment d'une force (English: moment of a force) with respect to a point and plane is used by Siméon Denis Poisson in Traité de mécanique. An English translation appears in 1842.
In 1884, the term torque is suggested by James Thomson in the context of measuring rotational forces of machines (with propellers and rotors). Today, a dynamometer is used to measure the torque of machines.
In 1893, Karl Pearson uses the term n-th moment in the context of curve-fitting scientific measurements. Pearson wrote in response to John Venn, who, some years earlier, observed a peculiar pattern involving meteorological data and asked for an explanation of its cause. In Pearson's response, this analogy is used: the mechanical "center of gravity" is the mean and the "distance" is the deviation from the mean. This later evolved into moments in mathematics. The analogy between the mechanical concept of a moment and the statistical function involving the sum of the th powers of deviations was noticed by several people earlier, including Laplace, Kramp, Gauss, Encke, Czuber, Quetelet, and De Forest.
See also
Torque (or moment of force), see also the article couple (mechanics)
Moment (mathematics)
Mechanical equilibrium, applies when an object is balanced so that the sum of the clockwise moments about a pivot is equal to the sum of the anticlockwise moments about the same pivot
Moment of inertia ($I = \sum m_i r_i^2$), analogous to mass in discussions of rotational motion. It is a measure of an object's resistance to changes in its rotation rate
Moment of momentum ($\mathbf{L} = \mathbf{r} \times \mathbf{p}$), the rotational analog of linear momentum.
Magnetic moment, a dipole moment measuring the strength and direction of a magnetic source.
Electric dipole moment, a dipole moment measuring the charge difference and direction between two or more charges. For example, the electric dipole moment between a charge of –q and q separated by a distance of d is $\mathbf{p} = q\mathbf{d}$
Bending moment, a moment that results in the bending of a structural element
First moment of area, a property of an object related to its resistance to shear stress
Second moment of area, a property of an object related to its resistance to bending and deflection
Polar moment of inertia, a property of an object related to its resistance to torsion
Image moments, statistical properties of an image
Seismic moment, quantity used to measure the size of an earthquake
Plasma moments, fluid description of plasma in terms of density, velocity and pressure
List of area moments of inertia
List of moments of inertia
Multipole expansion
Spherical multipole moments
Notes
References
External links
A dictionary definition of moment.
Length
Physical quantities
Multiplication
el:Ροπή
sq:Momenti | Moment (physics) | ["Physics", "Mathematics"] | 2,377 | ["Scalar physical quantities", "Physical phenomena", "Distance", "Physical quantities", "Quantity", "Size", "Length", "Wikipedia categories named after physical quantities", "Physical properties", "Moment (physics)"] |
279,314 | https://en.wikipedia.org/wiki/Membrane%20paradigm | In black hole theory, the black hole membrane paradigm is a simplified model, useful for visualising and calculating the effects predicted by quantum mechanics for the exterior physics of black holes, without using quantum-mechanical principles or calculations. It models a black hole as a thin, classically radiating surface (or membrane) at or vanishingly close to the black hole's event horizon. This approach to the theory of black holes was created by Kip S. Thorne, R. H. Price and D. A. Macdonald.
Electrical resistance
Thorne (1994) relates that this approach to studying black holes was prompted by the realisation by Hanni, Ruffini, Wald and Cohen in the early 1970s that, since an electrically charged pellet dropped into a black hole should still appear to a distant outsider to be remaining just outside the event horizon, if its image persists its electrical fieldlines ought to persist too, and ought to point to the location of the "frozen" image (1994, p. 406). If the black hole rotates, and the image of the pellet is pulled around, the associated electrical fieldlines ought to be pulled around with it to create basic "electrical dynamo" effects (see: dynamo theory).
Further calculations yielded properties for a black hole such as apparent electrical resistance (p. 408). Since these fieldline properties seemed to be exhibited down to the event horizon, and general relativity insisted that no dynamic exterior interactions could extend through the horizon, it was considered convenient to invent a surface at the horizon that these electrical properties could be said to belong to.
Hawking radiation
After being introduced to model the theoretical electrical characteristics of the horizon, the membrane approach was then pressed into service to model the Hawking radiation effect predicted by quantum mechanics.
In the coordinate system of a distant stationary observer, Hawking radiation tends to be described as a quantum-mechanical particle-pair production effect (involving virtual particles), but for stationary observers hovering nearer to the hole, the effect is supposed to look like a purely conventional radiation effect involving real particles. In the membrane paradigm, the black hole is described as it should be seen by an array of these stationary, suspended noninertial observers, and since their shared coordinate system ends at the event horizon (because an observer cannot legally hover at or below the event horizon under general relativity), this conventional-looking radiation is described as being emitted by an arbitrarily thin shell of hot material at or just above the event horizon, where this coordinate system fails.
As in the electrical case, the membrane paradigm is useful because these effects should appear all the way down to the event horizon, but are not allowed by GR to be coming through the horizon – attributing them to a hypothetical thin radiating membrane at the horizon allows them to be modeled classically without explicitly contradicting general relativity's prediction that event horizon surface is inescapable.
In 1986, Kip S. Thorne, Richard H. Price and D. A. Macdonald published an anthology of papers by various authors that examined this idea: "Black Holes: The membrane paradigm".
See also
Holographic principle
Black hole complementarity
References
Leonard Susskind, "Black holes and the information paradox", Scientific American, April 1997 (cover story). Also reprinted in the special edition "The edge of physics"
Kip S. Thorne, R. H. Price and D. A. Macdonald (eds.) "Black Holes: The Membrane Paradigm" (1986)
Thorne, Kip, Black Holes and Time Warps: Einstein's Outrageous Legacy, W. W. Norton & Company; Reprint edition, January 1, 1995, , chapter 11, pp. 397–411
Black holes
Quantum gravity
Holonomic brain theory | Membrane paradigm | ["Physics", "Astronomy"] | 765 | ["Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Quantum gravity", "Density", "Stellar phenomena", "Astronomical objects", "Physics beyond the Standard Model"] |
279,624 | https://en.wikipedia.org/wiki/Magnetic%20flux%20quantum | The magnetic flux, represented by the symbol $\Phi$, threading some contour or loop is defined as the magnetic field $B$ multiplied by the loop area $S$, i.e. $\Phi = B \cdot S$. Both $B$ and $S$ can be arbitrary, meaning that the flux $\Phi$ can be as well, but increments of flux can be quantized. The wave function can be multivalued, as happens in the Aharonov–Bohm effect, or quantized, as in superconductors. The unit of quantization is therefore called the magnetic flux quantum.
Dirac magnetic flux quantum
The first to realize the importance of the flux quantum was Dirac, in his publication on magnetic monopoles; in that context the natural unit of flux is $\Phi_0^{\mathrm{Dirac}} = h/e$.
The phenomenon of flux quantization was predicted first by Fritz London then within the Aharonov–Bohm effect and later discovered experimentally in superconductors (see below).
Superconducting magnetic flux quantum
If one deals with a superconducting ring (i.e. a closed loop path in a superconductor) or a hole in a bulk superconductor, the magnetic flux threading such a hole/loop is quantized.
The (superconducting) magnetic flux quantum $\Phi_0 = h/(2e) \approx 2.067833848 \times 10^{-15}$ Wb is a combination of fundamental physical constants: the Planck constant $h$ and the electron charge $e$.
Its value is, therefore, the same for any superconductor.
To understand this definition in the context of the Dirac flux quantum, one shall consider that the effective quasiparticles active in a superconductor are Cooper pairs, with an effective charge of two electrons, $q = 2e$.
The phenomenon of flux quantization was first discovered in superconductors experimentally by B. S. Deaver and W. M. Fairbank and, independently, by R. Doll and M. Näbauer, in 1961. The quantization of magnetic flux is closely related to the Little–Parks effect, but was predicted earlier by Fritz London in 1948 using a phenomenological model.
The inverse of the flux quantum, $1/\Phi_0$, is called the Josephson constant, and is denoted $K_J$. It is the constant of proportionality of the Josephson effect, relating the potential difference across a Josephson junction to the frequency of the irradiation. The Josephson effect is very widely used to provide a standard for high-precision measurements of potential difference, which (from 1990 to 2019) were related to a fixed, conventional value of the Josephson constant, denoted $K_{J\text{-}90}$. With the 2019 revision of the SI, the Josephson constant has an exact value of $K_J = 483\,597.848\,416\,98\ldots\ \mathrm{GHz/V}$.
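Since the 2019 SI fixes h and e exactly, both constants can be computed directly; a quick check:

```python
# Exact SI values of h and e (2019 redefinition) give the flux quantum and
# the Josephson constant directly.
h = 6.62607015e-34   # Planck constant [J s], exact
e = 1.602176634e-19  # elementary charge [C], exact

phi0 = h / (2 * e)   # superconducting magnetic flux quantum [Wb]
K_J = 1 / phi0       # Josephson constant [Hz/V]

print(f"phi_0 = {phi0:.9e} Wb")        # ~2.067833848e-15 Wb
print(f"K_J   = {K_J / 1e9:.6f} GHz/V")  # ~483597.848417 GHz/V
```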
Derivation of the superconducting flux quantum
The following physical equations use SI units. In CGS units, a factor of the speed of light $c$ would appear.
The superconducting properties at each point of the superconductor are described by the complex quantum-mechanical wave function $\Psi(\mathbf{r})$ – the superconducting order parameter. As any complex function, $\Psi$ can be written as $\Psi = \Psi_0 e^{i\theta}$, where $\Psi_0$ is the amplitude and $\theta$ is the phase. Changing the phase $\theta$ by $2\pi n$ will not change $\Psi$ and, correspondingly, will not change any physical properties. However, in a superconductor of non-trivial topology, e.g. a superconductor with a hole or a superconducting loop/cylinder, the phase $\theta$ may continuously change from some value $\theta_0$ to the value $\theta_0 + 2\pi n$ as one goes around the hole/loop and comes back to the same starting point. If this is so, then one has $n$ magnetic flux quanta trapped in the hole/loop, as shown below:
Per minimal coupling, the current density of Cooper pairs in the superconductor is:

$\mathbf{j} = \frac{q}{m}\,\mathrm{Re}\!\left\{\Psi^{*}\left(-i\hbar\nabla - q\mathbf{A}\right)\Psi\right\},$

where $q = 2e$ is the charge of the Cooper pair.
The wave function is the Ginzburg–Landau order parameter:

$\Psi = \sqrt{n_s}\, e^{i\theta}.$

Plugged into the expression of the current, one obtains:

$\mathbf{j} = \frac{q\, n_s}{m}\left(\hbar\nabla\theta - q\mathbf{A}\right).$

Inside the body of the superconductor, the current density J is zero, and therefore

$\hbar\nabla\theta = q\mathbf{A}.$

Integrating around the hole/loop using Stokes' theorem and $\nabla \times \mathbf{A} = \mathbf{B}$ gives:

$\hbar\oint \nabla\theta \cdot d\mathbf{l} = q\oint \mathbf{A}\cdot d\mathbf{l} = q\,\Phi.$

Now, because the order parameter must return to the same value when the integral goes back to the same point, we have $\oint \nabla\theta \cdot d\mathbf{l} = 2\pi n$, and therefore:

$\Phi = \frac{2\pi\hbar}{q}\,n = \frac{h}{2e}\,n = n\,\Phi_0.$
Due to the Meissner effect, the magnetic induction inside the superconductor is zero. More exactly, the magnetic field penetrates into a superconductor over a small distance called London's magnetic field penetration depth (denoted $\lambda_L$ and usually ≈ 100 nm). The screening currents also flow in this $\lambda_L$-layer near the surface, creating magnetization inside the superconductor which perfectly compensates the applied field $H$, thus resulting in $B = 0$ inside the superconductor.
The magnetic flux frozen in a loop/hole (plus its $\lambda_L$-layer) will always be quantized. However, the value of the flux quantum is equal to $\Phi_0$ only when the path/trajectory around the hole described above can be chosen so that it lies in the superconducting region without screening currents, i.e. several $\lambda_L$ away from the surface. There are geometries where this condition cannot be satisfied, e.g. a loop made of very thin (${\lesssim}\,\lambda_L$) superconducting wire or a cylinder with a similar wall thickness. In the latter case, the flux has a quantum different from $\Phi_0$.
The flux quantization is a key idea behind a SQUID, which is one of the most sensitive magnetometers available.
Flux quantization also plays an important role in the physics of type II superconductors. When such a superconductor (now without any holes) is placed in a magnetic field with a strength between the first critical field $H_{c1}$ and the second critical field $H_{c2}$, the field partially penetrates into the superconductor in the form of Abrikosov vortices. An Abrikosov vortex consists of a normal core – a cylinder of the normal (non-superconducting) phase with a diameter on the order of $\xi$, the superconducting coherence length. The normal core plays the role of a hole in the superconducting phase. The magnetic field lines pass along this normal core through the whole sample. The screening currents circulate in the $\lambda_L$-vicinity of the core and screen the rest of the superconductor from the magnetic field in the core. In total, each such Abrikosov vortex carries one quantum of magnetic flux $\Phi_0$.
Measuring the magnetic flux
Prior to the 2019 revision of the SI, the magnetic flux quantum was measured with great precision by exploiting the Josephson effect. When coupled with the measurement of the von Klitzing constant $R_K = h/e^2$, this provided the most accurate values of the Planck constant $h$ obtained until 2019. This may be counterintuitive, since $h$ is generally associated with the behaviour of microscopically small systems, whereas the quantization of magnetic flux in a superconductor and the quantum Hall effect are both emergent phenomena associated with thermodynamically large numbers of particles.
As a result of the 2019 revision of the SI, the Planck constant has a fixed value $h = 6.62607015 \times 10^{-34}\ \mathrm{J\,s}$ which, together with the definitions of the second and the metre, provides the official definition of the kilogram. Furthermore, the elementary charge also has a fixed value of $e = 1.602176634 \times 10^{-19}\ \mathrm{C}$ to define the ampere. Therefore, both the Josephson constant $K_J$ and the von Klitzing constant $R_K$ have fixed values, and the Josephson effect along with the von Klitzing quantum Hall effect becomes the primary mise en pratique for the definition of the ampere and other electric units in the SI.
See also
Aharonov–Bohm effect
Brian Josephson
Committee on Data for Science and Technology
Domain wall (magnetism)
Flux pinning
Ginzburg–Landau theory
Husimi Q representation
Macroscopic quantum phenomena
Magnetic domain
Magnetic monopole
Quantum vortex
Topological defect
von Klitzing constant
References
Further reading
Aharonov–Bohm effect and flux quantization in superconductors
David Tong lectures
Superconductivity
Quantum magnetism
Metrology
Physical constants | Magnetic flux quantum | ["Physics", "Materials_science", "Mathematics", "Engineering"] | 1,565 | ["Physical quantities", "Quantity", "Superconductivity", "Quantum mechanics", "Materials science", "Quantum magnetism", "Physical constants", "Condensed matter physics", "Electrical resistance and conductance"] |
279,651 | https://en.wikipedia.org/wiki/Truncated%20tetrahedron | In geometry, the truncated tetrahedron is an Archimedean solid. It has 4 regular hexagonal faces, 4 equilateral triangle faces, 12 vertices and 18 edges (of two types). It can be constructed by truncating all 4 vertices of a regular tetrahedron.
Construction
The truncated tetrahedron can be constructed from a regular tetrahedron by cutting all of its vertices off, a process known as truncation. The resulting polyhedron has 4 equilateral triangles and 4 regular hexagons, 18 edges, and 12 vertices. With edge length 1, the Cartesian coordinates of the 12 vertices are the permutations of $\tfrac{1}{2\sqrt{2}}(\pm 1, \pm 1, \pm 3)$
that have an even number of minus signs.
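A short script can verify these coordinates: every vertex of the truncated tetrahedron should have exactly three neighbours at unit distance. This is a sketch based on the coordinates above, not part of any standard construction.

```python
from itertools import permutations, product
import math

# Vertices: permutations of (1, 1, 3)/(2*sqrt(2)) with an even number of minus signs.
s = 1 / (2 * math.sqrt(2))
verts = set()
for signs in product([1, -1], repeat=3):
    if signs.count(-1) % 2 == 0:          # even number of minus signs
        for p in permutations((1, 1, 3)):
            verts.add(tuple(si * pi * s for si, pi in zip(signs, p)))

verts = sorted(verts)
print(len(verts))  # 12

# Each vertex should have exactly 3 neighbours at distance 1 (the edge length).
for v in verts:
    deg = sum(1 for u in verts if u != v and abs(math.dist(u, v) - 1.0) < 1e-9)
    assert deg == 3
```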
Properties
Given the edge length $a$, the surface area of a truncated tetrahedron is the sum of the areas of 4 regular hexagons and 4 equilateral triangles, $A = 7\sqrt{3}\,a^2 \approx 12.12\,a^2$, and its volume is $V = \frac{23\sqrt{2}}{12}\,a^3 \approx 2.71\,a^3$.
The dihedral angle of a truncated tetrahedron between triangle-to-hexagon is approximately 109.47°, and that between adjacent hexagonal faces is approximately 70.53°.
The densest packing of the truncated tetrahedron is believed to fill a fraction 207/208 ≈ 99.52% of space, as reported by two independent groups using Monte Carlo methods. Although no mathematical proof exists that this is the best possible packing for the truncated tetrahedron, the high proximity to unity and the independence of the findings make it unlikely that an even denser packing will be found. If the truncation of the corners is slightly smaller than that of a truncated tetrahedron, this new shape can be used to fill space completely.
The truncated tetrahedron is an Archimedean solid, meaning it is a highly symmetric, semi-regular polyhedron in which two or more different regular polygonal faces meet at each vertex. The truncated tetrahedron has the same three-dimensional symmetry group as the regular tetrahedron, the tetrahedral symmetry $\mathrm{T_d}$. The polygonal faces that meet at every vertex are one equilateral triangle and two regular hexagons, and the vertex figure is denoted as 3.6.6. Its dual polyhedron is the triakis tetrahedron, a Catalan solid that shares the same symmetry as the truncated tetrahedron.
Related polyhedra
The truncated tetrahedron can be found in the construction of other polyhedra. For example, the augmented truncated tetrahedron is a Johnson solid constructed from a truncated tetrahedron by attaching a triangular cupola onto its hexagonal face. The triakis truncated tetrahedron is a polyhedron constructed from a truncated tetrahedron by adding three tetrahedra onto its triangular faces, as interpreted by the name "triakis". It is classified as a plesiohedron, meaning it can tessellate three-dimensional space, forming a honeycomb; an example is the triakis truncated tetrahedral honeycomb.
The Friauf polyhedron is named after J. B. Friauf, who described it as an intermetallic structure formed by a compound of metallic elements. It can be found in crystals such as complex metallic alloys; an example is dizinc magnesium, MgZn2. It is a lower-symmetry version of the truncated tetrahedron, interpreted as a truncated tetragonal disphenoid, with its three-dimensional symmetry group being the dihedral group of order 8.
Truncating a truncated tetrahedron gives the resulting polyhedron 54 edges, 32 vertices, and 20 faces—4 hexagons, 4 nonagons, and 12 trapeziums. This polyhedron was used by Adidas as the underlying geometry of the Jabulani ball designed for the 2010 World Cup.
Truncated tetrahedral graph
In the mathematical field of graph theory, a truncated tetrahedral graph is an Archimedean graph: the graph of vertices and edges of the truncated tetrahedron, one of the Archimedean solids. It has 12 vertices and 18 edges, and is a connected, vertex-transitive cubic graph.
Examples
See also
Quarter cubic honeycomb – Fills space using truncated tetrahedra and smaller tetrahedra
Truncated 5-cell – Similar uniform polytope in 4-dimensions
Truncated triakis tetrahedron
Triakis truncated tetrahedron
Octahedron – a rectified tetrahedron
Truncated Triangular Pyramid Number
References
External links
Editable printable net of a truncated tetrahedron with interactive 3D view
The Uniform Polyhedra
Virtual Reality Polyhedra The Encyclopedia of Polyhedra
Archimedean solids
Truncated tilings
Individual graphs
Planar graphs | Truncated tetrahedron | ["Physics", "Mathematics"] | 936 | ["Truncated tilings", "Tessellation", "Planar graphs", "Planes (geometry)", "Symmetry"] |
279,654 | https://en.wikipedia.org/wiki/Truncated%20octahedron | In geometry, the truncated octahedron is the Archimedean solid that arises from a regular octahedron by removing six pyramids, one at each of the octahedron's vertices. The truncated octahedron has 14 faces (8 regular hexagons and 6 squares), 36 edges, and 24 vertices. Since each of its faces has point symmetry the truncated octahedron is a 6-zonohedron. It is also the Goldberg polyhedron GIV(1,1), containing square and hexagonal faces. Like the cube, it can tessellate (or "pack") 3-dimensional space, as a permutohedron.
The truncated octahedron was called the "mecon" by Buckminster Fuller.
Its dual polyhedron is the tetrakis hexahedron. If the original truncated octahedron has unit edge length, its dual tetrakis hexahedron has edge lengths $\tfrac{9}{8}\sqrt{2}$ and $\tfrac{3}{2}\sqrt{2}$.
Classifications
As an Archimedean solid
A truncated octahedron is constructed from a regular octahedron by cutting off all vertices. The resulting polyhedron has six squares and eight hexagons, leaving out six square pyramids. Setting the edge length of the regular octahedron equal to $3a$, it follows that the length of each edge of a square pyramid (to be removed) is $a$ (the square pyramid has four equilateral triangles as faces, making it the first Johnson solid). From the equilateral square pyramid's properties, its volume is $\frac{\sqrt{2}}{6}a^3$. Because six equilateral square pyramids are removed by truncation, the volume of a truncated octahedron is obtained by subtracting the volume of those six from that of the regular octahedron:

$V = \frac{\sqrt{2}}{3}(3a)^3 - 6\cdot\frac{\sqrt{2}}{6}a^3 = 8\sqrt{2}\,a^3.$

The surface area of a truncated octahedron can be obtained by summing the areas of all its polygons, six squares and eight hexagons. Considering the edge length $a$, this is:

$A = 6a^2 + 8\cdot\frac{3\sqrt{3}}{2}a^2 = \left(6 + 12\sqrt{3}\right)a^2 \approx 26.785\,a^2.$
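A quick numeric check of the pyramid-subtraction route against the closed forms above (with a = 1):

```python
import math

a = 1.0
oct_vol = math.sqrt(2) / 3 * (3 * a) ** 3  # regular octahedron with edge 3a
pyr_vol = math.sqrt(2) / 6 * a ** 3        # equilateral square pyramid with edge a
v = oct_vol - 6 * pyr_vol                  # truncated octahedron volume
area = 6 * a**2 + 8 * (3 * math.sqrt(3) / 2) * a**2

print(v, 8 * math.sqrt(2) * a**3)  # both ~11.3137
print(area)                        # ~26.7846
```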
The truncated octahedron is one of the thirteen Archimedean solids; that is, it is a highly symmetric, semi-regular polyhedron with two or more different regular polygonal faces meeting at each vertex. The dual polyhedron of a truncated octahedron is the tetrakis hexahedron. They both have the same three-dimensional symmetry group as the regular octahedron, the octahedral symmetry $\mathrm{O_h}$. A square and two hexagons surround each of its vertices, and its vertex figure is denoted as 4.6.6.
The dihedral angle of a truncated octahedron between square and hexagon is $\arccos(-1/\sqrt{3}) \approx 125.26^\circ$, and that between adjacent hexagonal faces is $\arccos(-1/3) \approx 109.47^\circ$.
The Cartesian coordinates of the vertices of a truncated octahedron with edge length 1 are all permutations of $\left(0,\ \pm\tfrac{\sqrt{2}}{2},\ \pm\sqrt{2}\right)$
As a space-filling polyhedron
The truncated octahedron can be described as a permutohedron of order 4 or 4-permutohedron, meaning it can be represented with even more symmetric coordinates in four dimensions: all permutations of $(1, 2, 3, 4)$ form the vertices of a truncated octahedron in the three-dimensional subspace $x + y + z + w = 10$. Therefore, each vertex corresponds to a permutation of $(1, 2, 3, 4)$ and each edge represents a single pairwise swap of two elements. With this labeling, the swaps are of elements whose values differ by one. If, instead, the truncated octahedron is labeled by the inverse permutations, the edges correspond to swaps of elements whose positions differ by one. With this alternative labeling, the edges and vertices of the truncated octahedron form the Cayley graph of the symmetric group $S_4$, the group of four-element permutations, as generated by swaps of consecutive positions.
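The permutohedron description is easy to verify directly; the sketch below (mine, not from the article) enumerates the 24 permutations of (1, 2, 3, 4) and the swap edges just described:

```python
from itertools import permutations

verts = list(permutations((1, 2, 3, 4)))
edges = set()
for u in verts:
    for w in verts:
        diff = [i for i in range(4) if u[i] != w[i]]
        # an edge is a swap of two entries whose values differ by one
        if len(diff) == 2 and abs(u[diff[0]] - u[diff[1]]) == 1:
            edges.add(frozenset((u, w)))

print(len(verts), len(edges))  # 24 vertices, 36 edges, as for the truncated octahedron
```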
The truncated octahedron can tile space. It is classified as a plesiohedron, meaning it can be defined as the Voronoi cell of a symmetric Delone set. Plesiohedra, translated without rotating, can be repeated to fill space. There are five three-dimensional primary parallelohedra, one of which is the truncated octahedron. More generally, every permutohedron and parallelohedron is a zonohedron, a polyhedron that is centrally symmetric and can be defined by a Minkowski sum.
Applications
In chemistry, the truncated octahedron is the sodalite cage structure in the framework of faujasite-type zeolite crystals.
In solid-state physics, the first Brillouin zone of the face-centered cubic lattice is a truncated octahedron.
The truncated octahedron (in fact, the generalized truncated octahedron) appears in the error analysis of quantization index modulation (QIM) in conjunction with repetition coding.
Dissection
The truncated octahedron can be dissected into a central octahedron, surrounded by 8 triangular cupolae on each face, and 6 square pyramids above the vertices.
Removing the central octahedron and 2 or 4 triangular cupolae creates two Stewart toroids, with dihedral and tetrahedral symmetry:
It is possible to slice a tesseract by a hyperplane so that its sliced cross-section is a truncated octahedron.
The cell-transitive bitruncated cubic honeycomb can also be seen as the Voronoi tessellation of the body-centered cubic lattice. The truncated octahedron is one of five three-dimensional primary parallelohedra.
Objects
Truncated octahedral graph
In the mathematical field of graph theory, a truncated octahedral graph is the graph of vertices and edges of the truncated octahedron. It has 24 vertices and 36 edges, and is a cubic Archimedean graph. It has book thickness 3 and queue number 2.
As a Hamiltonian cubic graph, it can be represented by LCF notation in multiple ways: [3, −7, 7, −3]6, [5, −11, 11, 7, 5, −5, −7, −11, 11, −5, −7, 7]2, and [−11, 5, −3, −7, −9, 3, −5, 5, −3, 9, 7, 3, −5, 11, −3, 7, 5, −7, −9, 9, 7, −5, −7, 3].
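The first LCF string above is enough to rebuild the graph; a minimal sketch (my own, not from the article) constructs the 24-cycle plus the chords i → i + offset (mod 24) and checks the counts:

```python
offsets = [3, -7, 7, -3] * 6
n = len(offsets)                                                    # 24 vertices
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}             # Hamiltonian cycle
edges |= {frozenset((i, (i + offsets[i]) % n)) for i in range(n)}   # LCF chords

degree = {v: sum(v in e for e in edges) for v in range(n)}
print(len(edges), set(degree.values()))   # 36 edges, {3}: a cubic graph on 24 vertices
```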
References
(Section 3–9)
External links
Editable printable net of a truncated octahedron with interactive 3D view
Uniform polyhedra
Archimedean solids
Space-filling polyhedra
Truncated tilings
Zonohedra | Truncated octahedron | [
"Physics"
] | 1,373 | [
"Symmetry",
"Uniform polytopes",
"Truncated tilings",
"Tessellation",
"Uniform polyhedra"
] |
279,722 | https://en.wikipedia.org/wiki/Fredholm%20operator | In mathematics, Fredholm operators are certain operators that arise in the Fredholm theory of integral equations. They are named in honour of Erik Ivar Fredholm. By definition, a Fredholm operator is a bounded linear operator T : X → Y between two Banach spaces with finite-dimensional kernel $\ker T$ and finite-dimensional (algebraic) cokernel $\operatorname{coker} T = Y/\operatorname{ran} T$, and with closed range $\operatorname{ran} T$. The last condition is actually redundant.
The index of a Fredholm operator is the integer
$$\operatorname{ind} T := \dim \ker T - \operatorname{codim} \operatorname{ran} T,$$
or in other words,
$$\operatorname{ind} T := \dim \ker T - \dim \operatorname{coker} T.$$
Properties
Intuitively, Fredholm operators are those operators that are invertible "if finite-dimensional effects are ignored." The formally correct statement follows. A bounded operator T : X → Y between Banach spaces X and Y is Fredholm if and only if it is invertible modulo compact operators, i.e., if there exists a bounded linear operator
$$S \colon Y \to X$$
such that
$$\mathrm{Id}_X - ST \quad \text{and} \quad \mathrm{Id}_Y - TS$$
are compact operators on X and Y respectively.
If a Fredholm operator is modified slightly, it stays Fredholm and its index remains the same. Formally: The set of Fredholm operators from X to Y is open in the Banach space L(X, Y) of bounded linear operators, equipped with the operator norm, and the index is locally constant. More precisely, if T0 is Fredholm from X to Y, there exists ε > 0 such that every T in L(X, Y) with ||T − T0|| < ε is Fredholm, with the same index as that of T0.
When T is Fredholm from X to Y and U Fredholm from Y to Z, then the composition $U \circ T$ is Fredholm from X to Z and
$$\operatorname{ind}(U \circ T) = \operatorname{ind}(U) + \operatorname{ind}(T).$$
When T is Fredholm, the transpose (or adjoint) operator $T'$ is Fredholm from $Y'$ to $X'$, and $\operatorname{ind}(T') = -\operatorname{ind}(T)$. When X and Y are Hilbert spaces, the same conclusion holds for the Hermitian adjoint T∗.
When T is Fredholm and K a compact operator, then T + K is Fredholm. The index of T remains unchanged under such compact perturbations of T. This follows from the fact that the index $i(s)$ of $T + sK$ is an integer defined for every $s$ in $[0, 1]$, and $i(s)$ is locally constant, hence $i(1) = i(0)$.
Invariance by perturbation is true for larger classes than the class of compact operators. For example, when U is Fredholm and T a strictly singular operator, then T + U is Fredholm with the same index. The class of inessential operators, which properly contains the class of strictly singular operators, is the "perturbation class" for Fredholm operators. This means an operator $T$ is inessential if and only if $T + U$ is Fredholm for every Fredholm operator $U$.
Examples
Let $H$ be a Hilbert space with an orthonormal basis $\{e_n\}$ indexed by the non-negative integers. The (right) shift operator S on H is defined by
$$S e_n = e_{n+1}, \qquad n \ge 0.$$
This operator S is injective (actually, isometric) and has a closed range of codimension 1, hence S is Fredholm with $\operatorname{ind}(S) = -1$. The powers $S^k$, $k \ge 0$, are Fredholm with index $-k$. The adjoint S* is the left shift,
$$S^* e_0 = 0, \qquad S^* e_n = e_{n-1}, \quad n \ge 1.$$
The left shift S* is Fredholm with index 1.
If H is the classical Hardy space $H^2(\mathbf{T})$ on the unit circle T in the complex plane, then the shift operator with respect to the orthonormal basis of complex exponentials
$$e_n \colon t \mapsto e^{int}, \qquad n \ge 0,$$
is the multiplication operator $M_\varphi$ with the function $\varphi = e_1$. More generally, let φ be a complex continuous function on T that does not vanish on $\mathbf{T}$, and let $T_\varphi$ denote the Toeplitz operator with symbol φ, equal to multiplication by φ followed by the orthogonal projection $P \colon L^2(\mathbf{T}) \to H^2(\mathbf{T})$:
$$T_\varphi \colon f \mapsto P(f \varphi).$$
Then $T_\varphi$ is a Fredholm operator on $H^2(\mathbf{T})$, with index related to the winding number around 0 of the closed path $t \in [0, 2\pi] \mapsto \varphi(e^{it})$: the index of $T_\varphi$, as defined in this article, is the opposite of this winding number.
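As an illustration (my own sketch, not from the article), the winding number of a non-vanishing symbol can be estimated numerically by accumulating the phase of φ(e^{it}) over one full turn; by the paragraph above, the index of the associated Toeplitz operator is the negative of this number:

```python
import cmath, math

def winding_number(phi, samples=4096):
    total = 0.0
    prev = phi(cmath.exp(0j))
    for k in range(1, samples + 1):
        cur = phi(cmath.exp(2j * math.pi * k / samples))
        total += cmath.phase(cur / prev)   # phase increment, in (-pi, pi]
        prev = cur
    return round(total / (2 * math.pi))

print(winding_number(lambda z: z))           # 1, so ind(T_phi) = -1, matching the shift
print(winding_number(lambda z: z**3 + 0.1))  # 3 (all zeros of z**3 + 0.1 lie inside T)
```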
Applications
Any elliptic operator can be extended to a Fredholm operator. The use of Fredholm operators in partial differential equations is an abstract form of the parametrix method.
The Atiyah-Singer index theorem gives a topological characterization of the index of certain operators on manifolds.
The Atiyah-Jänich theorem identifies the K-theory K(X) of a compact topological space X with the set of homotopy classes of continuous maps from X to the space of Fredholm operators H→H, where H is the separable Hilbert space and the set of these operators carries the operator norm.
Generalizations
Semi-Fredholm operators
A bounded linear operator T is called semi-Fredholm if its range is closed and at least one of $\ker T$, $\operatorname{coker} T$ is finite-dimensional. For a semi-Fredholm operator, the index is defined by
$$\operatorname{ind} T = \dim \ker T - \dim \operatorname{coker} T \in \mathbb{Z} \cup \{-\infty, +\infty\}.$$
Unbounded operators
One may also define unbounded Fredholm operators. Let X and Y be two Banach spaces.
The closed linear operator $T \colon X \to Y$ is called Fredholm if its domain $\mathfrak{D}(T)$ is dense in $X$, its range is closed, and both the kernel and the cokernel of T are finite-dimensional.
$T$ is called semi-Fredholm if its domain is dense in $X$, its range is closed, and either the kernel or the cokernel of T (or both) is finite-dimensional.
As noted above, the range of a closed operator is closed as long as the cokernel is finite-dimensional (Edmunds and Evans, Theorem I.3.2).
Notes
References
D.E. Edmunds and W.D. Evans (1987), Spectral theory and differential operators, Oxford University Press.
A. G. Ramm, "A Simple Proof of the Fredholm Alternative and a Characterization of the Fredholm Operators", American Mathematical Monthly, 108 (2001) p. 855 (NB: In this paper the word "Fredholm operator" refers to "Fredholm operator of index 0").
Bruce K. Driver, "Compact and Fredholm Operators and the Spectral Theorem", Analysis Tools with Applications, Chapter 35, pp. 579–600.
Robert C. McOwen, "Fredholm theory of partial differential equations on complete Riemannian manifolds", Pacific J. Math. 87, no. 1 (1980), 169–185.
Tomasz Mrowka, A Brief Introduction to Linear Analysis: Fredholm Operators, Geometry of Manifolds, Fall 2004 (Massachusetts Institute of Technology: MIT OpenCourseWare)
Fredholm theory
Linear operators | Fredholm operator | [
"Mathematics"
] | 1,303 | [
"Functions and mappings",
"Mathematical relations",
"Mathematical objects",
"Linear operators"
] |
280,509 | https://en.wikipedia.org/wiki/Fire%20hydrant | A fire hydrant, fireplug, firecock (archaic), hydrant riser or Johnny Pump is a connection point by which firefighters can tap into a water supply. It is a component of active fire protection. Underground fire hydrants have been used in Europe and Asia since at least the 18th century. Above-ground pillar-type hydrants are a 19th-century invention.
Operation
The user (most likely a fire department) attaches a hose to the fire hydrant, then opens a valve on the hydrant to provide a powerful flow of water; the available pressure varies according to region and depends on various factors (including the size and location of the attached water main). The user can attach this hose to a fire engine, which can use a powerful pump to boost the water pressure and possibly split it into multiple streams. The hose may be connected with a threaded connection, an instantaneous "quick connector" or a Storz connector.
If a fire hydrant is opened or closed too quickly, a water hammer can occur and damage nearby pipes and equipment. The water inside a charged hose line causes it to be very heavy, and high-water pressure causes it to be stiff and unable to make a tight turn while pressurized. When a fire hydrant is unobstructed, this is not a problem, as there is enough room to adequately position the hose.
Most fire hydrant valves are not designed to throttle the water flow; they are designed to be operated full-on or full-off. The valving arrangement of most dry-barrel hydrants is for the drain valve to be open at anything other than full operation. Usage at partial opening can consequently result in considerable flow directly into the soil surrounding the hydrant, which, over time, can cause severe scouring. Gate or butterfly valves can be installed directly onto the hydrant opening to control individual outputs and allow for changing equipment connections without turning off the flow to other outlets. These valves can be large enough in diameter to accommodate the large central "steamer" outlets on many US hydrants. It is good practice to install valves on all outlets before using a hydrant, as the protective caps are unreliable and can cause major injury if they fail.
New firefighters are often trained extensively on fire hydrants in the fire academy to be quick and safe while connecting the fire engine to the fire hydrant (usually within one minute). Time is often critical, as other firefighters will be waiting for the water supply. When operating a hydrant, a firefighter typically wears appropriate personal protective equipment, such as gloves and a helmet with a face shield. High-pressure water coursing through a potentially aging and corroding hydrant could cause a failure, injuring the firefighter operating the hydrant or bystanders.
In most jurisdictions it is illegal to park a car within a certain distance of a fire hydrant. In North America the prescribed distance varies by jurisdiction and is often indicated by yellow or red paint on the curb. The rationale behind these laws is that hydrants need to be visible and accessible in an emergency. In the event that a car is illegally parked next to a fire hydrant when firefighters need access to it, firefighters are legally allowed to break the car's windows to run the hose through it, while the car owner receives a parking citation.
Other uses
Street pooling
In 1896, during a terrible heatwave in New York City, the Commissioner of Public Works ordered the opening of the fire hydrants to provide relief to the population. Today some US communities provide low flow sprinkler heads to enable residents to use the hydrants to cool off during hot weather, while gaining some control on water usage. Sometimes those simply seeking to play in the water remove the caps and open the valve, providing residents a place to play and cool off in summer.
Preventing misuse
To prevent casual use or misuse, the hydrant requires special tools to be opened, usually a large wrench with a pentagonal socket. Vandals sometimes cause monetary loss by wasting water when they open hydrants. Such vandalism can also reduce municipal water pressure and impair firefighters' efforts to extinguish fires. Most fire hydrants in Australia are protected by a silver-coloured cover with a red top, secured to the ground with bolts to protect the hydrant from vandalism and unauthorized use. The cover must be removed before use.
In most areas of the United States, contractors who need temporary water may purchase permits to use hydrants. The permit will generally require a hydrant meter, a gate valve and sometimes a clapper valve (if not designed into the hydrant already) to prevent backflow into the hydrant. Additionally, residents who wish to use the hydrant to fill their in-ground swimming pool are commonly permitted to do so, provided they pay for the water and agree to allow firefighters to draft from their pool in the case of an emergency.
Municipal services, such as street sweepers and tank trucks, may also be allowed to use hydrants to fill their water tanks. Often sewer maintenance trucks need water to flush out sewerage lines and fill their tanks on site from a hydrant. If necessary, the municipal workers will record the amount of water they used or use a meter.
Fire hydrants may be used to supply water to riot control vehicles. These vehicles use a high-pressure water cannon to discourage rioting.
Since fire hydrants are one of the most accessible parts of a water distribution system, they are often used for attaching pressure gauges or loggers to monitor system water pressure. Automatic flushing devices are often attached to hydrants to maintain chlorination levels in areas of low usage. Hydrants are also used as an easy above-ground access point by leak detection devices to locate leaks from the sound they make.
Construction
Depending on the country or location, hydrants can be above or below ground. In countries including Japan, the UK, Ukraine, Russia or Spain hydrants are accessible under a heavy metal cover. In other countries, such as the US, and many parts of China, an accessible part of the hydrant is above ground. It can also be mounted in an exterior wall of a building.
In areas subject to freezing temperatures, at most only a portion of the hydrant is above ground. The valve is located below the frost line and connected by a riser to the above-ground portion. A valve rod extends from the valve up through a seal at the top of the hydrant, where it can be operated with the proper wrench. This design is known as a "dry barrel" hydrant, in that the barrel, or vertical body of the hydrant, is normally dry. A drain valve underground opens when the water valve is completely closed; this allows all water to drain from the hydrant body to prevent the hydrant from freezing.
In warm areas, above-ground hydrants may be used with one or more valves in the above-ground portion. Unlike with cold-weather hydrants, it is possible to turn the water supply on and off to each port. This style is known as a "wet barrel" hydrant.
Both wet- and dry-barrel hydrants typically have multiple outlets. Wet barrel hydrant outlets are typically individually controlled, while a single stem operates all the outlets of a dry barrel hydrant simultaneously. Thus, wet barrel hydrants allow single outlets to be opened, requiring somewhat more effort, but simultaneously allowing more flexibility.
A typical US dry-barrel hydrant has two smaller outlets and one larger outlet. The larger outlet is often a Storz connection if the local fire department has standardized on hose using Storz fittings for large diameter supply line. The larger outlet is known as a "steamer" connection, because they were once used to supply steam powered water pumps, and a hydrant with such an outlet may be called a "steamer hydrant", although this usage is becoming archaic. Likewise, an older hydrant without a steamer connection may be called a "village hydrant."
Appearance
Above-ground hydrants are coloured according to purely practical criteria or for more aesthetic reasons. In the United States, the AWWA and NFPA recommend hydrants be colored chrome yellow for rapid identification, apart from the bonnet and nozzle caps, which should be coded according to their available flow. Class AA hydrants (>1500 gpm) should have their nozzle caps and bonnet colored light blue, Class A hydrants (1000–1499 gpm) green, Class B hydrants (500–999 gpm) orange, Class C hydrants (0–499 gpm) red, and inoperable or end-of-system (risking water hammer) black. This aids arriving firefighters in determining how much water is available and whether to call for additional resources or find another hydrant.
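As a small illustration (my own sketch, not text from NFPA 291), the class boundaries above map directly to a lookup function:

```python
def nfpa_cap_colour(flow_gpm: float) -> str:
    """Map available flow (US gallons per minute) to the cap colours above."""
    if flow_gpm >= 1500:
        return "light blue"  # Class AA
    if flow_gpm >= 1000:
        return "green"       # Class A
    if flow_gpm >= 500:
        return "orange"      # Class B
    return "red"             # Class C

print(nfpa_cap_colour(1200))  # green: 1,000-1,499 gpm
```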
Other codings can be and frequently are used, some of greater complexity, incorporating pressure information, others more simplistic. In Ottawa, Ontario, hydrant colors communicate different messages to firefighters; for example, if the inside of the hydrant is corroded so much that the interior diameter is too narrow for good pressure, it will be painted in a specific scheme to indicate to firefighters to move on to the next one. In many localities, a white or purple top indicates that the hydrant provides non-potable water. Where artistic and/or aesthetic considerations are paramount, hydrants can be extremely varied, or more subdued. In both instances this is usually at the cost of reduced practicality.
In Germany, the Netherlands, Spain, the UK, and many other countries, most hydrants are located below ground and are reached by a riser, which provides the connections for the hoses. The covers can also be artistically designed.
Signage
In the United Kingdom and Ireland, hydrants are located in the ground. Yellow "H" hydrant signs indicate the location of the hydrants and are similar to the blue signs in Finland. Mounted on a small post or nearby wall etc., the two numbers indicate the diameter of the water main (top number) and the distance from the sign (lower number). Modern signs show these measurements in millimetres and metres, whereas older signs use inches and feet. Because the orders of magnitude are so different (6 inches versus 150 mm) there is no ambiguity whichever measuring system is used.
In areas of the United States without winter snow cover, blue reflectors embedded in the street are used to allow rapid identification of hydrants at night. In areas with snow cover, tall signs or flags are used so that hydrants can be found even if covered with snow. In rural areas tall narrow posts painted with visible colours such as red are attached to the hydrants to allow them to be found during heavy snowfall periods. The tops of the fire hydrants indicate available flow in gallons per minute; the color helps make a more accurate choice of what hydrants will be utilized to supply water to the fire scene.
Blue: 1,500 gpm or more; very good flow
Green: 1,000–1,499 gpm; good for residential areas
Orange: 500–999 gpm; marginally adequate
Red: below 500 gpm; inadequate
The hydrant bodies are also color-coded.
Chrome Yellow: Municipal System
Red: Private System
Violet: Non-potable supply
These markings and colours are prescribed in NFPA 291: Recommended Practice for Water Flow Testing and Marking of Hydrants, but most municipal water authorities do not actually follow these guidelines.
In Australia, hydrant signage varies, with several types displayed across the country. Most Australian hydrants are underground, being of a ballcock system (spring hydrant type), and a separate standpipe with a central plunger is used to open the valve. Consequently, hydrant signage is essential, because of their concealed nature.
Painted markers: Usually a white or yellow (sometimes reflective paint) triangle or arrow painted on the road, pointing towards the side of the road the hydrant will be found on. These are most common in old areas, or on new roads where more advanced signs have not been installed. These are almost always coupled with a secondary form of signage.
Hydrant Marker Plates: Found on power poles, fences, or street signs, these are a comprehensive and effective system of identification. The plate consists of several codes: H (potable-water hydrant), RH (recycled/non-potable), P (pathway, where the hydrant cover can be found), R (roadway). The plate is vertically oriented, around 8 cm wide and 15 cm high. It usually faces in the direction of the hydrant. Found on this plate, from top to bottom, are the following features:
The codes listed above, Potable/Non-potable at the top, Path/Roadway on the bottom of the plate.
Below this, a number giving the distance to the hydrant (in meters), then a second number below that giving the size (in millimeters) of the water main.
A black line across the center of the plate indicates the hydrant is found on the opposite side of the road to which the plate is affixed.
Plates for recycled water have a purple background, as well as the RH code, normal potable hydrants are white, with the H code.
Road markers or cat's eyes: Almost exclusively blue, these are placed on one side or the other of the centre line of the road, to indicate on which side of the road the hydrant lies. They are visible for several hundred metres at night in heavy rain, and further in clear conditions.
In Germany the hydrant marker plates follow the style of other marker plates pointing to underground installations. Fire hydrant marker plates have a red border; other water hydrants may have a blue border. A gas hydrant would have a yellow background instead of the white one used for fire hydrants. All of them have a large central T with the installation identification on top of it: an "H" (or, on older plates, "UH") marks a hydrant located in the ground, while an "OH" marks one above ground, followed by the pipe inner diameter in millimetres (with a small 80 mm main common in residential areas). The numbers around the T locate the installation in reference to the plate's position: the number left of the T gives the distance in metres to the left of the sign, the number right of the T the distance in metres to the right, and the number below the T the distance in metres in front of the sign, where a negative number would point to a place behind the sign. The distance numbers are always given to decimetre precision, written with a decimal comma. If it is not a common fire hydrant type, another identification may be used; for example, "300 m³" would point to a cistern from which water can be pumped.
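To make the encoding concrete, here is a small sketch; the field names and example values are my own hypothetical illustration, not an official schema:

```python
from dataclasses import dataclass

@dataclass
class MarkerPlate:
    kind: str         # "H"/"UH": hydrant in the ground; "OH": above ground
    diameter_mm: int  # pipe inner diameter printed after the central T
    left_m: float     # number left of the T: metres to the left of the sign
    right_m: float    # number right of the T: metres to the right of the sign
    front_m: float    # number below the T: metres in front (negative = behind)

def describe(plate: MarkerPlate) -> str:
    where = "underground" if plate.kind in ("H", "UH") else "above ground"
    side = f"{plate.left_m} m left" if plate.left_m else f"{plate.right_m} m right"
    front = "behind" if plate.front_m < 0 else "in front of"
    return (f"{plate.diameter_mm} mm main, hydrant {where}, "
            f"{side}, {abs(plate.front_m)} m {front} the sign")

print(describe(MarkerPlate("H", 100, 0.0, 2.5, 7.3)))
# 100 mm main, hydrant underground, 2.5 m right, 7.3 m in front of the sign
```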
In East Asia (China, Japan and South Korea) and former Socialist countries of Eastern Europe, there are two types of fire hydrants, of which one is on the public ground and the other inside a building. The ones inside a building are installed on a wall. They are big, rectangular boxes that also provide alarms (sirens), a fire extinguisher and, at certain times, emergency kits.
Inspection and maintenance
In most areas, fire hydrants require annual inspections and maintenance; they normally only have a one-year warranty, but some have 5- or even 10-year warranties, although the longer warranty does not remove the need for periodic inspections or maintenance. These inspections are generally performed by the local municipalities or fire departments, but they often do not inspect hydrants that are identified as private. Private hydrants are usually located on larger properties to adequately protect large buildings in case of a fire and in order to comply with the fire code. Such hydrants have met the requirements of insurance underwriters and are often referred to as UL/FM hydrants. Usually, companies are contracted out to inspect private fire hydrants, unless the municipality has undertaken that task.
Some fire hydrant manufacturers recommend lubricating the head mechanism and restoring the head gaskets and O-rings annually in order that the fire hydrant performs the service expected of it, while others have incorporated proprietary features to provide long-term lubrication of the hydrant's operating mechanism. In any case, periodic inspection of lubricants is recommended. Lubrication is generally done with a food-grade non-petroleum lubricant to avoid contamination of the distribution system.
Occasionally a stone or foreign object will mar the seat gasket. In this case, most hydrants have a special seat wrench that allows removal of the seat to replace the gasket or other broken parts without removing the hydrant from the ground. Hydrant extensions are also available for raising a hydrant if the grade around the hydrant changes. Without extending the height, the wrenches to remove caps would not clear and the break flanges for traffic models would not be located correctly in case they were hit. Hydrant repair kits are also available to repair sacrificial parts designed to break when hit by a vehicle.
Many fire departments use the hydrants for flushing out water line sediments. When doing so, they often use a hydrant diffuser, a device that diffuses the water so that it does not damage property and is less dangerous to bystanders than a solid stream. Some diffusers also dechlorinate the water to avoid ground contamination. Hydrants are also sometimes used as entry or exit points for pipe cleaning pigs.
In 2011, Code for America developed an "Adopt a Hydrant" website, which enables volunteers to sign up to shovel out fire hydrants after snowstorms. As of 2014, the system has been implemented in Boston; Providence, Rhode Island; Anchorage, Alaska; and Chicago.
Non-pressurized (dry) Hydrants
In rural areas where municipal water systems are not available, dry hydrants are used to supply water for fighting fires. A dry hydrant is analogous to a standpipe. A dry hydrant is usually an unpressurized, permanently installed pipe that has one end below the water level of a lake or pond. This end usually has a strainer to prevent debris or wildlife, such as fish, from entering the pipe. The other end is above ground and has a hard sleeve connector.
When needed, a pumper fire engine will pump from the lake or pond by drafting water. This is done by vacuuming the air out of the dry hydrant, hard sleeve, and the fire engine pump with a primer. Because lower pressure now exists at the pump intake, atmospheric pressure on the water and the weight of the water forces water into the above-water portion of the dry hydrant, into the hard sleeve, and finally into the pump. This water can then be pumped by the engine's centrifugal pump.
Other types
Water wells are also sometimes classified as fire hydrants if they can supply enough water volume and pressure.
Standpipes are connections for firehoses within a building and serve the same purpose inside larger structures as fire hydrants do outdoors. Standpipes may be "dry" or "wet" (permanently filled with water); a dry standpipe requires an external source of water such as firefighting equipment.
History
Before piped mains supplies, water for firefighting had to be kept in buckets and cauldrons ready for use by 'bucket-brigades' or brought with a horse-drawn fire-pump. From the 16th century, as wooden mains water systems were installed, firefighters would dig down to the pipes and drill a hole for water to fill a “wet well” for the buckets or pumps. This had to be filled and plugged afterwards, hence the common US term for a hydrant, 'fireplug'. A marker would be left to indicate where a 'plug' had already been drilled to enable firefighters to find ready-drilled holes. Later wooden systems had pre-drilled holes and plugs.
When cast iron pipes replaced the wood, permanent underground access points were included for the fire fighters. Some countries provide access covers to these points, while others attach fixed above-ground hydrants; the first cast-iron ones were patented in 1801 by Frederick Graff, then chief engineer of the Philadelphia Water Works. Invention since then has targeted problems such as tampering, freezing, connection, reliability etc.
See also
Active fire protection
Birdsill Holly
Fire extinguisher
Fire hose
Fire protection
Fire sprinkler
Flushing hydrant
Hydrant wrench
Portable water tank
References
Sources
Thesaurus.com, "fire hydrant", https://www.thesaurus.com/browse/fire-hydrant (retrieved 29 May 2024)
Further reading
— Issued to John Jorden on September 8, 1838, for "...a new and useful improvement in Fire-Plugs and Hydrants...". This is an early wooden bodied hydrant, the earliest hydrant patent extant; the patent office itself burned to the ground in 1836, taking with it all prior hydrant patents.
— Issued to Richard Stileman on January 20, 1863. An early iron hydrant, believed the first patented hydrant with the larger size steamer port to supply steam fire engines. Manufactured and marketed as the Stileman hydrant. See also Stileman page at FireHydrant.org.
— Issued to Zebulon Erastus Coffin on July 21, 1868. This is a cast iron hydrant very similar to modern fire hydrants, it was produced by Boston Machine Co. See also Boston Machine page at FireHydrant.org.
— Issued to Birdsill Holly on September 14, 1869. See also Holly page at FireHydrant.org.
Wohleber, Curt. "The Fire Hydrant". In The American Heritage of Invention & Technology. Winter 2002. Provides a history with dates of fire hydrant development. (Archived from the original on April 28, 2010)
External links
American inventions
Hydrant
Water industry
Street furniture
Infrastructure
Firefighting
Water supply
Urban planning
Public safety | Fire hydrant | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,461 | [
"Hydrology",
"Construction",
"Urban planning",
"Water industry",
"Environmental engineering",
"Water supply",
"Infrastructure",
"Architecture"
] |
281,260 | https://en.wikipedia.org/wiki/Protogalaxy | In physical cosmology, a protogalaxy, which could also be called a "primeval galaxy", is a cloud of gas which is forming into a galaxy. It is believed that the rate of star formation during this period of galactic evolution will determine whether a galaxy is a spiral or elliptical galaxy; a slower star formation tends to produce a spiral galaxy. The smaller clumps of gas in a protogalaxy form into stars.
The term "protogalaxy" itself is generally accepted to mean "Progenitors of the present day (normal) galaxies, in the early stages of formation." However, the "early stages of formation" is not a clearly defined phrase. It could be defined as: "The first major burst of star formation in a progenitor of a present day elliptical galaxy"; "The peak merging epoch of dark halos of the fragments which assemble to produce an average galaxy today"; "A still gaseous body before any star formation has taken place."; or " an over-dense region of dark matter in the very early universe, destined to become gravitationally bound and to collapse."
Formation
It is thought that the early universe began with a nearly uniform distribution (each particle an equal distance from the next) of matter and dark matter. The dark matter then began to clump together under gravitational attraction due to the initial density perturbation spectrum caused by quantum fluctuations. This derives from Heisenberg's uncertainty principle which shows that there can be tiny temporary changes in the amount of energy in empty space. Particle/antiparticle pairs can form from this energy through mass–energy equivalence, and gravitational pull causes other nearby particles to move towards it, disturbing the even distribution and creating a centre of gravity, pulling nearby particles closer. When this happens at the universe's present size it is negligible, but the state of these tiny fluctuations as the universe began expanding from a single point left an impression which scaled up as the universe expanded, resulting in large areas of increased density. The gravity of these denser clumps of dark matter then caused nearby matter to start falling into the denser region. This sort of process was reportedly observed and analysed by Nilsson et al. in 2006. This resulted in the formation of clouds of gas, predominantly hydrogen, and the first stars began to form within these clouds. These clouds of gas and early stars, many times smaller than our galaxy, were the first protogalaxies.
The established theory is that the groups of small protogalaxies were attracted together by gravity and collided, which resulted in the formation of the much larger "adult" galaxies we have today. This follows the process of hierarchical assembly, which is an ongoing process where larger bodies are continually formed from the merging of smaller ones.
Properties
Composition
Since there had been no previous star formation to create other elements, protogalaxies would have been made up almost entirely of hydrogen and helium. The hydrogen would bond to form H2 molecules, with some exceptions. This would change as star formation began and produced more elements through the process of nuclear fusion.
Mechanics
Once a protogalaxy begins to form, all particles bound by its gravity begin to free fall towards it. The time taken for this free-fall to conclude can be approximated using the free-fall equations. Most galaxies have completed this free-fall stage to become stable elliptical or disk galaxies, the disks taking longer to fully form. The formation of galaxy clusters takes much longer and is still in progress now. This stage is also where galaxies acquire most of their angular momentum. A protogalaxy acquires this due to gravitational influence from neighbouring dense clumps in the early universe, and the further the gas is away from the centre, the more spin it gets.
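For a rough sense of scale, the standard spherical free-fall time $t_{ff} = \sqrt{3\pi/(32 G \rho)}$ can be evaluated for an illustrative cloud density; the density below is my own assumption, not a figure from the article:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
rho = 1e-22              # assumed mean gas density, kg m^-3
t_ff = math.sqrt(3 * math.pi / (32 * G * rho))   # spherical free-fall time, s

myr = 3.156e13           # seconds per million years
print(round(t_ff / myr), "Myr")   # ~210 Myr for this assumed density
```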
Luminosity
The luminosity of protogalaxies comes from two sources. First and foremost is the radiation from the nuclear fusion of hydrogen into helium in early stars. This early burst of star formation is thought to have made a protogalaxy's luminosity comparable to a present-day starburst galaxy or a quasar. The other is the release of excess gravitational binding energy.
The primary wavelength expected from a protogalaxy is a variety of UV called Lyman-alpha, which is the wavelength emitted by hydrogen gas when it is ionised by radiation from a star.
Detection
Protogalaxies can theoretically still be seen today, as the light from the farthest reaches of the universe takes a very long time to reach Earth; from some regions it is old enough that we observe them at the stage when they were populated by protogalaxies.
There have been many attempts to find protogalaxies with telescopes over the last 30 years because of the value of such a discovery in confirming how galaxies form, but the sheer distance any light would have to travel for it to be old enough to come from a protogalaxy is very large. This, coupled with the fact that the Lyman-alpha wavelength is quite readily absorbed by dust, made some astronomers think protogalaxies may be too faint to detect.
In 1996, a protogalaxy candidate was discovered by Yee et al. using the Canadian Network for Observational Cosmology (CNOC). The object was a disk-like galaxy at high redshift with a very high luminosity. It was later argued that the extraordinary luminosity was the result of gravitational lensing by a foreground galactic cluster.
In 2006, K. Nilsson et al. reported finding a "blob" emitting Lyman alpha UV radiation. Analysis concluded that this was a giant cloud of hydrogen gas falling onto a clump of dark matter in the early universe, creating a protogalaxy.
In 2007, Michael Rauch et al. were using the VLT to search for a signal from intergalactic gas, when they spotted dozens of discrete objects emitting large amounts of the Lyman-alpha type UV radiation. They concluded that these 27 objects were examples of protogalaxies from 11 billion years ago.
See also
Big Bang
Galaxy formation and evolution
GOODS-N-774
References
External links
Universe-Review:Formation and Evolution of Galaxy
Galaxies
Protogalaxies
Physical cosmology | Protogalaxy | [
"Physics",
"Astronomy"
] | 1,273 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
19,826,297 | https://en.wikipedia.org/wiki/Electrohydraulic%20servo%20valve | An electrohydraulic servo valve (EHSV) is an electrically-operated valve that controls how hydraulic fluid is sent to an actuator. Servo valves are often used to control powerful hydraulic cylinders with a very small electrical signal. Servo valves can provide precise control of position, velocity, pressure, and force with good post-movement damping characteristics.
History of electrohydraulic servo valves
The electrohydraulic servo valve first appeared in World War II. The EHSVs in use during the 1940s were characterized by poor accuracy and slow response times due to the inability to rapidly convert electrical signals into hydraulic flows. The first two-stage servo valve used a solenoid to actuate a first-stage spool which in turn drove a rotating main stage. The servo valves of the World War II era were similar to this, using a solenoid to drive a spool valve.
Advancement of EHSVs took off in the 1950s, largely due to the adoption of permanent magnet torque motors as the first stage (as opposed to solenoids). This resulted in greatly improved response times and a reduction in power used to control the valves.
Description
Types
Electrohydraulic servo valves may consist of one or more stages. A single-stage servo valve uses a torque motor to directly position a spool valve. Single-stage servo valves suffer from limitations in flow capability and stability due to torque motor power requirements. Two-stage servo valves may use flapper, jet pipe, or deflector jet valves as hydraulic amplifier first stages to position a second-stage spool valve. This design results in significant increases in servo valve flow capability, stability, and force output. Similarly, three-stage servo valves may use an intermediate stage spool valve to position a larger third stage spool valve. Three-stage servo valves are limited to very high power applications, where significant flows are required.
Furthermore, two-stage servo valves may be classified by the type of feedback used for the second stage; which may be spool position, load pressure, or load flow feedback. Most commonly, two-stage servo valves use position feedback; which may further be classified by direct feedback, force feedback, or spring centering.
Control
A servo valve receives pressurized hydraulic fluid from a source, typically a hydraulic pump. It then transfers the fluid to a hydraulic cylinder in a closely controlled manner. Typically, the valve moves the spool in proportion to the electrical signal that it receives, indirectly controlling flow rate. Simple hydraulic control valves are binary: they are either on or off. Servo valves are different in that they can continuously vary the flow they supply from zero up to their rated maximum flow, or until the output pressure reaches the supplied pressure. More complex servo valves can control other parameters. For instance, some have internal feedback so that the input signal effectively controls flow or output pressure, rather than spool position.
Servo valves are often used in feedback control, where the position of or force on a hydraulic cylinder is measured and fed back into a controller that varies the signal sent to the servo valve. This allows very precise control of the cylinder.
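A toy closed-loop sketch of this arrangement (all parameter values below are illustrative assumptions, not data from the article): a proportional controller commands a servo valve whose flow is proportional to the signal, and the cylinder integrates flow into position.

```python
dt = 0.001               # s, integration step
target, pos = 0.10, 0.0  # m, commanded and actual piston position
Kp = 50.0                # controller gain, signal units per metre of error
Kv = 2.0e-5              # valve gain, m^3/s of flow per unit signal
A = 1.0e-3               # m^2, piston area

for _ in range(5000):    # simulate 5 s
    signal = max(-10.0, min(10.0, Kp * (target - pos)))  # saturated valve signal
    flow = Kv * signal                                   # flow through the valve
    pos += (flow / A) * dt                               # piston velocity = flow/area

print(round(pos, 4))     # ~0.0993, settling towards the 0.1 m target
```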
Examples of usage
Manufacturing
One example of servo valve use is in blow molding where the servo valve controls the wall thickness of extruded plastic making up the bottle or container by use of a deformable die. The mechanical feedback has been replaced by an electric feedback with a position transducer. Integrated electronics close the position loop for the spool. These valves are suitable for electrohydraulic position, velocity, pressure or force control systems with extremely high dynamic response requirements.
Aircraft
Servo valves are used to regulate the flow of fuel into a turbofan engine governed by FADEC. In fly-by-wire aircraft the control surfaces are often moved by servo valves connected to hydraulic cylinders. The signals to the servo valves are controlled by a flight control computer that receives commands from the pilot and monitors the flight of the aircraft.
References
Control devices
Valves
Hydraulics | Electrohydraulic servo valve | [
"Physics",
"Chemistry",
"Engineering"
] | 837 | [
"Control devices",
"Physical systems",
"Valves",
"Hydraulics",
"Control engineering",
"Piping",
"Fluid dynamics"
] |
19,827,148 | https://en.wikipedia.org/wiki/Elasticity%20coefficient | In chemistry, the rate of a chemical reaction is influenced by many different factors, such as temperature, pH, reactant, the concentration of products, and other effectors. The degree to which these factors change the reaction rate is described by the elasticity coefficient. This coefficient is defined as follows:
$$\varepsilon^v_s = \frac{\partial v}{\partial s}\,\frac{s}{v}$$
where $v$ denotes the reaction rate and $s$ denotes the substrate concentration. Be aware that the notation will use lowercase roman letters, such as $s$, to indicate concentrations.
The partial derivative in the definition indicates that the elasticity is measured with respect to changes in a factor S while keeping all other factors constant. The most common factors include substrates, products, enzyme, and effectors. The scaling of the coefficient ensures that it is dimensionless and independent of the units used to measure the reaction rate and magnitude of the factor. The elasticity coefficient is an integral part of metabolic control analysis and was introduced in the early 1970s and possibly earlier by Henrik Kacser and Burns in Edinburgh and Heinrich and Rapoport in Berlin.
The elasticity concept has also been described by other authors, most notably Savageau in Michigan and Clarke at Edmonton. In the late 1960s Michael Savageau developed an innovative approach called biochemical systems theory that uses power-law expansions to approximate the nonlinearities in biochemical kinetics. The theory is very similar to metabolic control analysis and has been very successfully and extensively used to study the properties of different feedback and other regulatory structures in cellular networks. The power-law expansions used in the analysis invoke coefficients called kinetic orders, which are equivalent to the elasticity coefficients.
In the early 1970s, Bruce Clarke developed a sophisticated theory for analyzing dynamic stability in chemical networks. As part of his analysis, Clarke also introduced the notion of kinetic orders and a power-law approximation that was somewhat similar to Savageau's power-law expansions. Clarke's approach relied heavily on certain structural characteristics of networks, called extreme currents (also called elementary modes in biochemical systems). Clarke's kinetic orders are also equivalent to elasticities.
Elasticities can also be usefully interpreted as the means by which signals propagate up or down a given pathway.
The fact that different groups independently introduced the same concept implies that elasticities, or their equivalent, kinetic orders, are most likely a fundamental concept in the analysis of complex biochemical or chemical systems.
Calculating elasticity coefficients
Elasticity coefficients can be calculated either algebraically or by numerical means.
Algebraic calculation of elasticity coefficients
Given the definition of the elasticity coefficient in terms of a partial derivative, it is possible, for example, to determine the elasticity of an arbitrary rate law by differentiating the rate law by the independent variable and scaling. For example, the elasticity coefficient for a mass-action rate law such as:
$$v = k \prod_i s_i^{n_i}$$
where $v$ is the reaction rate, $k$ the reaction rate constant, $s_i$ the $i$th chemical species involved in the reaction and $n_i$ the $i$th reaction order, then the elasticity, $\varepsilon^v_{s_i}$, can be obtained by differentiating the rate law with respect to $s_i$ and scaling:
$$\varepsilon^v_{s_i} = \frac{\partial v}{\partial s_i}\,\frac{s_i}{v} = n_i\,\frac{v}{s_i}\,\frac{s_i}{v} = n_i$$
That is, the elasticity for a mass-action rate law is equal to the order of reaction of the species.
For example, for a reaction with rate $v = k\,a^2$, where $a$ is the concentration of species A, the elasticity of A can be evaluated using:
$$\varepsilon^v_a = \frac{\partial v}{\partial a}\,\frac{a}{v} = 2ka\,\frac{a}{k a^2} = 2$$
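The same scaling can be checked symbolically; a sketch using sympy (my own, not from the article):

```python
import sympy as sp

k, a, n = sp.symbols('k a n', positive=True)
v = k * a**n                      # mass-action rate law
elasticity = sp.simplify(sp.diff(v, a) * a / v)
print(elasticity)                 # n: the kinetic order
print(elasticity.subs(n, 2))      # 2, matching the v = k*a**2 example above
```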
Elasticities can also be derived for more complex rate laws such as the Michaelis–Menten rate law. If
$$v = \frac{V_{\max}\, s}{K_m + s}$$
then it can easily be shown that
$$\varepsilon^v_s = \frac{K_m}{K_m + s}$$
This equation illustrates the idea that elasticities need not be constants (as with mass-action laws) but can be a function of the reactant concentration. In this case, the elasticity approaches unity at low reactant concentration (s) and zero at high reactant concentration.
For the reversible Michaelis–Menten rate law:
$$v = \frac{(V_f/K_S)\left(s - p/K_{eq}\right)}{1 + s/K_S + p/K_P}$$
where $V_f$ is the forward maximal velocity, $K_S$ the forward Michaelis constant, $K_{eq}$ the equilibrium constant and $K_P$ the reverse Michaelis constant, two elasticity coefficients can be calculated, one with respect to substrate, S, and another with respect to product, P. Thus:
$$\varepsilon^v_s = \frac{1}{1 - \Gamma/K_{eq}} - \frac{s/K_S}{1 + s/K_S + p/K_P}$$
$$\varepsilon^v_p = \frac{-\Gamma/K_{eq}}{1 - \Gamma/K_{eq}} - \frac{p/K_P}{1 + s/K_S + p/K_P}$$
where $\Gamma$ is the mass-action ratio, that is $\Gamma = p/s$. Note that when p = 0, the equations reduce to the case for the irreversible Michaelis–Menten law.
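A quick finite-difference spot-check of the substrate elasticity above (a sketch; the parameter values are arbitrary assumptions):

```python
def v(s, p, Vf=1.0, Ks=0.5, Kp=2.0, Keq=10.0):
    # reversible Michaelis-Menten rate law from the text
    return (Vf / Ks) * (s - p / Keq) / (1 + s / Ks + p / Kp)

s, p, Ks, Kp, Keq = 1.0, 0.2, 0.5, 2.0, 10.0
h = 1e-6
numeric = (v(s + h, p) - v(s - h, p)) / (2 * h) * s / v(s, p)

gamma = p / s                     # mass-action ratio
analytic = 1 / (1 - gamma / Keq) - (s / Ks) / (1 + s / Ks + p / Kp)
print(numeric, analytic)          # both ~0.3752
```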
As a final example, consider the Hill equation:
$$v = \frac{V_{\max}\, s^n}{K^n + s^n}$$
where n is the Hill coefficient and $K$ is the half-saturation coefficient (cf. Michaelis–Menten rate law), then the elasticity coefficient is given by:
$$\varepsilon^v_s = \frac{n\,K^n}{K^n + s^n} = \frac{n}{1 + (s/K)^n}$$
Note that at low concentrations of S the elasticity approaches n. At high concentrations of S the elasticity approaches zero. This means the elasticity is bounded between zero and the Hill coefficient.
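Numerically, the bound is easy to see (a small sketch of mine):

```python
n, K = 4, 1.0
for s in (1e-3, 1.0, 1e3):
    print(s, n / (1 + (s / K) ** n))
# ~4.0 at low s, n/2 = 2.0 at s = K, ~0.0 at high s
```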
Summation property of elasticity coefficients
The elasticities for a reversible uni-uni enzyme-catalyzed reaction were previously given by:
$$\varepsilon^v_s = \frac{1}{1 - \Gamma/K_{eq}} - \frac{s/K_S}{1 + s/K_S + p/K_P} \qquad \varepsilon^v_p = \frac{-\Gamma/K_{eq}}{1 - \Gamma/K_{eq}} - \frac{p/K_P}{1 + s/K_S + p/K_P}$$
An interesting result can be obtained by evaluating the sum $\varepsilon^v_s + \varepsilon^v_p$. This can be shown to equal:
$$\varepsilon^v_s + \varepsilon^v_p = \frac{1}{1 + s/K_S + p/K_P}$$
Two extremes can be considered. At high saturation ($s/K_S + p/K_P \gg 1$), the right-hand term tends to zero so that:
$$\varepsilon^v_s \approx -\varepsilon^v_p$$
That is, the absolute magnitudes of the substrate and product elasticities tend to equal each other. However, it is unlikely that a given enzyme will have both substrate and product concentrations much greater than their respective Kms. A more plausible scenario is when the enzyme is working under sub-saturating conditions ($s \ll K_S$, $p \ll K_P$). Under these conditions we obtain the simpler result:
$$\varepsilon^v_s + \varepsilon^v_p \approx 1$$
Expressed in a different way we can state:
$$\varepsilon^v_s = 1 + |\varepsilon^v_p|$$
That is, the absolute value for the substrate elasticity will be greater than the absolute value for the product elasticity. This means that a substrate will have a greater influence over the forward reaction rate than the corresponding product.
This result has important implications for the distribution of flux control in a pathway with sub-saturated reaction steps. In general, a perturbation near the start of a pathway will have more influence over the steady state flux than steps downstream. This is because a perturbation that travels downstream is determined by all the substrate elasticities, whereas a perturbation that has to travel upstream is determined by the product elasticities. Since we have seen that the substrate elasticities tend to be larger than the product elasticities, perturbations traveling downstream will be less attenuated than perturbations traveling upstream. The net effect is that flux control tends to be more concentrated at upstream steps compared to downstream steps.
The table below summarizes the extreme values for the elasticities given a reversible Michaelis-Menten rate law. Following Westerhoff et al. the table is split into four cases that include one 'reversible' type, and three 'irreversible' types.
Elasticity with respect to enzyme concentration
The elasticity for an enzyme catalyzed reaction with respect to the enzyme concentration has special significance. The Michaelis model of enzyme action means that the reaction rate for an enzyme catalyzed reaction is a linear function of enzyme concentration. For example, in the irreversible Michaelis rate law given below, the maximal velocity $V_{\max}$ is explicitly given by the product of the catalytic constant $k_{cat}$ and the total enzyme concentration $e_t$:
$$v = \frac{k_{cat}\, e_t\, s}{K_m + s}$$
In general we can express this relationship as the product of the enzyme concentration and a saturation function $f(s)$:
$$v = e_t\, f(s)$$
This form is applicable to many enzyme mechanisms. The elasticity coefficient can be derived as follows:
$$\varepsilon^v_{e_t} = \frac{\partial v}{\partial e_t}\,\frac{e_t}{v} = f(s)\,\frac{e_t}{e_t\, f(s)} = 1$$
It is this result that gives rise to the control coefficient summation theorems.
Numerical calculation of elasticity coefficients
Elasticity coefficients can also be computed numerically, something that is often done in simulation software.
For example, a small change (say 5%) can be made to the chosen reactant concentration, and the change in the reaction rate recorded. To illustrate this, assume that the reference reaction rate is $v_o$ and the reference reactant concentration is $s_o$. If we increase the reactant concentration by $\Delta s$ and record the new reaction rate as $v_1$, then the elasticity can be estimated by using Newton's difference quotient:
$$\varepsilon^v_s \approx \frac{v_1 - v_o}{\Delta s}\,\frac{s_o}{v_o}$$
A much better estimate for the elasticity can be obtained by doing two separate perturbations in $s_o$: one perturbation to increase $s_o$ and another to decrease $s_o$. In each case, the new reaction rate is recorded; this is called the two-point estimation method. For example, if $v_1$ is the reaction rate when we increase $s_o$ by $\Delta s$, and $v_2$ is the reaction rate when we decrease $s_o$ by $\Delta s$, then we can use the following two-point formula to estimate the elasticity:
$$\varepsilon^v_s \approx \frac{v_1 - v_2}{2\,\Delta s}\,\frac{s_o}{v_o}$$
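As a sketch, applying the two-point formula to an irreversible Michaelis–Menten law, where the exact elasticity $K_m/(K_m + s)$ is known (the numbers are illustrative):

```python
Vmax, Km, s = 1.0, 4.0, 2.0          # illustrative values
v = lambda x: Vmax * x / (Km + x)    # irreversible Michaelis-Menten

ds = 0.05 * s                        # a 5% perturbation each way
estimate = (v(s + ds) - v(s - ds)) / (2 * ds) * s / v(s)
print(estimate, Km / (Km + s))       # ~0.6668 vs exact 0.6667
```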
Interpretation of the log form
Consider a variable $y$ to be some function of $x$, that is $y = f(x)$. If $x$ increases from $x$ to $x + h$ then the change in the value of $y$ will be given by $f(x+h) - f(x)$. The proportional change, however, is given by:
$$\frac{f(x+h) - f(x)}{f(x)}$$
The rate of proportional change at the point $x$ is given by the above expression divided by the step change in the $x$ value, namely $h$:
$$\text{Rate of proportional change} = \frac{f(x+h) - f(x)}{h\,f(x)}$$
Using calculus, we know that
$$\lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = \frac{df}{dx},$$
therefore the rate of proportional change equals:
$$\frac{1}{f(x)}\,\frac{df}{dx} = \frac{d \ln f(x)}{dx}$$
This quantity serves as a measure of the rate of proportional change of the function $f$. Just as $\frac{df}{dx}$ measures the gradient of the curve plotted on a linear scale, $\frac{d \ln f(x)}{dx}$ measures the slope of the curve when plotted on a semi-logarithmic scale, that is the rate of proportional change. For example, a value of $0.05$ means that the curve increases at $5\%$ per unit $x$.
The same argument can be applied to the case when we plot a function on both $x$ and $y$ logarithmic scales. In such a case, the following result is true:
$$\frac{d \ln y}{d \ln x} = \frac{dy}{dx}\,\frac{x}{y}$$
Differentiating in log space
An approach that is amenable to algebraic calculation by computer algebra methods is to differentiate in log space. Since the elasticity can be defined logarithmically, that is:
$$\varepsilon^v_s = \frac{\partial \ln v}{\partial \ln s}$$
differentiating in log space is an obvious approach. Logarithmic differentiation is particularly convenient in algebra software such as Mathematica or Maple, where logarithmic differentiation rules can be defined.
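The same trick works in open-source tools; a sympy sketch (my own, not from the article) substitutes $s = e^x$ so that differentiation with respect to $x$ is differentiation in log space:

```python
import sympy as sp

s, x, Vmax, Km = sp.symbols('s x V_max K_m', positive=True)
v = Vmax * s / (Km + s)

# substitute s = exp(x), take d(log v)/dx, then rewrite back in terms of s
log_elasticity = sp.diff(sp.log(v.subs(s, sp.exp(x))), x)
print(sp.simplify(log_elasticity).subs(sp.exp(x), s))   # K_m/(K_m + s)
```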
A more detailed examination and the rules differentiating in log space can be found at Elasticity of a function.
Elasticity matrix
The unscaled elasticities can be depicted in matrix form, called the unscaled elasticity matrix, $\mathcal{E}$. Given a network with $m$ molecular species and $n$ reactions, the unscaled elasticity matrix is defined as the $n \times m$ matrix with entries:
$$\mathcal{E}_{ij} = \frac{\partial v_i}{\partial s_j}$$
Likewise, it is also possible to define the matrix of scaled elasticities:
$$\boldsymbol{\varepsilon}_{ij} = \frac{\partial v_i}{\partial s_j}\,\frac{s_j}{v_i}$$
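A finite-difference sketch of both matrices for a toy two-species, two-reaction system (the rate laws below are my own illustration, not from the article):

```python
import numpy as np

def rates(s):
    # v1 = 2*s1 (mass-action), v2 = 3*s1/(1 + s2) (inhibited by s2)
    s1, s2 = s
    return np.array([2.0 * s1, 3.0 * s1 / (1.0 + s2)])

s0 = np.array([1.0, 0.5])
v0 = rates(s0)
h = 1e-6

E = np.zeros((2, 2))          # unscaled: E[i, j] = dv_i/ds_j
for j in range(2):
    ds = np.zeros(2)
    ds[j] = h
    E[:, j] = (rates(s0 + ds) - rates(s0 - ds)) / (2 * h)

scaled = E * s0[np.newaxis, :] / v0[:, np.newaxis]   # scaled elasticities
print(scaled)                 # ~[[1, 0], [1, -0.3333]]
```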
See also
Control coefficient (biochemistry)
Metabolic control analysis
References
Further reading
Chemical kinetics
Biochemistry methods
Metabolism
Mathematical and theoretical biology
Systems biology | Elasticity coefficient | [
"Chemistry",
"Mathematics",
"Biology"
] | 2,052 | [
"Biochemistry methods",
"Chemical reaction engineering",
"Mathematical and theoretical biology",
"Applied mathematics",
"Cellular processes",
"Biochemistry",
"Chemical kinetics",
"Metabolism",
"Systems biology"
] |
19,828,158 | https://en.wikipedia.org/wiki/Trimethylsilyl%20trifluoromethanesulfonate | Trimethylsilyl trifluoromethanesulfonate (TMSOTf) is an organosilicon compound with the formula (CH3)3SiOSO2CF3. It is a colorless, moisture-sensitive liquid. It is the trimethylsilyl ester of trifluoromethanesulfonic acid (triflic acid). It is mainly used to activate ketones and aldehydes in organic synthesis.
Reactions
TMSOTf is quite sensitive toward hydrolysis:
(CH3)3SiOSO2CF3 + H2O → (CH3)3SiOH + CF3SO3H
It is far more electrophilic than trimethylsilyl chloride.
Related to its tendency to hydrolyze, TMSOTf is effective for the silylation of alcohols:
ROH + (CH3)3SiOSO2CF3 → ROSi(CH3)3 + CF3SO3H
A common use of TMSOTf is for the preparation of silyl enol ethers. One example involves the synthesis of the silyl enol ether of camphor.
It was also used in Takahashi Taxol total synthesis and in chemical glycosylation reactions.
Trimethylsilyl trifluoromethanesulfonate has a variety of other specialized uses. It has been used to install tert-alkyl groups on phosphine (R = alkyl):
PH3 + R3C–OAc + Me3SiOTf → [(R3C)2PH2]OTf
Deprotection of Boc-protected amines can be achieved using trimethylsilyl trifluoromethanesulfonate and triethylamine or 2,6-lutidine.
TMSOTf is also a useful reagent to replace metal-halogen bonds with a covalent M-O(SO2CF3) bond, the by-product being the highly volatile TMSCl which is easily removed.
References
Trimethylsilyl compounds
Triflate esters | Trimethylsilyl trifluoromethanesulfonate | [
"Chemistry"
] | 363 | [
"Functional groups",
"Trimethylsilyl compounds"
] |
6,360,807 | https://en.wikipedia.org/wiki/Nucleic%20acid%20metabolism | Nucleic acid metabolism is a collective term that refers to the variety of chemical reactions by which nucleic acids (DNA and/or RNA) are either synthesized or degraded. Nucleic acids are polymers (so-called "biopolymers") made up of a variety of monomers called nucleotides. Nucleotide synthesis is an anabolic mechanism generally involving the chemical reaction of phosphate, pentose sugar, and a nitrogenous base. Degradation of nucleic acids is a catabolic reaction and the resulting parts of the nucleotides or nucleobases can be salvaged to recreate new nucleotides. Both synthesis and degradation reactions require multiple enzymes to facilitate the event. Defects or deficiencies in these enzymes can lead to a variety of diseases.
Synthesis of nucleotides
Nucleotides are the monomers which polymerize into nucleic acids. All nucleotides contain a sugar, a phosphate, and a nitrogenous base. The bases found in nucleic acids are either purines or pyrimidines. In the more complex multicellular animals, they are both primarily produced in the liver but the two different groups are synthesized in different ways. However, all nucleotide synthesis requires the use of phosphoribosyl pyrophosphate (PRPP) which donates the ribose and phosphate necessary to create a nucleotide.
Purine synthesis
Adenine and guanine are the two nucleotides classified as purines. In purine synthesis, PRPP is turned into inosine monophosphate, or IMP. Production of IMP from PRPP requires glutamine, glycine, aspartate, and 6 ATP, among other things. IMP is then converted to AMP (adenosine monophosphate) using GTP and aspartate, which is converted into fumarate. While IMP can be directly converted to AMP, synthesis of GMP (guanosine monophosphate) requires an intermediate step, in which NAD+ is used to form the intermediate xanthosine monophosphate, or XMP. XMP is then converted into GMP by using the hydrolysis of 1 ATP and the conversion of glutamine to glutamate. AMP and GMP can then be converted into ATP and GTP, respectively, by kinases that add additional phosphates.
ATP stimulates production of GTP, while GTP stimulates production of ATP. This cross regulation keeps the relative amounts of ATP and GTP the same. Excess of either nucleotide could increase the likelihood of DNA mutations, where the wrong purine nucleotide is inserted.
Lesch–Nyhan syndrome is caused by a deficiency in hypoxanthine-guanine phosphoribosyltransferase or HGPRT, the enzyme that catalyzes the reversible reaction of producing guanine from GMP. This is a sex-linked congenital defect that causes overproduction of uric acid along with mental retardation, spasticity, and an urge to self-mutilate.
Pyrimidine synthesis
Pyrimidine nucleosides include cytidine, uridine, and thymidine. The synthesis of any pyrimidine nucleotide begins with the formation of uridine. This reaction requires aspartate, glutamine, bicarbonate, and 2 ATP molecules (to provide energy), as well as PRPP which provides ribose-monophosphate. Unlike in purine synthesis, the sugar/phosphate group from PRPP is not added to the nitrogenous base until towards the end of the process. After synthesizing uridine-monophosphate, it can react with 2 ATP to form uridine-triphosphate or UTP. UTP can be converted to CTP (cytidine-triphosphate) in a reaction catalyzed by CTP synthetase. Thymidine synthesis first requires reduction of the uridine to deoxyuridine (see next section), before the base can be methylated to produce thymidine.
ATP, a purine nucleotide, is an activator of pyrimidine synthesis, while CTP, a pyrimidine nucleotide, is an inhibitor of pyrimidine synthesis. This regulation helps to keep the purine/pyrimidine amounts similar, which is beneficial because equal amounts of purines and pyrimidines are required for DNA synthesis.
Deficiencies of enzymes involved in pyrimidine synthesis can lead to the genetic disease orotic aciduria, which causes excessive excretion of orotic acid in the urine.
Converting nucleotides to deoxynucleotides
Nucleotides are initially made with ribose as the sugar component, which is a feature of RNA. DNA, however, requires deoxyribose, which is missing the 2'-hydroxyl (-OH group) on the ribose. The reaction to remove this -OH is catalyzed by ribonucleotide reductase. This enzyme converts NDPs (nucleoside-diphosphate) to dNDPs (deoxynucleoside-diphosphate). The nucleotides must be in the diphosphate form for the reaction to occur.
In order to synthesize thymidine, a component of DNA which only exists in the deoxy form, uridine is converted to deoxyuridine (by ribonucleotide reductase), and then is methylated by thymidylate synthase to create thymidine.
Degradation of nucleic acids
The breakdown of DNA and RNA occurs continuously in the cell. Purine and pyrimidine nucleosides can either be degraded to waste products and excreted or can be salvaged as nucleotide components.
Pyrimidine catabolism
Cytosine and uracil are converted into beta-alanine and later to malonyl-CoA, which is needed for fatty acid synthesis, among other things. Thymine, on the other hand, is converted into β-aminoisobutyric acid, which is then used to form methylmalonyl-CoA. The leftover carbon skeletons such as acetyl-CoA and succinyl-CoA can then be oxidized by the citric acid cycle. Pyrimidine degradation ultimately ends in the formation of ammonium, water, and carbon dioxide. The ammonium can then enter the urea cycle, which occurs in the cytosol and the mitochondria of cells.
Pyrimidine bases can also be salvaged. For example, the uracil base can be combined with ribose-1-phosphate to create uridine monophosphate or UMP. A similar reaction can also be done with thymine and deoxyribose-1-phosphate.
Deficiencies in enzymes involved in pyrimidine catabolism can lead to diseases such as dihydropyrimidine dehydrogenase deficiency, which has negative neurological effects.
Purine catabolism
Purine degradation takes place mainly in the liver of humans and requires an assortment of enzymes to degrade purines to uric acid. First, the nucleotide will lose its phosphate through 5'-nucleotidase. The nucleoside, adenosine, is then deaminated and hydrolyzed to form hypoxanthine via adenosine deaminase and nucleosidase respectively. Hypoxanthine is then oxidized to form xanthine and then uric acid through the action of xanthine oxidase. The other purine nucleoside, guanosine, is cleaved to form guanine. Guanine is then deaminated via guanine deaminase to form xanthine which is then converted to uric acid. Oxygen is the final electron acceptor in the degradation of both purines. Uric acid is then excreted from the body in different forms depending on the animal.
Free purine and pyrimidine bases that are released in the cell are typically transported across cell membranes and salvaged to create more nucleotides via nucleotide salvage.
For example, adenine + PRPP → AMP + PPi. This reaction requires the enzyme adenine phosphoribosyltransferase. Free guanine is salvaged in the same way, except it requires hypoxanthine-guanine phosphoribosyltransferase.
Defects in purine catabolism can result in a variety of diseases including gout, which stems from an accumulation of uric acid crystals in various joints, and adenosine deaminase deficiency, which causes immunodeficiency.
Interconversion of nucleotides
Once the nucleotides are synthesized they can exchange phosphates among one another in order to create mono-, di-, and tri-phosphate molecules. The conversion of a nucleoside-diphosphate (NDP) to a nucleoside-triphosphate (NTP) is catalyzed by nucleoside diphosphate kinase, which uses ATP as the phosphate donor. Similarly, nucleoside-monophosphate kinase carries out the phosphorylation of nucleoside-monophosphates. Adenylate kinase is a specific nucleotide kinase used for regulating cellular energy fluctuations by the interconversion of 2ADP ⇔ ATP + AMP.
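The adenylate kinase equilibrium lends itself to a short worked example. The Python sketch below (a minimal illustration, not from any cited source) solves 2 ADP ⇌ ATP + AMP for the equilibrium concentrations starting from pure ADP; the equilibrium constant of 1 is an assumed round value, since the true constant depends on Mg2+ and ionic conditions.

```python
import math

def adenylate_kinase_equilibrium(adp0, k_eq=1.0):
    """Equilibrium of 2 ADP <=> ATP + AMP starting from pure ADP.

    With [ATP] = [AMP] = x and [ADP] = adp0 - 2x, the mass-action law
    k_eq = x**2 / (adp0 - 2x)**2 gives x = adp0*sqrt(k_eq)/(1 + 2*sqrt(k_eq)).
    Concentrations are in arbitrary units (e.g. mM).
    """
    s = math.sqrt(k_eq)
    x = adp0 * s / (1 + 2 * s)
    return {"ATP": x, "AMP": x, "ADP": adp0 - 2 * x}

# With k_eq = 1, the three pools equalize at one third of the initial ADP:
print(adenylate_kinase_equilibrium(1.0))
```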
See also
Carbohydrate metabolism
DNA
Nucleic acid
Protein metabolism
Purine nucleotide cycle
RNA
References
External links
Nucleic Acids Book (free online book on the chemistry and biology of nucleic acids)
Interactive overview of nucleic acid metabolism.
Nucleic acids
Purines
Pyrimidines
Metabolic pathways
Metabolism | Nucleic acid metabolism | [
"Chemistry",
"Biology"
] | 2,075 | [
"Biomolecules by chemical classification",
"Biochemistry",
"Cellular processes",
"Metabolic pathways",
"Metabolism",
"Nucleic acids"
] |
488,140 | https://en.wikipedia.org/wiki/Charge%20carrier | In solid state physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current.
The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign.
In conductors
In conducting media, mobile charged particles serve to carry charge. In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands; in some, the majority carriers are holes.
In electrolytes, such as salt water, the charge carriers are ions, which are atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons, and so are negatively charged, are called anions; atoms that have lost electrons, and so are positively charged, are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers.
In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers.
In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobile electron cloud is generated by a heated metal cathode, by a process called thermionic emission. When an electric field is applied strongly enough to draw the electrons into a beam, this may be referred to as a cathode ray, and is the basis of the cathode-ray tube display widely used in televisions and computer monitors until the 2000s.
In semiconductors, which are the materials used to make electronic components like transistors and integrated circuits, two types of charge carrier are possible. In p-type semiconductors, "effective particles" known as electron holes with positive charge move through the crystal lattice, producing an electric current. The "holes" are, in effect, electron vacancies in the valence-band electron population of the semiconductor and are treated as charge carriers because they are mobile, moving from atom site to atom site. In n-type semiconductors, electrons in the conduction band move through the crystal, resulting in an electric current.
In some conductors, such as ionic solutions and plasmas, positive and negative charge carriers coexist, so in these cases an electric current consists of the two types of carrier moving in opposite directions. In other conductors, such as metals, there are only charge carriers of one polarity, so an electric current in them simply consists of charge carriers moving in one direction.
In semiconductors
There are two recognized types of charge carriers in semiconductors. One is electrons, which carry a negative electric charge. In addition, it is convenient to treat the traveling vacancies in the valence band electron population (holes) as a second type of charge carrier, which carry a positive charge equal in magnitude to that of an electron.
Carrier generation and recombination
When an electron meets with a hole, they recombine and these free carriers effectively vanish. The energy released can be either thermal, heating up the semiconductor (thermal recombination, one of the sources of waste heat in semiconductors), or released as photons (optical recombination, used in LEDs and semiconductor lasers). Recombination means that an electron which has been excited from the valence band to the conduction band falls back into an empty state in the valence band, known as a hole. A hole is the empty state created in the valence band when an electron is excited across the energy gap.
Majority and minority carriers
The more abundant charge carriers are called majority carriers, which are primarily responsible for current transport in a piece of semiconductor. In n-type semiconductors they are electrons, while in p-type semiconductors they are holes. The less abundant charge carriers are called minority carriers; in n-type semiconductors they are holes, while in p-type semiconductors they are electrons.
In an intrinsic semiconductor, which does not contain any impurity, the concentrations of both types of carriers are ideally equal. If an intrinsic semiconductor is doped with a donor impurity then the majority carriers are electrons. If the semiconductor is doped with an acceptor impurity then the majority carriers are holes.
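A small numerical sketch can make the majority/minority distinction concrete. Under the standard mass-action law (n·p = n_i²) and charge neutrality, the Python below computes both carrier concentrations for a doped sample; it is an illustrative calculation that assumes full dopant ionization and a non-degenerate semiconductor, and the silicon value n_i ≈ 10¹⁰ cm⁻³ is a commonly quoted room-temperature figure.

```python
import math

def carrier_concentrations(n_i, n_d=0.0, n_a=0.0):
    """Electron and hole concentrations (cm^-3) from charge neutrality
    (n + N_A = p + N_D) and the mass-action law (n * p = n_i**2),
    assuming full dopant ionization and a non-degenerate semiconductor."""
    net = n_d - n_a  # net donor concentration
    if net >= 0:     # n-type (or intrinsic): solve for electrons first
        n = 0.5 * (net + math.sqrt(net**2 + 4 * n_i**2))
        return n, n_i**2 / n
    p = 0.5 * (-net + math.sqrt(net**2 + 4 * n_i**2))  # p-type: holes first
    return n_i**2 / p, p

# Silicon at room temperature, n_i ~ 1e10 cm^-3 (commonly quoted value):
n, p = carrier_concentrations(n_i=1e10, n_d=1e16)
print(f"majority electrons: {n:.2e} cm^-3, minority holes: {p:.2e} cm^-3")
# majority ~1e16, minority ~1e4: doping swamps the intrinsic concentration
```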
Minority carriers play an important role in bipolar transistors and solar cells. Their role in field-effect transistors (FETs) is a bit more complex: for example, a MOSFET has p-type and n-type regions. The transistor action involves the majority carriers of the source and drain regions, but these carriers traverse the body of the opposite type, where they are minority carriers. However, the traversing carriers hugely outnumber their opposite type in the transfer region (in fact, the opposite type carriers are removed by an applied electric field that creates an inversion layer), so conventionally the source and drain designation for the carriers is adopted, and FETs are called "majority carrier" devices.
Free carrier concentration
Free carrier concentration is the concentration of free carriers in a doped semiconductor. It is similar to the carrier concentration in a metal and, for the purposes of calculating currents or drift velocities, can be used in the same way. Free carriers are electrons (or holes) that have been introduced into the conduction band (or valence band) by doping, so they do not act as double carriers by leaving behind holes (or electrons) in the other band. The free carrier concentration of doped semiconductors shows a characteristic temperature dependence.
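One ingredient of that temperature dependence is the intrinsic concentration itself, which rises steeply with temperature. The sketch below uses the simplified textbook form n_i = C·T^(3/2)·exp(−E_g/2kT); the band gap is taken as silicon's ≈1.12 eV and treated as temperature-independent (a simplification), and the prefactor C is an assumed value tuned so that n_i(300 K) lands near the commonly quoted ~10¹⁰ cm⁻³.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def n_i(temp_k, e_g=1.12, c=5.2e15):
    """Intrinsic carrier concentration (cm^-3) from the simplified model
    n_i = C * T**1.5 * exp(-E_g / (2*k*T)); e_g in eV (silicon ~1.12),
    assumed temperature-independent. The prefactor c is chosen so that
    n_i(300 K) is near the commonly quoted ~1e10 cm^-3 for silicon."""
    return c * temp_k**1.5 * math.exp(-e_g / (2 * K_B * temp_k))

for t in (250, 300, 350, 400):
    print(t, f"{n_i(t):.2e}")
# the concentration rises steeply with T, dominated by the exponential term
```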
In superconductors
Superconductors have zero electrical resistance and are therefore able to carry current indefinitely. This type of conduction is made possible by the formation of Cooper pairs. At present, superconductivity can only be achieved at very low temperatures, for instance by using cryogenic cooling. Achieving superconductivity at room temperature remains challenging and is still a field of ongoing research and experimentation. Creating a superconductor that functions at ambient temperature would constitute an important technological breakthrough, which could potentially contribute to much higher energy efficiency in grid distribution of electricity.
In quantum situations
Under exceptional circumstances, positrons, muons, anti-muons, taus and anti-taus may also act as charge carriers. This is theoretically possible, yet the very short lifetimes of these charged particles would make such a current very challenging to maintain at the current state of technology. It might be possible to create this type of current artificially, or it might occur in nature during very short lapses of time.
In plasmas
Plasmas consist of ionized gas. Electric charge can cause the formation of electromagnetic fields in plasmas, which can lead to the formation of currents or even multiple currents. This phenomenon is used in nuclear fusion reactors. It also occurs naturally in the cosmos, in the form of jets, nebula winds or cosmic filaments that carry charged particles. This cosmic phenomenon is called Birkeland current. Considered in general, the electric conductivity of plasmas is a subject of plasma physics.
See also
Carrier lifetime
Free charge
Molecular diffusion
References
Particle physics | Charge carrier | [
"Physics",
"Materials_science"
] | 1,621 | [
"Physical phenomena",
"Charge carriers",
"Electrical phenomena",
"Condensed matter physics",
"Particle physics"
] |
488,182 | https://en.wikipedia.org/wiki/Pentagrid%20converter | The pentagrid converter is a type of radio receiving valve (vacuum tube) with five grids used as the frequency mixer stage of a superheterodyne radio receiver.
The pentagrid was part of a line of development of valves that were able to take an incoming RF signal and change its frequency to a fixed intermediate frequency, which was then amplified and detected in the remainder of the receiver circuitry. The device was generically referred to as a frequency changer or just mixer.
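The mixing principle itself is easy to demonstrate numerically. In the sketch below (a numerical illustration of heterodyning, not a model of any particular valve; all frequencies are arbitrary illustrative values), multiplying an RF signal by a local-oscillator signal, as a mixer's non-linearity effectively does, produces spectral peaks at the sum and difference frequencies, the latter serving as the intermediate frequency.

```python
import numpy as np

fs = 100_000                       # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)     # 50 ms of signal
f_rf, f_lo = 10_000, 12_500        # RF input and local oscillator, Hz

# Ideal multiplying mixer: product of the two sinusoids
mixed = np.sin(2 * np.pi * f_rf * t) * np.sin(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = freqs[spectrum > 0.5 * spectrum.max()]
print(peaks)  # peaks near 2500 Hz (difference, the IF) and 22500 Hz (sum)
```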
Origins
The first devices designed to change frequency in the manner described above seem to have been developed by the French, who simply put two grids into what would otherwise have been an ordinary triode valve (the bi-grille or bi-grid). Although technically a four electrode device, neither the term tetrode nor the tetrode valve as it is known today had yet appeared. The bi-grid differed from the later tetrode because the second (outer) grid was coarsely wound compared with the tetrode's screen grid which had to be finely wound to provide its screening effect. Each grid was able to accept one of the incoming signals, and the non-linearity of the device produced the sum and difference frequencies. The valve would have been very inefficient, but, most importantly, the capacitive coupling between the two grids would have been very large. It would therefore have been quite impossible to prevent the signal from one grid coupling out of the other. At least one reference claims that the bi-grille was self-oscillating, but this has not been confirmed.
In 1918, Edwin Armstrong used only triodes when he invented the superheterodyne receiver. One triode operated in a conventional oscillator circuit. Another triode acted as a mixer by coupling the oscillator signal into the mixer's cathode and the received signal to the grid. The sum and difference frequencies were then available in the mixer's anode circuit. Once again, the problem of coupling between the circuits would be ever present.
Shortly after Armstrong invented the superheterodyne, a triode mixer stage design was developed that not only mixed the incoming signal with the local oscillator, but the same valve doubled as the oscillator. This was known as the autodyne mixer. Early examples had difficulty oscillating across the frequency range because the oscillator feedback was via the first intermediate frequency transformer primary tuning capacitor, which was too small to give good feedback. Keeping the oscillator signal out of the antenna circuit was also difficult.
The invention of the tetrode demonstrated the idea of screening electrodes from each other by using additional earthed (grounded) grids (at least, as far as the signal was concerned). In 1926, Philips invented a technique of adding yet another grid to combat the secondary emission that the tetrode suffered from. All the ingredients for the pentagrid were now in place.
Pentagrid
The development of the pentagrid or heptode (seven-electrode) valve was a novel development in the mixer story. The idea was to produce a single valve that not only mixed the oscillator signal and the received signal and produced its own oscillator signal at the same time but, importantly, did the mixing and the oscillating in different parts of the same valve.
At first sight, the invention of the device does not seem obscure, but it appears to have been developed in both America and the United Kingdom at more or less the same time. However, the UK device is different from its American counterpart.
It is known that Donald G. Haines of RCA applied for a patent for the pentagrid on 28 March 1933 (subsequently granted on 29 March 1939) under US patent number 2,148,266. The pentagrid also featured in a UK patent (GB426802) granted on 10 April 1935. However, the Ferranti company of Great Britain entered the valve business with the first known UK-produced pentagrid, the VHT4, late in 1933 (though it must have been in development, and would certainly have existed as a prototype well before that time).
The pentagrid proved to be a much better mixer. Since the oscillator circuit was more or less self-contained, good feedback for reliable oscillation across the frequency range was easy to obtain. Some manufacturers that had adopted the autodyne mixer converted some, if not all, of their designs to pentagrid mixers.
Why develop a reliable self-oscillating mixer? The reasons differed between the UK and America. UK radio manufacturers had to pay a royalty of £1 per valve holder to the British Valve Association to cover use of their members' patent rights. Further, the association dictated that not more than one electrode structure could be contained in a single envelope (which would have evaded the royalty, at least in part). The Americans appeared to be driven by the desire to produce a low-cost 'every expense spared' design, which was to lead to the All American Five. By making the mixer self-oscillate, the need for a separate oscillator valve is avoided. The All American Five used a pentagrid converter from when it first appeared in 1934 right up until valves became obsolete when transistors took over.
In the UK, the five grids operated thus. Grid 1 acted as the oscillator grid in conjunction with grid 2 which acted as its anode. Grid 4 accepted the incoming signal with the remaining two grids, 3 and 5 connected together (usually internally) which acted as screen grids to screen the anode, grid 4 and grid 2 from each other. Because grid 2 was a 'leaky' anode in that it allowed part of the modulated electron stream through, the oscillator was coupled into the mixing section of the valve. In fact, in some designs, grid 2 consisted of just the support rods, the actual grid wire itself being omitted.
In America, the configuration was different. Grid 1 acted as the oscillator grid as before, but in this case, grids 2 and 4 were connected together (again usually internally). Grid 2 functioned as both a screen and the oscillator anode; in this case the grid wire had to be present to provide the screening. Grid 3 accepted the incoming signal. Grid 4 screened this from the anode, and grid 5 was a suppressor grid to suppress secondary emission. This configuration limited the oscillator design to one where the oscillator 'anode' was operated from the HT+ (B+) rail. This was often accomplished by using a Hartley Oscillator circuit and taking the cathode to the tap on the coil.
The UK version would have had significant secondary emission and would also have had a tetrode kink. This was exploited in providing the non linearity necessary to produce good sum and difference signals. The American devices although having no secondary emission due to the suppressor grid, nevertheless were able to get the required non linearity by biasing the oscillator such that the valve was overdriven. The American version was also a little more sensitive because the grid that accepted the signal was closer to the cathode increasing the amplification factor.
The pentagrid converter in either guise operated extremely well, but it suffered from the limitation that a strong signal was able to 'pull' the oscillator frequency away from a weaker signal. This was not considered a major problem in broadcast receivers where the signals were likely to be strong, but it became a problem when trying to receive weak signals that were close to strong signals. Some short wave radios managed quite satisfactorily with these devices. Special high frequency versions appeared after World War II for the 100 MHz FM bands. Examples are the 6SB7Y (1946) and the 6BA7 (1948). The pulling effect had a beneficial side effect in that it gave a degree of automatic tuning.
Another disadvantage was that in spite of the presence of the screen grids, the electron beam, modulated by the oscillator electrodes, still had to pass through the signal grid, and coupling of the oscillator into the signal circuit was inevitable. The American Federal Communications Commission (FCC) started requiring radio manufacturers to certify that their products avoided this interference under Part 15 of their rules. In the UK, the Postmaster General (who was responsible for radio licensing at this time), laid down a set of stringent rules concerning radio interference.
Hexode
The hexode (six-electrode) was actually developed after the heptode or pentagrid. It was developed in Germany as a mixer but was designed from the start to be used with a separate triode oscillator. Thus the grid configuration was grid 1, signal input; grids 2 and 4 screen grids (connected together - again, usually internally) and grid 3 was the oscillator input. The device had no suppressor grid. A major advantage was that by using grid 1 as the signal input grid, the device was more sensitive to weak signals.
It was not long before the triode and hexode structures were placed in the same glass envelope - by no means a new idea. The triode grid was usually internally connected to the hexode grid 3, but this practice was dropped in later designs when the mixer section operated as a straight IF amplifier in AM/FM sets when operating on FM, the mixing being carried out in a dedicated FM frequency changing section.
The UK manufacturers were initially unable to use this type of mixer because of the BVA prohibition on multiple structures (and indeed unwilling to use separate valves because of the levy). One UK company, MOV, successfully enforced the cartel rules against the German Lissen company in 1934 when they attempted to market a radio in the UK which had the triode-hexode mixer.
Following pressure from the UK manufacturers, the BVA were compelled to relax the rules and the UK started to adopt triode-hexode mixers. The Mullard ECH35 was a popular choice.
One company, Osram, made an ingenious move. One of their popular pentagrid converter designs was the MX40, initially marketed in 1934. They put on sale in 1936, the X41 triode-hexode frequency changer. The clever bit was that the X41 was a direct plug-in pin-compatible replacement for the MX40. Thus a pentagrid radio could easily be converted to a triode-hexode without any other circuit modifications.
America never really adopted the triode-hexode and it was seldom used, even though the 6K8 triode-hexode was available to manufacturers in 1938.
In some designs, a suppressor grid was added to produce yet another heptode design. Mullard's ECH81 became popular with the move to miniature nine-pin valves.
Octode
Although not strictly a pentagrid (in that it has more than five grids), the octode (eight-electrode) nevertheless operates on the pentagrid principle. It resulted simply from the addition of an extra screen grid to the UK version of the pentagrid heptode. This was done mainly to improve the antenna/oscillator separation and to reduce the power consumption for use in radio sets operated by dry-cell batteries that were becoming increasingly popular.
In North America, the only octode manufactured was the 7A8. Introduced by Sylvania in 1939 (and used mostly by Philco), this valve was the product of adding a suppressor grid to type 7B8, which was the loctal version of type 6A7. Adding the suppressor allowed Sylvania to lower the current of the 6.3-volt heater from 320 milliamperes to 150 milliamperes while maintaining the same conversion transconductance (550 microsiemens). This allowed Philco to use this valve in every line of radio throughout the 1940s.
The Philips EK3 octode was designated as a "beam octode". The novel part about the design was that grids 2 and 3 were constructed as beam-forming plates. This was done in such a way that Philips claimed that the oscillator electron beam and the mixer electron beams were separated as much as possible and thus the pulling effect was minimised. No information is available as to the degree of success. The manufacturer's information also notes that the valve's high performance comes at a cost of a high heater current of 600 mA – double that of more conventional types.
Pentode
The use of a pentode would seem an unlikely choice for a frequency converter because it only has one control grid. However, during the Great Depression, many American radio manufacturers used pentode types 6C6, 6D6, 77 and 78 in their lowest priced AC/DC receivers because they were cheaper than pentagrid type 6A7. In these circuits, the suppressor (grid 3) acted as the oscillator grid, and the valve operated in a similar manner to a true pentagrid.
One UK company, Mazda/Ediswan, produced a triode-pentode frequency changer, the AC/TP. Designed for low-cost AC radios, the device was deliberately designed to allow strong signals to pull the oscillator without the risk of radiating the oscillator signal from the aerial. The cathode was common to both sections of the valve. The cathode was connected to a secondary coil on the oscillator coil and thus coupled the oscillator into the pentode mixer section, the signal being applied to grid 1 in the conventional manner. The AC/TP was one of the AC/ range of valves designed for low-cost radios. They were considered durable for their time (even the AC/TP frequency changer, which was normally troublesome). Any AC/ valves encountered today are likely to be brand new as service shops stocked up on spares which were seldom required.
Nomenclature
In order to distinguish between the two versions of the heptode, manufacturers data often describes them as "heptode of the hexode type" for a heptode without a suppressor grid, and a "heptode of the octode type", where a suppressor grid is present.
Examples
True pentagrids
2A7 and 6A7 – The first of the RCA pentagrids, 1933
VHT4 – Ferranti pentagrid, 1933
MX40 – Osram pentagrid, 1934
6SA7 and 6BE6/EK90 – Pentagrids produced by RCA, Mullard, etc.
6SB7Y and 6BA7 – VHF pentagrids, 1946
1LA6 and later 1L6 – Battery pentagrid for Zenith Trans-Oceanic and other high-end portable short-wave radios
DK91/1R5, DK92/1AC6, DK96/1AB6, DK192 – Battery pentagrids
1C8,1E8 - Subminiature battery pentagrids
Octodes (operating on the pentagrid principle)
EK3 – Beam octode produced by Philips
7A8 – The only octode produced in America by Sylvania, 1939
Triode/hexode types (not operating on the pentagrid principle)
X41 – Osram triode-hexode, 1936; plug-in replacement for MX40 above
ECH35 – Mullard triode-hexode
ECH81 (Soviet 6И1П) – Mullard triode-heptode of the octode type
6K8 – American triode-hexode, 1938
This list is by no means exhaustive.
See also
Beam deflection tube
Notes
References
Valve Manuals
General Electric Essential Characteristics, 1970
Sylvania Technical Manual, 1958
Other Books
Sibley, Ludwell, "Tube Lore", 1996
Stokes, John W, "70 Years of Radio Tubes and Valves" 1997
Thrower, Keith, "History of the British Radio Valve to 1940"
External links
Mullard FC4
Octode
AC/TP data sheet
2A7 data sheet
6SA7/12SA7 datasheet
12BE6 datasheet
12BA7 datasheet
Reflex Converter patent Oct 12 1933
Frequency mixers
Vacuum tubes | Pentagrid converter | [
"Physics",
"Engineering"
] | 3,461 | [
"Radio electronics",
"Vacuum tubes",
"Vacuum",
"Frequency mixers",
"Matter"
] |
488,361 | https://en.wikipedia.org/wiki/Mining%20engineering | Mining in the engineering discipline is the extraction of minerals from the ground. Mining engineering is associated with many other disciplines, such as mineral processing, exploration, excavation, geology, metallurgy, geotechnical engineering and surveying. A mining engineer may manage any phase of mining operations, from exploration and discovery of the mineral resources, through feasibility study, mine design, development of plans, production and operations to mine closure.
History of mining engineering
From prehistoric times to the present, mining has played a significant role in the existence of the human race. Since the beginning of civilization, people have used stone and ceramics and, later, metals found on or close to the Earth's surface. These were used to manufacture early tools and weapons. For example, high-quality flint found in northern France and southern England was used to start fires and break rock. Flint mines have been found in chalk areas where seams of the stone were followed underground by shafts and galleries. The oldest known mine on the archaeological record is the "Lion Cave" in Eswatini. At this site, which radiocarbon dating indicates to be about 43,000 years old, paleolithic humans mined the mineral hematite, which contained iron and was ground to produce the red pigment ochre.
The ancient Romans were innovators of mining engineering. They developed large-scale mining methods, such as the use of large volumes of water brought to the minehead by aqueducts for hydraulic mining. The exposed rock was then attacked by fire-setting, where fires were used to heat the rock, which would be quenched with a stream of water. The thermal shock cracked the rock, enabling it to be removed. In some mines, the Romans utilized water-powered machinery such as reverse overshot water-wheels. These were used extensively in the copper mines at Rio Tinto in Spain, where one sequence comprised 16 such wheels arranged in pairs, lifting water about 24 metres.
Black powder was first used in mining in Banská Štiavnica, Kingdom of Hungary (present-day Slovakia) in 1627. This allowed blasting of rock and earth to loosen and reveal ore veins, which was much faster than fire-setting. The Industrial Revolution saw further advances in mining technologies, including improved explosives and steam-powered pumps, lifts, and drills.
Education
Becoming an accredited mining engineer requires a university or college degree. Training includes a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Technology (B.Tech.) or Bachelor of Applied Science (B.A.Sc.) in mining engineering. Depending on the country and jurisdiction, to be licensed as a mining engineer may require a Master of Engineering (M.Eng.), Master of Science (M.Sc or M.S.) or Master of Applied Science (M.A.Sc.) degree.
Some mining engineers come from other disciplines, primarily from engineering fields (e.g. mechanical, civil, electrical, geomatics or environmental engineering) or from science fields (e.g. geology, geophysics, physics, geomatics, earth science, or mathematics), typically completing a graduate degree such as an M.Eng, M.S., M.Sc. or M.A.Sc. in mining engineering after graduating from a different quantitative undergraduate program.
The fundamental subjects of mining engineering study usually include:
mathematics; calculus, algebra, numerical analysis, statistics
geoscience; geochemistry, geophysics, mineralogy, geomatics
mechanics; rock mechanics, soil mechanics, geomechanics
thermodynamics; heat transfer, mass transfer
hydrogeology
fluid mechanics; fluid statics, fluid dynamics
geostatistics; spatial analysis
control engineering; control theory, instrumentation
surface mining; open-pit mining
underground mining (soft rock)
underground mining (hard rock)
computing; DATAMINE, MATLAB, Maptek (Vulcan), Golden Software (Surfer), MicroStation, Carlson
drilling and blasting
solid mechanics; fracture mechanics
In the United States, about 14 universities offer a B.S. degree in mining and mineral engineering. The top rated universities include West Virginia University, South Dakota School of Mines and Technology, Virginia Tech, the University of Kentucky, the University of Arizona, Montana Tech, and Colorado School of Mines. Most of these universities offer M.S. and Ph.D. degrees.
In Canada, there are 19 undergraduate degree programs in mining engineering or equivalent. McGill University Faculty of Engineering offers both undergraduate (B.Sc., B.Eng.) and graduate (M.Sc., Ph.D.) degrees in Mining Engineering, and the University of British Columbia in Vancouver offers a Bachelor of Applied Science (B.A.Sc.) in Mining Engineering and also graduate degrees (M.A.Sc. or M.Eng and Ph.D.) in Mining Engineering.
In Europe, most programs are integrated (B.S. plus M.S. into one) after the Bologna Process and take five years to complete. In Portugal, the University of Porto offers an M.Eng. in Mining and Geo-Environmental Engineering and in Spain the Technical University of Madrid offers degrees in Mining Engineering with tracks in Mining Technology, Mining Operations, Fuels and Explosives, Metallurgy. In the United Kingdom, The Camborne School of Mines offers a wide choice of BEng and MEng degrees in Mining engineering and other Mining related disciplines. This is done through the University of Exeter. In Romania, the University of Petroșani (formerly known as the Petroşani Institute of Mines, or rarely as the Petroşani Institute of Coal) is the only university that offers a degree in Mining Engineering, Mining Surveying or Underground Mining Constructions, albeit, after the closure of Jiu Valley coal mines, those degrees had fallen out of interest for most high-school graduates.
In South Africa, leading institutions include the University of Pretoria, offering a 4-year Bachelor of Engineering (B.Eng in Mining Engineering) as well as post-graduate studies in various specialty fields such as rock engineering and numerical modelling, explosives engineering, ventilation engineering, underground mining methods and mine design; and the University of the Witwatersrand offering a 4-year Bachelor of Science in Engineering (B.Sc.(Eng.)) in Mining Engineering as well as graduate programs (M.Sc.(Eng.) and Ph.D.) in Mining Engineering.
Some mining engineers go on to pursue Doctorate degree programs such as Doctor of Philosophy (Ph.D., DPhil), Doctor of Engineering (D.Eng., Eng.D.). These programs involve a significant original research component and are usually seen as entry points into academia.
In the Russian Federation, 85 universities across all federal districts are training specialists for the mineral resource sector. 36 universities are training specialists for extracting and processing solid minerals (mining). 49 are training specialists for extracting, primary processing, and transporting liquid and gaseous minerals (oil and gas). 37 are training specialists for geological exploration (applied geology, geological exploration). Among the universities that train specialists for the mineral resource sector, 7 are federal universities, and 13 are national research universities of Russia. Personnel training for the mineral resource sector in Russian universities is currently carried out in the following main specializations of training (specialist's degree): "Applied Geology" with the qualification of mining engineer (5 years of training); "Geological Exploration" with the qualification of mining engineer (5 years of training); "Mining" with the qualification of mining engineer (5.5 years of training); "Physical Processes in Mining or Oil and Gas Production" with the qualification of mining engineer (5.5 years of training); "Oil and Gas Engineering and Technologies" with the qualification of mining engineer (5.5 years of training). Universities develop and implement the main professional educational programs of higher education in the directions and specializations of training by forming their profile (name of the program). For example, within the framework of the specialization "Mining", universities often adhere to the classical names of the programs "Open-pit mining", "Underground mining of mineral deposits", "Surveying", "Mineral enrichment", "Mining machines", "Technological safety and mine rescue", "Mine and underground construction", "Blasting work", "Electrification of the mining industry", etc. In the last ten years, under the influence of various factors, new names of programs have begun to appear, such as "Mining and geological information systems" and "Mining ecology". Thus, universities, using their freedom to form new training programs for specialists, can look to the future and try to foresee new professions of mining engineers. After the specialist's degree, graduates can enrol directly in postgraduate school (the analogue of doctorate programs, with four years of training).
Salary and statistics
Mining salaries are usually determined by the level of skill required, where the position is, and what kind of organization the engineer works for.
Mining engineers in India earn relatively high salaries in comparison to many other professions, with an average salary of $15,250. However, in comparison to mining engineer salaries in other regions, such as Canada, the United States, Australia, and the United Kingdom, Indian salaries are low. In the United States, there are an estimated 6,150 employed mining engineers, with a mean yearly wage of US$103,710.
Pre-mining
As there is considerable capital expenditure required for mining operations, an array of pre-mining activities are normally carried out to assess whether a mining operation would be worthwhile.
Mineral exploration is the process of locating minerals and assessing their concentrations (grade) and quantities (tonnage), to determine if they are commercially viable ores for mining. Mineral exploration is much more intensive, organized, involved, and professional than mineral prospecting, though it frequently utilizes the services of exploration geologists and surveyors in the necessary pre-feasibility study of the possible mining operation. Mineral exploration and estimation of the reserve can determine the profitability conditions and indicate the form and type of mining required.
Mineral discovery
Mineral discovery can be made from research of mineral maps, academic geological reports, or government geological reports. Other sources of information include property assays and local word of mouth. Mineral research usually includes sampling and analysing sediments, soil, and drill cores. Soil sampling and analysis is one of the most popular mineral exploration tools. Other common tools include satellite and aerial surveys or airborne geophysics, including magneto-metric and gamma-spectrometric maps. Unless the mineral exploration is done on public property, the owners of the property may play a significant role in the exploration process and might be the original discoverers of the mineral deposit.
Mineral determination
After a prospective mineral is located, the mining geologist and engineer determine the ore properties. This may involve chemical analysis of the ore to determine the sample's composition. Once the mineral properties are identified, the next step is determining the quantity of the ore. This involves determining the extent of the deposit and the purity of the ore. The geologist drills additional core samples to find the limits of the deposit or seam and estimates the quantity of valuable material present.
Feasibility study
Once the mineral identification and reserve amount are reasonably determined, the next step is to determine the feasibility of recovering the mineral deposit. A preliminary survey shortly after the discovery of the deposit examines the market conditions, such as the supply and demand of the mineral, the amount of ore needed to be moved to recover a certain quantity of that mineral, and analysis of the cost associated with the operation. This pre-feasibility study determines whether the mining project is likely to be profitable; if so, a more in-depth analysis of the deposit is undertaken. After the full extent of the ore body is known and has been examined by engineers, the feasibility study examines the cost of initial capital investment, methods of extraction, the cost of operation, an estimated length of time to pay back the investment, the gross revenue and net profit margin, any possible resale price of the land, the total life of the reserve, the full value of the account, investment in future projects, and the property owner or owners' contract. In addition, environmental impact, reclamation, possible legal ramifications, and all government permitting are considered. These steps of analysis determine whether the mining company and its investors should proceed with the extraction of the minerals or whether the project should be abandoned. The mining company may decide to sell the rights to the reserve to a third party rather than develop it themselves. Alternatively, the decision to proceed with extraction may be postponed indefinitely until market conditions become favourable.
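As a toy illustration of one feasibility metric, the payback period, the sketch below accumulates net annual cash flows against the initial capital. All figures are hypothetical, and a real study would discount cash flows (NPV/IRR) and model ore grades, prices, and closure costs rather than use this simplification.

```python
def payback_period(capital, annual_net_cashflows):
    """Years until cumulative net cash flow repays the initial capital;
    returns None if the investment is never recovered."""
    cumulative = -capital
    for year, cash in enumerate(annual_net_cashflows, start=1):
        cumulative += cash
        if cumulative >= 0:
            # interpolate within the final year for a fractional answer
            return year - cumulative / cash
    return None

# Hypothetical mine: $120M capital, ramp-up then steady production.
print(payback_period(120e6, [10e6, 30e6, 45e6, 45e6, 45e6]))  # ~3.78 years
```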
Mining operation
Mining engineers working in an established mine may work as an engineer for operations improvement, further mineral exploration, and operation capitalization by determining where in the mine to add equipment and personnel. The engineer may also work in supervision and management or as an equipment and mineral salesperson. In addition to engineering and operations, the mining engineer may work as an environmental, health, and safety manager or design engineer.
The act of mining requires different methods of extraction depending on the mineralogy, geology, and location of the resources. Characteristics such as mineral hardness, the mineral stratification, and access to that mineral will determine the method of extraction.
Generally, mining is either done from the surface or underground. Mining can also occur with surface and underground operations on the same reserve. Mining activity varies as to what method is employed to remove the mineral.
Surface mining
Surface mining comprises 90% of the world's mineral tonnage output. Also called open pit mining, surface mining removes minerals in formations near the surface. Ore retrieval is done by material removal from the land in its natural state. Surface mining often alters the land's characteristics, shape, topography, and geological makeup.
Surface mining involves quarrying and excavating minerals through cutting, cleaving, and breaking machinery. Explosives are usually used to facilitate breakage. Hard rocks such as limestone, sand, gravel, and slate are generally quarried into benches.
Strip mining, using mechanical shovels, track dozers, and front-end loaders, removes softer minerals such as clays and phosphate. Smoother coal seams can also be extracted this way.
As with placer mining, dredge mining can remove minerals from the bottoms of lakes, rivers, streams, and even the ocean. In addition, in-situ mining can be done from the surface by applying dissolving agents to the ore body and retrieving the ore via pumping. The pumped material is then set to leach for further processing. Hydraulic mining uses water jets to wash away either overburden or the ore itself.
Mining process
Blasting
Explosives are used to break up a rock formation and aid in the collection of ore in a process called blasting. Blasting generally uses the heat and immense pressure of the detonated explosives to shatter and fracture a rock mass. The explosives used in mining are high explosives, which vary in composition and performance properties. The mining engineer is responsible for selecting and properly placing these explosives to maximize efficiency and safety. Blasting occurs in many phases of the mining process, such as the development of infrastructure and the production of the ore. An alternative to high explosives is the Cardox blasting cartridge, invented in 1931 and used extensively in coal mines from 1932. The cartridge contains an 'energizer' which heats liquid carbon dioxide until it ruptures a bursting disk; the sudden release of the supercritical fluid then produces a physical explosion.
Leaching
Leaching is the loss or extraction of certain materials from a carrier into a liquid (usually, but not always, a solvent). It is mostly used in rare-earth metal extraction.
Flotation
Flotation (also spelled floatation) involves phenomena related to the relative buoyancy of minerals. It is the most widely used metal separating method.
Electrostatic separation
Electrostatic separation separates minerals by differences in their electrical characteristics.
Gravity separation
Gravity separation is an industrial method of separating two components, either a suspension or dry granular mixture, where separating the components with gravity is sufficiently practical.
Magnetic separation
Magnetic separation is a process in which magnetically susceptible material is extracted from a mixture using a magnetic force.
Hydraulic separation
Hydraulic separation is a process that uses density differences to separate minerals. Before hydraulic separation, minerals are crushed into uniform sizes; particles of uniform size but different density have different settling velocities in water, which can be used to separate target minerals.
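For small particles in the laminar regime, the settling velocity that hydraulic separation exploits can be estimated with Stokes' law, v = g·d²·(ρ_p − ρ_f)/(18μ). The sketch below is a rough illustration; the mineral densities are typical handbook values, and real classifiers often operate beyond the strict Stokes regime for coarser feed.

```python
def stokes_settling_velocity(diameter_m, rho_particle, rho_fluid,
                             viscosity=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in a fluid,
    from Stokes' law: v = g * d**2 * (rho_p - rho_f) / (18 * mu).
    Valid only for small particles at low Reynolds number; the default
    viscosity (Pa.s) is roughly that of water at 20 C."""
    return g * diameter_m**2 * (rho_particle - rho_fluid) / (18 * viscosity)

# 100-micron grains: galena (~7500 kg/m^3) settles much faster than
# quartz (~2650 kg/m^3), which is what hydraulic separation exploits.
for name, rho in (("galena", 7500), ("quartz", 2650)):
    print(name, f"{stokes_settling_velocity(100e-6, rho, 1000):.4f} m/s")
```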
Mining health and safety
Legal attention to health and safety in mining began in the late 19th century. In the 20th century, it progressed to a comprehensive and stringent codification of enforcement and mandatory health and safety regulation. In whatever role, a mining engineer must follow all mine safety laws.
United States
The United States Congress, through the passage of the Federal Mine Safety and Health Act of 1977, known as the Miner's Act, created the Mine Safety and Health Administration (MSHA) under the US Department of Labor. The act provides miners with rights against retaliation for reporting violations, consolidated regulation of coal mines with metallic and non-metallic mines, and created the independent Federal Mine Safety and Health Review Commission to review violations reported to MSHA.
The act, codified in Title 30 of the Code of Federal Regulations (30 CFR), covers all miners at an active mine. When a mining engineer works at an active mine, they are subject to the same rights, violations, mandatory health and safety regulations, and compulsory training as any other worker at the mine. The mining engineer can be legally identified as a "miner".
The act establishes the rights of miners. The miner may report at any time a hazardous condition and request an inspection. The miners may elect a miners' representative to participate during an inspection, pre-inspection meeting, and post-inspection conference. The miners and miners' representatives shall be paid for their time during all inspections and investigations.
Environmental concerns
Waste and uneconomic material generated from the mineral extraction process are the primary source of pollution in the vicinity of mines. Mining activities, by their nature, cause a disturbance of the natural environment in and around which the minerals are located. Mining engineers should therefore be concerned not only with the production and processing of mineral commodities but also with the mitigation of damage to the environment both during and after mining as a result of the change in the mining area.
See also
School of mines
Underground construction
Automated mining
Geological engineering
Mining machinery engineering
Footnotes
Further reading
Eric C. Nystrom, Seeing Underground: Maps, Models, and Mining Engineering in America. Reno, NV: University of Reno Press, 2014.
Franklin White. Miner with a Heart of Gold: a biography of a mineral science and engineering educator. Friesen Press, Victoria. 2020. ISBN 978-1-5255-7765-9 (Hardcover) ISBN 978-1-5255-7766-6 (Paperback) ISBN 978-1-5255-7767-3 (eBook)
External links
SME (Society for Mining, Metallurgy, and Exploration), publishes the monthly magazine Mining Engineering
U.S. Department of Labor: Mining and geological engineers
British Geological Survey Mineral Processing
Turkish Mining Engineers
Mineral Exploration Properties of Turkey
DATAMINE mining software (Datamine provides technology and services for planning and managing mining operations)
Mineral Exploration Mapping
Mining Science and Technologies in Russia
Engineering disciplines | Mining engineering | [
"Engineering"
] | 3,953 | [
"Mining engineering"
] |
488,382 | https://en.wikipedia.org/wiki/Trilemma | A trilemma is a difficult choice from three options, each of which is (or appears) unacceptable or unfavourable. There are two logically equivalent ways in which to express a trilemma: it can be expressed as a choice among three unfavourable options, one of which must be chosen, or as a choice among three favourable options, only two of which are possible at the same time.
The term derives from the much older term dilemma, a choice between two or more difficult or unfavourable alternatives. The earliest recorded use of the term was by the British preacher Philip Henry in 1672, and later, apparently independently, by the preacher Isaac Watts in 1725.
In religion
Epicurus' trilemma
One of the earliest uses of the trilemma formulation is that of the Greek philosopher Epicurus, rejecting the idea of an omnipotent and omnibenevolent God (as summarised by David Hume):
If God is unable to prevent evil, then he is not all-powerful.
If God is not willing to prevent evil, then he is not all-good.
If God is both willing and able to prevent evil, then why does evil exist?
Although traditionally ascribed to Epicurus and called Epicurus' trilemma, it has been suggested that it may actually be the work of an early skeptic writer, possibly Carneades.
In studies of philosophy, discussions, and debates related to this trilemma are often referred to as being about the problem of evil.
Apologetic trilemma
One well-known trilemma is sometimes used by Christian apologists as a proof of the divinity of Jesus, and is most commonly known in the version by C. S. Lewis. It proceeds from the premise that Jesus claimed to be God, and that therefore one of the following must be true:
Lunatic: Jesus was not God, but he mistakenly believed that he was.
Liar: Jesus was not God, and he knew it, but he said so anyway.
Lord: Jesus is God.
The trilemma, usually in Lewis' formulation, is often used in works of popular apologetics, although it is almost completely absent from discussions about the status of Jesus by professional theologians and biblical scholars.
In law
The "cruel trilemma"
The "cruel trilemma" was an English ecclesiastical and judicial weapon developed in the first half of the 17th century, and used as a form of coercion and persecution. The format was a religious oath to tell the truth, imposed upon the accused prior to questioning. The accused, if guilty, would find themselves trapped between:
A breach of religious oath if they lied (taken extremely seriously in that era, a mortal sin), as well as perjury;
Self-incrimination if they told the truth; or
Contempt of court if they said nothing and were silent.
Outcry over this process led to the foundation of the right to not incriminate oneself being established in common law and was the direct precursor of the right to silence and non-self-incrimination in the Fifth Amendment to the United States Constitution.
In philosophy
The Münchhausen trilemma
In the theory of knowledge, the Münchhausen trilemma is an argument against the possibility of proving any certain truth, even in the fields of logic and mathematics. Its name goes back to a logical proof by the German philosopher Hans Albert.
This proof runs as follows: All of the only three possible attempts to get a certain justification must fail:
All justifications in pursuit of certain knowledge also have to justify the means of their justification, and in doing so they have to justify anew the means of that justification. Therefore, there can be no end. We are faced with the hopeless situation of an infinite regress.
One can stop at self-evidence or common sense or fundamental principles or speaking ex cathedra or at any other evidence, but in doing so the intention of establishing certain justification is abandoned.
The third horn of the trilemma is the application of a circular argument.
The trilemma of censorship
In John Stuart Mill's On Liberty, as a part of his argument against the suppression of free speech, he describes the trilemma facing those attempting to justify such suppression (although he does not refer to it as a trilemma, Leo Parker-Rees (2009) identified it as such).
If free speech is suppressed, the opinion suppressed is either:
True – in which case society is robbed of the chance to exchange error for truth;
False – in which case challenging the opinion would create a 'livelier impression' of the truth, allowing people to better justify the correct view;
Half-true – in which case it would contain a forgotten element of the truth, that is important to rediscover, with the eventual aim of a synthesis of the conflicting opinions that is the whole truth.
Buddhist Trilemma
The Buddhist philosopher Nagarjuna uses the trilemma in his Verses on the Middle Way, giving the example that:
a cause cannot follow its effect
a cause cannot be coincident with its effect
a cause cannot precede its effect
In economics
"The Uneasy Triangle"
In 1952, the British magazine The Economist published a series of articles on an "Uneasy Triangle", which described "the three-cornered incompatibility between a stable price level, full employment, and ... free collective bargaining". The context was the difficulty maintaining external balance without sacrificing two sacrosanct political values: jobs for all and unrestricted labor rights. Inflation resulting from labor militancy in the context of full employment had put powerful downward pressure on the pound sterling. Runs on the pound then triggered a long series of economically and politically disruptive "stop-go" policies (deflation followed by reflation). John Maynard Keynes had anticipated the severe problem associated with reconciling full employment with stable prices without sacrificing democracy and the associational rights of labor. The same incompatibilities were also elaborated upon in Charles E. Lindblom's 1949 book, Unions and Capitalism.
The "impossible trinity"
In 1962 and 1963, a trilemma (or "impossible trinity") was introduced by the economists Robert Mundell and Marcus Fleming in articles discussing the problems with creating a stable international financial system. It refers to the trade-offs among the following three goals: a fixed exchange rate, national independence in monetary policy, and capital mobility. According to the Mundell–Fleming model of 1962 and 1963, a small, open economy cannot achieve all three of these policy goals at the same time: in pursuing any two of these goals, a nation must forgo the third.
Wage policy trilemmas
In 1989 Peter Swenson posited the existence of "wage policy trilemmas" encountered by trade unions trying to achieve three egalitarian goals simultaneously. One involved attempts to compress wages within a bargaining sector while compressing wages between sectors and maximizing access to employment in the sector. A variant of this "horizontal" trilemma was the "vertical" wage policy trilemma associated with trying simultaneously to compress wages, increase the wage share of value added at the expense of profits, and maximize employment. These trilemmas helped explain instability in unions' wage policies and their political strategies seemingly designed to resolve the incompatibilities.
The Pinker social trilemma
Steven Pinker proposed another social trilemma in his books How the Mind Works and The Blank Slate: that a society cannot be simultaneously "fair", "free", and "equal". If it is "fair", individuals who work harder will accumulate more wealth; if it is "free", parents will leave the bulk of their inheritance to their children; but then it will not be "equal", as people will begin life with different fortunes.
The political trilemma of the world economy
Economist Dani Rodrik argues in his book, The Globalization Paradox, that democracy, national sovereignty, and global economic integration are mutually incompatible. Democratic states pose obstacles to global integration (e.g. regulatory laws, taxes and tariffs) to protect their own economies. Therefore, if complete economic integration is to be achieved, democratic nation states must also be removed. A government of some nation state could possibly pursue the goal of global integration at the expense of its own population, but that would require an authoritarian regime; otherwise, the government would likely be replaced in the next elections.
Holmström's theorem
In Moral Hazard in Teams, economist Bengt Holmström demonstrated a trilemma that arises from incentive systems. For any team of risk-neutral agents, no incentive system of revenue distribution can satisfy all three of the following conditions: Pareto efficiency, balanced budget, and Nash stability. This entails three possible outcomes, each optimizing only two of the three conditions (a sketch of the incompatibility follows the list):
Martyrdom: the incentive system distributes all revenue, and no agent can improve their take by changing their strategy, but at least one agent is not receiving reward in proportion to their effort.
Instability: the incentive system distributes all revenue, and all agents are rewarded in proportion to their effort, but at least one agent could increase their take by changing strategies.
Insolvency: all agents are rewarded in proportion to their effort, and no shift in strategy would improve any agent's take, but not all revenue is distributed.
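A compact way to see the incompatibility is the standard first-order-condition argument sketched below. This is a simplified deterministic-output rendering, not a quotation of Holmström's proof; the symbols (team output $y = f(e_1, \dots, e_n)$, sharing rules $s_i$, effort costs $c_i$) are illustrative.

```latex
% Sketch: n risk-neutral agents, efforts e_i with costs c_i(e_i),
% deterministic team output y = f(e_1, ..., e_n), sharing rules s_i(y).
\begin{align*}
  \text{Balanced budget:}\quad \sum_i s_i(y) = y
      \;&\Longrightarrow\; \sum_i s_i'(y) = 1, \\
  \text{Nash stability:}\quad s_i'(y^*)\,\frac{\partial f}{\partial e_i} = c_i'(e_i^*)
      \;&\text{for every } i, \\
  \text{Pareto efficiency:}\quad \frac{\partial f}{\partial e_i} = c_i'(e_i^*)
      \;&\text{for every } i.
\end{align*}
% The last two lines force s_i'(y^*) = 1 for every agent, which contradicts
% \sum_i s_i'(y^*) = 1 as soon as there are two or more agents.
```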
Arrow's impossibility theorem
In social choice theory, economist Kenneth Arrow proved that it is impossible to create a social welfare function that simultaneously satisfies three key criteria: Pareto efficiency, non-dictatorship and independence of irrelevant alternatives.
In politics
The Brexit trilemma
Following the Brexit referendum, the first May government decided that not only should the United Kingdom leave the European Union but also that it should leave the European Union Customs Union and the European Single Market. This meant that a customs and regulatory border would arise between the UK and the EU. Whilst the sea border between Great Britain and continental Europe was expected to present manageable challenges, the UK/EU border in Ireland was recognised as having rather more intractable issues. These were summarised in what became known as the "Brexit trilemma", because of three competing objectives: no hard border on the island; no customs border in the Irish Sea; and no British participation in the European Single Market and the European Union Customs Union. It is not possible to have all three.
The Zionist trilemma
Zionists have often desired that Israel be democratic, have a Jewish identity, and encompass (at least) the land of Mandatory Palestine. However, these desires (or "desiderata") seemingly form an inconsistent triad, and thus a trilemma. Palestine has an Arab majority, so any democratic state encompassing all of Palestine would likely have a binational or Arab identity.
However, Israel could be:
Democratic and Jewish, but not in all of Palestine.
Democratic and in all of Palestine, but not Jewish.
Jewish and in all of Palestine, but not democratic.
This observation appears in From Beirut to Jerusalem (1989) by Thomas Friedman, who attributes it to a political scientist. (Historically, the 'trilemma' is inexact, since early Zionist activists often (a) believed that Jews would migrate to Palestine in sufficiently large numbers; (b) proposed forms of bi-national governance; or (c) preferred forms of communism over democracy.)
The Žižek trilemma
The "Žižek trilemma" is a humorous formulation on the incompatibility of certain personal virtues under a constraining ideological framework. Often attributed to the philosopher Slavoj Žižek, it is actually quoted by him as the product of an anonymous source:
One cannot but recall here a witty formula of life under a hard Communist regime: Of the three features—personal honesty, sincere support of the regime and intelligence—it was possible to combine only two, never all three. If one were honest and supportive, one was not very bright; if one were bright and supportive, one was not honest; if one were honest and bright, one was not supportive.
In business
The project-management trilemma
Arthur C. Clarke cited a management trilemma encountered when trying to achieve production quickly and cheaply while maintaining high quality. In the software industry, this means that one can pick any two of: fastest time to market, highest software quality (fewest defects), and lowest cost (headcount). This is the basis of the popular project management aphorism "Quick, Cheap, Good: Pick two," conceptualized as the project management triangle or "quality, cost, delivery".
The trilemma of an encyclopedia
The Stanford Encyclopedia of Philosophy is said to have overcome the trilemma that an encyclopedia cannot be authoritative, comprehensive and up-to-date all at the same time for any significant duration.
In computing and technology
In data storage
The RAID technology may offer two of three desirable values: (relative) inexpensiveness, speed, or reliability (RAID 0 is fast and cheap, but unreliable; RAID 6 is reliable and performs adequately, but is extremely expensive; and so on). A common phrase in data storage, the same as in project management, is "fast, cheap, good: choose two".
The same saying has been pastiched in silent computing as "fast, cheap, quiet: choose two".
In researching magnetic recording, used in hard drive storage, a trilemma arises from the competing requirements of readability, writeability and stability (known as the magnetic recording trilemma). Reliable data storage requires that, for very small bit sizes, the magnetic medium be made of a material with a very high coercivity (the ability to maintain its magnetic domains and withstand undesired external magnetic influences). But this coercivity must be overridden by the drive head when data is written, which demands an extremely strong magnetic field in a very tiny space. Eventually the size occupied by one bit of data becomes so small that the strongest magnetic field that can be created in the available space is no longer strong enough to allow data writing. In effect, a point exists at which it becomes impractical or impossible to make a working disk drive because magnetic writing is no longer possible on such a small scale. Heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR) are technologies that aim to modify coercivity during writing only, to work around the trilemma.
In anonymous communication protocols
Anonymous communication protocols can offer two of the three desirable properties: strong anonymity, low bandwidth overhead, low latency overhead.
Some anonymous communication protocols offer anonymity at the cost of high bandwidth overhead, meaning the number of messages exchanged between the protocol parties is very high. Some offer anonymity at the expense of latency overhead (there is a high delay between when the message is sent by the sender and when it is received by the receiver). There are protocols which aim to keep both the bandwidth overhead and the latency overhead low, but they can only provide a weak form of anonymity.
In clustering algorithms
Kleinberg demonstrated through an axiomatic approach to clustering that no clustering method can satisfy all three of the following fundamental properties at the same time (see the sketch after this list):
Scale Invariance: The clustering results remain the same when distances between data points are proportionally scaled.
Richness: The method can produce any possible partition of the data.
Consistency: Changes in distances that align with the clustering structure (e.g., making closer points even closer) do not alter the results.
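As an illustration, the following minimal sketch (with hypothetical helper names) shows a fixed-distance-threshold rule, the kind of method that can satisfy richness and consistency, failing scale invariance: uniformly rescaling the data changes the partition.

```python
# Fixed-threshold clustering on 1-D points: merge neighbours closer than r.
def threshold_clusters(points, r=1.0):
    """Greedy sweep; valid for 1-D data, processed in increasing order."""
    clusters = []
    for p in sorted(points):
        if clusters and p - clusters[-1][-1] < r:
            clusters[-1].append(p)   # close enough: join the last cluster
        else:
            clusters.append([p])     # gap of at least r: start a new cluster
    return clusters

data = [0.0, 0.5, 2.0]
print(threshold_clusters(data))                    # [[0.0, 0.5], [2.0]]
print(threshold_clusters([10 * p for p in data]))  # [[0.0], [5.0], [20.0]]
# Scaling all distances by 10 splits the first cluster, so the rule is not
# scale invariant, in line with Kleinberg's impossibility result.
```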
Other (technology)
The CAP theorem, covering guarantees provided by distributed systems, and Zooko's triangle concerning naming of participants in network protocols, are both examples of other trilemmas in technology.
See also
Ternary plot
Trichotomy (philosophy)
Inconsistent triad
Condorcet paradox
Tetralemma
References
External links
Christian apologetics
Christian philosophy
Lemmas
Rhetoric
Theodicy | Trilemma | [
"Mathematics"
] | 3,254 | [
"Mathematical theorems",
"Mathematical problems",
"Lemmas"
] |
488,391 | https://en.wikipedia.org/wiki/Jacobson%20density%20theorem | In mathematics, more specifically non-commutative ring theory, modern algebra, and module theory, the Jacobson density theorem is a theorem concerning simple modules over a ring $R$.
The theorem can be applied to show that any primitive ring can be viewed as a "dense" subring of the ring of linear transformations of a vector space. This theorem first appeared in the literature in 1945, in the famous paper "Structure Theory of Simple Rings Without Finiteness Assumptions" by Nathan Jacobson. This can be viewed as a kind of generalization of the Artin–Wedderburn theorem's conclusion about the structure of simple Artinian rings.
Motivation and formal statement
Let $R$ be a ring and let $U$ be a simple right $R$-module. If $u$ is a non-zero element of $U$, then $u \cdot R = U$ (where $u \cdot R$ is the cyclic submodule of $U$ generated by $u$). Therefore, if $u, v$ are non-zero elements of $U$, there is an element of $R$ that induces an endomorphism of $U$ transforming $u$ to $v$. The natural question now is whether this can be generalized to arbitrary (finite) tuples of elements. More precisely, find necessary and sufficient conditions on the tuple $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$ separately, so that there is an element of $R$ with the property that $x_i \cdot r = y_i$ for all $i$. If $D$ is the set of all $R$-module endomorphisms of $U$, then Schur's lemma asserts that $D$ is a division ring, and the Jacobson density theorem answers the question on tuples in the affirmative, provided that the $x_i$ are linearly independent over $D$.
With the above in mind, the theorem may be stated this way:
The Jacobson density theorem. Let $U$ be a simple right $R$-module, $D = \operatorname{End}_R(U)$, and $X \subseteq U$ a finite and $D$-linearly independent set. If $A$ is a $D$-linear transformation on $U$, then there exists $r \in R$ such that $A(x) = x \cdot r$ for all $x$ in $X$.
Proof
In the Jacobson density theorem, the right $R$-module $U$ is simultaneously viewed as a left $D$-module where $D = \operatorname{End}_R(U)$, in the natural way: $g \cdot u = g(u)$. It can be verified that this is indeed a left module structure on $U$. As noted before, Schur's lemma proves $D$ is a division ring if $U$ is simple, and so $U$ is a vector space over $D$.
The proof also relies on the following theorem, proven on p. 185 of the cited reference:
Theorem. Let $U$ be a simple right $R$-module, $D = \operatorname{End}_R(U)$, and $X \subseteq U$ a finite set. Write $I = \operatorname{Ann}(X)$ for the annihilator of $X$ in $R$. Let $u \in U$ with $u \cdot I = 0$. Then $u$ is in $DX$; the $D$-span of $X$.
Proof of the Jacobson density theorem
We use induction on $|X|$. If $X$ is empty, then the theorem is vacuously true and the base case for induction is verified.
Assume $X$ is non-empty, let $x$ be an element of $X$, and write $Y = X \setminus \{x\}$. If $A$ is any $D$-linear transformation on $U$, by the induction hypothesis there exists $s \in R$ such that $A(y) = y \cdot s$ for all $y$ in $Y$. Write $I = \operatorname{Ann}(Y)$. It is easily seen that $x \cdot I$ is a submodule of $U$. If $x \cdot I = 0$, then the previous theorem implies that $x$ would be in the $D$-span of $Y$, contradicting the $D$-linear independence of $X$; therefore $x \cdot I \neq 0$. Since $U$ is simple, we have $x \cdot I = U$. Since $A(x) - x \cdot s \in U = x \cdot I$, there exists $i$ in $I$ such that $x \cdot i = A(x) - x \cdot s$.
Define $r = s + i$ and observe that for all $y$ in $Y$ we have:
$$y \cdot r = y \cdot s + y \cdot i = y \cdot s = A(y),$$
since $i$ annihilates $Y$. Now we do the same calculation for $x$:
$$x \cdot r = x \cdot s + x \cdot i = x \cdot s + \left( A(x) - x \cdot s \right) = A(x).$$
Therefore, $A(z) = z \cdot r$ for all $z$ in $X$, as desired. This completes the inductive step of the proof. It follows now from mathematical induction that the theorem is true for finite sets of any size.
Topological characterization
A ring $R$ is said to act densely on a simple right $R$-module $U$ if it satisfies the conclusion of the Jacobson density theorem. There is a topological reason for describing $R$ as "dense". Firstly, $R$ can be identified with a subring of $\operatorname{End}_D(U)$ by identifying each element of $R$ with the linear transformation it induces by right multiplication. If $U$ is given the discrete topology, and if $U^U$ is given the product topology, and $\operatorname{End}_D(U)$ is viewed as a subspace of $U^U$ and is given the subspace topology, then $R$ acts densely on $U$ if and only if $R$ is a dense set in $\operatorname{End}_D(U)$ with this topology.
Consequences
The Jacobson density theorem has various important consequences in the structure theory of rings. Notably, the Artin–Wedderburn theorem's conclusion about the structure of simple right Artinian rings is recovered. The Jacobson density theorem also characterizes right or left primitive rings as dense subrings of the ring of $D$-linear transformations on some $D$-vector space $U$, where $D$ is a division ring.
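A minimal sketch of how the Artin–Wedderburn conclusion can be recovered from density, under the standard hypotheses; this is the usual textbook argument in condensed form, not a quotation of the article's proof.

```latex
% R simple right Artinian, U a simple right R-module, D = End_R(U).
\begin{align*}
  &\dim_D U = n < \infty
    && \text{(the Artinian condition forces finite dimension over } D\text{)} \\
  &R \to \operatorname{End}_D(U) \text{ is surjective}
    && \text{(density applied to a } D\text{-basis } x_1, \dots, x_n \text{ of } U\text{)} \\
  &R \cong \operatorname{End}_D(U) \cong M_n(D)
    && \text{(simplicity of } R \text{ gives injectivity)}
\end{align*}
```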
Relations to other results
This result is related to the von Neumann bicommutant theorem, which states that, for a *-algebra $A$ of operators on a Hilbert space $H$, the double commutant $A''$ can be approximated by $A$ on any given finite set of vectors. In other words, the double commutant is the closure of $A$ in the weak operator topology. See also the Kaplansky density theorem in the von Neumann algebra setting.
Notes
References
External links
PlanetMath page
Theorems in ring theory
Module theory
Articles containing proofs | Jacobson density theorem | [
"Mathematics"
] | 955 | [
"Articles containing proofs",
"Fields of abstract algebra",
"Module theory"
] |
488,419 | https://en.wikipedia.org/wiki/Newtonian%20limit | In physics, the Newtonian limit is a mathematical approximation applicable to physical systems exhibiting (1) weak gravitation, (2) objects moving slowly compared to the speed of light, and (3) slowly changing (or completely static) gravitational fields. Under these conditions, Newton's law of universal gravitation may be used to obtain accurate values. In the general case, and in the presence of significant gravitation, the general theory of relativity must be used.
In the Newtonian limit, spacetime is approximately flat and the Minkowski metric may be used over finite distances. Here 'approximately flat' means space in which the gravitational effect approaches zero; mathematically, actual spacetime and Minkowski space are not identical, since Minkowski space is an idealized model.
Special relativity
In special relativity, Newtonian behaviour can in most cases be obtained by performing the limit $v/c \to 0$. In this limit, the often-appearing gamma factor becomes 1:
$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \;\longrightarrow\; 1,$$
and the Lorentz transformations between reference frames turn into Galilean transformations:
$$t' = t, \qquad x' = x - vt.$$
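A small numeric sketch (an illustration, not part of the article) showing how quickly the gamma factor approaches 1 as $v/c \to 0$:

```python
# The Lorentz factor differs from 1 only at order (v/c)^2 / 2, so
# relativistic corrections vanish in the Newtonian limit.
import math

def gamma(beta: float) -> float:
    """Lorentz factor for beta = v/c, with |beta| < 1."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.5, 0.1, 0.01, 0.001):
    print(f"v/c = {beta:>6}:  gamma = {gamma(beta):.8f}")
# v/c = 0.001 gives gamma = 1.00000050: Galilean kinematics to seven digits.
```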
General relativity
The geodesic equation for a free particle on curved spacetime with metric $g_{\mu\nu}$ can be derived from the action
$$S = -mc \int \sqrt{-g_{\mu\nu}\, dx^{\mu}\, dx^{\nu}}.$$
If the spacetime metric is
$$ds^{2} = -\left( 1 + \frac{2\Phi}{c^{2}} \right) c^{2}\, dt^{2} + d\vec{x}^{2},$$
then, ignoring all contributions of order $(v/c)^{2}$ and higher, the action becomes
$$S = \int m \left( \frac{\dot{\vec{x}}^{2}}{2} - \Phi \right) dt \;+\; \text{const.},$$
which is the action that reproduces the Newtonian equations of motion of a particle in a gravitational potential $\Phi$.
See also
Classical limit
References
Special relativity
Dynamical systems | Newtonian limit | [
"Physics",
"Mathematics"
] | 274 | [
"Special relativity",
"Mechanics",
"Relativity stubs",
"Theory of relativity",
"Dynamical systems"
] |
488,605 | https://en.wikipedia.org/wiki/Outer%20measure | In the mathematical field of measure theory, an outer measure or exterior measure is a function defined on all subsets of a given set with values in the extended real numbers satisfying some additional technical conditions. The theory of outer measures was first introduced by Constantin Carathéodory to provide an abstract basis for the theory of measurable sets and countably additive measures. Carathéodory's work on outer measures found many applications in measure-theoretic set theory (outer measures are for example used in the proof of the fundamental Carathéodory's extension theorem), and was used in an essential way by Hausdorff to define a dimension-like metric invariant now called Hausdorff dimension. Outer measures are commonly used in the field of geometric measure theory.
Measures are generalizations of length, area and volume, but are useful for much more abstract and irregular sets than intervals in $\mathbb{R}$ or balls in $\mathbb{R}^{3}$. One might expect to define a generalized measuring function $\varphi$ on $\mathbb{R}$ that fulfills the following requirements:
Any interval of reals $[a, b]$ has measure $b - a$
The measuring function $\varphi$ is a non-negative extended real-valued function defined for all subsets of $\mathbb{R}$.
Translation invariance: For any set $A$ and any real $x$, the sets $A$ and $A + x = \{ a + x : a \in A \}$ have the same measure
Countable additivity: for any sequence $(A_j)$ of pairwise disjoint subsets of $\mathbb{R}$,
$$\varphi\left( \bigcup_{j=1}^{\infty} A_j \right) = \sum_{j=1}^{\infty} \varphi(A_j)$$
It turns out that these requirements are incompatible conditions; see non-measurable set. The purpose of constructing an outer measure on all subsets of $X$ is to pick out a class of subsets (to be called measurable) in such a way as to satisfy the countable additivity property.
Outer measures
Given a set $X$, let $2^{X}$ denote the collection of all subsets of $X$, including the empty set $\varnothing$. An outer measure on $X$ is a set function
$$\mu : 2^{X} \to [0, \infty]$$
such that
null empty set: $\mu(\varnothing) = 0$;
countably subadditive: for arbitrary subsets $A, B_1, B_2, \ldots$ of $X$, if $A \subseteq \bigcup_{j=1}^{\infty} B_j$ then $\mu(A) \leq \sum_{j=1}^{\infty} \mu(B_j)$.
Note that there is no subtlety about infinite summation in this definition. Since the summands are all assumed to be nonnegative, the sequence of partial sums could only diverge by increasing without bound. So the infinite sum appearing in the definition will always be a well-defined element of $[0, \infty]$. If, instead, an outer measure were allowed to take negative values, its definition would have to be modified to take into account the possibility of non-convergent infinite sums.
An alternative and equivalent definition. Some textbooks, such as Halmos (1950) and Folland (1999), instead define an outer measure on $X$ to be a function $\mu : 2^{X} \to [0, \infty]$ such that
null empty set: $\mu(\varnothing) = 0$;
monotone: if $A$ and $B$ are subsets of $X$ with $A \subseteq B$, then $\mu(A) \leq \mu(B)$;
countably subadditive: for arbitrary subsets $B_1, B_2, \ldots$ of $X$,
$$\mu\left( \bigcup_{j=1}^{\infty} B_j \right) \leq \sum_{j=1}^{\infty} \mu(B_j).$$
Measurability of sets relative to an outer measure
Let $X$ be a set with an outer measure $\mu$. One says that a subset $E$ of $X$ is $\mu$-measurable (sometimes called Carathéodory-measurable relative to $\mu$, after the mathematician Carathéodory) if and only if
$$\mu(A) = \mu(A \cap E) + \mu(A \setminus E)$$
for every subset $A$ of $X$.
Informally, this says that a $\mu$-measurable subset is one which may be used as a building block, breaking any other subset apart into pieces (namely, the piece which is inside of the measurable set together with the piece which is outside of the measurable set). In terms of the motivation for measure theory, one would expect that area, for example, should be an outer measure on the plane. One might then expect that every subset of the plane would be deemed "measurable," following the expected principle that
$$\mu(A \cup B) = \mu(A) + \mu(B)$$
whenever $A$ and $B$ are disjoint subsets of the plane. However, the formal logical development of the theory shows that the situation is more complicated. A formal implication of the axiom of choice is that for any definition of area as an outer measure which includes as a special case the standard formula for the area of a rectangle, there must be subsets of the plane which fail to be measurable. In particular, the above "expected principle" is false, provided that one accepts the axiom of choice.
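A brute-force sketch of the Carathéodory criterion on a two-point space (the outer measure below is a hypothetical toy example): it is subadditive and monotone, yet only the trivial subsets pass the splitting test.

```python
from itertools import combinations

X = frozenset({"a", "b"})
mu = {frozenset(): 0, frozenset({"a"}): 1, frozenset({"b"}): 1, X: 1}

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_measurable(E):
    # E is mu-measurable iff mu(A) = mu(A ∩ E) + mu(A \ E) for every A ⊆ X
    return all(mu[A] == mu[A & E] + mu[A - E] for A in subsets(X))

for E in subsets(X):
    print(sorted(E), is_measurable(E))
# Only the empty set and X are measurable: testing E = {a} against A = X
# gives mu(X) = 1 but mu({a}) + mu({b}) = 2, so the splitting test fails.
```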
The measure space associated to an outer measure
It is straightforward to use the above definition of $\mu$-measurability to see that
if $E \subseteq X$ is $\mu$-measurable, then its complement $X \setminus E$ is also $\mu$-measurable.
The following condition is known as the "countable additivity of $\mu$ on measurable subsets":
if $E_1, E_2, \ldots$ are $\mu$-measurable pairwise-disjoint ($E_i \cap E_j = \varnothing$ for $i \neq j$) subsets of $X$, then one has
$$\mu\left( \bigcup_{j=1}^{\infty} E_j \right) = \sum_{j=1}^{\infty} \mu(E_j).$$
A similar proof shows that:
if $E_1, E_2, \ldots$ are $\mu$-measurable subsets of $X$, then the union $\bigcup_{j} E_j$ and intersection $\bigcap_{j} E_j$ are also $\mu$-measurable.
The properties given here can be summarized by the following terminology:
Given any outer measure $\mu$ on a set $X$, the collection of all $\mu$-measurable subsets of $X$ is a σ-algebra, and the restriction of $\mu$ to this σ-algebra is a countably additive measure.
One thus has a measure space structure on $X$, arising naturally from the specification of an outer measure on $X$. This measure space has the additional property of completeness, which is contained in the following statement:
Every subset $A \subseteq X$ such that $\mu(A) = 0$ is $\mu$-measurable.
This is easy to prove by using the second property in the "alternative definition" of outer measure.
Restriction and pushforward of an outer measure
Let $\mu$ be an outer measure on the set $X$.
Pushforward
Given another set $Y$ and a map $f : X \to Y$, define $f_{\#}\mu : 2^{Y} \to [0, \infty]$ by
$$\left( f_{\#}\mu \right)(A) = \mu\left( f^{-1}(A) \right).$$
One can verify directly from the definitions that $f_{\#}\mu$ is an outer measure on $Y$.
Restriction
Let $B$ be a subset of $X$. Define $\mu_{B} : 2^{X} \to [0, \infty]$ by
$$\mu_{B}(A) = \mu(A \cap B).$$
One can check directly from the definitions that $\mu_{B}$ is another outer measure on $X$.
Measurability of sets relative to a pushforward or restriction
If a subset $A$ of $X$ is $\mu$-measurable, then it is also $\mu_{B}$-measurable for any subset $B$ of $X$.
Given a map $f : X \to Y$ and a subset $A$ of $Y$, if $f^{-1}(A)$ is $\mu$-measurable then $A$ is $f_{\#}\mu$-measurable. More generally, $A$ is $f_{\#}\mu$-measurable if and only if $f^{-1}(A)$ splits every set of the form $f^{-1}(S)$, $S \subseteq Y$, additively: $\mu\left( f^{-1}(S) \right) = \mu\left( f^{-1}(S) \cap f^{-1}(A) \right) + \mu\left( f^{-1}(S) \setminus f^{-1}(A) \right)$.
Regular outer measures
Definition of a regular outer measure
Given a set $X$, an outer measure $\mu$ on $X$ is said to be regular if any subset $A \subseteq X$ can be approximated 'from the outside' by $\mu$-measurable sets. Formally, this is requiring either of the following equivalent conditions:
For every subset $A$ of $X$, $\mu(A) = \inf\left\{ \mu(B) : A \subseteq B,\ B \text{ is } \mu\text{-measurable} \right\}$.
For every subset $A$ of $X$, there exists a $\mu$-measurable subset $B$ of $X$ which contains $A$ and such that $\mu(B) = \mu(A)$.
It is automatic that the second condition implies the first; the first implies the second by taking the countable intersection of $\mu$-measurable sets $B_i \supseteq A$ with $\mu(B_i) \to \mu(A)$.
The regular outer measure associated to an outer measure
Given an outer measure $\mu$ on a set $X$, define $\nu : 2^{X} \to [0, \infty]$ by
$$\nu(A) = \inf\left\{ \mu(B) : A \subseteq B,\ B \text{ is } \mu\text{-measurable} \right\}.$$
Then $\nu$ is a regular outer measure on $X$ which assigns the same measure as $\mu$ to all $\mu$-measurable subsets of $X$. Every $\mu$-measurable subset is also $\nu$-measurable, and every $\nu$-measurable subset of finite $\nu$-measure is also $\mu$-measurable.
So the measure space associated to $\nu$ may have a larger σ-algebra than the measure space associated to $\mu$. The restrictions of $\mu$ and $\nu$ to the smaller σ-algebra are identical. The elements of the larger σ-algebra which are not contained in the smaller σ-algebra have infinite $\nu$-measure and finite $\mu$-measure.
From this perspective, $\nu$ may be regarded as an extension of $\mu$.
Outer measure and topology
Suppose $(X, d)$ is a metric space and $\mu$ an outer measure on $X$. If $\mu$ has the property that
$$\mu(E \cup F) = \mu(E) + \mu(F)$$
whenever
$$d(E, F) = \inf\left\{ d(x, y) : x \in E,\ y \in F \right\} > 0,$$
then $\mu$ is called a metric outer measure.
Theorem. If $\mu$ is a metric outer measure on $X$, then every Borel subset of $X$ is $\mu$-measurable. (The Borel sets of $X$ are the elements of the smallest $\sigma$-algebra generated by the open sets.)
Construction of outer measures
There are several procedures for constructing outer measures on a set. The classic Munroe reference below describes two particularly useful ones which are referred to as Method I and Method II.
Method I
Let $X$ be a set, $C$ a family of subsets of $X$ which contains the empty set, and $p$ a non-negative extended real valued function on $C$ which vanishes on the empty set.
Theorem. Suppose the family $C$ and the function $p$ are as above and define
$$\mu(E) = \inf\left\{ \sum_{i=0}^{\infty} p(A_i) \;:\; E \subseteq \bigcup_{i=0}^{\infty} A_i,\ A_i \in C \text{ for all } i \right\}.$$
That is, the infimum extends over all sequences $(A_i)$ of elements of $C$ which cover $E$, with the convention that the infimum is infinite if no such sequence exists. Then $\mu$ is an outer measure on $X$.
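For a finite set, the infimum in Method I can be computed by brute force over all subfamilies, since finite covers suffice; the following sketch (the names are illustrative) makes the construction concrete.

```python
from itertools import combinations

def method_one(E, family):
    """family: list of (frozenset, weight) pairs including (frozenset(), 0).
    Returns the infimum, over subfamilies covering E, of the summed weights."""
    best = float("inf")  # the convention: infinite if no cover exists
    for r in range(len(family) + 1):
        for cover in combinations(family, r):
            covered = frozenset().union(*(A for A, _ in cover)) if cover else frozenset()
            if E <= covered:
                best = min(best, sum(w for _, w in cover))
    return best

family = [(frozenset(), 0.0),
          (frozenset({1, 2}), 1.0),
          (frozenset({2, 3}), 1.0),
          (frozenset({1, 2, 3}), 1.5)]
print(method_one(frozenset({1, 3}), family))  # 1.5: one big set beats two small ones
```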
Method II
The second technique is more suitable for constructing outer measures on metric spaces, since it yields metric outer measures. Suppose $(X, d)$ is a metric space. As above, $C$ is a family of subsets of $X$ which contains the empty set and $p$ is a non-negative extended real valued function on $C$ which vanishes on the empty set. For each $\delta > 0$, let
$$C_{\delta} = \left\{ A \in C : \operatorname{diam}(A) \leq \delta \right\}$$
and
$$\mu_{\delta}(E) = \inf\left\{ \sum_{i=0}^{\infty} p(A_i) \;:\; E \subseteq \bigcup_{i=0}^{\infty} A_i,\ A_i \in C_{\delta} \text{ for all } i \right\}.$$
Obviously, $\mu_{\delta} \geq \mu_{\delta'}$ when $\delta \leq \delta'$, since the infimum is taken over a smaller class as $\delta$ decreases. Thus
$$\mu_0(E) = \lim_{\delta \to 0} \mu_{\delta}(E) = \sup_{\delta > 0} \mu_{\delta}(E)$$
exists (possibly infinite).
Theorem. $\mu_0$ is a metric outer measure on $X$.
This is the construction used in the definition of Hausdorff measures for a metric space.
See also
Inner measure
Notes
References
External links
Outer measure at Encyclopedia of Mathematics
Caratheodory measure at Encyclopedia of Mathematics
Measures (measure theory) | Outer measure | [
"Physics",
"Mathematics"
] | 1,672 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
488,687 | https://en.wikipedia.org/wiki/Fisher%20information%20metric | In information geometry, the Fisher information metric is a particular Riemannian metric which can be defined on a smooth statistical manifold, i.e., a smooth manifold whose points are probability distributions. It can be used to calculate the distance between probability distributions.
The metric is interesting in several aspects. By Chentsov’s theorem, the Fisher information metric on statistical models is the only Riemannian metric (up to rescaling) that is invariant under sufficient statistics.
It can also be understood to be the infinitesimal form of the relative entropy (i.e., the Kullback–Leibler divergence); specifically, it is the Hessian of the divergence. Alternately, it can be understood as the metric induced by the flat space Euclidean metric, after appropriate changes of variable. When extended to complex projective Hilbert space, it becomes the Fubini–Study metric; when written in terms of mixed states, it is the quantum Bures metric.
Considered purely as a matrix, it is known as the Fisher information matrix. Considered as a measurement technique, where it is used to estimate hidden parameters in terms of observed random variables, it is known as the observed information.
Definition
Given a statistical manifold with coordinates $\theta = (\theta_1, \theta_2, \ldots, \theta_n)$, one writes $p(x, \theta)$ for the likelihood, that is the probability density of $x$ as a function of $\theta$. Here $x$ is drawn from the value space $R$ for a (discrete or continuous) random variable $X$. The likelihood is normalized over $x$ but not $\theta$:
$$\int_R p(x, \theta)\, dx = 1.$$
The Fisher information metric then takes the form:
$$g_{jk}(\theta) = \int_R \frac{\partial \log p(x, \theta)}{\partial \theta_j}\, \frac{\partial \log p(x, \theta)}{\partial \theta_k}\, p(x, \theta)\, dx.$$
The integral is performed over all values $x$ in $R$. The variable $\theta$ is now a coordinate on a Riemann manifold. The labels $j$ and $k$ index the local coordinate axes on the manifold.
When the probability is derived from the Gibbs measure, as it would be for any Markovian process, then $\theta$ can also be understood to be a Lagrange multiplier; Lagrange multipliers are used to enforce constraints, such as holding the expectation value of some quantity constant. If there are n constraints holding n different expectation values constant, then the dimension of the manifold is n dimensions smaller than the original space. In this case, the metric can be explicitly derived from the partition function; a derivation and discussion is presented there.
Substituting $i(x, \theta) = -\log p(x, \theta)$ from information theory, an equivalent form of the above definition is:
$$g_{jk}(\theta) = \int_R \frac{\partial^2 i(x, \theta)}{\partial \theta_j\, \partial \theta_k}\, p(x, \theta)\, dx = \mathrm{E}\left[ \frac{\partial^2 i(x, \theta)}{\partial \theta_j\, \partial \theta_k} \right].$$
To show that the equivalent form equals the above definition, note that
$$\int_R \frac{\partial \log p(x, \theta)}{\partial \theta_j}\, p(x, \theta)\, dx = 0$$
and apply $\partial / \partial \theta_k$ on both sides.
Examples
The Fisher information metric is particularly simple for the exponential family, which has
$$p(x \mid \theta) = \exp\left( \eta(\theta) \cdot T(x) - A(\theta) + B(x) \right).$$
The metric is
$$g_{jk}(\theta) = \frac{\partial^2 A(\theta)}{\partial \theta_j\, \partial \theta_k} - \frac{\partial^2 \eta(\theta)}{\partial \theta_j\, \partial \theta_k} \cdot \mathrm{E}\left[ T(x) \right].$$
The metric has a particularly simple form if we are using the natural parameters. In this case, $\eta(\theta) = \theta$, so the metric is just $g_{jk}(\theta) = \partial_j \partial_k A(\theta)$.
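As a quick numeric illustration (not from the article), the Bernoulli model with parameter $\theta$ has a one-dimensional Fisher metric, the single number $g(\theta) = 1/(\theta(1-\theta))$; the sketch below checks this directly against the definition.

```python
def fisher_bernoulli(theta: float) -> float:
    """g = E[(d/dθ log p(x;θ))^2], summing over the outcomes x ∈ {0, 1}."""
    prob = {1: theta, 0: 1.0 - theta}
    score = {1: 1.0 / theta, 0: -1.0 / (1.0 - theta)}  # ∂ log p / ∂θ
    return sum(prob[x] * score[x] ** 2 for x in (0, 1))

theta = 0.3
print(fisher_bernoulli(theta))        # 4.7619...
print(1.0 / (theta * (1.0 - theta)))  # analytic value 1/(θ(1-θ)): matches
```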
Normal distribution
Multivariate normal distribution: let $P = \Sigma^{-1}$ be the precision matrix.
The metric splits into a mean part and a precision/variance part, because $g_{\mu, P} = 0$. The mean part is given by the precision matrix: $g_{\mu} = d\mu^{T} P\, d\mu$. The precision part is $g_{P} = \tfrac{1}{2} \operatorname{tr}\left[ (\Sigma\, dP)^{2} \right]$.
In particular, for the single variable normal distribution, $ds^2 = \dfrac{d\mu^2 + 2\, d\sigma^2}{\sigma^2}$. Let $x = \mu / \sqrt{2}$ and $y = \sigma$; then $ds^2 = 2\, \dfrac{dx^2 + dy^2}{y^2}$. This is the Poincaré half-plane model.
The shortest paths (geodesics) between two univariate normal distributions are either parallel to the $\sigma$ axis, or half circular arcs centered on the $\mu$-axis.
In the coordinates $(x, y)$ above, the geodesic connecting two univariate normal distributions is the standard hyperbolic geodesic: a vertical line when $x_0 = x_1$, and otherwise the half circle $(x - a)^2 + y^2 = r^2$ passing through both points with its center $a$ on the $x$-axis, traversed with the usual hyperbolic arc-length parametrization.
Relation to the Kullback–Leibler divergence
Alternatively, the metric can be obtained as the second derivative of the relative entropy or Kullback–Leibler divergence. To obtain this, one considers two probability distributions $p(x, \theta)$ and $p(x, \theta + \Delta\theta)$, which are infinitesimally close to one another, so that
$$p(x, \theta + \Delta\theta) = p(x, \theta) + \sum_j \Delta\theta_j\, \frac{\partial p(x, \theta)}{\partial \theta_j},$$
with $\Delta\theta_j$ an infinitesimally small change of $\theta$ in the $j$ direction. Then, since the Kullback–Leibler divergence has an absolute minimum of 0 when $\Delta\theta = 0$, one has an expansion up to second order in $\Delta\theta$ of the form
$$f_{\theta}(\Delta\theta) := D_{\mathrm{KL}}\left( p(x, \theta) \,\|\, p(x, \theta + \Delta\theta) \right) = \frac{1}{2} \sum_{jk} \Delta\theta_j\, \Delta\theta_k\, g_{jk}(\theta).$$
The symmetric matrix $g_{jk}$ is positive (semi) definite and is the Hessian matrix of the function $f_{\theta}$ at the extremum point $\Delta\theta = 0$. This can be thought of intuitively as: "The distance between two infinitesimally close points on a statistical differential manifold is the informational difference between them."
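The same Bernoulli example can be used to check this relation numerically (an illustration under the same assumptions as the earlier sketch): a finite-difference Hessian of the KL divergence at zero displacement reproduces $1/(\theta(1-\theta))$.

```python
import math

def kl_bernoulli(p: float, q: float) -> float:
    """Kullback–Leibler divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

theta, h = 0.3, 1e-4
# central second difference of δ ↦ KL(θ || θ + δ) at δ = 0
hessian = (kl_bernoulli(theta, theta + h) - 2 * kl_bernoulli(theta, theta)
           + kl_bernoulli(theta, theta - h)) / h ** 2
print(hessian)  # ≈ 4.7619 = 1/(θ(1-θ)), matching the Fisher metric
```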
Relation to Ruppeiner geometry
The Ruppeiner metric and Weinhold metric are the Fisher information metric calculated for Gibbs distributions, such as the ones found in equilibrium statistical mechanics.
Change in free entropy
The action of a curve on a Riemannian manifold is given by
$$A = \frac{1}{2} \int_a^b \frac{\partial \theta^j}{\partial t}\, g_{jk}(\theta)\, \frac{\partial \theta^k}{\partial t}\, dt.$$
The path parameter here is time $t$; this action can be understood to give the change in free entropy of a system as it is moved from time $a$ to time $b$. Specifically, one has
$$\Delta S = (b - a)\, A$$
as the change in free entropy. This observation has resulted in practical applications in the chemical and processing industries: in order to minimize the change in free entropy of a system, one should follow the minimum geodesic path between the desired endpoints of the process. The geodesic minimizes the entropy, due to the Cauchy–Schwarz inequality, which states that the action is bounded below by the length of the curve, squared.
Relation to the Jensen–Shannon divergence
The Fisher metric also allows the action and the curve length to be related to the Jensen–Shannon divergence. Specifically, one has
$$(b - a) \int_a^b \frac{\partial \theta^j}{\partial t}\, g_{jk}\, \frac{\partial \theta^k}{\partial t}\, dt = 8 \int_a^b dJSD,$$
where the integrand $dJSD$ is understood to be the infinitesimal change in the Jensen–Shannon divergence along the path taken. Similarly, for the curve length, one has
$$\int_a^b \sqrt{ \frac{\partial \theta^j}{\partial t}\, g_{jk}\, \frac{\partial \theta^k}{\partial t} }\; dt = \sqrt{8} \int_a^b \sqrt{dJSD}.$$
That is, the square root of the Jensen–Shannon divergence is just the Fisher metric (divided by the square root of 8).
As Euclidean metric
For a discrete probability space, that is, a probability space on a finite set of objects, the Fisher metric can be understood to simply be the Euclidean metric restricted to a positive orthant (e.g. "quadrant" in $\mathbb{R}^2$) of a unit sphere, after appropriate changes of variable.
Consider a flat, Euclidean space, of dimension $N+1$, parametrized by points $y = (y_0, \ldots, y_N)$. The metric for Euclidean space is given by
$$h = \sum_{i=0}^{N} dy_i\; dy_i,$$
where the $dy_i$ are 1-forms; they are the basis vectors for the cotangent space. Writing $\frac{\partial}{\partial y_j}$ as the basis vectors for the tangent space, so that
$$dy_j\left( \frac{\partial}{\partial y_k} \right) = \delta_{jk},$$
the Euclidean metric may be written as
$$h^{\mathrm{flat}}_{jk} = h\left( \frac{\partial}{\partial y_j}, \frac{\partial}{\partial y_k} \right) = \delta_{jk}.$$
The superscript 'flat' is there to remind that, when written in coordinate form, this metric is with respect to the flat-space coordinate $y$.
An $N$-dimensional unit sphere embedded in ($N+1$)-dimensional Euclidean space may be defined as
$$\sum_{i=0}^{N} y_i^2 = 1.$$
This embedding induces a metric on the sphere; it is inherited directly from the Euclidean metric on the ambient space. It takes exactly the same form as the above, taking care to ensure that the coordinates are constrained to lie on the surface of the sphere. This can be done, e.g. with the technique of Lagrange multipliers.
Consider now the change of variable $p_i = y_i^2$. The sphere condition now becomes the probability normalization condition
$$\sum_{i=0}^{N} p_i = 1,$$
while the metric becomes
$$h = \sum_i dy_i\; dy_i = \sum_i d\sqrt{p_i}\; d\sqrt{p_i} = \frac{1}{4} \sum_i \frac{dp_i\; dp_i}{p_i}.$$
The last can be recognized as one-fourth of the Fisher information metric. To complete the process, recall that the probabilities are parametric functions of the manifold variables $\theta$, that is, one has $p_i = p_i(\theta)$. Thus, the above induces a metric on the parameter manifold:
$$h = \frac{1}{4} \sum_i \frac{dp_i(\theta)\; dp_i(\theta)}{p_i(\theta)} = \frac{1}{4} \sum_{jk} \sum_i \frac{1}{p_i(\theta)}\, \frac{\partial p_i(\theta)}{\partial \theta_j}\, \frac{\partial p_i(\theta)}{\partial \theta_k}\; d\theta_j\; d\theta_k,$$
or, in coordinate form, the Fisher information metric is:
$$g_{jk}(\theta) = 4\, h^{\mathrm{fisher}}_{jk} = \sum_i \frac{1}{p_i(\theta)}\, \frac{\partial p_i(\theta)}{\partial \theta_j}\, \frac{\partial p_i(\theta)}{\partial \theta_k} = \sum_i p_i(\theta)\, \frac{\partial \log p_i(\theta)}{\partial \theta_j}\, \frac{\partial \log p_i(\theta)}{\partial \theta_k},$$
where, as before,
$$d\theta_j\left( \frac{\partial}{\partial \theta_k} \right) = \delta_{jk}.$$
The superscript 'fisher' is present to remind that this expression is applicable for the coordinates $\theta$; whereas the non-coordinate form is the same as the Euclidean (flat-space) metric. That is, the Fisher information metric on a statistical manifold is simply (four times) the Euclidean metric restricted to the positive orthant of the sphere, after appropriate changes of variable.
When the random variable is not discrete, but continuous, the argument still holds. This can be seen in one of two different ways. One way is to carefully recast all of the above steps in an infinite-dimensional space, being careful to define limits appropriately, etc., in order to make sure that all manipulations are well-defined, convergent, etc. The other way, as noted by Gromov, is to use a category-theoretic approach; that is, to note that the above manipulations remain valid in the category of probabilities. Here, one should note that such a category would have the Radon–Nikodym property, that is, the Radon–Nikodym theorem holds in this category. This includes the Hilbert spaces; these are square-integrable, and in the manipulations above, this is sufficient to safely replace the sum over squares by an integral over squares.
As Fubini–Study metric
The above manipulations deriving the Fisher metric from the Euclidean metric can be extended to complex projective Hilbert spaces. In this case, one obtains the Fubini–Study metric. This should perhaps be no surprise, as the Fubini–Study metric provides the means of measuring information in quantum mechanics. The Bures metric, also known as the Helstrom metric, is identical to the Fubini–Study metric, although the latter is usually written in terms of pure states, as below, whereas the Bures metric is written for mixed states. By setting the phase of the complex coordinate to zero, one obtains exactly one-fourth of the Fisher information metric, exactly as above.
One begins with the same trick, of constructing a probability amplitude, written in polar coordinates, so:
$$\psi(x; \theta) = \sqrt{p(x; \theta)}\; e^{i\alpha(x; \theta)}.$$
Here, $\psi(x; \theta)$ is a complex-valued probability amplitude; $p(x; \theta)$ and $\alpha(x; \theta)$ are strictly real. The previous calculations are obtained by setting $\alpha(x; \theta) = 0$. The usual condition that probabilities lie within a simplex, namely that
$$\int_X p(x; \theta)\, dx = 1,$$
is equivalently expressed by the idea that the square amplitude be normalized:
$$\int_X \left| \psi(x; \theta) \right|^2\, dx = 1.$$
When $\psi(x; \theta)$ is real, this is the surface of a sphere.
The Fubini–Study metric, written in infinitesimal form, using quantum-mechanical bra–ket notation, is
$$ds^2 = \frac{\langle \delta\psi \mid \delta\psi \rangle}{\langle \psi \mid \psi \rangle} - \frac{\langle \delta\psi \mid \psi \rangle\, \langle \psi \mid \delta\psi \rangle}{\langle \psi \mid \psi \rangle^2}.$$
In this notation, one has that $\langle x \mid \psi \rangle = \psi(x; \theta)$ and integration over the entire measure space $X$ is written as
$$\langle \phi \mid \psi \rangle = \int_X \phi^*(x; \theta)\, \psi(x; \theta)\, dx.$$
The expression $\vert \delta\psi \rangle$ can be understood to be an infinitesimal variation; equivalently, it can be understood to be a 1-form in the cotangent space. Using the infinitesimal notation, the polar form of the probability above is simply
$$\delta\psi = \left( \frac{\delta p}{2p} + i\, \delta\alpha \right) \psi.$$
Inserting the above into the Fubini–Study metric gives:
$$ds^2 = \frac{1}{4} \int_X \left( \delta \log p \right)^2 p\, dx + \int_X \left( \delta\alpha \right)^2 p\, dx - \left( \int_X \delta\alpha\; p\, dx \right)^2.$$
Setting $\delta\alpha = 0$ in the above makes it clear that the first term is (one-fourth of) the Fisher information metric. The full form of the above can be made slightly clearer by changing notation to that of standard Riemannian geometry, so that the metric becomes a symmetric 2-form acting on the tangent space. The change of notation is done simply by replacing $\delta \to d$ and noting that the integrals are just expectation values; so:
$$ds^2 = \frac{1}{4} \mathrm{E}\left[ (d\log p)^2 \right] + \mathrm{E}\left[ (d\alpha)^2 \right] - \left( \mathrm{E}\left[ d\alpha \right] \right)^2.$$
The imaginary term is a symplectic form; it is the Berry phase or geometric phase. In index notation, the metric is:
$$g_{jk} = \frac{1}{4} \mathrm{E}\left[ \frac{\partial \log p}{\partial \theta_j}\, \frac{\partial \log p}{\partial \theta_k} \right] + \mathrm{E}\left[ \frac{\partial \alpha}{\partial \theta_j}\, \frac{\partial \alpha}{\partial \theta_k} \right] - \mathrm{E}\left[ \frac{\partial \alpha}{\partial \theta_j} \right] \mathrm{E}\left[ \frac{\partial \alpha}{\partial \theta_k} \right].$$
Again, the first term can be clearly seen to be (one fourth of) the Fisher information metric, by setting $\alpha = 0$. Equivalently, the Fubini–Study metric can be understood as the metric on complex projective Hilbert space that is induced by the complex extension of the flat Euclidean metric. The difference between this, and the Bures metric, is that the Bures metric is written in terms of mixed states.
Continuously-valued probabilities
A slightly more formal, abstract definition can be given, as follows.
Let X be an orientable manifold, and let $\mu$ be a measure on X. Equivalently, let $(\Omega, \mathcal{F}, P)$ be a probability space on $\Omega = X$, with sigma algebra $\mathcal{F}$ and probability $P$.
The statistical manifold S(X) of X is defined as the space of all measures on X (with the sigma-algebra held fixed). Note that this space is infinite-dimensional, and is commonly taken to be a Fréchet space. The points of S(X) are measures.
Pick a point $\mu \in S(X)$ and consider the tangent space $T_{\mu}S$. The Fisher information metric is then an inner product on the tangent space. With some abuse of notation, one may write this as
$$g(\sigma_1, \sigma_2) = \int_X \frac{d\sigma_1}{d\mu}\, \frac{d\sigma_2}{d\mu}\, d\mu.$$
Here, $\sigma_1$ and $\sigma_2$ are vectors in the tangent space; that is, $\sigma_1, \sigma_2 \in T_{\mu}S$. The abuse of notation is to write the tangent vectors as if they are derivatives, and to insert the extraneous $d$ in writing the integral: the integration is meant to be carried out using the measure $\mu$ over the whole space X. This abuse of notation is, in fact, taken to be perfectly normal in measure theory; it is the standard notation for the Radon–Nikodym derivative.
In order for the integral to be well-defined, the space S(X) must have the Radon–Nikodym property, and more specifically, the tangent space is restricted to those vectors that are square-integrable. Square integrability is equivalent to saying that a Cauchy sequence converges to a finite value under the weak topology: the space contains its limit points. Note that Hilbert spaces possess this property.
This definition of the metric can be seen to be equivalent to the previous, in several steps. First, one selects a submanifold of S(X) by considering only those measures $\mu$ that are parameterized by some smoothly varying parameter $\theta$. Then, if $\theta$ is finite-dimensional, then so is the submanifold; likewise, the tangent space has the same dimension as $\theta$.
With some additional abuse of language, one notes that the exponential map provides a map from vectors in a tangent space to points in an underlying manifold. Thus, if $\sigma \in T_{\mu}S$ is a vector in the tangent space, then $p = \exp_{\mu}(\sigma)$ is the corresponding probability associated with the point $p \in S(X)$ (after the parallel transport of the exponential map to $\mu$). Conversely, given a point $p \in S(X)$, the logarithm gives a point $\log_{\mu}(p)$ in the tangent space (roughly speaking, as again, one must transport from the origin to the point $\mu$; for details, refer to original sources). Thus, one has the appearance of logarithms in the simpler definition, previously given.
See also
Cramér–Rao bound
Fisher information
Hellinger distance
Information geometry
Notes
References
Garvesh Raskutti and Sayan Mukherjee (2014). The information geometry of mirror descent. https://arxiv.org/pdf/1310.7780.pdf
Shun'ichi Amari (1985) Differential-geometrical methods in statistics, Lecture Notes in Statistics, Springer-Verlag, Berlin.
Shun'ichi Amari, Hiroshi Nagaoka (2000) Methods of information geometry, Translations of mathematical monographs; v. 191, American Mathematical Society.
Paolo Gibilisco, Eva Riccomagno, Maria Piera Rogantin and Henry P. Wynn, (2009) Algebraic and Geometric Methods in Statistics, Cambridge U. Press, Cambridge.
Differential geometry
Information geometry
Statistical distance | Fisher information metric | [
"Physics",
"Mathematics"
] | 2,916 | [
"Mathematical structures",
"Physical quantities",
"Statistical distance",
"Distance",
"Category theory",
"Information geometry"
] |
488,743 | https://en.wikipedia.org/wiki/Carbon%20monoxide%20poisoning | Carbon monoxide poisoning typically occurs from breathing in carbon monoxide (CO) at excessive levels. Symptoms are often described as "flu-like" and commonly include headache, dizziness, weakness, vomiting, chest pain, and confusion. Large exposures can result in loss of consciousness, arrhythmias, seizures, or death. The classically described "cherry red skin" rarely occurs. Long-term complications may include chronic fatigue, trouble with memory, and movement problems.
CO is a colorless and odorless gas which is initially non-irritating. It is produced during incomplete burning of organic matter. This can occur from motor vehicles, heaters, or cooking equipment that run on carbon-based fuels. Carbon monoxide primarily causes adverse effects by combining with hemoglobin to form carboxyhemoglobin (symbol COHb or HbCO) preventing the blood from carrying oxygen and expelling carbon dioxide as carbaminohemoglobin. Additionally, many other hemoproteins such as myoglobin, Cytochrome P450, and mitochondrial cytochrome oxidase are affected, along with other metallic and non-metallic cellular targets.
Diagnosis is typically based on a HbCO level of more than 3% among nonsmokers and more than 10% among smokers. The biological threshold for carboxyhemoglobin tolerance is typically accepted to be 15% COHb, meaning toxicity is consistently observed at levels in excess of this concentration. The FDA has previously set a threshold of 14% COHb in certain clinical trials evaluating the therapeutic potential of carbon monoxide. In general, 30% COHb is considered severe carbon monoxide poisoning. The highest reported non-fatal carboxyhemoglobin level was 73% COHb.
Efforts to prevent poisoning include carbon monoxide detectors, proper venting of gas appliances, keeping chimneys clean, and keeping exhaust systems of vehicles in good repair. Treatment of poisoning generally consists of giving 100% oxygen along with supportive care. This is often carried out until symptoms are absent and the HbCO level is less than 3% (nonsmokers) or 10% (smokers).
Carbon monoxide poisoning is relatively common, resulting in more than 20,000 emergency room visits a year in the United States. It is the most common type of fatal poisoning in many countries. In the United States, non-fire related cases result in more than 400 deaths a year. Poisonings occur more often in the winter, particularly from the use of portable generators during power outages. The toxic effects of CO have been known since ancient history. The discovery that hemoglobin is affected by CO emerged with an investigation by James Watt and Thomas Beddoes into the therapeutic potential of hydrocarbonate in 1793, and later confirmed by Claude Bernard between 1846 and 1857.
Background
Carbon monoxide is not toxic to all forms of life, and the toxicity is a classical dose-dependent example of hormesis. Small amounts of carbon monoxide are naturally produced through many enzymatic and non-enzymatic reactions across phylogenetic kingdoms where it can serve as an important neurotransmitter (subcategorized as a gasotransmitter) and a potential therapeutic agent. In the case of prokaryotes, some bacteria produce, consume and respond to carbon monoxide whereas certain other microbes are susceptible to its toxicity. Currently, there are no known adverse effects on photosynthesizing plants.
The harmful effects of carbon monoxide are generally considered to be due to tightly binding with the prosthetic heme moiety of hemoproteins that results in interference with cellular operations, for example: carbon monoxide binds with hemoglobin to form carboxyhemoglobin which affects gas exchange and cellular respiration. Inhaling excessive concentrations of the gas can lead to hypoxic injury, nervous system damage, and even death.
As pioneered by Esther Killick, different species and different people across diverse demographics may have different carbon monoxide tolerance levels. The carbon monoxide tolerance level for any person is altered by several factors, including genetics (hemoglobin mutations), behavior such as activity level, rate of ventilation, a pre-existing cerebral or cardiovascular disease, cardiac output, anemia, sickle cell disease and other hematological disorders, geography and barometric pressure, and metabolic rate.
Physiology
Carbon monoxide is produced naturally by many physiologically relevant enzymatic and non-enzymatic reactions best exemplified by heme oxygenase catalyzing the biotransformation of heme (an iron protoporphyrin) into biliverdin and eventually bilirubin. Aside from physiological signaling, most carbon monoxide is stored as carboxyhemoglobin at non-toxic levels below 3% HbCO.
Therapeutics
Small amounts of CO are beneficial and enzymes exist that produce it at times of oxidative stress. A variety of drugs are being developed to introduce small amounts of CO, these drugs are commonly called carbon monoxide-releasing molecules. Historically, the therapeutic potential of factitious airs, notably carbon monoxide as hydrocarbonate, was investigated by Thomas Beddoes, James Watt, Tiberius Cavallo, James Lind, Humphry Davy, and others in many labs such as the Pneumatic Institution.
Signs and symptoms
On average, exposure at 100 ppm or greater is dangerous to human health. The WHO-recommended limit for indoor CO exposure over 24 hours is 4 mg/m3. Acute exposure should not exceed 10 mg/m3 over 8 hours, 35 mg/m3 in one hour, and 100 mg/m3 in 15 minutes.
Acute poisoning
The main manifestations of carbon monoxide poisoning develop in the organ systems most dependent on oxygen use, the central nervous system and the heart. The initial symptoms of acute carbon monoxide poisoning include headache, nausea, malaise, and fatigue. These symptoms are often mistaken for a virus such as influenza or other illnesses such as food poisoning or gastroenteritis. Headache is the most common symptom of acute carbon monoxide poisoning; it is often described as dull, frontal, and continuous. Increasing exposure produces cardiac abnormalities including fast heart rate, low blood pressure, and cardiac arrhythmia; central nervous system symptoms include delirium, hallucinations, dizziness, unsteady gait, confusion, seizures, central nervous system depression, unconsciousness, respiratory arrest, and death. Less common symptoms of acute carbon monoxide poisoning include myocardial ischemia, atrial fibrillation, pneumonia, pulmonary edema, high blood sugar, lactic acidosis, muscle necrosis, acute kidney failure, skin lesions, and visual and auditory problems. Carbon monoxide exposure may lead to a significantly shorter life span due to heart damage.
One of the major concerns following acute carbon monoxide poisoning is the severe delayed neurological manifestations that may occur. Problems may include difficulty with higher intellectual functions, short-term memory loss, dementia, amnesia, psychosis, irritability, a strange gait, speech disturbances, Parkinson's disease-like syndromes, cortical blindness, and a depressed mood. Depression may occur in those who did not have pre-existing depression. These delayed neurological sequelae may occur in up to 50% of poisoned people after 2 to 40 days. It is difficult to predict who will develop delayed sequelae; however, advanced age, loss of consciousness while poisoned, and initial neurological abnormalities may increase the chance of developing delayed symptoms.
Chronic poisoning
Chronic exposure to relatively low levels of carbon monoxide may cause persistent headaches, lightheadedness, depression, confusion, memory loss, nausea, hearing disorders and vomiting. It is unknown whether low-level chronic exposure may cause permanent neurological damage. Typically, upon removal from exposure to carbon monoxide, symptoms usually resolve themselves, unless there has been an episode of severe acute poisoning. However, one case noted permanent memory loss and learning problems after a three-year exposure to relatively low levels of carbon monoxide from a faulty furnace.
Chronic exposure may worsen cardiovascular symptoms in some people. Chronic carbon monoxide exposure might increase the risk of developing atherosclerosis. Long-term exposures to carbon monoxide present the greatest risk to persons with coronary heart disease and in females who are pregnant.
In experimental animals, carbon monoxide appears to worsen noise-induced hearing loss at noise exposure conditions that would have limited effects on hearing otherwise. In humans, hearing loss has been reported following carbon monoxide poisoning. Unlike the findings in animal studies, noise exposure was not a necessary factor for the auditory problems to occur.
Fatal poisoning
One classic sign of carbon monoxide poisoning is more often seen in the dead rather than the living – people have been described as looking red-cheeked and healthy. However, since this "cherry-red" appearance is more common in the dead, it is not considered a useful diagnostic sign in clinical medicine. In autopsy examinations, the appearance of carbon monoxide poisoning is notable because unembalmed dead persons are normally bluish and pale, whereas dead carbon-monoxide poisoned people may appear unusually lifelike in coloration. The colorant effect of carbon monoxide in such postmortem circumstances is thus analogous to its use as a red colorant in the commercial meat-packing industry.
Epidemiology
The true number of cases of carbon monoxide poisoning is unknown, since many non-lethal exposures go undetected. From the available data, carbon monoxide poisoning is the most common cause of injury and death due to poisoning worldwide. Poisoning is typically more common during the winter months. This is due to increased domestic use of gas furnaces, gas or kerosene space heaters, and kitchen stoves during the winter months, which if faulty and/or used without adequate ventilation, may produce excessive carbon monoxide. Carbon monoxide detection and poisoning also increases during power outages, when electric heating and cooking appliances become inoperative and residents may temporarily resort to fuel-burning space heaters, stoves, and grills (some of which are safe only for outdoor use but nonetheless are errantly burned indoors).
It has been estimated that more than 40,000 people per year seek medical attention for carbon monoxide poisoning in the United States. 95% of carbon monoxide poisoning deaths in Australia are due to gas space heaters. In many industrialized countries, carbon monoxide is the cause of more than 50% of fatal poisonings. In the United States, approximately 200 people die each year from carbon monoxide poisoning associated with home fuel-burning heating equipment. Carbon monoxide poisoning contributes to the approximately 5,613 smoke inhalation deaths each year in the United States. The CDC reports, "Each year, more than 500 Americans die from unintentional carbon monoxide poisoning, and more than 2,000 commit suicide by intentionally poisoning themselves." For the 10-year period from 1979 to 1988, 56,133 deaths from carbon monoxide poisoning occurred in the United States, with 25,889 of those being suicides, leaving 30,244 unintentional deaths. A report from New Zealand showed that 206 people died from carbon monoxide poisoning in the years of 2001 and 2002. In total carbon monoxide poisoning was responsible for 43.9% of deaths by poisoning in that country. In South Korea, 1,950 people had been poisoned by carbon monoxide with 254 deaths from 2001 through 2003. A report from Jerusalem showed 3.53 per 100,000 people were poisoned annually from 2001 through 2006. In Hubei, China, 218 deaths from poisoning were reported over a 10-year period with 16.5% being from carbon monoxide exposure.
Causes
Carbon monoxide is a product of combustion of organic matter under conditions of restricted oxygen supply, which prevents complete oxidation to carbon dioxide (CO2). Sources of carbon monoxide include cigarette smoke, house fires, faulty furnaces, heaters, wood-burning stoves, internal combustion vehicle exhaust, electrical generators, propane-fueled equipment such as portable stoves, and gasoline-powered tools such as leaf blowers, lawn mowers, high-pressure washers, concrete cutting saws, power trowels, and welders. Exposure typically occurs when equipment is used in buildings or semi-enclosed spaces.
Riding in the back of pickup trucks has led to poisoning in children. Idling automobiles with the exhaust pipe blocked by snow has led to the poisoning of car occupants. Any perforation between the exhaust manifold and shroud can result in exhaust gases reaching the cabin. Generators and propulsion engines on boats, notably houseboats, have resulted in fatal carbon monoxide exposures.
Poisoning may also occur following the use of a self-contained underwater breathing apparatus (SCUBA) due to faulty diving air compressors.
In caves carbon monoxide can build up in enclosed chambers due to the presence of decomposing organic matter. In coal mines incomplete combustion may occur during explosions resulting in the production of afterdamp. The gas is up to 3% CO and may be fatal after just a single breath. Following an explosion in a colliery, adjacent interconnected mines may become dangerous due to the afterdamp leaking from mine to mine. Such an incident followed the Trimdon Grange explosion which killed men in the Kelloe mine.
Another source of poisoning is exposure to the organic solvent dichloromethane, also known as methylene chloride, found in some paint strippers, as the metabolism of dichloromethane produces carbon monoxide. In November 2019, an EPA ban on dichloromethane in paint strippers for consumer use took effect in the United States.
Prevention
Detectors
Prevention remains a vital public health issue, requiring public education on the safe operation of appliances, heaters, fireplaces, and internal-combustion engines, as well as increased emphasis on the installation of carbon monoxide detectors. Carbon monoxide is tasteless, odourless, and colourless, and therefore cannot be detected by visual cues or smell.
The United States Consumer Product Safety Commission has stated, "carbon monoxide detectors are as important to home safety as smoke detectors are," and recommends each home have at least one carbon monoxide detector, and preferably one on each level of the building. These devices, which are relatively inexpensive and widely available, are either battery- or AC-powered, with or without battery backup. In buildings, carbon monoxide detectors are usually installed around heaters and other equipment. If a relatively high level of carbon monoxide is detected, the device sounds an alarm, giving people the chance to evacuate and ventilate the building. Unlike smoke detectors, carbon monoxide detectors do not need to be placed near ceiling level.
The use of carbon monoxide detectors has been standardized in many areas. In the US, NFPA 720–2009, the carbon monoxide detector guidelines published by the National Fire Protection Association, mandates the placement of carbon monoxide detectors/alarms on every level of the residence, including the basement, in addition to outside sleeping areas. In new homes, AC-powered detectors must have battery backup and be interconnected to ensure early warning of occupants at all levels. NFPA 720-2009 is the first national carbon monoxide standard to address devices in non-residential buildings. These guidelines, which now pertain to schools, healthcare centers, nursing homes, and other non-residential buildings, include three main points:
1. A secondary power supply (battery backup) must operate all carbon monoxide notification appliances for at least 12 hours,
2. Detectors must be on the ceiling in the same room as permanently installed fuel-burning appliances, and
3. Detectors must be located on every habitable level and in every HVAC zone of the building.
Gas organizations will often recommend getting gas appliances serviced at least once a year.
Legal requirements
The NFPA standard is not necessarily enforced by law. As of April 2006, the US state of Massachusetts requires detectors to be present in all residences with potential CO sources, regardless of building age and whether they are owner-occupied or rented. This is enforced by municipal inspectors and was inspired by the death of 7-year-old Nicole Garofalo in 2005 due to snow blocking a home heating vent. Other jurisdictions may have no requirement or only mandate detectors for new construction or at time of sale.
World Health Organization recommendations
The following guideline values (ppm values rounded) and periods of time-weighted average exposures have been determined in such a way that the carboxyhemoglobin (COHb) level of 2.5% is not exceeded, even when a normal subject engages in light or moderate exercise (a unit-conversion sketch follows the list):
100 mg/m3 (87 ppm) for 15 min
60 mg/m3 (52 ppm) for 30 min
30 mg/m3 (26 ppm) for 1 h
10 mg/m3 (9 ppm) for 8 h
7 mg/m3 (6 ppm) for 24 h (for indoor air quality, so as not to exceed 2% COHb for chronic exposure)
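The rounded ppm values in the list above follow from the standard conversion between mass concentration and mixing ratio for an ideal gas at 25 °C and 1 atm; the small sketch below is an illustration, not part of the guideline text.

```python
# ppm = (mg/m^3) × (molar volume in L/mol) / (molar mass in g/mol)
MOLAR_VOLUME_L = 24.45   # litres per mole of ideal gas at 25 °C, 1 atm
MOLAR_MASS_CO = 28.01    # grams per mole of carbon monoxide

def mg_per_m3_to_ppm(mg_per_m3: float) -> float:
    return mg_per_m3 * MOLAR_VOLUME_L / MOLAR_MASS_CO

for mg in (100, 60, 30, 10, 7):
    print(f"{mg:>3} mg/m3  ->  {mg_per_m3_to_ppm(mg):5.1f} ppm")
# 100 -> 87.3, 60 -> 52.4, 30 -> 26.2, 10 -> 8.7, 7 -> 6.1, matching the
# rounded guideline figures of 87, 52, 26, 9 and 6 ppm.
```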
Diagnosis
As many symptoms of carbon monoxide poisoning also occur with many other types of poisonings and infections (such as the flu), the diagnosis is often difficult. A history of potential carbon monoxide exposure, such as being exposed to a residential fire, may suggest poisoning, but the diagnosis is confirmed by measuring the levels of carbon monoxide in the blood. This can be determined by measuring the amount of carboxyhemoglobin compared to the amount of hemoglobin in the blood.
The ratio of carboxyhemoglobin to hemoglobin molecules in an average person may be up to 5%, although cigarette smokers who smoke two packs per day may have levels up to 9%. In symptomatic poisoned people they are often in the 10–30% range, while persons who die may have postmortem blood levels of 30–90%.
As people may continue to experience significant symptoms of CO poisoning long after their blood carboxyhemoglobin concentration has returned to normal, presenting to examination with a normal carboxyhemoglobin level (which may happen in late states of poisoning) does not rule out poisoning.
Measuring
Carbon monoxide may be quantitated in blood using spectrophotometric methods or chromatographic techniques in order to confirm a diagnosis of poisoning in a person or to assist in the forensic investigation of a case of fatal exposure.
A CO-oximeter can be used to determine carboxyhemoglobin levels. Pulse CO-oximeters estimate carboxyhemoglobin with a non-invasive finger clip similar to a pulse oximeter. These devices function by passing various wavelengths of light through the fingertip and measuring the light absorption of the different types of hemoglobin in the capillaries. The use of a regular pulse oximeter is not effective in the diagnosis of carbon monoxide poisoning as these devices may be unable to distinguish carboxyhemoglobin from oxyhemoglobin.
Breath CO monitoring offers an alternative to pulse CO-oximetry. Carboxyhemoglobin levels have been shown to have a strong correlation with breath CO concentration. However, many of these devices require the user to inhale deeply and hold their breath to allow the CO in the blood to escape into the lung before the measurement can be made. As this is not possible in people who are unresponsive, these devices may not be appropriate for use in on-scene emergency care detection of CO poisoning.
Differential diagnosis
There are many conditions to be considered in the differential diagnosis of carbon monoxide poisoning. The earliest symptoms, especially from low level exposures, are often non-specific and readily confused with other illnesses, typically flu-like viral syndromes, depression, chronic fatigue syndrome, chest pain, and migraine or other headaches. Carbon monoxide has been called a "great mimicker" due to the presentation of poisoning being diverse and nonspecific. Other conditions included in the differential diagnosis include acute respiratory distress syndrome, altitude sickness, lactic acidosis, diabetic ketoacidosis, meningitis, methemoglobinemia, or opioid or toxic alcohol poisoning.
Treatment
Initial treatment for carbon monoxide poisoning is to immediately remove the person from the exposure without endangering further people. Those who are unconscious may require CPR on site. Administering oxygen via non-rebreather mask shortens the half-life of carbon monoxide from 320 minutes, when breathing normal air, to only 80 minutes. Oxygen hastens the dissociation of carbon monoxide from carboxyhemoglobin, thus turning it back into hemoglobin. Due to the possible severe effects in the baby, pregnant women are treated with oxygen for longer periods of time than non-pregnant people.
Hyperbaric oxygen
Hyperbaric oxygen is also used in the treatment of carbon monoxide poisoning, as it may hasten dissociation of CO from carboxyhemoglobin and cytochrome oxidase to a greater extent than normal oxygen. Hyperbaric oxygen at three times atmospheric pressure reduces the half-life of carbon monoxide to 23 minutes, compared to 80 minutes for oxygen at regular atmospheric pressure. It may also enhance oxygen transport to the tissues by plasma, partially bypassing the normal transfer through hemoglobin. However, it is controversial whether hyperbaric oxygen actually offers any extra benefits over normal high flow oxygen, in terms of increased survival or improved long-term outcomes. There have been randomized controlled trials in which the two treatment options have been compared; of the six performed, four found hyperbaric oxygen improved outcome and two found no benefit for hyperbaric oxygen. Some of these trials have been criticized for apparent flaws in their implementation. A review of all the literature concluded that the role of hyperbaric oxygen is unclear and the available evidence neither confirms nor denies a medically meaningful benefit. The authors suggested a large, well designed, externally audited, multicentre trial to compare normal oxygen with hyperbaric oxygen. While hyperbaric oxygen therapy is used for severe poisonings, the benefit over standard oxygen delivery is unclear.
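The half-life figures above imply a roughly exponential decline of carboxyhemoglobin. A minimal numeric sketch (idealized single-compartment elimination, with a hypothetical starting level) compares the three regimes:

```python
# COHb elimination modeled as exponential decay with the half-lives
# quoted above: ~320 min on room air, ~80 min on high-flow oxygen,
# ~23 min under hyperbaric oxygen. Simplified model, illustrative only.

def cohb_remaining(initial_pct: float, minutes: float,
                   half_life_min: float) -> float:
    return initial_pct * 0.5 ** (minutes / half_life_min)

start = 25.0  # % COHb, hypothetical severe exposure
for label, t_half in [("room air", 320), ("oxygen mask", 80),
                      ("hyperbaric oxygen", 23)]:
    print(f"{label}: {cohb_remaining(start, 120, t_half):.1f}% after 2 h")
```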
Other
Further treatment for other complications such as seizure, hypotension, cardiac abnormalities, pulmonary edema, and acidosis may be required. Hypotension requires treatment with intravenous fluids; vasopressors may be required to treat myocardial depression. Cardiac dysrhythmias are treated with standard advanced cardiac life support protocols. If severe, metabolic acidosis is treated with sodium bicarbonate, although this treatment is controversial because acidosis may increase tissue oxygen availability; treatment of acidosis may only need to consist of oxygen therapy. The delayed development of neuropsychiatric impairment is one of the most serious complications of carbon monoxide poisoning. Brain damage is confirmed following MRI or CAT scans. Extensive follow-up and supportive treatment is often required for delayed neurological damage. Outcomes are often difficult to predict following poisoning, especially in people who have experienced cardiac arrest, coma, metabolic acidosis, or high carboxyhemoglobin levels. One study reported that approximately 30% of people with severe carbon monoxide poisoning will have a fatal outcome. It has been reported that electroconvulsive therapy (ECT) may increase the likelihood of delayed neuropsychiatric sequelae (DNS) after carbon monoxide (CO) poisoning. A device that also provides some carbon dioxide to stimulate faster breathing (sold under the brand name ClearMate) may also be used.
Pathophysiology
The precise mechanisms by which the effects of carbon monoxide are induced upon bodily systems are complex and not yet fully understood. Known mechanisms include carbon monoxide binding to hemoglobin, myoglobin and mitochondrial cytochrome c oxidase and restricting oxygen supply, and carbon monoxide causing brain lipid peroxidation.
Hemoglobin
Carbon monoxide has a higher diffusion coefficient compared to oxygen, and the main enzyme in the human body that produces carbon monoxide is heme oxygenase, which is located in nearly all cells and platelets. Most endogenously produced CO is stored bound to hemoglobin as carboxyhemoglobin. The simplistic understanding for the mechanism of carbon monoxide toxicity is based on excess carboxyhemoglobin decreasing the oxygen-delivery capacity of the blood to tissues throughout the body. In humans, the affinity between hemoglobin and carbon monoxide is approximately 240 times stronger than the affinity between hemoglobin and oxygen. However, certain mutations, such as the Hb-Kirklareli mutation, confer a relative affinity for carbon monoxide roughly 80,000 times greater than for oxygen, resulting in systemic carboxyhemoglobin reaching a sustained level of 16% COHb.
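The roughly 240-fold affinity difference is often expressed through the Haldane relation, COHb/O2Hb = M × (pCO/pO2), with M about 240 for normal adult hemoglobin. A minimal sketch (idealized equilibrium, using inspired rather than alveolar partial pressures) reproduces the rule of thumb that breathing about 100 ppm CO tends toward roughly 10% COHb:

```python
M = 240.0  # Haldane constant for normal adult hemoglobin (approximate)

def equilibrium_cohb_fraction(p_co_ppm: float,
                              p_o2_ppm: float = 209_500) -> float:
    """Equilibrium COHb / (COHb + O2Hb), with partial pressures given
    as ppm of one atmosphere (~20.95% O2 in dry air). Idealized."""
    ratio = M * p_co_ppm / p_o2_ppm   # COHb / O2Hb at equilibrium
    return ratio / (1.0 + ratio)

print(f"{equilibrium_cohb_fraction(100):.1%}")   # ~10% COHb at 100 ppm CO
```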
Hemoglobin is a tetramer with four prosthetic heme groups that serve as oxygen binding sites. The average red blood cell contains 250 million hemoglobin molecules, and therefore 1 billion heme sites capable of binding gas. The binding of carbon monoxide at any one of these sites increases the oxygen affinity of the remaining three sites, which causes the hemoglobin molecule to retain oxygen that would otherwise be delivered to the tissue; carbon monoxide binding at any site can therefore be as dangerous as carbon monoxide binding to all sites. Delivery of oxygen is largely driven by the Bohr effect and the Haldane effect. In a simplified synopsis of the molecular mechanism of systemic gas exchange: upon inhalation, oxygen binding at any of the heme sites is thought to trigger a conformational change in the globin/protein unit of hemoglobin, which in turn enables the binding of additional oxygen at the other vacant heme sites. Upon arrival at the tissues, oxygen release is driven by "acidification" of the local pH (a relatively higher concentration of acidic protons/hydrogen ions), caused by increased biotransformation of carbon dioxide waste into carbonic acid via carbonic anhydrase. In other words, oxygenated arterial blood arrives at cells in the hemoglobin "R-state", whose amine residues are deprotonated/un-ionized because of the less acidic arterial pH environment (arterial blood averages pH 7.407, whereas venous blood is slightly more acidic at pH 7.371). The "T-state" of hemoglobin is deoxygenated in venous blood partly because protonation/ionization in this acidic environment produces a conformation unsuited for oxygen binding; in effect, oxygen is "ejected" on arrival at the cell because acid protonates the amine residues of hemoglobin, driving a conformational change that no longer retains oxygen. Furthermore, the formation of carbaminohemoglobin generates additional acidic hydrogen ions that may further stabilize the protonated, deoxygenated hemoglobin. When venous blood returns to the lung and carbon dioxide is exhaled, the blood is "de-acidified" (see also: hyperventilation), allowing deprotonation of hemoglobin and re-enabling oxygen binding as part of the transition back to arterial blood (a process further complicated by chemoreceptors and other physiological feedback). Carbon monoxide is not "ejected" by acid in this way, so carbon monoxide poisoning disturbs this physiological cycle, and the venous blood of poisoned patients is bright red, akin to arterial blood, because the carbonyl/carbon monoxide is retained. Hemoglobin is dark in deoxygenated venous blood but bright red both in oxygenated arterial blood and when converted into carboxyhemoglobin in either arterial or venous blood, which is why poisoned cadavers, and even commercial meats treated with carbon monoxide, acquire an unnaturally lively reddish hue.
At toxic concentrations, carbon monoxide as carboxyhemoglobin significantly interferes with respiration and gas exchange by simultaneously inhibiting acquisition and delivery of oxygen to cells and preventing formation of carbaminohemoglobin which accounts for approximately 30% of carbon dioxide exportation. Therefore, a patient with carbon monoxide poisoning may experience severe hypoxia and acidosis (potentially both respiratory acidosis and metabolic acidosis) in addition to the toxicities of excess carbon monoxide inhibiting numerous hemoproteins, metallic and non-metallic targets which affect cellular machinery.
Myoglobin
Carbon monoxide also binds to the hemeprotein myoglobin. It has a high affinity for myoglobin, about 60 times greater than that of oxygen. Carbon monoxide bound to myoglobin may impair its ability to utilize oxygen. This causes reduced cardiac output and hypotension, which may result in brain ischemia. A delayed return of symptoms has been reported, following a recurrence of increased carboxyhemoglobin levels; this effect may be due to a late release of carbon monoxide from myoglobin, which subsequently binds to hemoglobin.
Cytochrome oxidase
Another mechanism involves effects on the mitochondrial respiratory enzyme chain that is responsible for effective tissue utilization of oxygen. Carbon monoxide binds to cytochrome oxidase with less affinity than oxygen, so it is possible that it requires significant intracellular hypoxia before binding. This binding interferes with aerobic metabolism and efficient adenosine triphosphate synthesis. Cells respond by switching to anaerobic metabolism, causing anoxia, lactic acidosis, and eventual cell death. The rate of dissociation between carbon monoxide and cytochrome oxidase is slow, causing a relatively prolonged impairment of oxidative metabolism.
Central nervous system effects
The mechanism that is thought to have a significant influence on delayed effects involves formed blood cells and chemical mediators, which cause brain lipid peroxidation (degradation of unsaturated fatty acids). Carbon monoxide causes endothelial cell and platelet release of nitric oxide, and the formation of oxygen free radicals including peroxynitrite. In the brain this causes further mitochondrial dysfunction, capillary leakage, leukocyte sequestration, and apoptosis. The result of these effects is lipid peroxidation, which causes delayed reversible demyelination of white matter in the central nervous system known as Grinker myelinopathy, which can lead to edema and necrosis within the brain. This brain damage occurs mainly during the recovery period. This may result in cognitive defects, especially affecting memory and learning, and movement disorders. These disorders are typically related to damage to the cerebral white matter and basal ganglia. Hallmark pathological changes following poisoning are bilateral necrosis of the white matter, globus pallidus, cerebellum, hippocampus and the cerebral cortex.
Pregnancy
Carbon monoxide poisoning in pregnant women may cause severe adverse fetal effects. Poisoning causes fetal tissue hypoxia by decreasing the release of maternal oxygen to the fetus. Carbon monoxide also crosses the placenta and combines with fetal hemoglobin, causing more direct fetal tissue hypoxia. Additionally, fetal hemoglobin has a 10 to 15% higher affinity for carbon monoxide than adult hemoglobin, causing more severe poisoning in the fetus than in the adult. Elimination of carbon monoxide is also slower in the fetus, leading to an accumulation of the toxic chemical. The level of fetal morbidity and mortality in acute carbon monoxide poisoning is significant, and severe fetal poisoning or death may occur even when maternal poisoning is mild or the mother has recovered.
History
Humans have maintained a complex relationship with carbon monoxide since first learning to control fire circa 800,000 BC. Early humans probably discovered the toxicity of carbon monoxide upon introducing fire into their dwellings. The early development of metallurgy and smelting technologies emerging circa 6,000 BC through the Bronze Age likewise plagued humankind with carbon monoxide exposure. Apart from its toxicity, Native Americans may have experienced the neuroactive properties of carbon monoxide through shamanistic fireside rituals.
Early civilizations developed mythological tales to explain the origin of fire, featuring figures such as the Roman Vulcan, the Vainakh Pkharmat, and Prometheus from Greek mythology, who shared fire with humans. Aristotle (384–322 BC) first recorded that burning coals produced toxic fumes. The Greek physician Galen (129–199 AD) speculated that there was a change in the composition of the air that caused harm when inhaled, and symptoms of CO poisoning appeared in Cassius Iatrosophista's Quaestiones Medicae et Problemata Naturalia circa 130 AD. Julian the Apostate, Caelius Aurelianus, and several others similarly documented early knowledge of the toxicity symptoms of carbon monoxide poisoning as caused by coal fumes in the ancient era.
Documented cases by Livy and Cicero allude to carbon monoxide being used as a method of suicide in ancient Rome. Emperor Lucius Verus used smoke to execute prisoners. Many deaths have been linked to carbon monoxide poisoning, including those of Emperor Jovian, Empress Fausta, and Seneca. The most high-profile death by carbon monoxide poisoning may have been that of Cleopatra or of Edgar Allan Poe.
In the fifteenth century, coal miners believed sudden death was caused by evil spirits; carbon monoxide poisoning has been linked to supernatural and paranormal experiences, witchcraft, etc. throughout the following centuries including in the modern present day exemplified by Carrie Poppy's investigations.
Georg Ernst Stahl mentioned carbonarii halitus in 1697 in reference to toxic vapors thought to be carbon monoxide. Friedrich Hoffmann conducted the first modern scientific investigation into carbon monoxide poisoning from coal in 1716, notably rejecting villagers attributing death to demonic superstition. Herman Boerhaave conducted the first scientific experiments on the effect of carbon monoxide (coal fumes) on animals in the 1730s. Joseph Priestley is credited with first synthesizing carbon monoxide in 1772 which he had called heavy inflammable air, and Carl Wilhelm Scheele isolated carbon monoxide from coal in 1773 suggesting it to be the toxic entity.
The dose-dependent risk of carbon monoxide poisoning as hydrocarbonate was investigated in the late 1790s by Thomas Beddoes, James Watt, Tiberius Cavallo, James Lind, Humphry Davy, and many others in the context of inhalation of factitious airs, much of which occurred at the Pneumatic Institution.
William Cruickshank discovered carbon monoxide as a molecule containing one carbon and one oxygen atom in 1800, thereby initiating the modern era of research exclusively focused on carbon monoxide. The mechanism for toxicity was first suggested by James Watt in 1793, followed by Adrien Chenot in 1854 and finally demonstrated by Claude Bernard after 1846 as published in 1857 and also independently published by Felix Hoppe-Seyler in the same year.
The first controlled clinical trial studying the toxicity of carbon monoxide occurred in 1973.
Historical detection
Carbon monoxide poisoning has plagued coal miners for many centuries. In the context of mining, carbon monoxide is widely known as whitedamp. John Scott Haldane identified carbon monoxide as the lethal constituent of afterdamp, the gas created by combustion, after examining many bodies of miners killed in pit explosions. By 1911, Haldane introduced the use of small animals for miners to detect dangerous levels of carbon monoxide underground, either white mice or canaries which have little tolerance for carbon monoxide thereby offering an early warning, i.e. canary in a coal mine. The canary in British pits was replaced in 1986 by the electronic gas detector.
The first qualitative analytical method to detect carboxyhemoglobin emerged in 1858 with a colorimetric method developed by Felix Hoppe-Seyler, and the first quantitative analysis method emerged in 1880 with Josef von Fodor.
Historical treatment
The use of oxygen emerged with anecdotal reports such as Humphry Davy having been treated with oxygen in 1799 upon inhaling three quarts of hydrocarbonate (water gas). Samuel Witter developed an oxygen inhalation protocol in response to carbon monoxide poisoning in 1814. Similarly, an oxygen inhalation protocol was recommended for malaria (literally translated as "bad air") in 1830, based on malaria symptoms aligning with carbon monoxide poisoning. Other oxygen protocols emerged in the late 1800s. The use of hyperbaric oxygen in rats following poisoning was studied by Haldane in 1895, while its use in humans began in the 1960s.
Incidents
The worst accidental mass poisoning from carbon monoxide was the Balvano train disaster which occurred on 3 March 1944 in Italy, when a freight train with many illegal passengers stalled in a tunnel, leading to the death of over 500 people.
Over 50 people are suspected to have died from smoke inhalation as a result of the Branch Davidian Massacre during the Waco siege in 1993.
On 14 December 2024, 12 people died of carbon monoxide poisoning in Gudauri, Georgia, after fuel-oil electric generators were placed in an enclosed area near their rooms.
Weaponization
In ancient history, Hannibal executed Roman prisoners with coal fumes during the Second Punic War.
The extermination of stray dogs by a carbon monoxide gas chamber was described in 1874. In 1884, an article appeared in Scientific American describing the use of a carbon monoxide gas chamber for slaughterhouse operations as well as euthanizing a variety of animals.
As part of the Holocaust during World War II, the Nazis used gas vans at Chelmno extermination camp and elsewhere to murder an estimated 700,000 or more people by carbon monoxide poisoning. This method was also used in the gas chambers of several death camps such as Treblinka, Sobibor, and Belzec. Gassing with carbon monoxide started in Action T4. The gas was supplied by IG Farben in pressurized cylinders and fed by tubes into the gas chambers built at various mental hospitals, such as Hartheim Euthanasia Centre. Exhaust fumes from tank engines, for example, were used to supply the gas to the chambers.
References
External links
Centers for Disease Control and Prevention (CDC) – Carbon Monoxide – NIOSH Workplace Safety and Health Topic
International Programme on Chemical Safety (1999). Carbon Monoxide, Environmental Health Criteria 213, Geneva: WHO
Poisoning
Accidents
Industrial hygiene
Medical emergencies
Natural gas safety
Suicide by poison
Toxic effects of substances chiefly nonmedicinal as to source
Wikipedia medicine articles ready to translate
Wikipedia emergency medicine articles ready to translate | Carbon monoxide poisoning | [
"Chemistry",
"Environmental_science"
] | 8,008 | [
"Toxic effects of substances chiefly nonmedicinal as to source",
"Natural gas safety",
"Natural gas technology",
"Toxicology"
] |
488,812 | https://en.wikipedia.org/wiki/Silver%20sulfide | Silver sulfide is an inorganic compound with the formula Ag2S. A dense black solid, it is the only sulfide of silver. It is useful as a photosensitizer in photography. It constitutes the tarnish that forms over time on silverware and other silver objects. Silver sulfide is insoluble in most solvents, but is degraded by strong acids. Silver sulfide is a network solid made up of silver (electronegativity of 1.98) and sulfur (electronegativity of 2.58) where the bonds have low ionic character (approximately 10%).
Formation
Silver sulfide naturally occurs as the tarnish on silverware. When combined with silver, hydrogen sulfide gas creates a layer of black silver sulfide patina on the silver, protecting the inner silver from further conversion to silver sulfide. Silver whiskers can form when silver sulfide forms on the surface of silver electrical contacts operating in an atmosphere rich in hydrogen sulfide and high humidity. Such atmospheres can exist in sewage treatment and paper mills.
Structure and properties
Three forms are known: monoclinic acanthite (α-form), stable below 179 °C; body-centred cubic so-called argentite (β-form), stable above 180 °C; and a high-temperature face-centred cubic form (γ-form), stable above 586 °C. The higher-temperature forms are electrical conductors. Silver sulfide is found in nature as the relatively low-temperature mineral acanthite, an important ore of silver. The monoclinic acanthite form features two kinds of silver centers, one with two and the other with three near-neighbour sulfur atoms. Argentite refers to the cubic form, which, due to its instability at normal temperatures, is found in the form of pseudomorphs of acanthite after argentite.
Exceptional ductility of α-Ag2S
Relative to most inorganic materials, α-Ag2S displays exceptional ductility at room temperature. This material can undergo extensive deformation, akin to metals, without fracturing. Such behavior is evident in various mechanical tests; for instance, α-Ag2S can be easily machined into cylindrical or bar shapes and can withstand substantial deformation under compression, three-point bending, and tensile stresses. The material sustains over 50% engineering strain in compression tests and up to 20% or more in bending tests.
The intrinsic ductility of alpha-phase silver sulfide (α-Ag2S) is underpinned by its unique structural and chemical bonding characteristics. At the atomic level, its monoclinic crystal structure, which remains stable up to 451 K, enables the movement of atoms and dislocations along well-defined crystallographic planes known as slip planes. Additionally, the dynamic bonding within the crystal structure supports both the sliding of atomic layers and the maintenance of material integrity during deformation. The interatomic forces within the slip planes are sufficiently strong to prevent the material from cleaving while still allowing for considerable flexibility. Further insights into α-Ag2S's ductility come from density functional theory calculations, which reveal that the primary slip planes align with the [100] direction and slipping occurs along the [001] direction. This arrangement permits atoms to glide over each other under stress through minute adjustments in the interlayer distances, which are energetically favorable as indicated by low slipping energy barriers (ΔEB) and high cleavage energies (ΔEC). These properties ensure significant deformation capability without fracture. Silver and sulfur atoms in α-Ag2S form transient, yet robust interactions that enable the material to retain its integrity while deforming. This behavior is akin to that of metals, where dislocations move with relative ease, providing α-Ag2S with a unique combination of flexibility and strength, making it exceptionally resistant to cracking under mechanical stress.
History
In 1833 Michael Faraday noticed that the resistance of silver sulfide decreased dramatically as temperature increased. This constituted the first report of a semiconducting material.
Silver sulfide is a component of classical qualitative inorganic analysis.
References
External links
Tarnishing of Silver: A Short Review V&A Conservation Journal
Images of silver whiskers NASA
Sulfides
Silver compounds
Semiconductors | Silver sulfide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 869 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
488,815 | https://en.wikipedia.org/wiki/Utility%20frequency | The utility frequency, (power) line frequency (American English) or mains frequency (British English) is the nominal frequency of the oscillations of alternating current (AC) in a wide area synchronous grid transmitted from a power station to the end-user. In large parts of the world this is 50 Hz, although in the Americas and parts of Asia it is typically 60 Hz. Current usage by country or region is given in the list of mains electricity by country.
During the development of commercial electric power systems in the late-19th and early-20th centuries, many different frequencies (and voltages) had been used. Large investment in equipment at one frequency made standardization a slow process. However, as of the turn of the 21st century, places that now use the 50 Hz frequency tend to use 220–240 V, and those that now use 60 Hz tend to use 100–127 V. Both frequencies coexist today (Japan uses both) with no great technical reason to prefer one over the other and no apparent desire for complete worldwide standardization.
Electric clocks
In practice, the exact frequency of the grid varies around the nominal frequency, reducing when the grid is heavily loaded, and speeding up when lightly loaded. However, most utilities will adjust generation onto the grid over the course of the day to ensure a constant number of cycles occur. This is used by some clocks to accurately maintain their time.
Operating factors
Several factors influence the choice of frequency in an AC system. Lighting, motors, transformers, generators, and transmission lines all have characteristics which depend on the power frequency. All of these factors interact and make selection of a power frequency a matter of considerable importance. The best frequency is a compromise among competing requirements.
In the late 19th century, designers would pick a relatively high frequency for systems featuring transformers and arc lights, so as to economize on transformer materials and to reduce visible flickering of the lamps, but would pick a lower frequency for systems with long transmission lines or feeding primarily motor loads or rotary converters for producing direct current. When large central generating stations became practical, the choice of frequency was made based on the nature of the intended load. Eventually improvements in machine design allowed a single frequency to be used both for lighting and motor loads. A unified system improved the economics of electricity production, since system load was more uniform during the course of a day.
Lighting
The first applications of commercial electric power were incandescent lighting and commutator-type electric motors. Both devices operate well on DC, but DC could not be easily changed in voltage, and was generally only produced at the required utilization voltage.
If an incandescent lamp is operated on a low-frequency current, the filament cools on each half-cycle of the alternating current, leading to perceptible change in brightness and flicker of the lamps; the effect is more pronounced with arc lamps, and the later mercury-vapor lamps and fluorescent lamps. Open arc lamps made an audible buzz on alternating current, leading to experiments with high-frequency alternators to raise the sound above the range of human hearing.
Rotating machines
Commutator-type motors do not operate well on high-frequency AC, because the rapid changes of current are opposed by the inductance of the motor field. Though commutator-type universal motors are common in AC household appliances and power tools, they are small motors, less than 1 kW. The induction motor was found to work well on frequencies around 50 to 60 Hz, but with the materials available in the 1890s would not work well at a frequency of, say, 133 Hz. There is a fixed relationship between the number of magnetic poles in the induction motor field, the frequency of the alternating current, and the rotation speed; so, a given standard speed limits the choice of frequency (and the reverse). Once AC electric motors became common, it was important to standardize frequency for compatibility with the customer's equipment.
Generators operated by slow-speed reciprocating engines will produce lower frequencies, for a given number of poles, than those operated by, for example, a high-speed steam turbine. For very slow prime mover speeds, it would be costly to build a generator with enough poles to provide a high AC frequency. As well, synchronizing two generators to the same speed was found to be easier at lower speeds. While belt drives were common as a way to increase speed of slow engines, in very large ratings (thousands of kilowatts) these were expensive, inefficient, and unreliable. After about 1906, generators driven directly by steam turbines favored higher frequencies. The steadier rotation speed of high-speed machines allowed for satisfactory operation of commutators in rotary converters.
The synchronous speed N in RPM is calculated using the formula

N = 120 f / P

where f is the frequency in hertz and P is the number of poles.
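As an illustrative check of this relation (a minimal sketch, not tied to any particular machine), the Niagara figures discussed below (12 poles at 25 Hz giving 250 RPM) fall straight out of it:

```python
# Synchronous speed N = 120 * f / P (N in RPM, f in Hz, P = pole count).

def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    return 120.0 * frequency_hz / poles

print(synchronous_speed_rpm(25, 12))   # 250.0 RPM (Niagara-style generator)
print(synchronous_speed_rpm(60, 4))    # 1800.0 RPM
print(synchronous_speed_rpm(50, 2))    # 3000.0 RPM
```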
Direct-current power was not entirely displaced by alternating current and was useful in railway and electrochemical processes. Prior to the development of mercury arc valve rectifiers, rotary converters were used to produce DC power from AC. Like other commutator-type machines, these worked better with lower frequencies.
Transmission and transformers
With AC, transformers can be used to step down high transmission voltages to lower customer utilization voltage. The transformer is effectively a voltage conversion device with no moving parts and requiring little maintenance. The use of AC eliminated the need for spinning DC voltage conversion motor-generators that require regular maintenance and monitoring.
Since, for a given power level, the dimensions of a transformer are roughly inversely proportional to frequency, a system with many transformers would be more economical at a higher frequency.
Electric power transmission over long lines favors lower frequencies. The effects of the distributed capacitance and inductance of the line are less at low frequency.
System interconnection
Generators can only be interconnected to operate in parallel if they are of the same frequency and wave-shape. By standardizing the frequency used, generators in a geographic area can be interconnected in a grid, providing reliability and cost savings.
History
Many different power frequencies were used in the 19th century.
Very early isolated AC generating schemes used arbitrary frequencies based on convenience for steam engine, water turbine, and electrical generator design. Frequencies between 16⅔ Hz and 133⅓ Hz were used on different systems. For example, the city of Coventry, England, in 1895 had a unique 87 Hz single-phase distribution system that was in use until 1906. The proliferation of frequencies grew out of the rapid development of electrical machines in the period 1880 through 1900.
In the early incandescent lighting period, single-phase AC was common and typical generators were 8-pole machines operated at 2,000 RPM, giving a frequency of 133 hertz.
Though many theories and quite a few entertaining urban legends exist, there is little certainty in the details of the history of 60 Hz versus 50 Hz.
The German company AEG (descended from a company founded by Edison in Germany) built the first German generating facility to run at 50 Hz. At the time, AEG had a virtual monopoly and their standard spread to the rest of Europe. After observing flicker of lamps operated by the 40 Hz power transmitted by the Lauffen-Frankfurt link in 1891, AEG raised their standard frequency to 50 Hz in 1891.
Westinghouse Electric decided to standardize on a higher frequency to permit operation of both electric lighting and induction motors on the same generating system. Although 50 Hz was suitable for both, in 1890 Westinghouse considered that existing arc-lighting equipment operated slightly better on 60 Hz, and so that frequency was chosen. The operation of Tesla's induction motor, licensed by Westinghouse in 1888, required a lower frequency than the 133 Hz common for lighting systems at that time. In 1893 General Electric Corporation, which was affiliated with AEG in Germany, built a generating project at Mill Creek to bring electricity to Redlands, California using 50 Hz, but changed to 60 Hz a year later to maintain market share with the Westinghouse standard.
25 Hz origins
The first generators at the Niagara Falls project, built by Westinghouse in 1895, were 25 Hz, because the turbine speed had already been set before alternating current power transmission had been definitively selected. Westinghouse would have selected a low frequency of 30 Hz to drive motor loads, but the turbines for the project had already been specified at 250 RPM. The machines could have been made to deliver 16⅔ Hz power suitable for heavy commutator-type motors, but the Westinghouse company objected that this would be undesirable for lighting and suggested 33⅓ Hz. Eventually a compromise of 25 Hz, with 12-pole 250 RPM generators, was chosen. Because the Niagara project was so influential on electric power systems design, 25 Hz prevailed as the North American standard for low-frequency AC.
40 Hz origins
A General Electric study concluded that 40 Hz would have been a good compromise between lighting, motor, and transmission needs, given the materials and equipment available in the first quarter of the 20th century. Several 40 Hz systems were built. The Lauffen-Frankfurt demonstration used 40 Hz to transmit power 175 km in 1891. A large interconnected 40 Hz network existed in north-east England (the Newcastle-upon-Tyne Electric Supply Company, NESCO) until the advent of the National Grid (UK) in the late 1920s, and projects in Italy used 42 Hz. The oldest continuously operating commercial hydroelectric power station in the United States, Mechanicville Hydroelectric Plant, still produces electric power at 40 Hz and supplies power to the local 60 Hz transmission system through frequency changers. Industrial plants and mines in North America and Australia sometimes were built with 40 Hz electrical systems which were maintained until too uneconomic to continue. Although frequencies near 40 Hz found much commercial use, these were bypassed by standardized frequencies of 25, 50 and 60 Hz preferred by higher volume equipment manufacturers.
The Ganz Company of Hungary had standardized on 5000 alternations per minute (41⅔ Hz) for their products, so Ganz clients had 41⅔ Hz systems that in some cases ran for many years.
Standardization
In the early days of electrification, so many frequencies were used that no single value prevailed (London in 1918 had ten different frequencies). As the 20th century continued, more power was produced at 60 Hz (North America) or 50 Hz (Europe and most of Asia). Standardization allowed international trade in electrical equipment. Much later, the use of standard frequencies allowed interconnection of power grids. It was not until after World War II – with the advent of affordable electrical consumer goods – that more uniform standards were enacted.
In the United Kingdom, a standard frequency of 50 Hz was declared as early as 1904, but significant development continued at other frequencies. The implementation of the National Grid starting in 1926 compelled the standardization of frequencies among the many interconnected electrical service providers. The 50 Hz standard was completely established only after World War II.
By about 1900, European manufacturers had mostly standardized on 50 Hz for new installations. The German Verband der Elektrotechnik (VDE), in the first standard for electrical machines and transformers in 1902, recommended 25 Hz and 50 Hz as standard frequencies. VDE did not see much application of 25 Hz, and dropped it from the 1914 edition of the standard. Remnant installations at other frequencies persisted until well after the Second World War.
Because of the cost of conversion, some parts of the distribution system may continue to operate on original frequencies even after a new frequency is chosen. 25 Hz power was used in Ontario, Quebec, the northern United States, and for railway electrification. In the 1950s, many 25 Hz systems, from the generators right through to household appliances, were converted and standardized. Until 2006, some 25 Hz generators were still in existence at the Sir Adam Beck 1 (these were retrofitted to 60 Hz) and the Rankine generating stations (until its 2006 closure) near Niagara Falls to provide power for large industrial customers who did not want to replace existing equipment; and some 25 Hz motors and a 25 Hz power station exist in New Orleans for floodwater pumps. The 15 kV AC rail networks, used in Germany, Austria, Switzerland, Sweden, and Norway, still operate at 16⅔ Hz or 16.7 Hz.
In some cases, where most load was to be railway or motor loads, it was considered economic to generate power at 25 Hz and install rotary converters for 60 Hz distribution. Converters for production of DC from alternating current were available in larger sizes and were more efficient at 25 Hz compared with 60 Hz. Remnant fragments of older systems may be tied to the standard frequency system via a rotary converter or static inverter frequency changer. These allow energy to be interchanged between two power networks at different frequencies, but the systems are large, costly, and waste some energy in operation.
Rotating-machine frequency changers used to convert between 25 Hz and 60 Hz systems were awkward to design; a 60 Hz machine with 24 poles would turn at the same speed as a 25 Hz machine with 10 poles, making the machines large, slow-speed, and expensive. A ratio of 60/30 would have simplified these designs, but the installed base at 25 Hz was too large to be economically opposed.
In the United States, Southern California Edison had standardized on 50 Hz. Much of Southern California operated on 50 Hz and did not completely change frequency of their generators and customer equipment to 60 Hz until around 1948. Some projects by the Au Sable Electric Company used 30 Hz at transmission voltages up to 110,000 volts in 1914.
Initially in Brazil, electric machinery were imported from Europe and United States, implying the country had both 50 Hz and 60 Hz standards according to each region. In 1938, the federal government made a law, Decreto-Lei 852, intended to bring the whole country under 50 Hz within eight years. The law did not work, and in the early 1960s it was decided that Brazil would be unified under 60 Hz standard, because most developed and industrialized areas used 60 Hz; and a new law Lei 4.454 was declared in 1964. Brazil underwent a frequency conversion program to 60 Hz that was not completed until 1978.
In Mexico, areas operating on 50 Hz grid were converted during the 1970s, uniting the country under 60 Hz.
In Japan, the western part of the country (Nagoya and west) uses 60 Hz and the eastern part (Tokyo and east) uses 50 Hz. This originates in the first purchases of generators from AEG in 1895, installed for Tokyo, and General Electric in 1896, installed in Osaka. The boundary between the two regions contains four back-to-back HVDC substations which convert the frequency; these are Shin Shinano, Sakuma Dam, Minami-Fukumitsu, and the Higashi-Shimizu Frequency Converter.
Utility frequencies in North America in 1897
Utility frequencies in Europe to 1900
Even by the middle of the 20th century, utility frequencies were still not entirely standardized at the now-common 50 Hz or 60 Hz. In 1946, a reference manual for designers of radio equipment listed the following now obsolete frequencies as in use. Many of these regions also had 50-cycle, 60-cycle, or direct current supplies.
Frequencies in use in 1946 (as well as 50 Hz and 60 Hz)
Where regions are marked (*), this is the only utility frequency shown for that region.
Railways
Other power frequencies are still used. Germany, Austria, Switzerland, Sweden, and Norway use traction power networks for railways, distributing single-phase AC at 16⅔ Hz or 16.7 Hz. A frequency of 25 Hz is used for the Austrian Mariazell Railway, as well as Amtrak and SEPTA's traction power systems in the United States. Other AC railway systems are energized at the local commercial power frequency, 50 Hz or 60 Hz.
Traction power may be derived from commercial power supplies by frequency converters, or in some cases may be produced by dedicated traction power stations. In the 19th century, frequencies as low as 8 Hz were contemplated for operation of electric railways with commutator motors.
Some outlets in trains carry the correct voltage but use the original traction network frequency, such as 16⅔ Hz or 16.7 Hz.
400 Hz
Power frequencies as high as 400 Hz are used in aircraft, spacecraft, submarines, server rooms for computer power, military equipment, and hand-held machine tools. Such high frequencies cannot be economically transmitted long distances; the increased frequency greatly increases series impedance due to the inductance of transmission lines, making power transmission difficult. Consequently, 400 Hz power systems are usually confined to a building or vehicle.
Transformers, for example, can be made smaller because the magnetic core can be much smaller for the same power level. Induction motors turn at a speed proportional to frequency, so a high-frequency power supply allows more power to be obtained for the same motor volume and mass. Transformers and motors for 400 Hz are much smaller and lighter than at 50 or 60 Hz, which is an advantage in aircraft and ships. A United States military standard MIL-STD-704 exists for aircraft use of 400 Hz power.
Stability
Time error correction (TEC)
Regulation of power system frequency for timekeeping accuracy was not commonplace until after 1916 with Henry Warren's invention of the Warren Power Station Master Clock and self-starting synchronous motor. Nikola Tesla demonstrated the concept of clocks synchronized by line frequency at the 1893 Chicago World's Fair. The Hammond Organ also depends on a synchronous AC clock motor to maintain the correct speed of its internal "tone wheel" generator, thus keeping all notes pitch-perfect.
Today, AC power network operators regulate the daily average frequency so that clocks stay within a few seconds of the correct time. In practice the nominal frequency is raised or lowered by a specific percentage to maintain synchronization. Over the course of a day, the average frequency is maintained at a nominal value within a few hundred parts per million. In the synchronous grid of Continental Europe, the deviation between network phase time and UTC (based on International Atomic Time) is calculated at 08:00 each day in a control center in Switzerland. The target frequency is then adjusted by up to ±0.01 Hz (±0.02%) from 50 Hz as needed, to ensure a long-term frequency average of exactly 50 Hz × 60 s/min × 60 min/h × 24 h/d = 4,320,000 cycles per day. In North America, whenever the error exceeds 10 seconds for the Eastern Interconnection, 3 seconds for the Texas Interconnection, or 2 seconds for the Western Interconnection, a correction of ±0.02 Hz (0.033%) is applied. Time error corrections start and end either on the hour or on the half-hour.
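The bookkeeping behind time error correction can be sketched numerically: a synchronous clock accumulates an error equal to the integral of the relative frequency deviation, so a deliberate offset of the target frequency cancels previously accumulated error. All values below are illustrative:

```python
F_NOMINAL = 50.0  # Hz

def clock_error_s(samples_hz, dt_s):
    """Accumulated synchronous-clock error in seconds, for frequency
    samples taken every dt_s seconds: sum of (f / f_nominal - 1) * dt."""
    return sum((f / F_NOMINAL - 1.0) * dt_s for f in samples_hz)

# One hour at 49.99 Hz: the clock runs slow by 0.72 s ...
print(round(clock_error_s([49.99] * 3600, 1.0), 3))                  # -0.72
# ... which a following hour at 50.01 Hz cancels out.
print(round(clock_error_s([49.99] * 3600 + [50.01] * 3600, 1.0), 3))  # ~0.0
```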
Real-time frequency meters for power generation in the United Kingdom are available online – an official one for the National Grid, and an unofficial one maintained by Dynamic Demand. Real-time frequency data of the synchronous grid of Continental Europe is available on public monitoring websites. The Frequency Monitoring Network (FNET) at the University of Tennessee measures the frequency of the interconnections within the North American power grid, as well as in several other parts of the world. These measurements are displayed on the FNET website.
US regulations
In the United States, the Federal Energy Regulatory Commission made time error correction mandatory in 2009. In 2011, The North American Electric Reliability Corporation (NERC) discussed a proposed experiment that would relax frequency regulation requirements for electrical grids which would reduce the long-term accuracy of clocks and other devices that use the 60 Hz grid frequency as a time base.
Frequency and load
Modern alternating-current grids use precise frequency control as an out-of-band signal to coordinate generators connected to the network. The practice arose because the frequency of a mechanical generator varies with the input force and output load experienced. Excess load withdraws rotational energy from the generator shaft, reducing the frequency of the generated current; excess force deposits rotational energy, increasing frequency. Automatic generation control (AGC) maintains scheduled frequency and interchange power flows by adjusting the generator governor to counteract frequency changes, typically acting within tens of seconds.
Flywheel physics does not apply to inverter-connected solar farms or other DC-linked power supplies. However, such power plants or storage systems can be programmed to follow the frequency signal. Indeed, a 2017 trial for CAISO discovered that solar plants could respond to the signal faster than traditional generators, because they did not need to accelerate a rotating mass.
Small, temporary frequency changes are an unavoidable consequence of changing demand, but dramatic, rapid frequency shifts often signal that a distribution network is near capacity limits. Exceptional examples have occurred before major outages. During a severe failure of generators or transmission lines, the ensuing load-generation imbalance will induce variation in local power system frequencies. Loss of an interconnection causes system frequency to increase (due to excess generation) upstream of the loss, but may cause a collapse in frequency or voltage (due to excess load) downstream of the loss. Consequently, many power system protective relays automatically trigger on severe underfrequency (with the exact threshold depending on the system's disturbance tolerance and the severity of its protection measures). These initiate load shedding or trip interconnection lines to preserve the operation of at least part of the network.
Smaller power systems, not extensively interconnected with many generators and loads, will not maintain frequency with the same degree of accuracy. Where system frequency is not tightly regulated during heavy load periods, system operators may allow system frequency to rise during periods of light load to maintain a daily average frequency of acceptable accuracy. Portable generators, not connected to a utility system, need not tightly regulate their frequency because typical loads are insensitive to small frequency deviations.
Load-frequency control
Load-frequency control (LFC) is a type of integral control that restores the system frequency while respecting contracts for power provision or consumption to surrounding areas. The automatic generation scheme described above establishes a damping that minimizes the magnitude of the time-averaged frequency error Δf, where f is the frequency, Δ refers to the difference between measured and desired values, and the average is taken over time.
LFC incorporates power transfer between different areas, known as "net tie-line power", into the minimized quantity. For a particular frequency bias constant B, the area control error (ACE) associated with LFC at any moment in time is simply ACE = ΔPtie + B·Δf, where Ptie refers to net tie-line power. This instantaneous error is then numerically integrated to give the time average, and governors are adjusted to counteract its value. The coefficient B traditionally has a negative value, so that when the frequency is lower than the target, area power production should increase; its magnitude is usually expressed in MW/dHz (megawatts per 0.1 Hz of frequency deviation).
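A minimal sketch of the ACE computation as written above (all numbers hypothetical, including the bias constant; sign conventions differ between operators, so this simply follows the formula given in this section):

```python
B_MW_PER_HZ = -500.0   # frequency bias, negative per convention (hypothetical)

def area_control_error(p_tie_mw: float, p_tie_sched_mw: float,
                       f_hz: float, f_nominal_hz: float = 50.0) -> float:
    """Instantaneous ACE in MW: ACE = delta_P_tie + B * delta_f."""
    return (p_tie_mw - p_tie_sched_mw) + B_MW_PER_HZ * (f_hz - f_nominal_hz)

# Exports on schedule but frequency 0.05 Hz low: nonzero ACE (25.0 MW here).
# Governors are then adjusted to drive the running integral of ACE to zero.
print(area_control_error(200.0, 200.0, 49.95))
```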
Tie-line bias LFC was known since the 1930s, but was rarely used until the post-war period. In the 1950s, Nathan Cohn popularized the practice in a series of articles, arguing that load-frequency control minimized the adjustment necessary for changes in load. In particular, Cohn supposed that all regions of the grid shared a common linear regime, with a location-invariant frequency change per additional loading. If the utility selected its bias constant to match this system characteristic and one region experienced a temporary fault or other generation-load mismatch, then adjacent generators would observe a decrease in frequency but a counterbalancing increase in outward tie-line power flow, giving no ACE. They would thus make no governor adjustments in the (presumed) brief period before the failed region recovered.
Rate of change of frequency
Rate of change of frequency (also RoCoF) is simply the time derivative of the utility frequency (df/dt), usually measured in Hz per second, Hz/s. The importance of this parameter increases when the traditional synchronous generators are replaced by the variable renewable energy (VRE) inverter-based resources (IBR). The design of a synchronous generator inherently provides the inertial response that limits the RoCoF. Since the IBRs are not electromechanically coupled into the power grid, a system with high VRE penetration might exhibit large RoCoF values that can cause problems with the operation of the system due to stress placed onto the remaining synchronous generators, triggering of the protection devices and load shedding.
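Since RoCoF is just df/dt, it can be estimated from sampled frequency data by finite differences. The event trace below is synthetic and illustrative:

```python
# Finite-difference RoCoF estimate from grid frequency samples 0.1 s apart.
samples_hz = [50.00, 49.98, 49.95, 49.91, 49.86]   # hypothetical event
dt_s = 0.1

rocof_hz_per_s = [round((b - a) / dt_s, 3)
                  for a, b in zip(samples_hz, samples_hz[1:])]
print(rocof_hz_per_s)        # [-0.2, -0.3, -0.4, -0.5]
print(min(rocof_hz_per_s))   # steepest decline: -0.5 Hz/s
```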
As of 2017, regulations for some grids required the power plants to tolerate RoCoF of 1–4 Hz/s, the upper limit being a very high value, an order of magnitude higher than the design target of a typical older gas turbine generator. Testing high-power (multiple MW) equipment for RoCoF tolerance is hard, as a typical test setup is powered off the grid, and the frequency thus cannot be arbitrarily varied. In the US, the controllable grid interface at the National Renewable Energy Laboratory is the only facility that allows testing of multi-MW units (up to 7 MVA). Testing of large thermal units is not possible.
Audible noise and interference
AC-powered appliances can give off a characteristic hum, often called "mains hum", at the multiples of the frequencies of AC power that they use (see Magnetostriction). It is usually produced by motor and transformer core laminations vibrating in time with the magnetic field. This hum can also appear in audio systems, where the power supply filter or signal shielding of an amplifier is not adequate.
Most countries chose their television vertical synchronization rate to be the same as the local mains supply frequency. This helped to prevent power line hum and magnetic interference from causing visible beat frequencies in the displayed picture of early analogue TV receivers, particularly from the mains transformer. Although some distortion of the picture was present, it went mostly unnoticed because it was stationary. The elimination of transformers by the use of AC/DC receivers, and other changes to set design, helped minimise the effect, and some countries now use a vertical rate that is an approximation to the supply frequency (most notably 60 Hz areas).
Another use of this side effect is as a forensic tool. When a recording is made that captures audio near an AC appliance or socket, the hum is also incidentally recorded. The peaks of the hum repeat every AC cycle (every 20 ms for 50 Hz AC, or every 16.7 ms for 60 Hz AC). The exact frequency of the hum should match the frequency of a forensic recording of the hum at the exact date and time that the recording is alleged to have been made. Discontinuities in the frequency match, or no match at all, will call the authenticity of the recording into question.
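A toy version of this electrical network frequency (ENF) idea, sketched under the assumption of a long, clean recording: synthesize audio containing a weak hum slightly off the 50 Hz nominal, then locate the strongest spectral line near the mains frequency. Real forensic ENF analysis tracks this value over time and matches it against logged grid data; everything below is synthetic:

```python
import numpy as np

np.random.seed(0)
fs = 8000                                   # sample rate, Hz
t = np.arange(0, 100, 1 / fs)               # 100 s gives 0.01 Hz resolution
audio = 0.01 * np.sin(2 * np.pi * 50.02 * t) + 0.1 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(audio.size, 1 / fs)
band = (freqs > 49.5) & (freqs < 50.5)      # search near nominal 50 Hz
print(round(freqs[band][np.argmax(spectrum[band])], 2))   # 50.02
```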
See also
Mains electricity
Maximum demand indicator
Network analyzer (AC power)
Telechron
Further reading
Furfari, F.A., The Evolution of Power-Line Frequencies to 25 Hz, Industry Applications Magazine, IEEE, Sep/Oct 2000, Volume 6, Issue 5, Pages 12–14.
Rushmore, D.B., Frequency, AIEE Transactions, Volume 31, 1912, pages 955–983, and discussion on pages 974–978.
Blalock, Thomas J., Electrification of a Major Steel Mill – Part II Development of the 25 Hz System, Industry Applications Magazine, IEEE, Sep/Oct 2005, Pages 9–12.
References
Sources
Electric power | Utility frequency | [
"Physics",
"Engineering"
] | 5,558 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
489,228 | https://en.wikipedia.org/wiki/Tribology | Tribology is the science and engineering of understanding friction, lubrication and wear phenomena for interacting surfaces in relative motion. It is highly interdisciplinary, drawing on many academic fields, including physics, chemistry, materials science, mathematics, biology and engineering. The fundamental objects of study in tribology are tribosystems, which are physical systems of contacting surfaces. Subfields of tribology include biotribology, nanotribology and space tribology. It is also related to other areas such as the coupling of corrosion and tribology in tribocorrosion and the contact mechanics of how surfaces in contact deform.
Approximately 20% of the total energy expenditure of the world is due to the impact of friction and wear in the transportation, manufacturing, power generation, and residential sectors.
This section will provide an overview of tribology, with links to many of the more specialized areas.
Etymology
The word tribology derives from the Greek root τριβ- of the verb τρίβω, tribo, "I rub" in classic Greek, and the suffix -logy from -λογία, -logia, "study of", "knowledge of". Peter Jost coined the word in 1966, in the eponymous report which highlighted the cost of friction, wear and corrosion to the UK economy.
History
Early history
Despite the relatively recent naming of the field of tribology, quantitative studies of friction can be traced as far back as 1493, when Leonardo da Vinci first noted the two fundamental 'laws' of friction. According to Leonardo, frictional resistance was the same for two different objects of the same weight but making contact over different widths and lengths. He also observed that the force needed to overcome friction doubles as weight doubles. However, Leonardo's findings remained unpublished in his notebooks.
The two fundamental 'laws' of friction were first published (in 1699) by Guillaume Amontons, with whose name they are now usually associated. They state that:
the force of friction acting between two sliding surfaces is proportional to the load pressing the surfaces together
the force of friction is independent of the apparent area of contact between the two surfaces.
Although not universally applicable, these simple statements hold for a surprisingly wide range of systems. These laws were further developed by Charles-Augustin de Coulomb (in 1785), who noticed that static friction force may depend on the contact time and sliding (kinetic) friction may depend on sliding velocity, normal force and contact area.
In 1798, Charles Hatchett and Henry Cavendish carried out the first reliable test on frictional wear. In a study commissioned by the Privy Council of the UK, they used a simple reciprocating machine to evaluate the wear rate of gold coins. They found that coins with grit between them wore at a faster rate compared to self-mated coins. In 1860, Theodor Reye proposed Reye's hypothesis. In 1953, John Frederick Archard developed the Archard equation, which describes sliding wear and is based on the theory of asperity contact.
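The Archard equation mentioned above is commonly written V = K·W·L/H, for wear volume V, dimensionless wear coefficient K, normal load W, sliding distance L, and hardness H of the softer surface. A direct evaluation with illustrative inputs:

```python
# Archard sliding-wear relation: V = K * W * L / H.
def archard_wear_volume_m3(k: float, load_n: float, distance_m: float,
                           hardness_pa: float) -> float:
    return k * load_n * distance_m / hardness_pa

# e.g. K = 1e-4, 100 N load, 1 km of sliding, hardness 1 GPa:
v = archard_wear_volume_m3(1e-4, 100.0, 1000.0, 1e9)
print(f"{v * 1e9:.1f} mm^3")   # 10.0 mm^3 of material removed
```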
Other pioneers of tribology research are Australian physicist Frank Philip Bowden and British physicist David Tabor, both of the Cavendish Laboratory at Cambridge University. Together they wrote the seminal textbook The Friction and Lubrication of Solids (Part I originally published in 1950 and Part II in 1964). Michael J. Neale was another leader in the field during the mid-to-late 1900s. He specialized in solving problems in machine design by applying his knowledge of tribology. Neale was respected as an educator with a gift for integrating theoretical work with his own practical experience to produce easy-to-understand design guides. The Tribology Handbook, which he first edited in 1973 and updated in 1995, is still used around the world and forms the basis of numerous training courses for engineering designers.
Duncan Dowson surveyed the history of tribology in his 1997 book History of Tribology (2nd edition). This covers developments from prehistory, through early civilizations (Mesopotamia, ancient Egypt) and highlights the key developments up to the end of the twentieth century.
The Jost report
The term tribology became widely used following The Jost Report published in 1966. The report highlighted the huge cost of friction, wear and corrosion to the UK economy (1.1–1.4% of GDP). As a result, the UK government established several national centres to address tribological problems. Since then the term has diffused into the international community, with many specialists now identifying as "tribologists".
Significance
Despite considerable research since the Jost Report, the global impact of friction and wear on energy consumption, economic expenditure, and carbon dioxide emissions are still considerable. In 2017, Kenneth Holmberg and Ali Erdemir attempted to quantify their impact worldwide. They considered the four main energy consuming sectors: transport, manufacturing, power generation, and residential. The following were concluded:
In total, ~23% of the world's energy consumption originates from tribological contacts. Of that, 20% is to overcome friction and 3% to remanufacture worn parts and spare equipment due to wear and wear-related failures.
By taking advantage of the new technologies for friction reduction and wear protection, energy losses due to friction and wear in vehicles, machinery and other equipment worldwide could be reduced by 40% in the long term (15 years) and 18% in the short term (8 years). On a global scale, these savings would amount to 1.4% of GDP annually and 8.7% of total energy consumption in the long term.
The largest short term energy savings are envisioned in transport (25%) and in power generation (20%) while the potential savings in the manufacturing and residential sectors are estimated to be ~10%. In the longer term, savings would be 55%, 40%, 25%, and 20%, respectively.
Implementing advanced tribological technologies can also reduce global carbon dioxide emissions by as much as 1,460 million tons of carbon dioxide equivalent (MtCO2) and result in 450,000 million Euros cost savings in the short term. In the long term, the reduction could be as large as 3,140 MtCO2 and the cost savings 970,000 million Euros.
Classical tribology, covering such applications as ball bearings, gear drives, clutches and brakes, was developed in the context of mechanical engineering. But in recent decades tribology has expanded into qualitatively new fields of application, in particular micro- and nanotechnology as well as biology and medicine.
Fundamental concepts
Tribosystem
The concept of tribosystems is used to provide a detailed assessment of relevant inputs, outputs and losses to tribological systems. Knowledge of these parameters allows tribologists to devise test procedures for tribological systems.
Tribofilm
Tribofilms are thin films that form on tribologically stressed surfaces. They play an important role in reducing friction and wear in tribological systems.
Stribeck curve
The Stribeck curve shows how friction in fluid-lubricated contacts is a non-linear function of lubricant viscosity, entrainment velocity and contact load.
Physics
Friction
The word friction comes from the Latin "frictionem", which means rubbing. This term is used to describe all those dissipative phenomena, capable of producing heat and of opposing the relative motion between two surfaces. There are two main types of friction:
Static friction – occurs between surfaces that are stationary relative to one another.
Dynamic friction – occurs between surfaces in relative motion.
The study of friction phenomena is predominantly empirical and does not yield precise results, only useful approximate conclusions. This inability to obtain a definite result is due to the extreme complexity of the phenomenon: the more closely it is studied, the more new elements appear, which in turn make the overall description even more complex.
Laws of friction
All the theories and studies on friction can be simplified into three main laws, which are valid in most cases:
First Law of Amontons The frictional force is directly proportional to the normal load.
Second Law of Amontons Friction is independent of the apparent area of contact.
Third Law of Coulomb Dynamic friction is independent of the relative sliding speed.
Coulomb later found deviations from Amontons' laws in some cases. In systems with significant nonuniform stress fields, Amontons' laws are not satisfied macroscopically because local slip occurs before the entire system slides.
Static friction
Consider a block of mass m at rest on a horizontal plane. To move the block, an external force must be applied; a resistance to the motion is then observed in the form of a force equal and opposite to the applied force, which is precisely the static friction force F_s.
By continuously increasing the applied force, a limiting value is reached at which the block starts to move. At this point, also taking into account the first two friction laws stated above, it is possible to define the static friction force F_s as a force equal in magnitude to the minimum force required to cause the motion of the block, and the coefficient of static friction μ_s as the ratio of the static friction force F_s to the normal force N acting on the block, obtaining μ_s = F_s / N.
Dynamic friction
Once the block has been put into motion, it experiences a friction force of lesser intensity than the static friction force F_s. The friction force during relative motion is known as the dynamic friction force F_d. In this case it is necessary to take into account not only the first two laws of Amontons, but also the law of Coulomb, so that the relationship between the dynamic friction force F_d, the coefficient of dynamic friction μ_d and the normal force N is the following: F_d = μ_d · N.
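The two relations above lend themselves to a short numeric sketch. The following Python snippet (with hypothetical coefficient values, not measured data) decides whether a block slips under a horizontal push and, if so, returns the kinetic friction force:

```python
def friction_response(applied_force, normal_force, mu_s, mu_d):
    """Return (slips, friction_force) for a block pushed horizontally.

    Below the static threshold mu_s * N the block stays put and friction
    exactly balances the push; above it, the block slides and friction
    drops to the kinetic value mu_d * N.
    """
    static_limit = mu_s * normal_force            # maximum static friction
    if applied_force <= static_limit:
        return False, applied_force               # friction balances the push
    return True, mu_d * normal_force              # kinetic friction while sliding

# Hypothetical coefficients, with mu_s > mu_d as is typical
slips, f = friction_response(applied_force=40.0, normal_force=50.0,
                             mu_s=0.7, mu_d=0.5)
print(slips, f)  # True 25.0
```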
Static and dynamic friction coefficient
At this point it is possible to summarize the main properties of the static friction coefficient μ_s and the dynamic one μ_d.
These coefficients are dimensionless quantities, given by the ratio between the intensity of the friction force and the intensity of the applied load; they depend on the type of surfaces involved in the mutual contact, and in any case the condition μ_d ≤ μ_s always holds.
Usually, the value of both coefficients does not exceed unity, and they can be considered constant only within certain ranges of forces and velocities, outside of which extreme conditions modify them.
In systems with significant nonuniform stress fields, the macroscopic static friction coefficient depends on the external pressure, system size, or shape because local slip occurs before the system slides.
The values of the static and dynamic friction coefficients vary widely with the material pair involved, ranging from roughly 0.04 for PTFE sliding on PTFE to more than 1 for rubber on dry concrete.
Rolling friction
In the case of bodies capable of rolling there is a particular type of friction, in which the sliding phenomenon typical of dynamic friction does not occur, but a force opposing the motion is still present, which also excludes the case of static friction. This type of friction is called rolling friction. Consider in detail what happens to a wheel rolling on a horizontal plane. Initially the wheel is immobile, and the forces acting on it are the weight force and the normal force given by the plane's response to the weight.
When the wheel is set in motion, the point of application of the normal force is displaced: it is now applied in front of the center of the wheel, at a distance b equal to the value of the rolling friction parameter. The opposition to the motion is caused by this separation of the normal force and the weight force at the exact moment in which the rolling starts, so the value of the resisting torque produced by rolling friction is τ = b · N. At the microscopic level, the rolling wheel rests on a slightly deformed plane, and the reaction forces of the deformed plane act on the wheel.
Rolling the wheel continuously causes imperceptible deformations of the plane and, once passed to a subsequent point, the plane returns to its initial state. In the compression phase the plane opposes the motion of the wheel, while in the decompression phase it provides a positive contribution to the motion.
The force of rolling friction depends, therefore, on the small deformations suffered by the supporting surface and by the wheel itself, and can be expressed as F = (b / r) · N, where the ratio b / r plays the role of a dimensionless rolling friction coefficient, with r being the wheel radius.
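A minimal sketch of this relation, with hypothetical values for the contact offset b and the wheel radius r:

```python
def rolling_friction_force(normal_force, b, radius):
    """Rolling resistance F = (b / r) * N, where b is the forward offset
    of the normal force (rolling friction parameter, units of length)."""
    return (b / radius) * normal_force

# Hypothetical values: a wheel of radius 0.30 m carrying 500 N,
# with a contact offset b of 0.5 mm (b / r is the dimensionless coefficient)
F = rolling_friction_force(normal_force=500.0, b=0.0005, radius=0.30)
print(round(F, 2), "N")  # 0.83 N
```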
The surfaces
Going even deeper, it is possible to study not only the outermost surface of a metal, but also its immediately underlying layers, which are linked to the history of the metal, its composition and the manufacturing processes it has undergone.
It is possible to divide the metal into four different layers:
Crystalline structure – basic structure of the metal, bulk interior form;
Machined layer – layer which may also include inclusions of foreign material and which derives from the machining processes to which the metal has been subjected;
Hardened layer – has a crystalline structure of greater hardness than the inner layers, thanks to the rapid cooling to which it is subjected during the working processes;
Outer layer or oxide layer – layer that is created due to chemical interaction with the metal's environment and from the deposition of impurities.
The layer of oxides and impurities (the third body) has fundamental tribological importance; in fact it usually contributes to reducing friction. Another fundamentally important fact regarding oxides is that, if the surfaces could be cleaned and smoothed so as to obtain a pure "metal surface", the two surfaces in contact would unite into one: in the absence of thin layers of contaminants, the atoms of the metal cannot distinguish one body from the other, and the two bodies placed in contact would thus form a single body.
The origin of friction
Contact between surfaces is made up of a large number of microscopic regions, in the literature called asperities or junctions of contact, where atom-to-atom contact takes place. The phenomenon of friction, and therefore of the dissipation of energy, is due precisely to the deformations that such regions undergo due to the load and relative movement. Plastic, elastic, or rupture deformations can be observed:
Plastic deformations – permanent deformations of the shape of the asperities;
Elastic deformations – deformations in which the energy expended in the compression phase is almost entirely recovered in the decompression phase (elastic hysteresis);
Rupture deformations – deformations that lead to the breaking of asperities and the creation of new contact areas.
The energy dissipated during the phenomenon is transformed into heat, thus increasing the temperature of the surfaces in contact. The increase in temperature also depends on the relative speed and the roughness of the material, and it can be high enough to lead even to the fusion of the materials involved.
In friction phenomena, temperature is fundamental in many areas of application. For example, a rise in temperature may result in a sharp reduction of the friction coefficient and, consequently, of the effectiveness of the brakes.
The adhesion theory
The adhesion theory states that, in the case of spherical asperities in contact with each other and subjected to a load W, a deformation is observed which, as the load increases, passes from elastic to plastic. This phenomenon involves an enlargement of the real contact area A_r, which for this reason can be expressed as A_r = W / D, where D is the hardness of the material, definable as the applied load divided by the area of the contact surface.
If at this point the two surfaces slide over one another, a resistance to shear stress t is observed, given by the presence of the adhesive bonds created precisely because of the plastic deformations, and the friction force will therefore be given by F = A_r · t = (W / D) · t. At this point, since the coefficient of friction is the ratio between the intensity of the frictional force and that of the applied load, it is possible to state that μ = F / W = t / D, thus relating the friction coefficient to two material properties: shear strength t and hardness D. To obtain low friction coefficients one can resort to materials which require a low shear stress but which are also very hard. In the case of lubricants, in fact, a substrate of material with low shear strength t is placed on a very hard material.
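A hedged numeric sketch of the relation μ = t / D; the shear strength and hardness figures below are hypothetical placeholders expressed in the same units (MPa):

```python
def adhesion_friction_coefficient(shear_strength, hardness):
    """Adhesion-theory estimate: mu = t / D, with t and D in the same units."""
    return shear_strength / hardness

# Hypothetical soft film (t = 25 MPa) on a hard substrate (D = 5000 MPa)
print(adhesion_friction_coefficient(25.0, 5000.0))  # 0.005
```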
The force acting between two solids in contact will not only have normal components, as implied so far, but will also have tangential components. This further complicates the description of the interactions between asperities, because due to this tangential component plastic deformation sets in at a lower load than when the component is ignored. A more realistic description of the area of each single junction that is created is then given by A^2 = (W^2 + α · T^2) / D^2, with α a constant and T the tangential force applied to the junction.
To obtain even more realistic results, the phenomenon of the third body should also be considered, i.e., the presence of foreign materials, such as moisture, oxides or lubricants, between the two solids in contact. A coefficient c is then introduced that correlates the shear strength t of the pure material with that of the third body, t_f = c · t, with 0 < c < 1.
Studying the behavior at the limits: for c = 0 we have t_f = 0, while for c = 1 we return to the condition in which the surfaces are directly in contact and there is no third body present. Keeping in mind what has just been said, it is possible to correct the friction coefficient formula to μ = c / sqrt(α · (1 − c^2)). In conclusion, the case of elastic bodies in interaction with each other is considered.
Similarly to what has just been seen, it is possible to define an equation of the type A = K · W^(2/3), where, in this case, K depends on the elastic properties of the materials. Also for elastic bodies the tangential force depends on the coefficient c seen above, giving F = c · t · K · W^(2/3), and therefore a fairly exhaustive description of the friction coefficient can be obtained: μ = F / W = c · t · K · W^(−1/3).
Friction measurements
The simplest and most immediate method for evaluating the friction coefficient of two surfaces is the use of an inclined plane on which a block of material is made to slide. The normal force of the plane is N = m·g·cos(θ), while the friction force is F = m·g·sin(θ). This allows us to state that the friction coefficient can be calculated very easily, by means of the tangent of the angle θ at which the block begins to slip: μ = tan(θ).

From the inclined plane, research has moved on to more sophisticated systems, which allow all the possible environmental conditions of the measurement to be taken into account, such as the crossed-roller machine or the pin-on-disk machine. Today there are digital machines, such as the "Friction Tester", which allow all the desired variables to be set by means of software support.

Another widely used process is the ring compression test. A flat ring of the material to be studied is plastically deformed by means of a press: if the deformation is an expansion of both the inner and the outer circle, the friction coefficients are low or zero; if instead the inner circle contracts, the friction coefficient is higher.
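The inclined-plane relation reduces to one line of code; a minimal sketch:

```python
import math

def mu_from_tilt(theta_deg):
    """Static friction coefficient from the tilt angle at which sliding
    begins on an inclined plane: mu_s = tan(theta)."""
    return math.tan(math.radians(theta_deg))

print(round(mu_from_tilt(30.0), 3))  # 0.577 for a 30-degree slip angle
```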
Lubrication
To reduce friction between surfaces and keep wear under control, materials called lubricants are used. Contrary to what one might think, these are not just oils or greases, but any fluid material characterized by viscosity, such as air and water. Of course, some lubricants are more suitable than others, depending on the type of use they are intended for: air and water, for example, are readily available, but the former can only be used under limited load and speed conditions, while the latter can contribute to the wear of the materials.
What one tries to achieve with these materials is perfect fluid lubrication, that is, lubrication such that direct contact between the surfaces in question is avoided by inserting a lubricant film between them. To do this there are two possibilities; depending on the type of application, the costs to be faced and the level of "perfection" of the lubrication to be achieved, the choice is between:
Fluidostatic lubrication (or hydrostatic in the case of mineral oils) – which consists in the insertion of lubricating material under pressure between the surfaces in contact;
Fluidodynamic lubrication (or hydrodynamic) – which consists in exploiting the relative motion between the surfaces to make the lubricating material penetrate.
Viscosity
Viscosity is the equivalent of friction in fluids: it describes the ability of fluids to resist forces that cause a change in shape.
Thanks to Newton's studies, a deeper understanding of the phenomenon was achieved. He introduced the concept of laminar flow: "a flow in which the velocity changes from layer to layer". A fluid contained between two surfaces of area A can be ideally divided into various layers.
The layer in contact with the moving surface, which moves with a velocity v due to an applied force F, will have the same velocity v as the plate, while each immediately following layer will vary this velocity by a quantity dv, down to the layer in contact with the immobile surface, which will have zero velocity.
From what has been said, it is possible to state that the force F necessary to cause laminar motion in a fluid contained between two plates is proportional to the area of the two surfaces and to the velocity gradient: F ∝ A · (dv/dy).

At this point we can introduce a proportionality constant η, corresponding to the dynamic viscosity coefficient of the fluid, to obtain the following equation, known as Newton's law: F = η · A · (dv/dy).

The velocity varies by the same amount dv from layer to layer, so the condition dv/dy = v/L holds, where L is the distance between the two surfaces, and the equation simplifies to F = η · A · (v/L).

The viscosity is high in fluids that strongly oppose motion, while it is modest for fluids that flow easily.
To determine what kind of flow is under study, its Reynolds number is observed: Re = ρ · v · L / η. This is a dimensionless quantity that depends on the density ρ of the fluid, on its viscosity η, on the flow velocity v and on the diameter L of the tube in which the fluid flows. If the Reynolds number is relatively low the flow is laminar, whereas for high Reynolds numbers the flow becomes turbulent.
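A small sketch combining Newton's law of viscosity and the Reynolds number; the fluid properties below are approximate values for water at room temperature, and the laminar/turbulent threshold of 2000 is the conventional pipe-flow figure, used here only as an illustrative assumption:

```python
def viscous_force(eta, area, velocity, gap):
    """Newton's law for a linear velocity profile: F = eta * A * v / L."""
    return eta * area * velocity / gap

def reynolds_number(density, velocity, length, eta):
    """Re = rho * v * L / eta (dimensionless)."""
    return density * velocity * length / eta

eta = 1.0e-3   # Pa*s, roughly water at 20 degrees C
rho = 1000.0   # kg/m^3
print(round(viscous_force(eta, area=0.01, velocity=0.5, gap=1e-4), 3))  # 0.05 N
Re = reynolds_number(rho, velocity=0.5, length=0.02, eta=eta)
print(Re, "laminar" if Re < 2000 else "turbulent")  # 10000.0 turbulent
```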
To conclude, it should be underlined that fluids can be divided into two types according to their viscosity:
Newtonian fluids, or fluids in which viscosity is a function of temperature and fluid pressure only and not of velocity gradient;
Non-Newtonian fluids, or fluids in which viscosity also depends on the velocity gradient.
Viscosity as a function of temperature and pressure
Temperature and pressure are two fundamental factors to evaluate when choosing one lubricant over another. Consider first the effects of temperature.
There are three main causes of temperature variation that can affect the behavior of the lubricant:
Weather conditions;
Local thermal factors (like for car engines or refrigeration pumps);
Energy dissipation due to rubbing between surfaces.
In order to classify the various lubricants according to their viscosity behavior as a function of temperature, the viscosity index (V.I.) was introduced by Dean and Davis in 1929. They assigned the best lubricant then available, Pennsylvania oil, a viscosity index of 100, and the worst, the American Gulf Coast oil, the value 0. To determine the index of an intermediate oil, the following procedure is used: two reference oils are chosen so that they have the same viscosity as the oil in question at 100 °C, and the viscosity index is determined by the equation V.I. = 100 · (L − U) / (L − H), where U is the viscosity of the oil under study at the lower reference temperature, and L and H are the viscosities at that same temperature of the reference oils with V.I. = 0 and V.I. = 100, respectively. This process has some disadvantages:
For mixtures of oils the results are not exact;
There is no information if you are outside the fixed temperature range;
With the advancement of technology, oils with a V.I. greater than 100 have appeared, which cannot be described by the method above.
In the case of oils with a V.I. above 100, a different relationship that yields exact results can be used: V.I. = ((10^N − 1) / 0.00715) + 100, with N = (log H − log U) / log v, where, in this case, H is the viscosity at the lower reference temperature of the oil with V.I. = 100, U is that of the oil under study at the same temperature, and v is the kinematic viscosity of the oil under study at 100 °C.
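A sketch of the Dean–Davis relation for oils with V.I. ≤ 100; the three viscosities below are hypothetical 40 °C values in centistokes for the test oil (U) and for the 0-V.I. (L) and 100-V.I. (H) reference oils:

```python
def viscosity_index(U, L, H):
    """Dean-Davis viscosity index: V.I. = 100 * (L - U) / (L - H).

    U: viscosity of the test oil at the lower reference temperature;
    L, H: viscosities of the V.I. = 0 and V.I. = 100 reference oils
    at the same temperature (all in the same units).
    """
    return 100.0 * (L - U) / (L - H)

print(viscosity_index(U=55.0, L=100.0, H=40.0))  # 75.0
```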
We can therefore say, in conclusion, that an increase in temperature leads to a decrease in the viscosity of the oil. It is also useful to keep in mind that, in the same way, an increase in pressure implies an increase in viscosity. To evaluate the effects of pressure on viscosity, the following equation is used: η_p = η_0 · e^(α·p), where η_p is the viscosity at pressure p, η_0 is the viscosity coefficient at atmospheric pressure and α is a constant that describes the relationship between viscosity and pressure.
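A one-line sketch of this exponential pressure dependence; the pressure–viscosity coefficient below is only an order-of-magnitude assumption, not a measured value:

```python
import math

def viscosity_at_pressure(eta0, alpha, p):
    """Exponential pressure-viscosity relation: eta_p = eta0 * exp(alpha * p)."""
    return eta0 * math.exp(alpha * p)

# Assumed values: eta0 = 0.1 Pa*s, alpha = 2e-8 1/Pa, p = 100 MPa
print(round(viscosity_at_pressure(0.1, 2e-8, 1e8), 4))  # 0.7389 Pa*s
```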
Viscosity measures
To determine the viscosity of a fluid, viscometers are used, which can be divided into three main categories:
Capillary viscometers, in which the viscosity of the fluid is measured by sliding it into a capillary tube;
Falling-body viscometers, in which viscosity is measured by calculating the velocity of a solid that moves through the fluid;
Rotational viscometers, in which viscosity is obtained by evaluating the flow of fluid placed between two surfaces in relative motion.
The first two types of viscometers are mainly used for Newtonian fluids, while the third is very versatile.
Wear
Wear is the progressive, involuntary removal of material from a surface in relative motion with another surface or with a fluid. Two different types of wear can be distinguished: moderate wear and severe wear. The first case concerns low loads and smooth surfaces, while the second concerns significantly higher loads and compatible, rough surfaces, in which the wear processes are much more violent.

Wear plays a fundamental role in tribological studies, since it causes changes in the shape of the components used in the construction of machinery. These worn parts must be replaced, and this entails both an economic problem, due to the cost of replacement, and a functional one, since if the components are not replaced in time the machine as a whole could suffer more serious damage. The phenomenon does not have only negative sides, however: it is often exploited to reduce the roughness of materials by eliminating asperities. Wear is erroneously imagined to be in direct correlation with friction; in reality the two phenomena cannot be easily connected. There may be conditions such that low friction produces significant wear, and vice versa.

For wear to occur, a certain implementation time is required, which may change depending on variables such as load, speed, lubrication and environmental conditions, and there are different wear mechanisms, which may occur simultaneously or even combined with each other:
Adhesive wear;
Abrasive wear;
Fatigue wear;
Corrosive wear;
Rubbing wear or fretting;
Erosion wear;
Other minor wear phenomena (wear by impact, cavitation, wear-fusion, wear-spreading).
Adhesive wear
As discussed above, the contact between two surfaces occurs through the interaction between asperities. If a shear force is applied in the contact area, a small part of the weaker material may be detached because of its adhesion to the harder surface. This is precisely the mechanism of adhesive wear. This type of wear is very problematic, since it involves high wear rates, but adhesion can be reduced by increasing the surface roughness and hardness of the surfaces involved, or by inserting layers of contaminants such as oxygen, oxides, water or oils. In conclusion, the behavior of the adhesive wear volume can be described by means of three main laws:
Law 1 – Distance The mass involved in wear is proportional to the distance traveled in the rubbing between the surfaces.
Law 2 – Load The mass involved in wear is proportional to the applied load.
Law 3 – Hardness The mass involved in wear is inversely proportional to the hardness of the less hard material.
An important aspect of wear is the emission of wear particles into the environment, which increasingly threatens human health and ecosystems. The first researcher to investigate this topic was Ernest Rabinowicz.
Abrasive wear
Abrasive wear consists of the cutting action of hard surfaces on softer surfaces. It can be caused either by roughness, whose asperities act like cutting tips that remove material from the surface against which they rub (two-body abrasive wear), or by particles of hard material interposed between two surfaces in relative motion (three-body abrasive wear). At the application level, two-body wear is easily eliminated by means of an adequate surface finish, while three-body wear can bring serious problems and must therefore be removed as much as possible by means of suitable filters, even before resorting to a carefully weighted machine design.
Fatigue wear
Fatigue wear is a type of wear caused by alternating loads, which produce local contact forces repeated over time, which in turn lead to deterioration of the materials involved. The most immediate example of this type of wear is that of a comb: if a finger is slid over the teeth of the comb over and over again, at some point one or more teeth come off. The phenomenon can lead to the breaking of the surfaces from mechanical or thermal causes. The first case is the one described above, in which a repeated load causes high contact stresses. The second is caused by the thermal expansion of the materials involved in the process. To reduce this type of wear, it is therefore good practice to decrease both the contact forces and the thermal cycling, that is, the frequency with which different temperatures intervene. For optimal results it is also advisable to eliminate, as far as possible, impurities between the surfaces, local defects and inclusions of foreign materials in the bodies involved.
Corrosive wear
Corrosive wear occurs in the presence of metals that oxidize or corrode. When pure metal surfaces come into contact with the surrounding environment, oxide films are created on them by contaminants present in the environment itself, such as water, oxygen or acids. These films are continually removed by the abrasive and adhesive wear mechanisms and continually recreated by the interactions between the pure metal and the contaminants. This type of wear can clearly be reduced by trying to create an "ad hoc" environment, free of pollutants and subject to minimal thermal variation. Corrosive wear can also be positive in some applications: the oxides that are created contribute to decreasing the friction coefficient between the surfaces and, being in many cases harder than the metal they belong to, can be used as excellent abrasives.
Rubbing wear or fretting
Fretting wear occurs in systems subject to more or less intense vibrations, which cause relative movements between the contacting surfaces on the order of nanometers. These microscopic relative movements cause both adhesive wear, produced by the displacement itself, and abrasive wear, produced by particles generated in the adhesive phase that remain trapped between the surfaces. This type of wear can be accelerated by the presence of corrosive substances and by an increase in temperature.
Erosion wear
Erosion wear occurs when free particles, either solid or liquid, strike a surface and cause abrasion. The mechanisms involved are of various kinds and depend on certain parameters, such as the impact angle, the particle size, the impact velocity and the material of which the particles are made.
Factors affecting wear
Among the main factors influencing wear are:
Hardness
Mutual Solubility
Crystalline structure
It has been verified that the harder a material is, the less it wears. Similarly, the less mutually soluble two materials are, the less the wear tends to be. Finally, as regards crystalline structure, some structures are better suited to resisting wear than others; for example, a hexagonal close-packed structure can deform only by slipping along the basal planes.
Wear rate
To provide an assessment of the damage caused by wear, a dimensionless coefficient called the wear rate is used, given by the ratio between the change in height of the body Δh and the length of the relative sliding l: wear rate = Δh / l. This coefficient makes it possible to classify, depending on its magnitude, the damage suffered by various materials in different situations, passing from a modest degree of wear, through a medium one, to a degree of severe wear.
Instead, to express the volume of wear V it is possible to use the Holm equation:
V = k · (W / H) · l (for adhesive wear)
V = k_abr · (W / H) · l (for abrasive wear)
where W / H represents the real contact area (with W the applied load and H the hardness of the softer material), l is the length of the sliding distance, and k and k_abr are experimental dimensionless factors.
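A hedged sketch of the adhesive-wear form of this equation; the load, hardness, distance and wear coefficient below are hypothetical round numbers chosen only to show the units working out:

```python
def wear_volume(k, load, hardness, distance):
    """Holm/Archard relation: V = k * (W / H) * l.

    load in N, hardness in Pa (N/m^2), distance in m -> volume in m^3.
    k is a dimensionless experimental wear coefficient.
    """
    return k * (load / hardness) * distance

# Hypothetical: k = 1e-4, 100 N load, 1 GPa hardness, 1000 m of sliding
V = wear_volume(k=1e-4, load=100.0, hardness=1e9, distance=1000.0)
print(V, "m^3")  # 1e-08 m^3, i.e. 10 mm^3
```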
Wear measurement
In experimental measurements of material wear, it is often necessary to recreate fairly small wear rates and to accelerate the timescale: phenomena that in reality develop over years must occur in the laboratory within a few days.

A first evaluation of wear processes is a visual inspection of the surface profile of the body under study, including a comparison before and after the occurrence of the wear phenomenon. In this first analysis the possible variations of the hardness and of the surface geometry of the material are observed.

Another method of investigation is the radioactive tracer, used to evaluate wear at the macroscopic level. One of the two materials in contact in a wear process is marked with a radioactive tracer, so that the particles removed from this material are easily visible and accessible.

Finally, to accelerate wear times, one of the best-known techniques is high-pressure contact testing. In this case, the desired results are obtained by simply applying the load over a very reduced contact area.
Applications
Transport and manufacturing
Historically, tribology research concentrated on the design and effective lubrication of machine components, particularly for bearings. However, the study of tribology extends into most aspects of modern technology and any system where one material slides over another can be affected by complex tribological interactions.
Traditionally, tribology research in the transport industry focused on reliability, ensuring the safe, continuous operation of machine components. Nowadays, due to an increased focus on energy consumption, efficiency has become increasingly important and thus lubricants have become progressively more complex and sophisticated in order to achieve this. Tribology also plays an important role in manufacturing. For example, in metal-forming operations, friction increases tool wear and the power required to work a piece. This results in increased costs due to more frequent tool replacement, loss of tolerance as tool dimensions shift, and greater forces required to shape a piece.
The use of lubricants which minimize direct surface contact reduces tool wear and power requirements. It is also necessary to know the effects of manufacturing: all manufacturing methods leave a unique system fingerprint (i.e. surface topography) which will influence the tribocontact (e.g. lubricant film formation).
Research
Fields
Tribology research ranges from macro to nano scales, in areas as diverse as the movement of continental plates and glaciers to the locomotion of animals and insects. Tribology research is traditionally concentrated on transport and manufacturing sectors, but this has considerably diversified. Tribology research can be loosely divided into the following fields (with some overlap):
Classical tribology is concerned with friction and wear in machine elements (such as rolling-element bearings, gears, plain bearings, brakes, clutches, wheels and fluid bearings) as well as manufacturing processes (such as metal forming).
Biotribology studies friction, wear and lubrication in biological systems. The field is gaining importance as human life expectancy increases. Human hip and knee joints are typical biotribology systems.
Green tribology aims to minimize the environmental impact of tribological systems along their entire lifecycle. In particular, green tribology aims to reduce tribological losses (e.g., friction and wear) using technologies with minimal environmental impact. This is in contrast to traditional tribology, where the means of reducing tribological losses are not holistically evaluated.
Geotribology studies friction, wear, and lubrication of geological systems, such as glaciers and faults.
Nanotribology studies tribological phenomena at nanoscopic scales. The field is becoming increasingly important as devices become smaller (e.g. micro/nanoelectromechanical systems, MEMS/NEMS), and research has been aided by the invention of Atomic Force Microscopy.
Computational tribology aims to model the behavior of tribological systems through multiphysics simulations, combining disciplines such as contact mechanics, fracture mechanics and computational fluid dynamics.
Space tribology studies tribological systems that can operate under the extreme environmental conditions of outer space. In particular, this requires lubricants with low vapor pressure that can withstand extreme temperature fluctuations.
Open system tribology studies tribological systems that are exposed to and affected by the natural environment.
Triboinformatics is an application of Artificial Intelligence, Machine Learning and Big Data methods to tribological systems.
Recently, intensive studies of superlubricity (a phenomenon of vanishing friction) have been sparked by the increasing demand for energy savings. Furthermore, the development of new materials, such as graphene and ionic liquids, allows for fundamentally new approaches to solving tribological problems.
Societies
There are now numerous national and international societies, including: the Society of Tribologists and Lubrication Engineers (STLE) in the US, the Institution of Mechanical Engineers and Institute of Physics (IMechE Tribology Group, IOP Tribology Group) in the UK, the German Society for Tribology (Gesellschaft für Tribologie), the Korean Tribology Society (KTS), the Malaysian Tribology Society (MYTRIBOS), the Japanese Society of Tribologists (JAST), the Tribology Society of India (TSI), the Chinese Mechanical Engineering Society (Chinese Tribology Institute) and the International Tribology Council.
Research approach
Tribology research is mostly empirical, which can be explained by the vast number of parameters that influence friction and wear in tribological contacts. Thus, most research fields rely heavily on the use of standardized tribometers and test procedures as well as component-level test rigs.
See also
Footnotes
References
External links
Friction
Engineering mechanics
Materials science
Materials degradation
Metallurgy
Mechanical engineering | Tribology | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 7,918 | [
"Mechanical phenomena",
"Tribology",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Applied and interdisciplinary physics",
"Metallurgy",
"Materials science",
"Surface science",
"Civil engineering",
"Mechanical engineering",
"nan",
"Engineering mechanics",
"Materials ... |
490,067 | https://en.wikipedia.org/wiki/Blinding%20%28cryptography%29 | In cryptography, blinding is a technique by which an agent can provide a service to (i.e., compute a function for) a client in an encoded form without knowing either the real input or the real output. Blinding techniques also have applications to preventing side-channel attacks on encryption devices.
More precisely, Alice has an input x and Oscar has a function f. Alice would like Oscar to compute y = f(x) for her without revealing either x or y to him. The reason for her wanting this might be that she doesn't know the function f or that she does not have the resources to compute it.
Alice "blinds" the message by encoding it into some other input E(x); the encoding E must be a bijection on the input space of f, ideally a random permutation. Oscar gives her f(E(x)), to which she applies a decoding D to obtain .
Not all functions allow for blind computation. At other times, blinding must be applied with care. An example of the latter is Rabin–Williams signatures. If blinding is applied to the formatted message but the random value does not honor the Jacobi requirements on p and q, then it could lead to private key recovery. A demonstration of such a recovery was given by Evgeny Sidorov.
A common application of blinding is in blind signatures. In a blind signature protocol, the signer digitally signs a message without being able to learn its content.
The one-time pad (OTP) is an application of blinding to the secure communication problem, by its very nature. Alice would like to send a message to Bob secretly, however all of their communication can be read by Oscar. Therefore, Alice sends the message after blinding it with a secret key or OTP that she shares with Bob. Bob reverses the blinding after receiving the message. In this example, the function f is the identity and E and D are both typically the XOR operation.
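A minimal sketch of this XOR blinding, with a toy hard-coded pad (a real pad must be uniformly random, used once, and as long as the message):

```python
def xor_blind(data: bytes, pad: bytes) -> bytes:
    """E and D are the same operation: XOR with the shared pad."""
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"hello"
pad = b"\x13\x37\x42\x99\x01"           # toy pad, for illustration only
ciphertext = xor_blind(message, pad)    # Alice blinds the message
recovered = xor_blind(ciphertext, pad)  # Bob reverses the blinding
assert recovered == message
```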
Blinding can also be used to prevent certain side-channel attacks on asymmetric encryption schemes. Side-channel attacks allow an adversary to recover information about the input to a cryptographic operation, by measuring something other than the algorithm's result, e.g., power consumption, computation time, or radio-frequency emanations by a device. Typically these attacks depend on the attacker knowing the characteristics of the algorithm, as well as (some) inputs. In this setting, blinding serves to alter the algorithm's input into some unpredictable state. Depending on the characteristics of the blinding function, this can prevent some or all leakage of useful information. Note that security depends also on the resistance of the blinding functions themselves to side-channel attacks.
For example, in RSA blinding involves computing the blinding operation E(x) = x · r^e mod N, where r is a random integer between 1 and N and relatively prime to N (i.e. gcd(r, N) = 1), x is the plaintext, e is the public RSA exponent and N is the RSA modulus. As usual, the decryption function f(y) = y^d mod N is applied, thus giving f(E(x)) = (x · r^e)^d mod N = x^d · r mod N. Finally it is unblinded using the function D(z) = z · r^(−1) mod N. Multiplying x^d · r by r^(−1) mod N yields x^d mod N, as desired. When decrypting in this manner, an adversary who is able to measure the time taken by this operation would not be able to make use of this information (by applying the timing attacks to which RSA is known to be vulnerable), as she does not know the constant r and hence has no knowledge of the real input fed to the RSA primitives.
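A self-contained sketch of this blind decryption with a tiny toy modulus (real RSA uses moduli of 2048 bits or more; the numbers here only make the algebra visible, and the key is the standard textbook example p = 61, q = 53):

```python
import math
import random

N, e, d = 3233, 17, 2753  # toy RSA key: N = 61 * 53, e * d = 1 mod phi(N)

def blind_decrypt(ciphertext: int) -> int:
    """Compute ciphertext^d mod N without exponentiating the raw input."""
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:          # r must be invertible mod N
            break
    blinded = (ciphertext * pow(r, e, N)) % N   # E: multiply by r^e
    result = pow(blinded, d, N)                 # f: ordinary RSA decryption
    return (result * pow(r, -1, N)) % N         # D: strip the factor r

x = 65
c = pow(x, e, N)                  # encrypt
assert blind_decrypt(c) == x      # blinded decryption recovers x
```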
Examples
Blinding in GPG 1.x
References
External links
Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and Other Systems
Breaking the Rabin-Williams digital signature system implementation in the Crypto++ library
Cryptography | Blinding (cryptography) | [
"Mathematics",
"Engineering"
] | 759 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
490,134 | https://en.wikipedia.org/wiki/Connected%20sum | In mathematics, specifically in topology, the operation of connected sum is a geometric modification on manifolds. Its effect is to join two given manifolds together near a chosen point on each. This construction plays a key role in the classification of closed surfaces.
More generally, one can also join manifolds together along identical submanifolds; this generalization is often called the fiber sum. There is also a closely related notion of a connected sum on knots, called the knot sum or composition of knots.
Connected sum at a point
A connected sum of two m-dimensional manifolds is a manifold formed by deleting a ball inside each manifold and gluing together the resulting boundary spheres.
If both manifolds are oriented, there is a unique connected sum defined by having the gluing map reverse orientation. Although the construction uses the choice of the balls, the result is unique up to homeomorphism. One can also make this operation work in the smooth category, and then the result is unique up to diffeomorphism. There are subtle problems in the smooth case: not every diffeomorphism between the boundaries of the spheres gives the same composite manifold, even if the orientations are chosen correctly. For example, Milnor showed that two 7-cells can be glued along their boundary so that the result is an exotic sphere homeomorphic but not diffeomorphic to a 7-sphere.
However, there is a canonical way to choose the gluing of M1 and M2 which gives a unique well-defined connected sum. Choose embeddings i1 : D^m → M1 and i2 : D^m → M2 so that i1 preserves orientation and i2 reverses orientation. Now obtain the connected sum M1 # M2 from the disjoint sum
(M1 − i1(0)) ⊔ (M2 − i2(0))
by identifying i1(t·u) with i2((1 − t)·u) for each unit vector u ∈ S^(m−1) and each 0 < t < 1. Choose the orientation for M1 # M2 which is compatible with M1 and M2. The fact that this construction is well-defined depends crucially on the disc theorem, which is not at all obvious. For further details, see the references.
The operation of connected sum is denoted by #; the connected sum of M1 and M2 is written M1 # M2.
The operation of connected sum has the sphere S^m as an identity; that is, M # S^m is homeomorphic (or diffeomorphic) to M.
The classification of closed surfaces, a foundational and historically significant result in topology, states that any closed surface can be expressed as the connected sum of a sphere with some number of tori and some number of real projective planes.
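As a hedged illustration of this classification, the following Python sketch computes the Euler characteristic of a closed surface presented as such a connected sum, using the standard fact that χ(A # B) = χ(A) + χ(B) − 2 for closed surfaces (χ of the sphere, torus and projective plane being 2, 0 and 1 respectively):

```python
def euler_connected_sum(num_tori, num_proj_planes):
    """Euler characteristic of sphere # (g tori) # (k projective planes).

    Each connected sum with a surface S changes chi by chi(S) - 2,
    since chi(A # B) = chi(A) + chi(B) - 2 for closed surfaces.
    """
    chi = 2                              # start from the sphere
    chi += num_tori * (0 - 2)            # each torus lowers chi by 2
    chi += num_proj_planes * (1 - 2)     # each RP^2 lowers chi by 1
    return chi

print(euler_connected_sum(2, 0))  # -2: the genus-2 orientable surface
print(euler_connected_sum(0, 1))  # 1: the projective plane
```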
Connected sum along a submanifold
Let M1 and M2 be two smooth, oriented manifolds of equal dimension and V a smooth, closed, oriented manifold, embedded as a submanifold into both M1 and M2. Suppose furthermore that there exists an isomorphism of normal bundles
ψ : N(M1, V) → N(M2, V)
that reverses the orientation on each fiber. Then ψ induces an orientation-preserving diffeomorphism
N1 − V → N2 − V,
where each normal bundle N(Mi, V) is diffeomorphically identified with a neighborhood Ni of V in Mi, and the map used on the fibers,
v ↦ v / |v|^2,
is the orientation-reversing diffeomorphic involution on normal vectors. The connected sum of M1 and M2 along V is then the space
(M1 − V) ∪ (M2 − V)
obtained by gluing the deleted neighborhoods together by the orientation-preserving diffeomorphism. The sum is often denoted
(M1, V) # (M2, V).
Its diffeomorphism type depends on the choice of the two embeddings of V and on the choice of ψ.
Loosely speaking, each normal fiber of the submanifold contains a single point of , and the connected sum along is simply the connected sum as described in the preceding section, performed along each fiber. For this reason, the connected sum along is often called the fiber sum.
The special case of V being a single point recovers the connected sum of the preceding section.
Connected sum along a codimension-two submanifold
Another important special case occurs when the dimension of V is two less than that of the Mi. Then the isomorphism ψ of normal bundles exists whenever their Euler classes are opposite: e(N(M1, V)) = −e(N(M2, V)).
Furthermore, in this case the structure group of the normal bundles is the circle group; it follows that the choice of embeddings can be canonically identified with the group of homotopy classes of maps from V to the circle, which in turn equals the first integral cohomology group H^1(V). So the diffeomorphism type of the sum depends on the choice of ψ and a choice of element from H^1(V).
A connected sum along a codimension-two V can also be carried out in the category of symplectic manifolds; this elaboration is called the symplectic sum.
Local operation
The connected sum is a local operation on manifolds, meaning that it alters the summands only in a neighborhood of V. This implies, for example, that the sum can be carried out on a single manifold M containing two disjoint copies of V, with the effect of gluing M to itself. For example, the connected sum of a 2-sphere with itself at two distinct points of the sphere produces the 2-torus.
Connected sum of knots
There is a closely related notion of the connected sum of two knots. In fact, if one regards a knot merely as a 1-manifold, then the connected sum of two knots is just their connected sum as a 1-dimensional manifold. However, the essential property of a knot is not its manifold structure (under which every knot is equivalent to a circle) but rather its embedding into the ambient space. So the connected sum of knots has a more elaborate definition that produces a well-defined embedding, as follows.
This procedure results in the projection of a new knot, a connected sum (or knot sum, or composition) of the original knots. For the connected sum of knots to be well defined, one has to consider oriented knots in 3-space. To define the connected sum for two oriented knots:
Consider a planar projection of each knot and suppose these projections are disjoint.
Find a rectangle in the plane where one pair of sides are arcs along each knot but is otherwise disjoint from the knots and so that the arcs of the knots on the sides of the rectangle are oriented around the boundary of the rectangle in the same direction.
Now join the two knots together by deleting these arcs from the knots and adding the arcs that form the other pair of sides of the rectangle.
The resulting connected sum knot inherits an orientation consistent with the orientations of the two original knots, and the oriented ambient isotopy class of the result is well-defined, depending only on the oriented ambient isotopy classes of the original two knots.
Under this operation, oriented knots in 3-space form a commutative monoid with unique prime factorization, which allows us to define what is meant by a prime knot. Proof of commutativity can be seen by letting one summand shrink until it is very small and then pulling it along the other knot. The unknot is the unit. The two trefoil knots are the simplest prime knots. Higher-dimensional knots can be added by splicing the n-spheres.
In three dimensions, the unknot cannot be written as the sum of two non-trivial knots. This fact follows from additivity of knot genus; another proof relies on an infinite construction sometimes called the Mazur swindle. In higher dimensions (with codimension at least three), it is possible to get an unknot by adding two nontrivial knots.
If one does not take into account the orientations of the knots, the connected sum operation is not well-defined on isotopy classes of (nonoriented) knots. To see this, consider two noninvertible knots K, L which are not equivalent (as unoriented knots); for example take the two pretzel knots K = P(3, 5, 7) and L = P(3, 5, 9). Let K+ and K− be K with its two inequivalent orientations, and let L+ and L− be L with its two inequivalent orientations. There are four oriented connected sums we may form:
A = K+ # L+
B = K− # L−
C = K+ # L−
D = K− # L+
The oriented ambient isotopy classes of these four oriented knots are all distinct, and, when one considers ambient isotopy of the knots without regard to orientation, there are two distinct equivalence classes: {A ~ B} and {C ~ D}. To see that A and B are unoriented equivalent, simply note that they both may be constructed from the same pair of disjoint knot projections as above, the only difference being the orientations of the knots. Similarly, one sees that C and D may be constructed from the same pair of disjoint knot projections.
See also
Band sum
Prime decomposition (3-manifold)
Manifold decomposition
Satellite knot
Further reading
Robert Gompf: A new construction of symplectic manifolds, Annals of Mathematics 142 (1995), 527–595
William S. Massey, A Basic Course in Algebraic Topology, Springer-Verlag, 1991. .
References
Differential topology
Geometric topology
Knot theory
Operations on structures | Connected sum | [
"Mathematics"
] | 1,824 | [
"Topology",
"Differential topology",
"Geometric topology"
] |
490,135 | https://en.wikipedia.org/wiki/Homological%20conjectures%20in%20commutative%20algebra | In mathematics, homological conjectures have been a focus of research activity in commutative algebra since the early 1960s. They concern a number of interrelated (sometimes surprisingly so) conjectures relating various homological properties of a commutative ring to its internal ring structure, particularly its Krull dimension and depth.
The following list given by Melvin Hochster is considered definitive for this area. In the sequel, R, S, and T refer to Noetherian commutative rings; R will be a local ring with maximal ideal m_R, and M and N are finitely generated R-modules.
The Zero Divisor Theorem. If M has finite projective dimension and r ∈ R is not a zero divisor on M, then r is not a zero divisor on R.
Bass's Question. If M ≠ 0 has a finite injective resolution, then R is a Cohen–Macaulay ring.
The Intersection Theorem. If M ⊗_R N ≠ 0 has finite length, then the Krull dimension of N (i.e., the dimension of R modulo the annihilator of N) is at most the projective dimension of M.
The New Intersection Theorem. Let 0 → G_n → … → G_0 → 0 denote a finite complex of free R-modules such that the direct sum of its homology modules has finite length but is not 0. Then the Krull dimension of R is at most n.
The Improved New Intersection Conjecture. Let 0 → G_n → … → G_0 → 0 denote a finite complex of free R-modules such that H_i(G) has finite length for i > 0 and H_0(G) has a minimal generator that is killed by a power of the maximal ideal of R. Then dim R ≤ n.
The Direct Summand Conjecture. If R ⊆ S is a module-finite ring extension with R regular (here, R need not be local but the problem reduces at once to the local case), then R is a direct summand of S as an R-module. The conjecture was proven by Yves André using a theory of perfectoid spaces.
The Canonical Element Conjecture. Let x_1, …, x_d be a system of parameters for R, let F be a free R-resolution of the residue field of R with F_0 = R, and let K denote the Koszul complex of R with respect to x_1, …, x_d. Lift the identity map K_0 = R = F_0 to a map of complexes K → F. Then no matter what the choice of system of parameters or lifting, the last map from K_d = R to F_d is not 0.
Existence of Balanced Big Cohen–Macaulay Modules Conjecture. There exists a (not necessarily finitely generated) R-module W such that mRW ≠ W and every system of parameters for R is a regular sequence on W.
Cohen-Macaulayness of Direct Summands Conjecture. If R is a direct summand of a regular ring S as an R-module, then R is Cohen–Macaulay (R need not be local, but the result reduces at once to the case where R is local).
The Vanishing Conjecture for Maps of Tor. Let A → R → S be homomorphisms, where R is not necessarily local (one can reduce to that case however), with A and S regular and R finitely generated as an A-module. Let W be any A-module. Then the map Tor_i^A(W, R) → Tor_i^A(W, S) is zero for all i ≥ 1.
The Strong Direct Summand Conjecture. Let R → S be a map of complete local domains, and let Q be a height one prime ideal of S lying over xR, where R and R/xR are both regular. Then xR is a direct summand of Q considered as R-modules.
Existence of Weakly Functorial Big Cohen-Macaulay Algebras Conjecture. Let R → S be a local homomorphism of complete local domains. Then there exists an R-algebra B_R that is a balanced big Cohen–Macaulay algebra for R, an S-algebra B_S that is a balanced big Cohen-Macaulay algebra for S, and a homomorphism B_R → B_S such that the natural square given by these maps commutes.
Serre's Conjecture on Multiplicities. (cf. Serre's multiplicity conjectures.) Suppose that R is regular of dimension d and that M ⊗_R N has finite length. Then χ(M, N), defined as the alternating sum of the lengths of the modules Tor_i^R(M, N), is 0 if dim M + dim N < d, and is positive if the sum is equal to d. (N.B. Jean-Pierre Serre proved that the sum cannot exceed d.)
Small Cohen–Macaulay Modules Conjecture. If R is complete, then there exists a finitely generated R-module M ≠ 0 such that some (equivalently every) system of parameters for R is a regular sequence on M.
References
Homological conjectures, old and new, Melvin Hochster, Illinois Journal of Mathematics Volume 51, Number 1 (2007), 151-169.
On the direct summand conjecture and its derived variant by Bhargav Bhatt.
Commutative algebra
Homological algebra
Conjectures
Unsolved problems in mathematics | Homological conjectures in commutative algebra | [
"Mathematics"
] | 935 | [
"Unsolved problems in mathematics",
"Mathematical structures",
"Fields of abstract algebra",
"Conjectures",
"Category theory",
"Mathematical problems",
"Commutative algebra",
"Homological algebra"
] |
491,022 | https://en.wikipedia.org/wiki/Mass%20in%20special%20relativity | The word "mass" has two meanings in special relativity: invariant mass (also called rest mass) is an invariant quantity which is the same for all observers in all reference frames, while the relativistic mass is dependent on the velocity of the observer. According to the concept of mass–energy equivalence, invariant mass is equivalent to rest energy, while relativistic mass is equivalent to relativistic energy (also called total energy).
The term "relativistic mass" tends not to be used in particle and nuclear physics and is often avoided by writers on special relativity, in favor of referring to the body's relativistic energy. In contrast, "invariant mass" is usually preferred over rest energy. The measurable inertia and the warping of spacetime by a body in a given frame of reference is determined by its relativistic mass, not merely its invariant mass. For example, photons have zero rest mass but contribute to the inertia (and weight in a gravitational field) of any system containing them.
The concept is generalized in mass in general relativity.
Rest mass
The term mass in special relativity usually refers to the rest mass of the object, which is the Newtonian mass as measured by an observer moving along with the object. The invariant mass is another name for the rest mass of single particles. The more general invariant mass (calculated with a more complicated formula) loosely corresponds to the "rest mass" of a "system". Thus, invariant mass is a natural unit of mass used for systems which are being viewed from their center of momentum frame (COM frame), as when any closed system (for example a bottle of hot gas) is weighed, which requires that the measurement be taken in the center of momentum frame where the system has no net momentum. Under such circumstances the invariant mass is equal to the relativistic mass (discussed below), which is the total energy of the system divided by c2 (the speed of light squared).
The concept of invariant mass does not require bound systems of particles, however. As such, it may also be applied to systems of unbound particles in high-speed relative motion. Because of this, it is often employed in particle physics for systems which consist of widely separated high-energy particles. If such systems were derived from a single particle, then the calculation of the invariant mass of such systems, which is a never-changing quantity, will provide the rest mass of the parent particle (because it is conserved over time).
It is often convenient in calculation that the invariant mass of a system is the total energy of the system (divided by ) in the COM frame (where, by definition, the momentum of the system is zero). However, since the invariant mass of any system is also the same quantity in all inertial frames, it is a quantity often calculated from the total energy in the COM frame, then used to calculate system energies and momenta in other frames where the momenta are not zero, and the system total energy will necessarily be a different quantity than in the COM frame. As with energy and momentum, the invariant mass of a system cannot be destroyed or changed, and it is thus conserved, so long as the system is closed to all influences. (The technical term is isolated system meaning that an idealized boundary is drawn around the system, and no mass/energy is allowed across it.)
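In any frame, the invariant mass can be computed from the system's total energy and total momentum via the standard energy–momentum relation m·c^2 = sqrt(E^2 − (p·c)^2), a fact assumed here rather than stated above. A small Python sketch for a system of two photons, the textbook case where each particle is individually massless yet the system is not:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def invariant_mass(total_energy, total_momentum):
    """m = sqrt(E^2 - (p c)^2) / c^2, the same in every inertial frame."""
    return math.sqrt(total_energy**2 - (total_momentum * C) ** 2) / C**2

# Two photons of energy E moving in opposite directions: momenta cancel,
# so the system has invariant mass 2E/c^2 even though each photon is massless.
E = 1.0e-13  # J per photon, a hypothetical value
print(invariant_mass(2 * E, 0.0))  # ~2.225e-30 kg
```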
Relativistic mass
The relativistic mass is the sum total quantity of energy in a body or system (divided by c^2). Thus, the mass in the formula
E = m_rel · c^2
is the relativistic mass. For a particle of non-zero rest mass m moving at a speed v relative to the observer, one finds
m_rel = m / sqrt(1 − v^2/c^2).
In the center of momentum frame, v = 0 and the relativistic mass equals the rest mass. In other frames, the relativistic mass (of a body or system of bodies) includes a contribution from the "net" kinetic energy of the body (the kinetic energy of the center of mass of the body), and is larger the faster the body moves. Thus, unlike the invariant mass, the relativistic mass depends on the observer's frame of reference. However, for given single frames of reference and for isolated systems, the relativistic mass is also a conserved quantity.
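A small sketch of the relation above; the speed is a hypothetical value chosen so that the Lorentz factor comes out exactly:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def relativistic_mass(rest_mass, speed):
    """m_rel = m / sqrt(1 - v^2 / c^2); valid only for speed < c."""
    gamma = 1.0 / math.sqrt(1.0 - (speed / C) ** 2)
    return gamma * rest_mass

# At v = 0.6 c the Lorentz factor is exactly 1.25
print(relativistic_mass(1.0, 0.6 * C))  # 1.25 kg for a 1 kg rest mass
```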
The relativistic mass is also the proportionality factor between velocity and momentum: p = m_rel · v.
Newton's second law remains valid in the form F = d(m_rel · v)/dt.
When a body emits light of frequency ν and wavelength λ as a photon of energy E = hν, the mass of the body decreases by E/c^2 = hν/c^2, which some interpret as the relativistic mass of the emitted photon, since it also fulfills E = mc^2. Although some authors present relativistic mass as a fundamental concept of the theory, it has been argued that this is wrong, as the fundamentals of the theory relate to space–time. There is disagreement over whether the concept is pedagogically useful. It explains simply and quantitatively why a body subject to a constant acceleration cannot reach the speed of light, and why the mass of a system emitting a photon decreases. In relativistic quantum chemistry, relativistic mass is used to explain electron orbital contraction in heavy elements.
The notion of mass as a property of an object from Newtonian mechanics does not bear a precise relationship to the concept in relativity.
Relativistic mass is not referenced in nuclear and particle physics, and a survey of introductory textbooks in 2005 showed that only 5 of 24 texts used the concept, although it is still prevalent in popularizations.
If a stationary box contains many particles, its weight increases in its rest frame the faster the particles are moving. Any energy in the box (including the kinetic energy of the particles) adds to the mass, so that the relative motion of the particles contributes to the mass of the box. But if the box itself is moving (its center of mass is moving), there remains the question of whether the kinetic energy of the overall motion should be included in the mass of the system. The invariant mass is calculated excluding the kinetic energy of the system as a whole (calculated using the single velocity of the box, which is to say the velocity of the box's center of mass), while the relativistic mass is calculated including invariant mass plus the kinetic energy of the system which is calculated from the velocity of the center of mass.
Relativistic vs. rest mass
Relativistic mass and rest mass are both traditional concepts in physics, but the relativistic mass corresponds to the total energy. The relativistic mass is the mass of the system as it would be measured on a scale, but in some cases (such as the box above) this fact remains true only because the system on average must be at rest to be weighed (it must have zero net momentum, which is to say, the measurement is in its center of momentum frame). For example, if an electron in a cyclotron is moving in circles with a relativistic velocity, the mass of the cyclotron+electron system is increased by the relativistic mass of the electron, not by the electron's rest mass. But the same is also true of any closed system, such as an electron-and-box, if the electron bounces at high speed inside the box. It is only the lack of total momentum in the system (the system momenta sum to zero) which allows the kinetic energy of the electron to be "weighed". If the electron is stopped and weighed, or the scale were somehow sent after it, it would not be moving with respect to the scale, and again the relativistic and rest masses would be the same for the single electron (and would be smaller). In general, relativistic and rest masses are equal only in systems which have no net momentum and the system center of mass is at rest; otherwise they may be different.
The invariant mass is proportional to the value of the total energy in one reference frame, the frame where the object as a whole is at rest (as defined below in terms of center of mass). This is why the invariant mass is the same as the rest mass for single particles. However, the invariant mass also represents the measured mass when the center of mass is at rest for systems of many particles. This special frame where this occurs is also called the center of momentum frame, and is defined as the inertial frame in which the center of mass of the object is at rest (another way of stating this is that it is the frame in which the momenta of the system's parts add to zero). For compound objects (made of many smaller objects, some of which may be moving) and sets of unbound objects (some of which may also be moving), only the center of mass of the system is required to be at rest, for the object's relativistic mass to be equal to its rest mass.
A so-called massless particle (such as a photon, or a theoretical graviton) moves at the speed of light in every frame of reference. In this case there is no transformation that will bring the particle to rest. The total energy of such particles becomes smaller and smaller in frames which move faster and faster in the same direction. As such, they have no rest mass, because they can never be measured in a frame where they are at rest. This property of having no rest mass is what causes these particles to be termed "massless". However, even massless particles have a relativistic mass, which varies with their observed energy in various frames of reference.
Invariant mass
The invariant mass is the ratio of four-momentum (the four-dimensional generalization of classical momentum) to four-velocity:
$$p^\mu = m\,u^\mu$$
and is also the ratio of four-acceleration to four-force when the rest mass is constant. The four-dimensional form of Newton's second law is:
$$F^\mu = m\,A^\mu.$$
Relativistic energy–momentum equation
The relativistic expressions for $E$ and $p$ obey the relativistic energy–momentum relation:
$$E^2 - (pc)^2 = (mc^2)^2$$
where $m$ is the rest mass, or the invariant mass for systems, and $E$ is the total energy.
The equation is also valid for photons, which have $m = 0$:
$$E^2 - (pc)^2 = 0$$
and therefore
$$E = pc.$$
A photon's momentum is a function of its energy, but it is not proportional to the velocity, which is always $c$.
For an object at rest, the momentum $p$ is zero, therefore
$$E = mc^2.$$
Note that the formula $E = mc^2$ is true only for particles or systems with zero momentum.
The rest mass is only proportional to the total energy in the rest frame of the object.
When the object is moving, the total energy is given by
$$E = \sqrt{(mc^2)^2 + (pc)^2}.$$
To find the form of the momentum and energy as a function of velocity, it can be noted that the four-velocity, which is proportional to $(c, \mathbf{v})$, is the only four-vector associated with the particle's motion, so that if there is a conserved four-momentum $(E, \mathbf{p}c)$, it must be proportional to this vector. This allows expressing the ratio of energy to momentum as
$$pc = E\,\frac{v}{c},$$
resulting in a relation between $E$ and $v$:
$$E^2 = (mc^2)^2 + E^2\,\frac{v^2}{c^2}.$$
This results in
$$E = \frac{mc^2}{\sqrt{1 - v^2/c^2}}$$
and
$$p = \frac{mv}{\sqrt{1 - v^2/c^2}}.$$
These expressions can be written as
$$E_0 = mc^2, \qquad E = \gamma mc^2, \qquad p = m\gamma v,$$
where the factor
$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.$$
When working in units where $c = 1$, known as the natural unit system, all the relativistic equations are simplified and the quantities energy, momentum, and mass have the same natural dimension:
$$m^2 = E^2 - p^2.$$
The equation is often written this way because the difference $E^2 - p^2$ is the relativistic length of the energy–momentum four-vector, a length which is associated with rest mass or invariant mass in systems. Where $m > 0$ and $p = 0$, this equation again expresses the mass–energy equivalence $E = m$.
The mass of composite systems
The rest mass of a composite system is not the sum of the rest masses of the parts, unless all the parts are at rest. The total mass of a composite system includes the kinetic energy and field energy in the system.
The total energy of a composite system can be determined by adding together the energies of its components. The total momentum $\vec{p}$ of the system, a vector quantity, can also be computed by adding together the momenta of all its components. Given the total energy $E$ and the length (magnitude) $p$ of the total momentum vector $\vec{p}$, the invariant mass is given by:
$$m = \frac{\sqrt{E^2 - (pc)^2}}{c^2}.$$
In the system of natural units where $c = 1$, for systems of particles (whether bound or unbound) the total system invariant mass is given equivalently by the following:
$$m^2 = \left(\sum E\right)^2 - \left\|\sum \vec{p}\,\right\|^2.$$
Where, again, the particle momenta are first summed as vectors, and then the square of their resulting total magnitude (Euclidean norm) is used. This results in a scalar number, which is subtracted from the scalar value of the square of the total energy.
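A minimal Python sketch of this bookkeeping (illustrative, not from the article), working in natural units with c = 1:

```python
import math

def invariant_mass(particles):
    """particles: list of (E, px, py, pz) in natural units (c = 1).
    Momenta are summed as vectors before squaring, as described above."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

# Two back-to-back photons, each of energy 1: individually massless,
# yet the two-photon system has invariant mass 2.
photons = [(1.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, -1.0)]
print(invariant_mass(photons))  # 2.0
```

The example also previews a point made later in the article: a system of massless particles can itself have nonzero invariant mass.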
For such a system, in the special center of momentum frame where momenta sum to zero, again the system mass (called the invariant mass) corresponds to the total system energy or, in units where $c = 1$, is identical to it. This invariant mass for a system remains the same quantity in any inertial frame, although the system total energy and total momentum are functions of the particular inertial frame which is chosen, and will vary in such a way between inertial frames as to keep the invariant mass the same for all observers. Invariant mass thus functions for systems of particles in the same capacity as "rest mass" does for single particles.
Note that the invariant mass of an isolated system (i.e., one closed to both mass and energy) is also independent of observer or inertial frame, and is a constant, conserved quantity for isolated systems and single observers, even during chemical and nuclear reactions. The concept of invariant mass is widely used in particle physics, because the invariant mass of a particle's decay products is equal to its rest mass. This is used to make measurements of the mass of particles like the Z boson or the top quark.
Conservation versus invariance of mass in special relativity
Total energy is an additive conserved quantity (for single observers) in systems and in reactions between particles, but rest mass (in the sense of being a sum of particle rest masses) may not be conserved through an event in which rest masses of particles are converted to other types of energy, such as kinetic energy. Finding the sum of individual particle rest masses would require multiple observers, one for each particle rest inertial frame, and these observers ignore individual particle kinetic energy. Conservation laws require a single observer and a single inertial frame.
In general, for isolated systems and single observers, relativistic mass is conserved (each observer sees it constant over time), but is not invariant (that is, different observers see different values). Invariant mass, however, is both conserved and invariant (all single observers see the same value, which does not change over time).
The relativistic mass corresponds to the energy, so conservation of energy automatically means that relativistic mass is conserved for any given observer and inertial frame. However, this quantity, like the total energy of a particle, is not invariant. This means that, even though it is conserved for any observer during a reaction, its absolute value will change with the frame of the observer, and for different observers in different frames.
By contrast, the rest mass and invariant masses of systems and particles are both conserved and invariant. For example: A closed container of gas (closed to energy as well) has a system "rest mass" in the sense that it can be weighed on a resting scale, even while it contains moving components. This mass is the invariant mass, which is equal to the total relativistic energy of the container (including the kinetic energy of the gas) only when it is measured in the center of momentum frame. Just as is the case for single particles, the calculated "rest mass" of such a container of gas does not change when it is in motion, although its "relativistic mass" does change.
The container may even be subjected to a force which gives it an overall velocity, or else (equivalently) it may be viewed from an inertial frame in which it has an overall velocity (that is, technically, a frame in which its center of mass has a velocity). In this case, its total relativistic mass and energy increase. However, in such a situation, although the container's total relativistic energy and total momentum increase, these energy and momentum increases subtract out in the invariant mass definition, so that the moving container's invariant mass will be calculated as the same value as if it were measured at rest, on a scale.
Closed (meaning totally isolated) systems
All conservation laws in special relativity (for energy, mass, and momentum) require isolated systems, meaning systems that are totally isolated, with no mass–energy allowed in or out, over time. If a system is isolated, then both total energy and total momentum in the system are conserved over time for any observer in any single inertial frame, though their absolute values will vary, according to different observers in different inertial frames. The invariant mass of the system is also conserved, but does not change with different observers. This is also the familiar situation with single particles: all observers calculate the same particle rest mass (a special case of the invariant mass) no matter how they move (what inertial frame they choose), but different observers see different total energies and momenta for the same particle.
Conservation of invariant mass also requires the system to be enclosed so that no heat and radiation (and thus invariant mass) can escape. As in the example above, a physically enclosed or bound system does not need to be completely isolated from external forces for its mass to remain constant, because for bound systems these merely act to change the inertial frame of the system or the observer. Though such actions may change the total energy or momentum of the bound system, these two changes cancel, so that there is no change in the system's invariant mass. This is just the same result as with single particles: their calculated rest mass also remains constant no matter how fast they move, or how fast an observer sees them move.
On the other hand, for systems which are unbound, the "closure" of the system may be enforced by an idealized surface, inasmuch as no mass–energy can be allowed into or out of the test-volume over time, if conservation of system invariant mass is to hold during that time. If a force is allowed to act on (do work on) only one part of such an unbound system, this is equivalent to allowing energy into or out of the system, and the condition of "closure" to mass–energy (total isolation) is violated. In this case, conservation of invariant mass of the system also will no longer hold. Such a loss of rest mass in systems when energy is removed, according to $\Delta m = \Delta E/c^2$, where $\Delta E$ is the energy removed and $\Delta m$ is the change in rest mass, reflects changes of mass associated with movement of energy, not "conversion" of mass to energy.
The system invariant mass vs. the individual rest masses of parts of the system
Again, in special relativity, the rest mass of a system is not required to be equal to the sum of the rest masses of the parts (a situation which would be analogous to gross mass-conservation in chemistry). For example, a massive particle can decay into photons which individually have no mass, but which (as a system) preserve the invariant mass of the particle which produced them. Also a box of moving non-interacting particles (e.g., photons, or an ideal gas) will have a larger invariant mass than the sum of the rest masses of the particles which compose it. This is because the total energy of all particles and fields in a system must be summed, and this quantity, as seen in the center of momentum frame, and divided by $c^2$, is the system's invariant mass.
In special relativity, mass is not "converted" to energy, for all types of energy still retain their associated mass. Neither energy nor invariant mass can be destroyed in special relativity, and each is separately conserved over time in closed systems. Thus, a system's invariant mass may change only because invariant mass is allowed to escape, perhaps as light or heat. Thus, when reactions (whether chemical or nuclear) release energy in the form of heat and light, if the heat and light is not allowed to escape (the system is closed and isolated), the energy will continue to contribute to the system rest mass, and the system mass will not change. Only if the energy is released to the environment will the mass be lost; this is because the associated mass has been allowed out of the system, where it contributes to the mass of the surroundings.
History of the relativistic mass concept
Transverse and longitudinal mass
Concepts that were similar to what nowadays is called "relativistic mass" were already developed before the advent of special relativity. For example, it was recognized by J. J. Thomson in 1881 that a charged body is harder to set in motion than an uncharged body, which was worked out in more detail by Oliver Heaviside (1889) and George Frederick Charles Searle (1897). So the electrostatic energy behaves as having some sort of electromagnetic mass $m_\text{em} = \tfrac{4}{3}E_\text{em}/c^2$, which can increase the normal mechanical mass of the bodies.
Then, it was pointed out by Thomson and Searle that this electromagnetic mass also increases with velocity. This was further elaborated by Hendrik Lorentz (1899, 1904) in the framework of Lorentz ether theory. He defined mass as the ratio of force to acceleration, not as the ratio of momentum to velocity, so he needed to distinguish between the mass $m_L = \gamma^3 m$ parallel to the direction of motion and the mass $m_T = \gamma m$ perpendicular to the direction of motion (where $\gamma = 1/\sqrt{1 - v^2/c^2}$ is the Lorentz factor, $v$ is the relative velocity between the ether and the object, and $c$ is the speed of light). Only when the force is perpendicular to the velocity is Lorentz's mass equal to what is now called "relativistic mass". Max Abraham (1902) called $m_L$ the longitudinal mass and $m_T$ the transverse mass (although Abraham used more complicated expressions than Lorentz's relativistic ones). So, according to Lorentz's theory no body can reach the speed of light because the mass becomes infinitely large at this velocity.
Albert Einstein also initially used the concepts of longitudinal and transverse mass in his 1905 electrodynamics paper (equivalent to those of Lorentz, but with a different transverse mass, due to an unfortunate force definition which was later corrected), and in another paper in 1906. However, he later abandoned velocity-dependent mass concepts (see quote at the end of next section).
The precise relativistic expression (which is equivalent to Lorentz's) relating force and acceleration for a particle with non-zero rest mass $m$ moving in the x direction with velocity $v$ and associated Lorentz factor $\gamma$ is
$$f_x = m\gamma^3 a_x = m_L a_x,$$
$$f_y = m\gamma a_y = m_T a_y,$$
$$f_z = m\gamma a_z = m_T a_z.$$
Relativistic mass
In special relativity, an object that has nonzero rest mass cannot travel at the speed of light. As the object approaches the speed of light, the object's energy and momentum increase without bound.
In the first years after 1905, following Lorentz and Einstein, the terms longitudinal and transverse mass were still in use. However, those expressions were replaced by the concept of relativistic mass, an expression which was first defined by Gilbert N. Lewis and Richard C. Tolman in 1909. They defined the total energy and mass of a body as
$$m_\text{rel} = \frac{E}{c^2},$$
and of a body at rest
$$m_0 = \frac{E_0}{c^2},$$
with the ratio
$$\frac{m_\text{rel}}{m_0} = \gamma.$$
Tolman in 1912 further elaborated on this concept, and stated: "the expression $m_0(1 - v^2/c^2)^{-1/2}$ is best suited for the mass of a moving body."
In 1934, Tolman argued that the relativistic mass formula $m = E/c^2$ holds for all particles, including those moving at the speed of light, while the formula $m = \gamma m_0$ only applies to a slower-than-light particle (a particle with a nonzero rest mass). Tolman remarked on this relation that "We have, moreover, of course the experimental verification of the expression in the case of moving electrons ... We shall hence have no hesitation in accepting the expression as correct in general for the mass of a moving particle."
When the relative velocity is zero, $\gamma$ is simply equal to 1, and the relativistic mass is reduced to the rest mass, as one can see in the next two equations below. As the velocity increases toward the speed of light $c$, the denominator of the right side approaches zero, and consequently $\gamma$ approaches infinity. While Newton's second law remains valid in the form
$$f = \frac{d(m_\text{rel}v)}{dt},$$
the derived form $f = m_\text{rel}a$ is not valid because $m_\text{rel}$ in $d(m_\text{rel}v)/dt$ is generally not a constant (see the section above on transverse and longitudinal mass).
Even though Einstein initially used the expressions "longitudinal" and "transverse" mass in two papers (see previous section), in his first paper on $E = mc^2$ (1905) he treated $m$ as what would now be called the rest mass. Einstein never derived an equation for "relativistic mass", and in later years he expressed his dislike of the idea.
Popular science and textbooks
The concept of relativistic mass is widely used in popular science writing and in high school and undergraduate textbooks. Authors such as Okun and A. B. Arons have argued against this as archaic and confusing, and not in accord with modern relativistic theory.
Arons wrote:
For many years it was conventional to enter the discussion of dynamics through derivation of the relativistic mass, that is the mass–velocity relation, and this is probably still the dominant mode in textbooks. More recently, however, it has been increasingly recognized that relativistic mass is a troublesome and dubious concept. [See, for example, Okun (1989).]... The sound and rigorous approach to relativistic dynamics is through direct development of that expression for momentum that ensures conservation of momentum in all frames:
$$p = \frac{m_0 v}{\sqrt{1 - v^2/c^2}},$$
rather than through relativistic mass.
C. Alder takes a similarly dismissive stance on mass in relativity. Writing on the subject, he says that "its introduction into the theory of special relativity was much in the way of a historical accident", noting the widespread knowledge of $E = mc^2$ and how the public's interpretation of the equation has largely informed how it is taught in higher education. He instead supposes that the difference between rest and relativistic mass should be explicitly taught, so that students know why mass should be thought of as invariant "in most discussions of inertia".
Many contemporary authors such as Taylor and Wheeler avoid using the concept of relativistic mass altogether.
While spacetime has the unbounded geometry of Minkowski space, the velocity-space is bounded by $c$ and has the geometry of hyperbolic geometry, where relativistic mass plays a role analogous to that of Newtonian mass in the barycentric coordinates of Euclidean geometry. The connection of velocity to hyperbolic geometry enables the 3-velocity-dependent relativistic mass to be related to the 4-velocity Minkowski formalism.
See also
Tests of relativistic energy and momentum
References
External links
Usenet Physics FAQ
"Does mass change with velocity?" by Philip Gibbs et al., 2002, retrieved August 10, 2006
"What is the mass of a photon?" by Matt Austern et al., 1998, retrieved June 27, 2007
Mass as a Variable Quantity
Special relativity
Mass | Mass in special relativity | [
"Physics",
"Mathematics"
] | 5,567 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Special relativity",
"Theory of relativity",
"Wikipedia categories named after physical quantities",
"Matter"
] |
491,097 | https://en.wikipedia.org/wiki/Variational%20principle | In science and especially in mathematical studies, a variational principle is one that enables a problem to be solved using calculus of variations, which concerns finding functions that optimize the values of quantities that depend on those functions. For example, the problem of determining the shape of a hanging chain suspended at both ends—a catenary—can be solved using variational calculus, and in this case, the variational principle is the following: The solution is a function that minimizes the gravitational potential energy of the chain.
History
Physics
The history of the variational principle in classical mechanics started with Maupertuis's principle in the 18th century.
Math
Felix Klein's 1872 Erlangen program attempted to identify invariants under a group of transformations.
Examples
In mathematics
Ekeland's variational principle in mathematical optimization
The finite element method
The variational principle relating topological entropy and Kolmogorov–Sinai entropy.
In physics
The Rayleigh–Ritz method for solving boundary-value problems in elasticity and wave propagation
Fermat's principle in geometrical optics
Hamilton's principle in classical mechanics
Maupertuis' principle in classical mechanics
The principle of least action in mechanics, electromagnetic theory, and quantum mechanics
The variational method in quantum mechanics
Hellmann–Feynman theorem
Gauss's principle of least constraint and Hertz's principle of least curvature
Hilbert's action principle in general relativity, leading to the Einstein field equations.
Palatini variation
Hartree–Fock method
Density functional theory
Gibbons–Hawking–York boundary term
Variational quantum eigensolver
References
External links
The Feynman Lectures on Physics Vol. II Ch. 19: The Principle of Least Action
S T Epstein 1974 "The Variation Method in Quantum Chemistry". (New York: Academic)
C Lanczos, The Variational Principles of Mechanics (Dover Publications)
R K Nesbet 2003 "Variational Principles and Methods In Theoretical Physics and Chemistry". (New York: Cambridge U.P.)
S K Adhikari 1998 "Variational Principles for the Numerical Solution of Scattering Problems". (New York: Wiley)
C G Gray, G Karl G and V A Novikov 1996, Ann. Phys. 251 1.
C.G. Gray, G. Karl, and V. A. Novikov, "Progress in Classical and Quantum Variational Principles". 11 December 2003. physics/0312071 Classical Physics.
John Venables, "The Variational Principle and some applications". Dept of Physics and Astronomy, Arizona State University, Tempe, Arizona (Graduate Course: Quantum Physics)
Andrew James Williamson, "The Variational Principle -- Quantum monte carlo calculations of electronic excitations". Robinson College, Cambridge, Theory of Condensed Matter Group, Cavendish Laboratory. September 1996. (dissertation of Doctor of Philosophy)
Kiyohisa Tokunaga, "Variational Principle for Electromagnetic Field". Total Integral for Electromagnetic Canonical Action, Part Two, Relativistic Canonical Theory of Electromagnetics, Chapter VI
Komkov, Vadim (1986) Variational principles of continuum mechanics with engineering applications. Vol. 1. Critical points theory. Mathematics and its Applications, 24. D. Reidel Publishing Co., Dordrecht.
Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.
Theoretical physics
Articles containing proofs
Principles | Variational principle | [
"Physics",
"Mathematics"
] | 693 | [
"Mathematical principles",
"Variational principles",
"Theoretical physics",
"Articles containing proofs"
] |
26,802,549 | https://en.wikipedia.org/wiki/Lean%20enterprise | Lean enterprise is a practice focused on value creation for the end customer with minimal waste and processes. Principals derive from lean manufacturing and Six Sigma (or Lean Six Sigma). The lean principles were popularized by Toyota in the automobile manufacturing industry, and subsequently the electronics and internet software industries.
Principles and variants
Principles for lean enterprise derive from lean manufacturing and Six Sigma principles:
There are five principles, originating from lean manufacturing, outlined by James Womack and Daniel Jones:
Value: Understand clearly what value the customer wants for the product or service.
Value Stream: The entire flow of a product's or service's life cycle. In other words, from raw materials, production of the product or service, customer delivery, customer use, and final disposal.
Flow: Keep the value stream moving. If it's not moving, it's creating waste and less value for the customer.
Pull: Do not make anything until the customer orders it.
Perfection: Systematically and continuously remove root causes of poor quality from production processes.
There are key lean enterprise principles originating from Lean Six Sigma principles. These principles focus on eliminating 8 varieties of waste (muda) and form the acronym DOWNTIME:
Defects
Overproduction
Waiting
Non-Utilized Talent
Transportation
Inventory
Motion
Extra-Processing
These 8 varieties of waste are derivative from the original 7 wastes as defined in the Toyota Production System (TPS). They are:
Transportation
Inventory
Motion
Waiting
Overproduction
Over-processing
Defects
The 8th waste of non-utilized talent was not recognized until post-Americanization of the Toyota Production System (TPS).
The lean startup principles, developed in 2008 from lean manufacturing, also now contribute to understanding of lean enterprise:
Eliminate wasteful practices
Increase value producing practices
Customer feedback during product development
Build what customers want
KPIs
Continuous deployment process
History
Lean enterprise has historically been associated with lean manufacturing and Six Sigma (or Lean Six Sigma), because lean principles were popularized by Toyota in the automobile manufacturing industry and subsequently adopted in the electronics and internet software industries.
Early 1900s: Ford, GM & Toyota Systems
Henry Ford developed a process called assembly line production: a manufacturing process in which parts are added in sequence as the assembly moves from work station to work station, until the final assembly is produced.
Alfred Sloan of General Motors further developed the concept of assembly line production by building a process called mass production that allowed scale and variety. This process enabled large amounts of standardized products to run through assembly lines while still being able to produce more variety and compete against Ford's single offering.
Kiichiro Toyoda studied the Ford production system and adapted the process for smaller production quantities. Together with Taiichi Ohno, he built a production system for Toyota called just-in-time manufacturing. Kaizen, the process of continuous improvement, was developed in the 1950s by Eiji Toyoda along with the Toyota Production System (TPS).
1980s & 1990s: Motorola
New innovations in lean enterprise moved away from machine technology to electronic technology.
Another development was management techniques from Motorola commonly referred to as Six Sigma. This management technique was built off of mass production principles with more focus on minimizing variability. Applying Six Sigma principles led to reduced cycle time, reduced pollution, reduced costs, increased customer satisfaction, and increased profits.
1990s & 2000s: Internet companies
New innovations in lean enterprise moved away from electronic manufacturing to internet and software technology. Before, during, and after the dot-com bubble, internet and software enterprises originally did not place emphasis on lean enterprise principles for efficient usage and allocation of capital and labor due to accessible funds from venture capital and capital markets. The idea of "build it and they will come" became common practice as a result.
After the dot-com bubble, inspired by the Agile Manifesto, internet and software companies began operating under agile software development practices such as Extreme programming. Along with the agile software movement, companies (especially startups) applied both lean enterprise and agile software principles together in order to develop new products or even new companies more efficiently and based on validated customer demand. Very early practices of lean enterprise and agile software principles were commonly referred to as lean startup.
After 2010, more and more enterprises adopted this new branch of lean enterprise (lean startup), since it provides principles and methodologies for non-internet enterprises to enter new markets or offer goods and services in new form factors with less time, labor, and capital. For internet and software enterprises, the lean startup variant of lean enterprise enabled them to remain competitive with new technologies and services rapidly coming to market, without exclusively resorting to mergers and acquisitions, and to retain internal innovation competency.
See also
Lean manufacturing
Lean Six Sigma
Toyota Production System
Lean startup
Management fad
References
Lean manufacturing | Lean enterprise | [
"Engineering"
] | 983 | [
"Lean manufacturing"
] |
26,808,371 | https://en.wikipedia.org/wiki/Semen%20cryopreservation | Semen cryopreservation (commonly called sperm banking or sperm freezing) is a procedure to preserve sperm cells. Semen can be used successfully indefinitely after cryopreservation. It can be used for sperm donation where the recipient wants the treatment in a different time or place, or as a means of preserving fertility for men undergoing vasectomy or treatments that may compromise their fertility, such as chemotherapy, radiation therapy or surgery. It is also often used by trans women prior to medically transitioning in ways that affect fertility, such as feminizing hormone therapy and orchiectomies.
Freezing
The most common cryoprotectant used for semen is glycerol (10% in culture medium). Often sucrose or other di-, trisaccharides are added to glycerol solution. Cryoprotectant media may be supplemented with either egg yolk or soy lecithin, with the two having no statistically significant differences compared to each other regarding motility, morphology, ability to bind to hyaluronate in vitro, or DNA integrity after thawing.
Additional cryoprotectants can be used to increase sperm viability and fertility rates post-freezing. Treatment of sperm with heparin binding proteins prior to cryopreservation showed decreased cryoinjury and generation of ROS. The addition of nerve growth factor as a cryoprotectant decreases sperm cell death rates and increased motility after thawing. Incorporation of cholesterol into sperm cell membranes with the use of cyclodextrins prior to freezing also increases sperm viability.
There are different techniques for freezing semen samples:
- Block freezing (slow): First the sample is washed and the cryoprotectant is added. The cryoprotectant solution protects the sperm from damage during freezing and thawing: it helps prevent ice crystals from forming in the sperm, which can damage the cell membrane and affect viability. The sperm is separated into aliquots, frozen gradually, then immersed in liquid nitrogen and stored in sperm banks. It is a slow procedure.
- Freezing in pellets (fast): The same procedure, but the semen is frozen as pellets on solid CO2 (dry ice). The pellets are then placed in aliquots, transferred into liquid nitrogen (N2), and stored.
This is done when the seminal samples are considered valuable, for example if the sample is from a person who wants to undergo ICSI or IVF, since it allows the sample to be thawed little by little and the use of the semen to be optimized.
- Vitrification (ultrafast): A more expensive procedure. Sperm vitrification is an ultra-fast cryopreservation method that involves direct exposure to liquid nitrogen. In this way, the crystallization of intracellular water, and the cryodamage it causes, is avoided without using permeable cryoprotectants, which can carry mutagenic risk.
Vitrification prevents ice crystals, the main cause of irreparable deterioration or cell death, from forming, so the samples maintain their fertilizing capacity and behave similarly to a fresh sample; it gives superior post-thaw motility and cryosurvival. The technique, developed in Japan, is used in centers around the world. Cooling is extremely fast (on the order of −23,000 °C/min), which avoids the formation of small ice crystals and prevents the "knife" effect.
Packaging Method
The packaging method is a crucial aspect of cryopreservation processes, as it directly affects thermal stability, storage capacity, and the efficiency of sample thawing. Several packaging techniques are used for sperm cryopreservation, each with its advantages and disadvantages:
1. Cryo-tops (Cryovials or Cryotubes)
Advantages:
- Provide a good surface for sample identification.
- Offer greater thermal stability during the process.
Disadvantages:
- Nitrogen may come into contact with the stored material.
- Require more storage space.
- Exhibit a less uniform freezing rate.
2. Goblets
Advantages:
- Low cost.
- Allow partial thawing of samples.
- Ensure uniform freezing.
Disadvantages:
- Nitrogen comes into contact with the sample.
- Pills stored in these devices are more sensitive to temperature changes.
3. Straws
Advantages:
- Require less storage space.
- Enable a more uniform freezing rate.
- Allow rapid thawing.
Disadvantages:
- More sensitive to temperature changes.
- Offer a smaller surface for identification.
- The cost of filling and sealing equipment is high.
Thawing
Thawing at 40 °C seems to result in optimal sperm motility. On the other hand, the exact thawing temperature seems to have only a minor effect on sperm viability, acrosomal status, ATP content, and DNA. As with freezing, various techniques have been developed for the thawing process, both discussed by Di Santo et al.
Refreezing
In terms of the level of sperm DNA fragmentation, up to three cycles of freezing and thawing can be performed without causing a level of risk significantly higher than following a single cycle of freezing and thawing. This is provided that samples are refrozen in their original cryoprotectant and are not going through sperm washing or other alteration in between, and provided that they are separated by density gradient centrifugation or swim-up before use in assisted reproduction technology.
Effect on quality
Some evidence suggests an increase in single-strand breaks, condensation and fragmentation of DNA in sperm after cryopreservation. This can potentially increase the risk of mutations in offspring DNA. Antioxidants and the use of well-controlled cooling regimes could potentially improve outcomes.
In long-term follow-up studies, no evidence has been found either of an increase in birth defects or chromosomal abnormalities in people conceived from cryopreserved sperm compared with the general population.
See also
Frozen bovine semen
Oocyte cryopreservation
References
Assisted reproductive technology
Cryopreservation
Semen | Semen cryopreservation | [
"Chemistry",
"Biology"
] | 1,251 | [
"Cryopreservation",
"Cryobiology",
"Assisted reproductive technology",
"Medical technology"
] |
26,809,738 | https://en.wikipedia.org/wiki/C23H28O6 | The molecular formula C23H28O6 (molar mass: 400.46 g/mol, exact mass: 400.1886 u) may refer to:
Enprostil
Molecular formulas | C23H28O6 | [
"Physics",
"Chemistry"
] | 42 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
8,334,381 | https://en.wikipedia.org/wiki/Coble%20creep | In materials science, Coble creep, a form of diffusion creep, is a mechanism for deformation of crystalline solids. Contrasted with other diffusional creep mechanisms, Coble creep is similar to Nabarro–Herring creep in that it is dominant at lower stress levels and higher temperatures than creep mechanisms utilizing dislocation glide. Coble creep occurs through the diffusion of atoms in a material along grain boundaries. This mechanism is observed in polycrystals or along the surface in a single crystal, which produces a net flow of material and a sliding of the grain boundaries.
American materials scientist Robert L. Coble first reported his theory of how materials creep across grain boundaries and at high temperatures in alumina. Here he famously noticed a different creep mechanism that was more dependent on the size of the grain.
The strain rate in a material experiencing Coble creep is given by
$$\dot{\varepsilon}_{\text{Coble}} = A\,\frac{\sigma\,\Omega}{k_B T}\,\frac{\delta\,D_{0,\text{gb}}}{d^3}\,e^{-(Q_f + Q_{\text{gb}})/(k_B T)}$$
(a numerical evaluation of this expression is sketched after the list of symbols below),
where
$A$ is a geometric prefactor,
$\sigma$ is the applied stress,
$d$ is the average grain diameter,
$\delta$ is the grain boundary width,
$D_{0,\text{gb}}$ is the diffusion coefficient (pre-exponential factor) in the grain boundary,
$Q_f$ is the vacancy formation energy,
$Q_{\text{gb}}$ is the activation energy for diffusion along the grain boundary,
$k_B$ is the Boltzmann constant,
$T$ is the thermodynamic temperature (in kelvins), and
$\Omega$ is the atomic volume for the material.
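As a sketch only (the function name and every numerical value below are placeholders, not data from the article), the expression as reconstructed above can be evaluated directly once material parameters are known:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def coble_strain_rate(A, sigma, d, delta, D0_gb, Qf, Qgb, T, omega):
    """Coble creep rate in the form reconstructed above:
    A * (sigma*omega)/(kB*T) * (delta*D0_gb/d**3) * exp(-(Qf+Qgb)/(kB*T))."""
    return (A * sigma * omega / (kB * T)
            * delta * D0_gb / d**3
            * math.exp(-(Qf + Qgb) / (kB * T)))

# Placeholder inputs chosen only to exercise the d**-3 scaling:
for d in (5e-6, 10e-6, 20e-6):  # grain diameters, m
    rate = coble_strain_rate(A=1.0, sigma=10e6, d=d, delta=5e-10,
                             D0_gb=1e-5, Qf=1.0e-19, Qgb=1.0e-19,
                             T=1200.0, omega=1.2e-29)
    print(f"d = {d:.0e} m  ->  strain rate = {rate:.3e} 1/s")
```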
Derivation
Coble creep, a diffusive mechanism, is driven by a vacancy (or mass) concentration gradient. The change in vacancy concentration from its equilibrium value $C_v$ is given by
$$\Delta C_v = 2\,C_v\,\frac{\sigma\Omega}{k_B T}.$$
This can be seen by noting that $\Delta C_v = C_v e^{\sigma\Omega/(k_B T)} - C_v e^{-\sigma\Omega/(k_B T)}$ and taking a high temperature expansion, where the first term on the right hand side is the vacancy concentration from tensile stress and the second term is the concentration due to compressive stress. This change in concentration occurs perpendicular to the applied stress axis, while parallel to the stress there is no change in vacancy concentration (due to the resolved stress and work being zero).
We continue by assuming a spherical grain, to be consistent with the derivation for Nabarro–Herring creep; however, we will absorb geometric constants into a proportionality constant $A'$. If we consider the vacancy concentration across the grain under an applied tensile stress, then we note that there is a larger vacancy concentration at the equator (perpendicular to the applied stress) than at the poles (parallel to the applied stress). Therefore, a vacancy flux exists between the poles and equator of the grain. The vacancy flux is given by Fick's first law at the boundary: the diffusion coefficient $D_v$ times the gradient of vacancy concentration. For the gradient, we take the average value given by
$$\nabla C_v \approx \frac{\Delta C_v}{\pi d/4},$$
where we've divided the total concentration difference by the arclength between equator and pole, then multiply by the boundary width $\delta$ and length $d$ to obtain the vacancy current:
$$J \approx A' D_v \frac{\Delta C_v}{\pi d/4}\,\delta\,d,$$
where $A'$ is a proportionality constant. From here, we note that the volume change due to a flux of vacancies diffusing from a source of area $\delta d$ is the vacancy flux times atomic volume $\Omega$:
$$\frac{dV}{dt} = J\,\Omega = d^3\,\dot{\varepsilon},$$
where the second equality follows from the definition of strain rate, $\dot{\varepsilon} = \frac{1}{d}\frac{d(\Delta d)}{dt}$, together with $\frac{dV}{dt} \approx d^2\frac{d(\Delta d)}{dt}$. From here we can read off the strain rate:
$$\dot{\varepsilon} = A\,\frac{\delta\,D_{\text{gb}}\,\sigma\,\Omega}{d^3\,k_B T},$$
where $A$ has absorbed the geometric constants and $D_{\text{gb}} \propto C_v D_v$ is the vacancy diffusivity through the grain boundary; its temperature dependence carries the vacancy formation and boundary-migration activation energies that appear in the exponential above.
Comparison to other creep mechanisms
Nabarro–Herring
Coble creep and Nabarro–Herring creep are closely related mechanisms. They are both diffusion processes, driven by the same concentration gradient of vacancies, occur in high temperature, low stress environments, and their derivations are similar. For both mechanisms, the strain rate is linearly proportional to the applied stress and there is an exponential temperature dependence. The difference is that for Coble creep, mass transport occurs along grain boundaries, whereas for Nabarro–Herring the diffusion occurs through the crystal. Because of this, Nabarro–Herring creep does not have a dependence on grain boundary thickness $\delta$, and has a weaker dependence on grain size $d$. In Nabarro–Herring creep, the strain rate is proportional to $d^{-2}$ as opposed to the $d^{-3}$ dependence for Coble creep. When considering the net diffusional creep rate, the sum of both diffusional rates is vital as they work as parallel processes.
The activation energy for Nabarro–Herring creep is, in general, different than that of Coble creep. This can be used to identify which mechanism is dominant. For example, the activation energy for dislocation climb is the same as for Nabarro–Herring, so by comparing the temperature dependence of low and high stress regimes, one can determine whether Coble creep or Nabarro–Herring creep is dominant.
Researchers commonly use these relationships to determine which mechanism is dominant in a material; by varying the grain size and measuring how the strain rate is affected, they can determine the value of $n$ in $\dot{\varepsilon} \propto d^{-n}$ and conclude whether Coble or Nabarro–Herring creep is dominant.
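A sketch of that procedure on synthetic, noise-free data (illustrative only): the slope of log(strain rate) versus log(grain size) gives −n, so n ≈ 3 points to Coble creep and n ≈ 2 to Nabarro–Herring creep.

```python
import math

# Synthetic measurements with rate proportional to d**-3 (Coble-like).
grain_sizes = [1e-6, 2e-6, 5e-6, 10e-6]              # m
rates = [1e-6 * (1e-6 / d)**3 for d in grain_sizes]  # 1/s

xs = [math.log(d) for d in grain_sizes]
ys = [math.log(r) for r in rates]

# Least-squares slope of log(rate) vs log(d):
m = len(xs)
xbar, ybar = sum(xs) / m, sum(ys) / m
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(f"slope = {slope:.2f}  ->  n = {-slope:.2f}")  # n ~ 3: Coble creep
```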
Dislocation creep
Under moderate to high stress, the dominant creep mechanism is no longer linear in the applied stress $\sigma$. Dislocation creep, sometimes called power law creep (PLC), has a power law dependence on the applied stress, $\dot{\varepsilon} \propto \sigma^n$, with $n$ ranging from 3 to 8. Dislocation movement is related to the atomic and lattice structure of the crystal, so different materials respond differently to stress, as opposed to Coble creep, which is always linear. This makes the two mechanisms easily identifiable by finding the slope of $\log\dot{\varepsilon}$ vs $\log\sigma$.
Dislocation climb-glide and Coble creep both induce grain boundary sliding.
Deformation mechanism maps
To understand the temperature and stress regimes in which Coble creep is dominant for a material, it is helpful to look at deformation mechanism maps. These maps plot a normalized stress versus a normalized temperature and demarcate where specific creep mechanisms are dominant for a given material and grain size (some maps use a third axis to show grain size). These maps should only be used as a guide, as they are based on heuristic equations. These maps are helpful to determine the creep mechanism when the working stresses and temperature are known for an application, to guide the design of the material.
Grain boundary sliding
Since Coble creep involves mass transport along grain boundaries, cracks or voids would form within the material without proper accommodation. Grain boundary sliding is the process by which grains move to prevent separation at grain boundaries. This process typically occurs on timescales significantly faster than that of mass diffusion (an order of magnitude quicker). Because of this, the rate of grain boundary sliding is typically irrelevant to determining material processes. However, certain grain boundaries, such as coherent boundaries, or structural features that inhibit grain boundary movement, can slow down the rate of grain boundary sliding to the point where it needs to be taken into consideration. The processes underlying grain boundary sliding are the same as those causing diffusional creep.
This mechanism was originally proposed by Ashby and Verrall in 1973 as grain switching creep. It is competitive with Coble creep; however, grain switching will dominate at large stresses, while Coble creep dominates at low stresses.
This model predicts a strain rate that incorporates a threshold strain for grain switching, below which grain-switching creep does not operate.
The relation to Coble creep is clear by looking at the first term, which depends on the grain boundary thickness $\delta$ and the inverse grain size cubed ($d^{-3}$).
References
Materials degradation | Coble creep | [
"Materials_science",
"Engineering"
] | 1,409 | [
"Materials degradation",
"Materials science"
] |
8,334,655 | https://en.wikipedia.org/wiki/Einstein%E2%80%93Brillouin%E2%80%93Keller%20method | The Einstein–Brillouin–Keller (EBK) method is a semiclassical technique (named after Albert Einstein, Léon Brillouin, and Joseph B. Keller) used to compute eigenvalues in quantum-mechanical systems. EBK quantization is an improvement from Bohr-Sommerfeld quantization which did not consider the caustic phase jumps at classical turning points. This procedure is able to reproduce exactly the spectrum of the 3D harmonic oscillator, particle in a box, and even the relativistic fine structure of the hydrogen atom.
In 1976–1977, Michael Berry and M. Tabor derived an extension to Gutzwiller trace formula for the density of states of an integrable system starting from EBK quantization.
There have been a number of recent results on computational issues related to this topic, for example, the work of Eric J. Heller and Emmanuel David Tannenbaum using a partial differential equation gradient descent approach.
Procedure
Given a separable classical system defined by coordinates $(q_i, p_i)$, in which every pair $(q_i, p_i)$ describes a closed function or a periodic function in $q_i$, the EBK procedure involves quantizing the line integrals of $p_i$ over the closed orbit of $q_i$:
$$I_i = \frac{1}{2\pi}\oint p_i\,dq_i = \hbar\left(n_i + \frac{\mu_i}{4} + \frac{b_i}{2}\right)$$
where $I_i$ is the action-angle coordinate, $n_i$ is a non-negative integer, and $\mu_i$ and $b_i$ are Maslov indexes. $\mu_i$ corresponds to the number of classical turning points in the trajectory of $q_i$ (Dirichlet boundary condition), and $b_i$ corresponds to the number of reflections with a hard wall (Neumann boundary condition).
Examples
1D Harmonic oscillator
The Hamiltonian of a simple harmonic oscillator is given by
$$H = \frac{p^2}{2m} + \frac{m\omega^2 x^2}{2}$$
where $p$ is the linear momentum and $x$ the position coordinate. The action variable is given by
$$I = \frac{1}{2\pi}\oint p\,dx = \frac{2}{\pi}\int_0^{x_0}\sqrt{2mE - m^2\omega^2 x^2}\,dx$$
where we have used that $E = H$ is the energy and that the closed trajectory is 4 times the trajectory from 0 to the turning point $x_0 = \sqrt{2E/(m\omega^2)}$.
The integral turns out to be
$$I = \frac{E}{\omega},$$
and under EBK quantization there are two soft turning points in each orbit, $\mu_x = 2$ and $b_x = 0$. Finally, that yields
$$E = \hbar\omega\left(n + \frac{1}{2}\right),$$
which is the exact result for quantization of the quantum harmonic oscillator.
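The closed-form steps above are easy to verify numerically. The sketch below is an assumption-laden illustration (units with m = ω = ħ = 1, and SciPy's quad as the integrator): it computes I = (1/2π)∮p dx at the EBK energies and recovers I = n + 1/2.

```python
import math
from scipy.integrate import quad

# Units with m = omega = hbar = 1: p(x) = sqrt(2E - x^2), turning point x0 = sqrt(2E).
def action(E):
    x0 = math.sqrt(2 * E)
    integrand = lambda x: math.sqrt(max(2 * E - x**2, 0.0))
    I, _ = quad(integrand, 0.0, x0)
    return 4 * I / (2 * math.pi)  # the closed orbit is 4x the 0 -> x0 segment

for n in range(3):
    E = n + 0.5               # EBK energy with mu = 2 soft turning points
    print(n, action(E))       # prints n + 1/2, i.e. I = E/omega as derived
```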
2D hydrogen atom
The Hamiltonian for a non-relativistic electron (electric charge $-e$) in a hydrogen atom is:
$$H = \frac{p_r^2}{2m_e} + \frac{p_\varphi^2}{2m_e r^2} - \frac{e^2}{4\pi\epsilon_0 r}$$
where $p_r$ is the canonical momentum conjugate to the radial distance $r$, and $p_\varphi$ is the canonical momentum of the azimuthal angle $\varphi$.
Take the action-angle coordinates:
$$I_\varphi = \frac{1}{2\pi}\oint p_\varphi\,d\varphi = |p_\varphi|.$$
For the radial coordinate $r$:
$$p_r = \sqrt{2m_e E - \frac{p_\varphi^2}{r^2} + \frac{2m_e e^2}{4\pi\epsilon_0 r}},$$
$$I_r = \frac{1}{\pi}\int_{r_1}^{r_2} p_r\,dr,$$
where we are integrating between the two classical turning points ($r_1$, $r_2$), at which $p_r = 0$.
Using EBK quantization with $\mu_r = 2$ and $b_r = 0$:
$$I_r = \hbar\left(n_r + \frac{1}{2}\right),$$
and by making $I_\varphi = \hbar\,n_\varphi$, the spectrum of the 2D hydrogen atom is recovered:
$$E_n = -\frac{m_e e^4}{2\hbar^2(4\pi\epsilon_0)^2\left(n - \frac{1}{2}\right)^2}, \qquad n = n_r + n_\varphi + 1.$$
Note that for this case $I_\varphi$ almost coincides with the usual quantization of the angular momentum operator on the plane, $L_z$. For the 3D case, the EBK method for the total angular momentum is equivalent to the Langer correction.
See also
Hamilton–Jacobi equation
WKB approximation
Quantum chaos
References
Quantum mechanics | Einstein–Brillouin–Keller method | [
"Physics"
] | 576 | [
"Theoretical physics",
"Quantum mechanics"
] |
8,337,703 | https://en.wikipedia.org/wiki/Water%20model | In computational chemistry, a water model is used to simulate and thermodynamically calculate water clusters, liquid water, and aqueous solutions with explicit solvent. The models are determined from quantum mechanics, molecular mechanics, experimental results, and these combinations. To imitate a specific nature of molecules, many types of models have been developed. In general, these can be classified by the following three points; (i) the number of interaction points called site, (ii) whether the model is rigid or flexible, (iii) whether the model includes polarization effects.
An alternative to the explicit water models is to use an implicit solvation model, also termed a continuum model, an example of which would be the COSMO solvation model or the polarizable continuum model (PCM) or a hybrid solvation model.
Simple water models
The rigid models are considered the simplest water models and rely on non-bonded interactions. In these models, bonding interactions are implicitly treated by holonomic constraints. The electrostatic interaction is modeled using Coulomb's law, and the dispersion and repulsion forces using the Lennard-Jones potential. The potential for models such as TIP3P (transferable intermolecular potential with 3 points) and TIP4P is represented by
$$E_{ab} = \sum_{i}^{\text{on }a}\sum_{j}^{\text{on }b}\frac{k_C q_i q_j}{r_{ij}} + \frac{A}{r_{\text{OO}}^{12}} - \frac{B}{r_{\text{OO}}^{6}}$$
where kC, the electrostatic constant, has a value of 332.1 Å·kcal/(mol·e²) in the units commonly used in molecular modeling; qi and qj are the partial charges relative to the charge of the electron; rij is the distance between two atoms or charged sites; and A and B are the Lennard-Jones parameters. The charged sites may be on the atoms or on dummy sites (such as lone pairs). In most water models, the Lennard-Jones term applies only to the interaction between the oxygen atoms.
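A minimal sketch of evaluating this pair potential in Python (the function name and geometry are hypothetical; the charges −0.834 e and +0.417 e and the A, B values in the usage example are commonly quoted TIP3P parameters, used here purely as illustration):

```python
import math

K_C = 332.1  # electrostatic constant, A*kcal/(mol*e^2), as given in the text

def tip_pair_energy(mol_a, mol_b, A, B):
    """Interaction energy (kcal/mol) between two rigid 3-site waters.
    Each molecule is a list of (charge_e, (x, y, z)) with coordinates in
    Angstroms; site 0 is oxygen. The Lennard-Jones term applies to O-O only."""
    e = 0.0
    for qi, ri in mol_a:
        for qj, rj in mol_b:
            e += K_C * qi * qj / math.dist(ri, rj)
    r_oo = math.dist(mol_a[0][1], mol_b[0][1])
    return e + A / r_oo**12 - B / r_oo**6

# Hypothetical geometry: two waters roughly 3 A apart along x.
w1 = [(-0.834, (0.0, 0.0, 0.0)), (0.417, (0.96, 0.0, 0.0)), (0.417, (-0.24, 0.93, 0.0))]
w2 = [(-0.834, (3.0, 0.0, 0.0)), (0.417, (3.96, 0.0, 0.0)), (0.417, (2.76, 0.93, 0.0))]
print(tip_pair_energy(w1, w2, A=582.0e3, B=595.0))
```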
The general shape of the 3- to 6-site water models is similar; the exact geometric parameters (the OH distance and the HOH angle) vary depending on the model.
2-site
A 2-site model of water based on the familiar three-site SPC model (see below) has been shown to predict the dielectric properties of water using site-renormalized molecular fluid theory.
3-site
Three-site models have three interaction points corresponding to the three atoms of the water molecule. Each site has a point charge, and the site corresponding to the oxygen atom also has the Lennard-Jones parameters. Since 3-site models achieve a high computational efficiency, these are widely used for many applications of molecular dynamics simulations. Most of the models use a rigid geometry matching that of actual water molecules. An exception is the SPC model, which assumes an ideal tetrahedral shape (HOH angle of 109.47°) instead of the observed angle of 104.5°.
The table below lists the parameters for some 3-site models.
The SPC/E model adds an average polarization correction to the potential energy function:
$$E_{\text{pol}} = \frac{1}{2}\sum_i \frac{(\mu - \mu^0)^2}{\alpha_i}$$
where $\mu$ is the electric dipole moment of the effectively polarized water molecule (2.35 D for the SPC/E model), $\mu^0$ is the dipole moment of an isolated water molecule (1.85 D from experiment), and $\alpha_i$ is an isotropic polarizability constant, with a value of $1.608\times10^{-40}$ F·m². Since the charges in the model are constant, this correction just results in adding 1.25 kcal/mol (5.22 kJ/mol) to the total energy. The SPC/E model results in a better density and diffusion constant than the SPC model.
The TIP3P model implemented in the CHARMM force field is a slightly modified version of the original. The difference lies in the Lennard-Jones parameters: unlike TIP3P, the CHARMM version of the model places Lennard-Jones parameters on the hydrogen atoms too, in addition to the one on oxygen. The charges are not modified. The three-site model (TIP3P) has better performance in calculating specific heats.
Flexible SPC water model
The flexible simple point-charge water model (or flexible SPC water model) is a re-parametrization of the three-site SPC water model. The SPC model is rigid, whilst the flexible SPC model is flexible. In the model of Toukan and Rahman, the O–H stretching is made anharmonic, and thus the dynamical behavior is well described. This is one of the most accurate three-center water models without taking into account the polarization. In molecular dynamics simulations it gives the correct density and dielectric permittivity of water.
Flexible SPC is implemented in the programs MDynaMix and Abalone.
Other models
Ferguson (flexible SPC)
CVFF (flexible)
MG (flexible and dissociative)
KKY potential (flexible model).
BLXL (smear charged potential).
4-site
The four-site models have four interaction points by adding one dummy atom near the oxygen along the bisector of the HOH angle of the three-site models (labeled M in the figure). The dummy atom only has a negative charge. This model improves the electrostatic distribution around the water molecule. The first model to use this approach was the Bernal–Fowler model published in 1933, which may also be the earliest water model. However, the BF model doesn't reproduce well the bulk properties of water, such as density and heat of vaporization, and is thus of historical interest only. This is a consequence of the parameterization method; newer models, developed after modern computers became available, were parameterized by running Metropolis Monte Carlo or molecular dynamics simulations and adjusting the parameters until the bulk properties are reproduced well enough.
The TIP4P model, first published in 1983, is widely implemented in computational chemistry software packages and often used for the simulation of biomolecular systems. There have been subsequent reparameterizations of the TIP4P model for specific uses: the TIP4P-Ew model, for use with Ewald summation methods; the TIP4P/Ice, for simulation of solid water ice; TIP4P/2005, a general parameterization for simulating the entire phase diagram of condensed water; and TIP4PQ/2005, a similar model but designed to accurately describe the properties of solid and liquid water when quantum effects are included in the simulation.
Most of the four-site water models use an OH distance and HOH angle which match those of the free water molecule. One exception is the OPC model, in which no geometry constraints are imposed other than the fundamental C2v molecular symmetry of the water molecule. Instead, the point charges and their positions are optimized to best describe the electrostatics of the water molecule. OPC reproduces a comprehensive set of bulk properties more accurately than several of the commonly used rigid n-site water models. The OPC model is implemented within the AMBER force field.
Others:
q-TIP4P/F (flexible)
TIP4P/2005f (flexible)
5-site
The 5-site models place the negative charge on dummy atoms (labelled L) representing the lone pairs of the oxygen atom, with a tetrahedral-like geometry. An early model of these types was the BNS model of Ben-Naim and Stillinger, proposed in 1971, soon succeeded by the ST2 model of Stillinger and Rahman in 1974. Mainly due to their higher computational cost, five-site models were not developed much until 2000, when the TIP5P model of Mahoney and Jorgensen was published. When compared with earlier models, the TIP5P model results in improvements in the geometry for the water dimer, a more "tetrahedral" water structure that better reproduces the experimental radial distribution functions from neutron diffraction, and the temperature of maximal density of water. The TIP5P-E model is a reparameterization of TIP5P for use with Ewald sums.
Note, however, that the BNS and ST2 models do not use Coulomb's law directly for the electrostatic terms, but a modified version that is scaled down at short distances by multiplying it by the switching function S(r), which ramps smoothly from 0 at short distances (below a lower cutoff RL) to 1 at large distances (above an upper cutoff RU).
Thus, the RL and RU parameters only apply to BNS and ST2.
6-site
Originally designed to study water/ice systems, a 6-site model that combines all the sites of the 4- and 5-site models was developed by Nada and van der Eerden. Since it had a very high melting temperature when employed under periodic electrostatic conditions (Ewald summation), a modified version was published later optimized by using the Ewald method for estimating the Coulomb interaction.
Other
The effect of explicit solute model on solute behavior in biomolecular simulations has been also extensively studied. It was shown that explicit water models affected the specific solvation and dynamics of unfolded peptides, while the conformational behavior and flexibility of folded peptides remained intact.
MB model. A more abstract model resembling the Mercedes-Benz logo that reproduces some features of water in two-dimensional systems. It is not used as such for simulations of "real" (i.e., three-dimensional) systems, but it is useful for qualitative studies and for educational purposes.
Coarse-grained models. One- and two-site models of water have also been developed. In coarse-grain models, each site can represent several water molecules.
Many-body models. Water models built using training-set configurations solved quantum mechanically, which then use machine learning protocols to extract potential-energy surfaces. These potential-energy surfaces are fed into MD simulations for an unprecedented degree of accuracy in computing physical properties of condensed phase systems.
Many-body models can also be classified on the basis of the expansion of the underlying electrostatics, e.g., the SCME (Single Center Multipole Expansion) model.
Computational cost
The computational cost of a water simulation increases with the number of interaction sites in the water model. The CPU time is approximately proportional to the number of interatomic distances that need to be computed. For the 3-site model, 9 distances are required for each pair of water molecules (every atom of one molecule against every atom of the other molecule, or 3 × 3). For the 4-site model, 10 distances are required (every charged site with every charged site, plus the O–O interaction, or 3 × 3 + 1). For the 5-site model, 17 distances are required (4 × 4 + 1). Finally, for the 6-site model, 26 distances are required (5 × 5 + 1).
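The bookkeeping above can be captured in a few lines (a minimal illustrative sketch; the function name and layout are not from any particular simulation package):

```python
def pair_distances(n_charged_sites: int, separate_lj_site: bool) -> int:
    """Distances computed per pair of water molecules: every charged site
    against every charged site, plus one O-O distance when the
    Lennard-Jones site carries no charge of its own."""
    return n_charged_sites ** 2 + (1 if separate_lj_site else 0)

# 3-site (e.g. TIP3P): charges on O and both H; O also carries the LJ term.
assert pair_distances(3, separate_lj_site=False) == 9
# 4-site (e.g. TIP4P): the negative charge is moved off O to the dummy M site.
assert pair_distances(3, separate_lj_site=True) == 10
# 5-site (e.g. TIP5P): charges on the two H and the two lone-pair (L) sites.
assert pair_distances(4, separate_lj_site=True) == 17
# 6-site (Nada and van der Eerden): charges on 2 H, 2 L and the M site.
assert pair_distances(5, separate_lj_site=True) == 26
```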
When using rigid water models in molecular dynamics, there is an additional cost associated with keeping the structure constrained, using constraint algorithms (although with bond lengths constrained it is often possible to increase the time step).
See also
Water (properties)
Water (data page)
Water dimer
Force field (chemistry)
Comparison of force field implementations
Molecular mechanics
Molecular modelling
Comparison of software for molecular mechanics modeling
Solvent models
References
Water
Computational chemistry | Water model | [
"Chemistry",
"Environmental_science"
] | 2,282 | [
"Water",
"Theoretical chemistry",
"Computational chemistry",
"Hydrology"
] |
8,340,209 | https://en.wikipedia.org/wiki/Alternative%20fuel%20vehicle | An alternative fuel vehicle is a motor vehicle that runs on alternative fuel rather than traditional petroleum-based fossil fuels such as gasoline, petrodiesel or liquefied petroleum gas (autogas). The term typically refers to internal combustion engine vehicles or fuel cell vehicles that utilize synthetic renewable fuels such as biofuels (ethanol fuel, biodiesel and biogasoline), hydrogen fuel or so-called "Electrofuel". The term can also be used to describe an electric vehicle (particularly a battery electric vehicle or a solar vehicle), which would more appropriately be called an "alternative energy vehicle" or "new energy vehicle" as its propulsion actually relies on electricity rather than motor fuel.
Vehicle engines powered by gasoline/petrol first emerged in the 1860s and 1870s; they took until the 1930s to completely dominate the original "alternative" engines driven by steam (18th century), by gases (early 19th century), or by electricity (1830s). Because of a combination of factors, such as environmental and health concerns including climate change and air pollution, high oil prices and the potential for peak oil, development of cleaner alternative fuels and advanced power systems for vehicles has become a high priority for many governments and vehicle manufacturers around the world in recent years.
Hybrid electric vehicles such as the Toyota Prius are not actually alternative fuel vehicles, as they still use traditional fuels such as gasoline, but through advancement in electric battery/supercapacitor and motor-generator technologies, they have an overall better fuel efficiency than conventional combustion vehicles. Other research and development efforts in alternative forms of power focus on developing plug-in electric, range extender and fuel cell vehicles, and even compressed-air vehicles.
An environmental analysis of the impacts of various vehicle-fuels extends beyond just operating efficiency and emissions, especially if a technology comes into wide use. A life-cycle assessment of a vehicle involves production and post-use considerations. In general, the lifecycle greenhouse gas emissions of battery-electric vehicles are lower than emissions from hydrogen, PHEV, hybrid, compressed natural gas, gasoline, and diesel vehicles.
Current deployments
There were more than 1.49 billion motor vehicles on the world's roads, compared with approximately 159 million alternative fuel and advanced technology vehicles that had been sold or converted worldwide by the end of 2022, consisting of:
Over 65 million flex fuel automobiles, motorcycles and light duty trucks by the end of 2021, led by Brazil with 38.3 million and the United States with 27 million.
Over 26 million plug-in electric vehicles, 70% of which were battery electric vehicles (BEVs) and 30% of which were plug-in hybrids (PHEVs). China had 13.8 million units, Europe 7.8 million, and the United States 3 million. In 2022, annual sales exceeded 10 million vehicles, up 55% relative to 2021.
24.9 million LPG powered vehicles by December 2013, led by Turkey with 3.93 million, South Korea (2.4 million), and Poland (2.75 million).
24.5 million natural gas vehicles by the end of 2017, led by China (5.35 million) followed by Iran (4.0 million), India (3.05 million), Pakistan (3 million), Argentina (2.3 million), and Brazil (1.78 million). In 2015, 2.4 million units were sold.
Over 13 million hybrid electric vehicles as of 2019.
5.7 million neat-ethanol-only light vehicles built in Brazil since 1979, with 2.4 to 3.0 million vehicles still in use by 2003 and 1.22 million units as of December 2011.
70,200 fuel cell electric vehicles (FCEVs) powered with hydrogen by the end of 2022. South Korea had 29,500 units, the United States 15,000, China 11,200, and Japan 7,700. In 2022, annual sales amounted to 15,391 vehicles. Hydrogen FCEV sales as a percentage of market share among electric vehicles (BEVs, PHEVs and FCEVs) declined for the 6th consecutive year.
Mainstream commercial technologies
Flexible fuel
A flexible-fuel vehicle (FFV) or dual-fuel vehicle (DFV) is an alternative fuel automobile or light duty truck with a multifuel engine that can use more than one fuel, usually mixed in the same tank, with the blend burned together in the combustion chamber. These vehicles are colloquially called flex-fuel, or flexifuel in Europe, or just flex in Brazil. FFVs are distinguished from bi-fuel vehicles, where two fuels are stored in separate tanks. The most common commercially available FFV in the world market is the ethanol flexible-fuel vehicle, with the major markets concentrated in the United States, Brazil, Sweden, and some other European countries.
Ethanol flexible-fuel vehicles have standard gasoline engines that are capable of running with ethanol and gasoline mixed in the same tank. These mixtures have "E" numbers which describe the percentage of ethanol in the mixture, for example, E85 is 85% ethanol and 15% gasoline. (See common ethanol fuel mixtures for more information.) Though technology exists to allow ethanol FFVs to run on any mixture up to E100, in the U.S. and Europe, flex-fuel vehicles are optimized to run on E85. This limit is set to avoid cold starting problems during very cold weather.
Over 65 million flex-fuel automobiles, motorcycles and light-duty trucks had been sold worldwide by the end of 2021, led by Brazil with 38.3 million and the United States with 27 million. Other markets were Canada (1.6 million by 2014) and Sweden (243,100 through December 2014). The Brazilian flex-fuel fleet includes over 4 million flexible-fuel motorcycles produced from 2009 through March 2015. In Brazil, 65% of flex-fuel car owners were using ethanol fuel regularly in 2009, while the actual number of American FFVs run on E85 is much lower; surveys conducted in the U.S. have found that 68% of American flex-fuel car owners were not aware they owned an E85-capable vehicle.
There have been claims that American automakers are motivated to produce flex-fuel vehicles due to a loophole in the Corporate Average Fuel Economy (CAFE) requirements, which gives the automaker a "fuel economy credit" for every flex-fuel vehicle sold, whether or not the vehicle is actually fueled with E85 in regular use. This loophole allegedly allows the U.S. auto industry to meet CAFE fuel economy targets not by developing more fuel-efficient models, but by spending between US$100 and US$200 extra per vehicle to produce a certain number of flex-fuel models, enabling them to continue selling less fuel-efficient vehicles such as SUVs, which netted higher profit margins than smaller, more fuel-efficient cars.
Plug-in electric
Battery-electric
Battery electric vehicles (BEVs), also known as all-electric vehicles (AEVs), are electric vehicles whose main energy storage is in the chemical energy of batteries. BEVs are the most common form of what is defined by the California Air Resources Board (CARB) as a zero emission vehicle (ZEV) because they produce no tailpipe emissions at the point of operation. The electrical energy carried on board a BEV to power the motors is obtained from a variety of battery chemistries arranged into battery packs. For additional range, genset trailers or pusher trailers are sometimes used, forming a type of hybrid vehicle. Batteries used in electric vehicles include "flooded" lead-acid, absorbed glass mat, NiCd, nickel metal hydride, Li-ion, Li-poly and zinc-air batteries.
Attempts at building viable, modern battery-powered electric vehicles began in the 1950s with the introduction of the first modern (transistor-controlled) electric car, the Henney Kilowatt, even though the concept had been on the market since 1890. Despite the poor sales of the early battery-powered vehicles, development of various battery-powered vehicles continued through the mid-1990s, with such models as the General Motors EV1 and the Toyota RAV4 EV.
Battery-powered cars had primarily used lead-acid batteries and NiMH batteries. Lead-acid batteries' recharge capacity is considerably reduced if they are discharged beyond 75% on a regular basis, making them a less-than-ideal solution. NiMH batteries are a better choice, but are considerably more expensive than lead-acid. Lithium-ion battery powered vehicles such as the Venturi Fetish and the Tesla Roadster have demonstrated excellent performance and range, and lithium-ion batteries are used in most mass-production models launched since December 2010.
Expanding on the traditional lithium-ion batteries predominantly used in today's battery electric vehicles is an emerging science that is paving the way to utilizing a carbon fiber structure (a vehicle body or chassis in this case) as a structural battery. Experiments being conducted at the Chalmers University of Technology in Sweden are showing that, when coupled with lithium-ion insertion mechanisms, an enhanced carbon fiber structure can have electromechanical properties. This means that the carbon fiber structure itself can act as its own battery/power source for propulsion. This would negate the need for traditional heavy battery banks, reducing weight and therefore increasing fuel efficiency.
Several neighborhood electric vehicles, city electric cars and series-production highway-capable electric cars and utility vans have been made available for retail sale, including Tesla Roadster, GEM cars, Buddy, Mitsubishi i MiEV and its rebadged versions Peugeot iOn and Citroën C-Zero, Chery QQ3 EV, JAC J3 EV, Nissan Leaf, Smart ED, Mia electric, BYD e6, Renault Kangoo Z.E., Bolloré Bluecar, Renault Fluence Z.E., Ford Focus Electric, BMW ActiveE, Renault Twizy, Tesla Model S, Honda Fit EV, RAV4 EV second generation, Renault Zoe, Mitsubishi Minicab MiEV, Roewe E50, Chevrolet Spark EV, Fiat 500e, BMW i3, Volkswagen e-Up!, Nissan e-NV200, Volkswagen e-Golf, Mercedes-Benz B-Class Electric Drive, Kia Soul EV, BYD e5, and Tesla Model X. The world's all-time top-selling highway-legal electric car is the Nissan Leaf, released in December 2010, with global sales of more than 250,000 units through December 2016. The Tesla Model S, released in June 2012, ranks second with global sales of over 158,000 cars delivered. The Renault Kangoo Z.E. utility van is the leader of the light-duty all-electric segment with global sales of 25,205 units through December 2016.
Plug-in hybrid
Plug-in hybrid electric vehicles (PHEVs) use batteries to power an electric motor, as well as another fuel, such as gasoline or diesel, to power an internal combustion engine or other propulsion source. PHEVs can charge their batteries through charging equipment and regenerative braking. Using electricity from the grid to run the vehicle some or all of the time reduces operating costs and fuel use, relative to conventional vehicles.
Until 2010 most plug-in hybrids on the road in the U.S. were conversions of conventional hybrid electric vehicles, and the most prominent PHEVs were conversions of 2004 or later Toyota Prius, which have had plug-in charging and more batteries added and their electric-only range extended. Chinese battery manufacturer and automaker BYD Auto released the F3DM to the Chinese fleet market in December 2008 and began sales to the general public in Shenzhen in March 2010. General Motors began deliveries of the Chevrolet Volt in the U.S. in December 2010. Deliveries to retail customers of the Fisker Karma began in the U.S. in November 2011.
During 2012, the Toyota Prius Plug-in Hybrid, Ford C-Max Energi, and Volvo V60 Plug-in Hybrid were released. The following models were launched during 2013 and 2015: Honda Accord Plug-in Hybrid, Mitsubishi Outlander P-HEV, Ford Fusion Energi, McLaren P1 (limited edition), Porsche Panamera S E-Hybrid, BYD Qin, Cadillac ELR, BMW i3 REx, BMW i8, Porsche 918 Spyder (limited production), Volkswagen XL1 (limited production), Audi A3 Sportback e-tron, Volkswagen Golf GTE, Mercedes-Benz S 500 e, Porsche Cayenne S E-Hybrid, Mercedes-Benz C 350 e, BYD Tang, Volkswagen Passat GTE, Volvo XC90 T8, BMW X5 xDrive40e, Hyundai Sonata PHEV, and Volvo S60L PHEV.
About 500,000 highway-capable plug-in hybrid electric cars had been sold worldwide since December 2008, out of total cumulative global sales of 1.2 million light-duty plug-in electric vehicles. The Volt/Ampera family of plug-in hybrids, with combined sales of about 134,500 units, is the top-selling plug-in hybrid in the world. Ranking next are the Mitsubishi Outlander P-HEV with about 119,500 and the Toyota Prius Plug-in Hybrid with almost 78,000.
Biofuels
Bioalcohol and ethanol
The first commercial vehicle that used ethanol as a fuel was the Ford Model T, produced from 1908 through 1927. It was fitted with a carburetor with adjustable jetting, allowing use of gasoline or ethanol, or a combination of both. Other car manufacturers also provided engines for ethanol fuel use. In the United States, alcohol fuel was produced in corn-alcohol stills until Prohibition criminalized the production of alcohol in 1919. The use of alcohol as a fuel for internal combustion engines, either alone or in combination with other fuels, lapsed until the oil price shocks of the 1970s. Additional attention was then gained because of its possible environmental and long-term economic advantages over fossil fuel.
Both ethanol and methanol have been used as an automotive fuel. While both can be obtained from petroleum or natural gas, ethanol has attracted more attention because it is considered a renewable resource, easily obtained from sugar or starch in crops and other agricultural produce such as grain, sugarcane, sugar beets or even lactose. Since ethanol occurs in nature whenever yeast happens to find a sugar solution such as overripe fruit, most organisms have evolved some tolerance to ethanol, whereas methanol is toxic. Other experiments involve butanol, which can also be produced by fermentation of plants. Support for ethanol comes from the fact that it is a biomass fuel, which addresses climate change and greenhouse gas emissions, though these benefits are now highly debated, including the heated 2008 food vs fuel debate.
Most modern cars designed to run on gasoline are capable of running on blends of 10% up to 15% ethanol in gasoline (E10–E15). With a small amount of redesign, gasoline-powered vehicles can run on ethanol concentrations as high as 85% (E85), the maximum set in the United States and Europe due to cold weather during the winter, or up to 100% (E100) in Brazil, with a warmer climate. Ethanol has close to 34% less energy per volume than gasoline; consequently, fuel economy ratings with ethanol blends are significantly lower than with pure gasoline. However, this lower energy content does not translate directly into a 34% reduction in mileage, because there are many other variables that affect the performance of a particular fuel in a particular engine, and also because ethanol has a higher octane rating, which is beneficial to high-compression-ratio engines.
For this reason, for pure or high ethanol blends to be attractive to users, their price must be lower than gasoline's to offset the lower fuel economy. As a rule of thumb, Brazilian consumers are frequently advised by the local media to use more alcohol than gasoline in their mix only when ethanol prices are 30% or more lower than gasoline, as the ethanol price fluctuates heavily depending on the seasonal sugar cane harvests and on the region. In the US, based on EPA tests for all 2006 E85 models, the average fuel economy for E85 vehicles was found to be 25.56% lower than with unleaded gasoline. The EPA-rated mileage of current American flex-fuel vehicles could be considered when making price comparisons, though E85 has an octane rating of about 104 and could be used as a substitute for premium gasoline. Regional retail E85 prices vary widely across the US, with more favorable prices in the Midwest region, where most corn is grown and ethanol produced. In August 2008 the US average spread between the price of E85 and gasoline was 16.9%, while it was 35% in Indiana, 30% in Minnesota and Wisconsin, 19% in Maryland, 12 to 15% in California, and just 3% in Utah. Depending on the vehicle's capabilities, the break-even price of E85 usually has to be 25 to 30% lower than gasoline's.
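The rule of thumb follows directly from the fuel-economy penalty. A minimal sketch of the arithmetic (the function name and the sample gasoline price are illustrative assumptions):

```python
def e85_breakeven_price(gasoline_price: float,
                        economy_penalty: float = 0.2556) -> float:
    """Highest E85 price that matches gasoline in cost per mile, given the
    fractional fuel-economy penalty of running on E85."""
    return gasoline_price * (1.0 - economy_penalty)

# With the 25.56% average penalty from the 2006 EPA tests cited above,
# E85 breaks even at roughly 26% below the gasoline price:
print(round(e85_breakeven_price(4.00), 2))  # 2.98 (per gallon)
```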
Reacting to the high price of oil and its growing dependence on imports, in 1975 Brazil launched the Pro-alcool program, a huge government-subsidized effort to manufacture ethanol fuel (from its sugar cane crop) and ethanol-powered automobiles. These ethanol-only vehicles were very popular in the 1980s, but became economically impractical when oil prices fell, and sugar prices rose, late in that decade. In May 2003 Volkswagen built for the first time a commercial ethanol flexible-fuel car, the Gol 1.6 Total Flex. These vehicles were a commercial success, and by early 2009 nine other Brazilian manufacturers were producing flexible-fuel vehicles: Chevrolet, Fiat, Ford, Peugeot, Renault, Honda, Mitsubishi, Toyota, Citroën, and Nissan. The adoption of the flex technology was so rapid that flexible-fuel cars reached 87.6% of new car sales in July 2008. As of August 2008, the fleet of "flex" automobiles and light commercial vehicles had reached 6 million new vehicles sold, representing almost 19% of all registered light vehicles. The rapid success of "flex" vehicles, as they are popularly known, was made possible by the existence of 33,000 filling stations with at least one ethanol pump available by 2006, a heritage of the Pro-alcool program.
In the United States, initial support to develop alternative fuels by the government was also a response to the 1973 oil crisis, and later on, as a goal to improve air quality. Also, liquid fuels were preferred over gaseous fuels not only because they have a better volumetric energy density but also because they were the most compatible fuels with existing distribution systems and engines, thus avoiding a big departure from the existing technologies and taking advantage of the vehicle and the refueling infrastructure. California led the search of sustainable alternatives with interest in methanol.
In 1996, a new FFV Ford Taurus was developed, with models fully capable of running on either methanol or ethanol blended with gasoline. This ethanol version of the Taurus was the first commercial production of an E85 FFV. The momentum of the FFV production programs at the American car companies continued, although by the end of the 1990s the emphasis was on the FFV E85 version, as it is today. Ethanol was preferred over methanol because of strong support in the farming community and thanks to government incentive programs and corn-based ethanol subsidies. Sweden also tested both M85 and E85 flexifuel vehicles, but due to agricultural policy, emphasis was ultimately given to the ethanol flexifuel vehicles.
Biodiesel
The main benefit of diesel combustion engines is their fuel-burn efficiency of up to 44%, compared with just 25–30% for the best gasoline engines. In addition, diesel fuel has slightly higher energy density by volume than gasoline. Together these make diesel engines capable of achieving much better fuel economy than gasoline vehicles.
Biodiesel (fatty acid methyl ester), is commercially available in most oilseed-producing states in the United States. As of 2005, it is somewhat more expensive than fossil diesel, though it is still commonly produced in relatively small quantities (in comparison to petroleum products and ethanol). Many farmers who raise oilseeds use a biodiesel blend in tractors and equipment as a matter of policy, to foster production of biodiesel and raise public awareness. It is sometimes easier to find biodiesel in rural areas than in cities. Biodiesel has lower energy density than fossil diesel fuel, so biodiesel vehicles are not quite able to keep up with the fuel economy of a fossil fuelled diesel vehicle, if the diesel injection system is not reset for the new fuel. If the injection timing is changed to take account of the higher cetane value of biodiesel, the difference in economy is negligible. Because biodiesel contains more oxygen than diesel or vegetable oil fuel, it produces the lowest emissions from diesel engines, and is lower in most emissions than gasoline engines. Biodiesel has a higher lubricity than mineral diesel and is an additive in European pump diesel for lubricity and emissions reduction.
Some diesel-powered cars can run with minor modifications on 100% pure vegetable oils. Vegetable oils tend to thicken (or solidify, in the case of waste cooking oil) in cold weather conditions, so vehicle modifications (a two-tank system with a diesel start/stop tank) are essential in order to heat the fuel prior to use under most circumstances. Heating to the temperature of the engine coolant reduces the fuel viscosity to the range cited by injection system manufacturers, for systems prior to 'common rail' or 'unit injection' (VW PD) systems. Waste vegetable oil, especially if it has been used for a long time, may become hydrogenated and have increased acidity. This can cause thickening of the fuel, gumming in the engine and acid damage to the fuel system. Biodiesel does not have this problem, because it is chemically processed to be pH-neutral and of lower viscosity. Modern low-emission diesels (most often Euro-3 and -4 compliant), typical of current production in the European industry, are designed around higher operating pressures and thinner (heated) mineral diesel for atomisation, and would require extensive modification of the injector system, pumps and seals to use pure vegetable oil as fuel. Vegetable oil fuel is not suitable for these vehicles as they are currently produced, which reduces its market as increasing numbers of new vehicles cannot use it. However, the German Elsbett company has successfully produced single-tank vegetable oil fuel systems for several decades, and has worked with Volkswagen on their TDI engines. This shows that it is technologically possible to use vegetable oil as a fuel in high-efficiency/low-emission diesel engines.
Greasestock is an event held yearly in Yorktown Heights, New York, and is one of the largest showcases of vehicles using waste oil as a biofuel in the United States.
Biogas
Compressed biogas may be used in internal combustion engines after purification of the raw gas. The removal of H2O, H2S and particulates is standard practice, producing a gas of the same quality as compressed natural gas.
Compressed natural gas
High-pressure compressed natural gas (CNG), mainly composed of methane, is used to fuel normal combustion engines instead of gasoline. Combustion of methane produces the least amount of CO2 of all fossil fuels. Gasoline cars can be retrofitted to CNG and become bifuel natural gas vehicles (NGVs), as the gasoline tank is kept; the driver can switch between CNG and gasoline during operation. NGVs are popular in regions or countries where natural gas is abundant. Widespread use began in the Po River Valley of Italy, and CNG later became very popular in New Zealand in the 1980s, though its use has since declined.
As of 2017, there were 24.5 million natural gas vehicles worldwide, led by China (5.35 million) followed by Iran (4.0 million), India (3.05 million), Pakistan (3 million), Argentina (2.3 million), and Brazil (1.78 million).
As of 2010, the Asia-Pacific region led the global market with a share of 54%. In Europe they are popular in Italy (730,000), Ukraine (200,000), Armenia (101,352), Russia (100,000) and Germany (91,500), and they are becoming more so as various manufacturers produce factory made cars, buses, vans and heavy vehicles. In the United States CNG powered buses are the favorite choice of several public transit agencies, with an estimated CNG bus fleet of some 130,000. Other countries where CNG-powered buses are popular include India, Australia, Argentina, and Germany.
CNG vehicles are common in South America, where these vehicles are mainly used as taxicabs in main cities of Argentina and Brazil. Normally, standard gasoline vehicles are retrofitted in specialized shops, which involve installing the gas cylinder in the trunk and the CNG injection system and electronics. The Brazilian GNV fleet is concentrated in the cities of Rio de Janeiro and São Paulo. Pike Research reports that almost 90% of NGVs in Latin America have bi-fuel engines, allowing these vehicles to run on either gasoline or CNG.
Dual fuel
A dual-fuel vehicle is a vehicle that uses two types of fuel at the same time (gas + liquid, gas + gas, or liquid + liquid), stored in separate fuel tanks.
Diesel-CNG dual fuel is a system that uses diesel and compressed natural gas (CNG) at the same time, because CNG needs a source of ignition for combustion in a diesel engine.
Hybrid electric
A hybrid vehicle uses multiple propulsion systems to provide motive power. The most common type is the gasoline-electric hybrid, which uses gasoline (petrol) and electric batteries for the energy used to power internal-combustion engines (ICEs) and electric motors. These motors are usually relatively small and would be considered "underpowered" by themselves, but they can provide a normal driving experience when used in combination during acceleration and other maneuvers that require greater power.
The Toyota Prius first went on sale in Japan in 1997 and has been sold worldwide since 2000.
There are over 50 models of hybrid electric cars available in several world markets, with more than 12 million hybrid electric vehicles sold worldwide since their inception in 1997.
Hydrogen
A hydrogen car is an automobile which uses hydrogen as its primary source of power for locomotion. These cars generally use the hydrogen in one of two ways: combustion or fuel-cell conversion. In combustion, the hydrogen is "burned" in engines in fundamentally the same way as in traditional gasoline cars. The common internal combustion engine, usually fueled with gasoline (petrol) or diesel liquids, can be converted to run on gaseous hydrogen. This emits water at the point of use, though NOx can be produced during combustion with air. However, the most efficient use of hydrogen involves fuel cells and electric motors instead of a traditional engine. Hydrogen reacts with oxygen inside the fuel cells, which produces electricity to power the motors, with the only byproduct from the spent hydrogen being water.
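For a proton-exchange-membrane cell, one common automotive type (other fuel-cell chemistries transport a different ion but give the same overall reaction), the electrode reactions are:

$$
\begin{aligned}
\text{anode:}\quad & \mathrm{2\,H_2 \rightarrow 4\,H^+ + 4\,e^-}\\
\text{cathode:}\quad & \mathrm{O_2 + 4\,H^+ + 4\,e^- \rightarrow 2\,H_2O}\\
\text{overall:}\quad & \mathrm{2\,H_2 + O_2 \rightarrow 2\,H_2O} + \text{electrical energy} + \text{heat}
\end{aligned}
$$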
A small number of commercially available hydrogen fuel cell cars currently exist: the Hyundai NEXO, Toyota Mirai, and previously the Honda FCX Clarity. One primary area of research is hydrogen storage, to try to increase the range of hydrogen vehicles while reducing the weight, energy consumption, and complexity of the storage systems. Two primary methods of storage are metal hydrides and compression. Some believe that hydrogen cars will never be economically viable and that the emphasis on this technology is a diversion from the development and popularization of more efficient battery electric vehicles.
In the light road vehicle segment, by the end of 2022, 70,200 hydrogen fuel cell electric vehicles had been sold worldwide, compared with 26 million plug-in electric vehicles. With the rapid rise of electric vehicles and associated battery technology and infrastructure, the global scope for hydrogen’s role in cars is shrinking relative to earlier expectations.
Electric, fed by external source
Electric power fed from an external source to the vehicle is standard in railway electrification. In such systems the tracks usually form one pole, while the other is a single overhead wire or a rail insulated from ground.
On roads this system does not work as described, as normal road surfaces are very poor electric conductors, so electric vehicles fed with external power on roads require at least two overhead wires. The most common type of road vehicle fed with electricity from an external source is the trolleybus, but there are also some trucks powered with this technology. The advantage is that the vehicle can be operated without breaks for refueling or charging. Disadvantages include: a large infrastructure of electric wires; difficulty in driving, as one has to prevent dewirement of the vehicle; vehicles cannot overtake each other; a danger of electrocution; and an aesthetic problem.
Wireless transmission (see Wireless power transfer) is possible, in principle; but the infrastructure (especially wiring) necessary for inductive or capacitive coupling would be extensive and expensive. In principle it is also possible to transmit energy by microwaves or by lasers to the vehicle, but this may be inefficient and dangerous for the power required. Beside this, in the case of lasers one requires a guidance system to track the vehicle to be powered, as laser beams have a small diameter.
Comparative assessment of fossil and alternative fuels
Comparative assessments of conventional fossil and alternative fuel vehicles usually encompass more than in-use environmental impacts and running costs. They factor in issues like resource extractive impacts (e.g. for battery manufacture or fossil fuel extraction), ‘well-to-wheel’ efficiency, and the carbon intensity of electricity in different geographies. In general, the lifecycle greenhouse gas emissions of battery-electric vehicles are lower than emissions from hydrogen, PHEV, hybrid, compressed natural gas, gasoline, and diesel vehicles. BEVs have lower emissions than internal combustion engine vehicles even in places where electricity generation is relatively carbon-intensive, for example China where electricity is predominantly generated from coal.
Other technologies
Engine air compressor
The air engine is an emission-free piston engine that uses compressed air as a source of energy. The first compressed air car was invented by a French engineer named Guy Nègre. The expansion of compressed air may be used to drive the pistons in a modified piston engine. Efficiency of operation is gained through the use of environmental heat at normal temperature to warm the otherwise cold expanded air from the storage tank. This non-adiabatic expansion has the potential to greatly increase the efficiency of the machine. The only exhaust is cold air (−15 °C), which could also be used to air condition the car. The source for air is a pressurized carbon-fiber tank. Air is delivered to the engine via a rather conventional injection system. Unique crank design within the engine increases the time during which the air charge is warmed from ambient sources and a two-stage process allows improved heat transfer rates.
Electric, stored by other means
Electricity can also be stored in supercapacitors and superconductors. However, superconductor storage is unsuitable for vehicle propulsion, as it requires extremely low temperatures and produces strong magnetic fields. Supercapacitors can be used in vehicles and are used in some trams on sections without overhead wire. They can be charged during regular stops, while passengers enter and leave the tram, but can travel only a few kilometres on the stored energy; this is not a problem in practice, as the next stop is usually within reach.
Solar
A solar car is an electric vehicle powered by solar energy obtained from solar panels on the car. Solar panels cannot currently supply a car directly with a suitable amount of power, but they can be used to extend the range of electric vehicles. As of 2022, a handful of solar electric cars with varying performance are becoming commercially available, from Fisker and Lightyear, among others.
Solar cars are raced in competitions such as the World Solar Challenge and the North American Solar Challenge. These events are often sponsored by Government agencies such as the United States Department of Energy keen to promote the development of alternative energy technology such as solar cells and electric vehicles. Such challenges are often entered by universities to develop their students' engineering and technological skills as well as motor vehicle manufacturers such as GM and Honda.
Dimethyl ether fuel
Dimethyl ether (DME) is a promising fuel in diesel engines, petrol engines (30% DME / 70% LPG), and gas turbines owing to its high cetane number, which is 55, compared to diesel's, which is 40–53. Only moderate modifications are needed to convert a diesel engine to burn DME. The simplicity of this short carbon-chain compound leads during combustion to very low emissions of particulate matter, NOx, and CO. For these reasons, as well as being sulfur-free, DME meets even the most stringent emission regulations in Europe (EURO5), the U.S. (U.S. 2010), and Japan (2009 Japan). Mobil uses DME in its methanol-to-gasoline process.
DME is being developed as a synthetic second generation biofuel (BioDME), which can be manufactured from lignocellulosic biomass. In 2006 the EU considered BioDME in its potential biofuel mix in 2030; the Volvo Group was the coordinator for the European Community Seventh Framework Programme project BioDME where Chemrec's BioDME pilot plant based on black liquor gasification is nearing completion in Piteå, Sweden.
Ammonia fuelled vehicles
Ammonia is produced by combining gaseous hydrogen with nitrogen from the air. Large-scale ammonia production uses natural gas for the source of hydrogen. Ammonia was used during World War II to power buses in Belgium, and in engine and solar energy applications prior to 1900. Liquid ammonia also fuelled the Reaction Motors XLR99 rocket engine, that powered the X-15 hypersonic research aircraft. Although not as powerful as other fuels, it left no soot in the reusable rocket engine and its density approximately matches the density of the oxidizer, liquid oxygen, which simplified the aircraft's design.
Ammonia has been proposed as a practical alternative to fossil fuel for internal combustion engines. The calorific value of ammonia is 22.5 MJ/kg (9690 BTU/lb), which is about half that of diesel. In a normal engine, in which the water vapour is not condensed, the calorific value of ammonia will be about 21% less than this figure. It can be used in existing engines with only minor modifications to carburettors/injectors.
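The halved heating value implies roughly twice the fuel mass on board for the same energy. A quick check (a sketch; the diesel heating value is an assumed literature figure, not taken from the text):

```python
LHV_AMMONIA = 22.5   # MJ/kg, as quoted above
LHV_DIESEL = 45.6    # MJ/kg, assumed typical literature value

energy_needed = 1000.0  # MJ, an arbitrary illustrative amount
print(energy_needed / LHV_AMMONIA)  # ~44.4 kg of ammonia
print(energy_needed / LHV_DIESEL)   # ~21.9 kg of diesel, about half
```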
When ammonia is produced using coal, the CO2 emitted has the potential to be sequestered (the combustion products are nitrogen and water).
Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid, instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and in streetcars in New Orleans. In 1981 a Canadian company converted a 1981 Chevrolet Impala to operate using ammonia as fuel.
Ammonia is being used with success by developers in Canada, since it can run in spark-ignited or diesel engines with minor modifications; it is also claimed to be the only green fuel able to power jet engines, and despite its toxicity it is reckoned to be no more dangerous than petrol or LPG. It can be made from renewable electricity and, having about half the energy density of petrol or diesel, can be readily carried in sufficient quantities in vehicles. On complete combustion it has no emissions other than nitrogen and water vapour. The combustion chemical formula is 4 NH3 + 3 O2 → 2 N2 + 6 H2O; water thus makes up 75% of the product molecules (6 of the 8).
Charcoal
In the 1930s Tang Zhongming made an invention using abundant charcoal resources for the Chinese auto market. The charcoal-fuelled car was later used intensively in China, serving the army and civilian transport after the outbreak of World War II.
Liquefied natural gas
Liquefied natural gas (LNG) is natural gas that has been cooled to a point at which it becomes a cryogenic liquid. In this liquid state, natural gas is more than 2 times as dense as highly compressed CNG. LNG fuel systems function on any vehicle capable of burning natural gas. Unlike CNG, which is stored at high pressure (typically 3000 or 3600 psi) and then regulated to a lower pressure that the engine can accept, LNG is stored at low pressure (50 to 150 psi) and simply vaporized by a heat exchanger before entering the fuel metering devices to the engine. Because of its high energy density compared to CNG, it is very suitable for those interested in long ranges while running on natural gas.
In the United States, the LNG supply chain is the main thing that has held back this fuel source from growing rapidly. The LNG supply chain is very analogous to that of diesel or gasoline. First, pipeline natural gas is liquefied in large quantities, which is analogous to refining gasoline or diesel. Then, the LNG is transported via semi trailer to fuel stations where it is stored in bulk tanks until it is dispensed into a vehicle. CNG, on the other hand, requires expensive compression at each station to fill the high-pressure cylinder cascades.
Autogas
Liquefied petroleum gas (LPG) is a low-pressure liquefied gas mixture composed mainly of propane and butane which burns in conventional gasoline combustion engines with less CO2 than gasoline. Gasoline cars can be retrofitted to LPG (also known as autogas) and become bifuel vehicles, as the gasoline tank is not removed, allowing drivers to switch between LPG and gasoline during operation. An estimated 10 million such vehicles are running worldwide.
There are 24.9 million LPG-powered vehicles worldwide as of December 2013, led by Turkey with 3.93 million, South Korea (2.4 million), and Poland (2.75 million). In the U.S., 190,000 on-road vehicles use propane, and 450,000 forklifts use it for power. However, LPG fuel has been banned in Pakistan since December 2013, as it is considered a risk to public safety by OGRA.
Formic acid
Formic acid is used by converting it first to hydrogen, and using that in a hydrogen fuel cell. It can also be used directly in formic acid fuel cells. Formic acid is much easier to store than hydrogen.
Liquid nitrogen car
Liquid nitrogen (LN2) is a method of storing energy. Energy is used to liquefy air, from which LN2 is then produced by evaporation and distributed. The LN2 is exposed to ambient heat in the car and the resulting nitrogen gas can be used to power a piston or turbine engine. The maximum amount of energy that can be extracted from LN2 is 213 watt-hours per kg (W·h/kg) or 173 W·h per liter, of which a maximum of 70 W·h/kg can be utilized with an isothermal expansion process. Such a vehicle with a 350-liter (93 gallon) tank can achieve ranges similar to a gasoline-powered vehicle with a 50-liter (13 gallon) tank. Theoretical future engines, using cascading topping cycles, could improve this to around 110 W·h/kg with a quasi-isothermal expansion process. The advantages are zero harmful emissions and superior energy densities compared to a compressed-air vehicle, as well as the ability to refill the tank in a matter of minutes.
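A rough check of what the quoted figures imply for the 350-liter tank (a sketch; the LN2 density is an assumed literature value):

```python
LN2_DENSITY = 0.807      # kg/L near the boiling point (assumption)
USABLE_SPECIFIC = 70.0   # W·h/kg, isothermal limit quoted above
TANK_VOLUME = 350.0      # liters

mass_kg = TANK_VOLUME * LN2_DENSITY        # ~282 kg of LN2
print(mass_kg * USABLE_SPECIFIC / 1000.0)  # ~19.8 kWh usable on board
```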
Nuclear power
In principle, it is possible to build a vehicle powered by nuclear fission or nuclear decay. However, there are two major problems. First, the energy, which is released as heat and radiation, has to be converted into a form usable for propulsion. One possibility would be to use a steam turbine as in a nuclear power plant, but such a device would take up too much space; a more suitable way would be direct conversion into electricity, for example with thermoelements or thermionic devices. The second problem is that nuclear fission produces high levels of neutron and gamma radiation, which require extensive shielding that would make a vehicle too large for use on public roads. Studies along these lines were nonetheless made, such as the Ford Nucleon concept.
A better way for a nuclear-powered vehicle would be to use the power of radioactive decay in radioisotope thermoelectric generators, which are also very safe and reliable. The required shielding of these devices depends on the radionuclide used; plutonium-238, as a nearly pure alpha emitter, does not require much shielding.
As prices for suitable radionuclides are high and their power density is low (generating 1 watt with plutonium-238 requires about half a gram of it), this form of propulsion is too expensive for wide use. Radioisotope thermoelectric generators also pose an extreme danger in case of misuse, for example by terrorists, owing to their large content of highly radioactive material. The only vehicle in use that is driven by radioisotope thermoelectric generators is the Mars rover Curiosity.
Other forms of nuclear power, such as fusion and annihilation, are at present not available for vehicle propulsion: no working fusion reactor exists, and it is questionable whether one could ever be built at a size suitable for a road vehicle. Annihilation may perhaps work in some ways (see antimatter drive), but there is no existing technology to produce and store enough antimatter.
Pedal-assisted electric hybrid vehicle
In very small vehicles, the power demand decreases, so human power can be employed to make a significant improvement in battery life. Three such commercially made vehicles are the Sinclair C5, ELF and TWIKE.
Flywheels
Flywheels can also be used to store propulsion energy, and were used in the 1950s to propel buses in Switzerland, the so-called gyrobuses. The flywheel of the bus was spun up by electric power at the terminals of the line and allowed it to travel up to 8 kilometres on flywheel power alone. Flywheel-powered vehicles are quieter than vehicles with combustion engines, require no overhead wire and generate no exhaust, but the flywheel device has great weight (1.5 tons for 5 kWh) and requires special safety measures due to its high rotational speed.
Silanes
Silanes higher than heptasilane can be stored like gasoline and may also work as fuel. They have the advantage that they can burn with the nitrogen of the air as well, but their major disadvantages are their high price and their solid combustion products, which cause trouble in combustion engines.
Spring
The power of wound-up springs or twisted rubber cords can be used for the propulsion of small vehicles. However, this form of energy storage holds only small amounts of energy, unsuitable for propelling vehicles that transport people. Spring-powered vehicles are typically wind-up toys or mousetrap cars.
Steam
A steam car is a car that has a steam engine. Wood, coal, ethanol, or other fuels can be used. The fuel is burned in a boiler and the heat converts water into steam. When the water turns to steam, it expands, creating pressure. The pressure pushes the pistons back and forth, which turns the driveshaft to spin the wheels and move the car forward. It works like a coal-fuelled steam train or steam boat. The steam car was the next logical step in independent transport.
Steam cars take a long time to start, but some can reach speeds over 100 mph (161 km/h) eventually. The late model Doble steam cars could be brought to operational condition in less than 30 seconds, had high top speeds and fast acceleration, but were expensive to buy.
A steam engine uses external combustion, as opposed to internal combustion. Gasoline-powered cars are more efficient, at about 25–28% efficiency. In theory, a combined-cycle steam engine, in which the burning material is first used to drive a gas turbine, can reach 50% to 60% efficiency. However, practical examples of steam-engined cars work at only around 5–8% efficiency.
The best known and best selling steam-powered car was the Stanley Steamer. It used a compact fire-tube boiler under the hood to power a simple two-piston engine which was connected directly to the rear axle. Before Henry Ford introduced monthly payment financing with great success, cars were typically purchased outright. This is why the Stanley was kept simple; to keep the purchase price affordable.
Steam produced in refrigeration cycles can also be used by a turbine in other vehicle types to produce electricity, which can be employed in electric motors or stored in a battery.
Steam power can be combined with a standard oil-based engine to create a hybrid. Water is injected into the cylinder after the fuel is burned, when the piston is still superheated, often at temperatures of 1500 degrees or more. The water will instantly be vaporized into steam, taking advantage of the heat that would otherwise be wasted.
Wind
Wind-powered vehicles have been well known for a long time. They can be realized with sails similar to those used on ships, by using an onboard wind turbine, which drives the wheels directly or generates electricity for an electric motor, or can be pulled by a kite. Wind-powered land vehicles need an enormous clearance in height, especially when sails or kites are used, and are unsuitable in urban areas. They may also be difficult to steer.
Wind-powered vehicles are only used for recreational activities on beaches or other free areas.
Wood gas
Wood gas can be used to power cars with ordinary internal combustion engines if a wood gasifier is attached. This was quite popular during World War II in several European and Asian countries because the war prevented easy and cost-effective access to oil.
Herb Hartman of Woodward, Iowa currently drives a wood powered Cadillac. He claims to have attached the gasifier to the Cadillac for just $700. Hartman claims, "A full hopper will go about fifty miles depending on how you drive it," and he added that splitting the wood was "labor-intensive. That's the big drawback."
See also
Alternative Fuels Training Consortium
Alternatives to the automobile
Bi-fuel vehicle
Butanol fuel
Carbon-neutral fuel
Clean Cities
Engine control unit altering to optimize running on different fuels
Green vehicle
Fuel gas-powered scooter
Hydrogen vehicle
List of hybrid vehicles
Phase-out of fossil fuel vehicles
Renewable energy
Solar vehicle
The Hype about Hydrogen
Vehicle classification by propulsion system
Water-fuelled car
Wind-powered vehicle
References
External links
Cradle-to-Grave Lifecycle Analysis of U.S. Light-Duty Vehicle-Fuel Pathways: A Greenhouse Gas Emissions and Economic Assessment of Current (2015) and Future (2025–2030) Technologies (includes estimated cost of avoided GHG emissions from different AFV technologies), Argonne National Laboratory, June 2016.
Official website of the Alternative Fuels Data Center, Office of Energy Efficiency and Renewable Energy, United States Department of Energy
Transitions to Alternative Vehicles and Fuels, National Academy of Sciences (2013)
Vehicle technology
Green vehicles
Alternative fuels | Alternative fuel vehicle | [
"Engineering"
] | 9,731 | [
"Vehicle technology",
"Mechanical engineering by discipline"
] |
8,342,547 | https://en.wikipedia.org/wiki/Strontium%20sulfate | Strontium sulfate (SrSO4) is the sulfate salt of strontium. It is a white crystalline powder and occurs in nature as the mineral celestine. It is poorly soluble in water to the extent of 1 part in 8,800. It is more soluble in dilute HCl and nitric acid and appreciably soluble in alkali chloride solutions (e.g. sodium chloride).
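A back-of-the-envelope check of what "1 part in 8,800" implies (a sketch assuming this means 1 g of salt per 8,800 g of water and a molar mass of about 183.7 g/mol; the result is consistent with literature solubility products of roughly 3 × 10−7):

$$
s \approx \frac{(1000/8800)\ \mathrm{g\,L^{-1}}}{183.7\ \mathrm{g\,mol^{-1}}} \approx 6.2\times10^{-4}\ \mathrm{mol\,L^{-1}},
\qquad
K_{sp} = [\mathrm{Sr^{2+}}][\mathrm{SO_4^{2-}}] = s^{2} \approx 3.8\times10^{-7}.
$$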
Structure
Strontium sulfate is a polymeric material, isostructural with barium sulfate. Crystallized strontium sulfate is utilized by a small group of radiolarian protozoa, called the Acantharea, as a main constituent of their skeleton.
Applications and chemistry
Strontium sulfate is of interest as a naturally occurring precursor to other strontium compounds, which are more useful. In industry it is converted to the carbonate for use as ceramic precursor and the nitrate for use in pyrotechnics.
The low aqueous solubility of strontium sulfate can lead to scale formation in processes where these ions meet. For example, it can form on surfaces of equipment in underground oil wells depending on the groundwater conditions.
References
Strontium compounds
Sulfates
Pyrotechnic colorants | Strontium sulfate | [
"Chemistry"
] | 255 | [
"Sulfates",
"Salts"
] |
25,309,794 | https://en.wikipedia.org/wiki/Lycopodium%20powder | Lycopodium powder is a yellow-tan dust-like powder, consisting of the dry spores of clubmoss plants, or various fern relatives. When it is mixed with air, the spores are highly flammable and are used to create dust explosions as theatrical special effects. The powder was traditionally used in physics experiments to demonstrate phenomena such as Brownian motion.
Composition
The powder consists of the dry spores of clubmoss plants, or various fern relatives principally in the genera Lycopodium and Diphasiastrum. The preferred source species are Lycopodium clavatum (stag's horn clubmoss) and Diphasiastrum digitatum (common groundcedar), because these widespread and often locally abundant species are both prolific in their spore production and easy to collect.
Main uses
Today, the principal use of the powder is to create flashes or flames that are large and impressive but relatively easy to manage safely in magic acts and for cinema and theatrical special effects. Historically it was also used as a photographic flash powder. Both these uses rely on the same principle as a dust explosion, as the spores have a large surface area per unit of volume (a single spore's diameter is about 33 micrometers (μm)), and a high fat content.
It is also used in fireworks and explosives, fingerprint powders, as a covering for pills, and as an ice cream stabilizer.
Other uses
Lycopodium powder is also sometimes used as a lubricating dust on skin-contacting latex (natural rubber) goods, such as condoms and medical gloves.
In physics experiments and demonstrations, lycopodium powder can be used to make sound waves in air visible for observation and measurement, and to make a pattern of electrostatic charge visible. The powder is also highly hydrophobic; if the surface of a cup of water is coated with lycopodium powder, a finger or other object inserted straight into the cup will come out dusted with the powder but remain completely dry.
Because of the very small size of its particles, lycopodium powder can be used to demonstrate Brownian motion. A microscope slide, with or without a well, is prepared with a droplet of water, and a fine dusting of lycopodium powder is applied. Then, a cover-glass can be placed over the water and spore sample in order to reduce convection in the water by evaporation. Under several hundred diameters magnification, one will see in the microscope, when well focused upon individual lycopodium particles, that the spore particles "dance" randomly. This is in response to asymmetric collisional forces applied to the macroscopic (but still quite small) powder particle by microscopic water molecules in random thermal motion.
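A minimal random-walk sketch of the effect (purely illustrative; the step size is arbitrary, and each Gaussian "kick" stands in for the net impulse of many molecular collisions):

```python
import random

def brownian_path(n_steps: int, step: float = 1.0, seed: int = 0):
    """2-D random walk: a crude stand-in for a lycopodium particle
    being jostled by water molecules."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        x += rng.gauss(0.0, step)
        y += rng.gauss(0.0, step)
        path.append((x, y))
    return path

path = brownian_path(1000)
print(path[-1])  # the end point wanders; mean-square displacement grows with n_steps
```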
As a then-common laboratory supply, lycopodium powder was often used by inventors developing experimental prototypes. For example, Nicéphore Niépce used lycopodium powder in the fuel for one of the first internal combustion engines, the Pyréolophore, in about 1807, and Chester Carlson used lycopodium powder in 1938 in his early experiments to demonstrate xerography.
References
Pyrotechnic compositions
Powders | Lycopodium powder | [
"Physics",
"Chemistry"
] | 675 | [
"Materials",
"Powders",
"Pyrotechnic compositions",
"Matter"
] |
25,317,580 | https://en.wikipedia.org/wiki/Hypothiocyanite | Hypothiocyanite is the anion [OSCN]− and the conjugate base of hypothiocyanous acid (HOSCN). It belongs to the thiocyanates, as it contains the functional group SCN; it is formed when an oxygen atom is singly bonded to the thiocyanate group. Hypothiocyanous acid is a fairly weak acid; its acid dissociation constant (pKa) is 5.3.
Hypothiocyanite is formed by peroxidase catalysis of hydrogen peroxide and thiocyanate:
H2O2 + SCN− → OSCN− + H2O
As a bactericide
Hypothiocyanite occurs naturally in the antimicrobial immune system of the human respiratory tract, produced in a redox reaction catalyzed by the enzyme lactoperoxidase. It has been researched extensively for its potential as an alternative antibiotic, as it is harmless to human body cells while being cytotoxic to bacteria. The exact processes for making hypothiocyanite have been patented, because such an effective antimicrobial has many commercial applications.
Mechanism of action
Lactoperoxidase-catalysed reactions yield short-lived intermediary oxidation products of SCN−, providing antibacterial activity.
The major intermediary oxidation product is hypothiocyanite, OSCN−, which is produced in an amount of about 1 mole per mole of hydrogen peroxide. At the pH optimum of 5.3, OSCN− is in equilibrium with HOSCN. The uncharged HOSCN is considered the more bactericidal of the two forms. At pH 7, it has been estimated that HOSCN represents 2%, compared with 98% for OSCN−.
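The 98:2 split follows from the Henderson–Hasselbalch relation with the pKa of 5.3 quoted above:

$$
\frac{[\mathrm{OSCN^-}]}{[\mathrm{HOSCN}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_a} = 10^{\,7-5.3} \approx 50,
$$

so the neutral HOSCN accounts for about 1/(1 + 50) ≈ 2% of the total at pH 7.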
The action of OSCN− against bacteria is reported to be caused by sulfhydryl (SH) oxidation.
The oxidation of -SH groups in the bacterial cytoplasmic membrane results in loss of the ability to transport glucose and also in leakage of potassium ions, amino acids and peptides.
OSCN− has also been identified as an antimicrobial agent in milk, saliva, tears, and mucus.
OSCN− is considered as a safe product as it is not mutagenic.
Relation to cystic fibrosis
This particular lactoperoxidase-catalyzed compound was originally discovered while studying the weakened immune defenses of the respiratory tract of cystic fibrosis patients against bacterial infection.
Symptoms of cystic fibrosis include an inability to secrete sufficient quantities of SCN−, which results in a shortage of the necessary hypothiocyanite, leading to increased mucus viscosity, inflammation and bacterial infection in the respiratory tract.
Lactoferrin with hypothiocyanite has been granted orphan drug status by the EMEA and the FDA.
Naturally, the discovery correlated with studies exploring different methods seeking to further gain alternative antibiotics, understanding that most older antibiotics are decreasing in effectiveness against bacteria with antibiotic resistance.
OSCN−, which is not an antibiotic, has proven efficacy against superbugs including MRSA reference strains, BCC (the Burkholderia cepacia complex) and mucoid Pseudomonas aeruginosa.
Schema of the LPO/SCN−/H2O2 system in the human lung
Efficacy range
A non-exhaustive list of susceptible microorganisms follows.
Bacteria (Gram-positive and -negative)
Acinetobacter spp.
Aeromonas hydrophila
Bacillus brevis
Bacillus cereus
Bacillus megaterium
Bacillus subtilis
Burkholderia cepacia
Campylobacter jejuni
Capnocytophaga ochracea
Corynebacterium xerosis
Enterobacter cloacae
Escherichia coli
Haemophilus influenzae
Helicobacter pylori
Klebsiella oxytoca
Klebsiella pneumoniae
Legionella spp.
Listeria monocytogenes
Micrococcus luteus
Mycobacterium smegmatis
Mycobacterium abscessus
Neisseria spp.
Pseudomonas aeruginosa
Pseudomonas pyocyanea
Salmonella spp.
Selenomonas sputigena
Shigella sonnei
Staphylococcus aerogenes
Staphylococcus aureus
Streptococcus agalactiae
Streptococcus faecalis
Streptococcus mutans
Wolinella recta
Xanthomonas campestris
Yersinia enterocolitica
Viruses
Echovirus 11
Herpes simplex virus, HSV
Influenza virus
Human immunodeficiency virus, HIV
Respiratory syncytial virus, RSV
Yeasts and moulds
Aspergillus niger
Botryodiplodia theobromae
Byssochlamys fulva
Candida albicans
Colletotrichum gloeosporioides
Colletotrichum musae
Fusarium moniliforme
Fusarium oxysporum
Rhodotorula rubra
Sclerotinia spp.
See also
Respiratory tract antimicrobial defense system
References
Further reading
Anions
Thiocyanates
Chemical pathology
Sulfur ions | Hypothiocyanite | [
"Physics",
"Chemistry",
"Biology"
] | 1,105 | [
"Matter",
"Anions",
"Functional groups",
"Biochemistry",
"Thiocyanates",
"Chemical pathology",
"Sulfur ions",
"Ions"
] |
25,318,244 | https://en.wikipedia.org/wiki/DNA%20Repair%20and%20Mutagenesis | DNA Repair and Mutagenesis is a college-level textbook about DNA repair and mutagenesis written by Errol Friedberg, Graham Walker, Wolfram Siede, Richard D. Wood, and Roger Schultz. In its second edition as of 2009, DNA Repair and Mutagenesis contains over 1,000 pages, 10,000 references and 700 illustrations and has been described as "the most comprehensive book available in [the] field."
References
Biology books
DNA repair
2006 non-fiction books
2006 in biology | DNA Repair and Mutagenesis | [
"Biology"
] | 107 | [
"Molecular genetics",
"DNA repair",
"Cellular processes"
] |
23,962,835 | https://en.wikipedia.org/wiki/European%20chemical%20Substances%20Information%20System | The European chemical Substances Information System (ESIS) was a chemoinformatics database that stored information on chemicals of the European Union. It was created in 2003 by the former European Chemicals Bureau, which completed its mandate in 2008. ESIS was set up by the Joint Research Centre of the European Commission in order to make data on the safety of chemicals more readily accessible to the public, offering a single search tool on chemicals and the legislation under which they were covered. By October 3, 2013, ESIS contained 14,897 substance records.
ESIS provided access to several registers and lists, shown below:
EINECS (European Inventory of Existing Commercial chemical Substances)
NLP (No-Longer Polymers)
BPD (Biocidal Products Directive)
Export and Import of Dangerous Chemicals
The following databases were originally part of ESIS, but have been taken over by the European Chemicals Agency (ECHA), which will also ensure further updates:
ELINCS (European List of Notified Chemical Substances)
PBT (Persistent, bioaccumulative and toxic)
C&L (Classification and Labelling)
HPVCs (High Production Volume Chemicals) and LPVCs (Low Production Volume Chemicals)
IUCLID Chemical Data Sheets
Priority Lists, Risk Assessment process and tracking system
References
External links
Archived version — Note: ESIS information system has been discontinued (since 17 November 2014).
Cheminformatics
Government databases of the European Union
Regulation of chemicals in the European Union | European chemical Substances Information System | [
"Chemistry"
] | 301 | [
"Regulation of chemicals in the European Union",
"Regulation of chemicals",
"Computational chemistry",
"nan",
"Cheminformatics"
] |
23,967,473 | https://en.wikipedia.org/wiki/Fibrifold | In mathematics, a fibrifold is (roughly) a fiber space whose fibers and base spaces are orbifolds. They were introduced by John Conway, Olaf Delgado Friedrichs, Daniel Huson and William Thurston, who devised a system of notation for 3-dimensional fibrifolds and used it to assign names to the 219 affine space group types. 184 of these are considered reducible, and 35 irreducible.
Irreducible cubic space groups
The 35 irreducible space groups correspond to the cubic space group.
The irreducible group symbols (space groups 195−230) are given in Hermann–Mauguin notation, fibrifold notation, geometric notation, and Coxeter notation (table not reproduced).
References
Symmetry
Finite groups
Discrete groups | Fibrifold | [
"Physics",
"Mathematics"
] | 134 | [
"Mathematical structures",
"Finite groups",
"Algebraic structures",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
1,280,843 | https://en.wikipedia.org/wiki/Hilbert%27s%20syzygy%20theorem | In mathematics, Hilbert's syzygy theorem is one of the three fundamental theorems about polynomial rings over fields, first proved by David Hilbert in 1890, that were introduced for solving important open questions in invariant theory, and are at the basis of modern algebraic geometry. The two other theorems are Hilbert's basis theorem, which asserts that all ideals of polynomial rings over a field are finitely generated, and Hilbert's Nullstellensatz, which establishes a bijective correspondence between affine algebraic varieties and prime ideals of polynomial rings.
Hilbert's syzygy theorem concerns the relations, or syzygies in Hilbert's terminology, between the generators of an ideal, or, more generally, a module. As the relations form a module, one may consider the relations between the relations; the theorem asserts that, if one continues in this way, starting with a module over a polynomial ring in n indeterminates over a field, one eventually finds a zero module of relations, after at most n steps.
Hilbert's syzygy theorem is now considered to be an early result of homological algebra. It is the starting point of the use of homological methods in commutative algebra and algebraic geometry.
History
The syzygy theorem first appeared in Hilbert's seminal paper "Über die Theorie der algebraischen Formen" (1890). The paper is split into five parts: part I proves Hilbert's basis theorem over a field, while part II proves it over the integers. Part III contains the syzygy theorem (Theorem III), which is used in part IV to discuss the Hilbert polynomial. The last part, part V, proves finite generation of certain rings of invariants. Incidentally part III also contains a special case of the Hilbert–Burch theorem.
Syzygies (relations)
Originally, Hilbert defined syzygies for ideals in polynomial rings, but the concept generalizes trivially to (left) modules over any ring.
Given a generating set g1, ..., gk of a module M over a ring R, a relation or first syzygy between the generators is a k-tuple (a1, ..., ak) of elements of R such that
a1g1 + ⋯ + akgk = 0.
Let L0 be a free module with basis (G1, ..., Gk). The k-tuple (a1, ..., ak) may be identified with the element
a1G1 + ⋯ + akGk,
and the relations form the kernel R1 of the linear map L0 → M defined by Gi ↦ gi. In other words, one has an exact sequence
0 → R1 → L0 → M → 0.
This first syzygy module R1 depends on the choice of a generating set, but, if S1 is the module that is obtained with another generating set, there exist two free modules F1 and F2 such that
R1 ⊕ F1 ≅ S1 ⊕ F2,
where ⊕ denotes the direct sum of modules.
The second syzygy module is the module of the relations between generators of the first syzygy module. By continuing in this way, one may define the kth syzygy module for every positive integer k.
If the kth syzygy module is free for some k, then by taking a basis as a generating set, the next syzygy module (and every subsequent one) is the zero module. If one does not take a basis as a generating set, then all subsequent syzygy modules are free.
Let d be the smallest integer, if any, such that the dth syzygy module of a module M is free or projective. The above property of invariance, up to direct sum with free modules, implies that d does not depend on the choice of generating sets. The projective dimension of M is this integer, if it exists, or ∞ if not. This is equivalent with the existence of an exact sequence
0 → Rd → Ld−1 → ⋯ → L0 → M → 0,
where the modules Li are free and Rd is projective. It can be shown that one may always choose the generating sets so that Rd is free, that is, for the above exact sequence to be a free resolution.
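As an illustration (not from the source), the following Python/SymPy snippet checks that a given tuple is a first syzygy of a generating set, here for the ideal generated by x² and xy in k[x, y]; the specific generators are an assumed toy example:

```python
from sympy import symbols, expand

x, y = symbols("x y")

g = [x**2, x*y]   # generators of an ideal in k[x, y]
a = [y, -x]       # candidate relation (a1, a2)

# a is a syzygy exactly when a1*g1 + a2*g2 = 0
combination = expand(sum(ai * gi for ai, gi in zip(a, g)))
print(combination)  # prints 0, so (y, -x) is a first syzygy of (x**2, x*y)
```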
Statement
Hilbert's syzygy theorem states that, if M is a finitely generated module over a polynomial ring R = k[x1, ..., xn] in n indeterminates over a field k, then the nth syzygy module of M is always a free module.
In modern language, this implies that the projective dimension of M is at most n, and thus that there exists a free resolution
0 → Lm → ⋯ → L1 → L0 → M → 0
of length m ≤ n.
This upper bound on the projective dimension is sharp, that is, there are modules of projective dimension exactly n. The standard example is the field k, which may be considered as an R-module by setting xi·v = 0 for every i and every v ∈ k. For this module, the nth syzygy module is free, but not the (n − 1)th one (for a proof, see the Koszul complex section below).
The theorem is also true for modules that are not finitely generated. As the global dimension of a ring is the supremum of the projective dimensions of all modules, Hilbert's syzygy theorem may be restated as: the global dimension of k[x1, ..., xn] is n.
Low dimension
In the case of zero indeterminates, Hilbert's syzygy theorem is simply the fact that every finitely generated vector space has a basis.
In the case of a single indeterminate, Hilbert's syzygy theorem is an instance of the theorem asserting that over a principal ideal ring, every submodule of a free module is itself free.
Koszul complex
The Koszul complex, also called "complex of exterior algebra", allows, in some cases, an explicit description of all syzygy modules.
Let g1, ..., gk be a generating system of an ideal I in a polynomial ring R = k[x1, ..., xn], and let L be a free module of basis (G1, ..., Gk). The exterior algebra of L is the direct sum
Λ(L) = Λ0(L) ⊕ Λ1(L) ⊕ ⋯ ⊕ Λk(L),
where Λt(L) is the free module which has, as a basis, the exterior products
Gi1 ∧ ⋯ ∧ Git
such that i1 < i2 < ⋯ < it. In particular, one has Λ0(L) = R (because of the definition of the empty product), Λ1(L) = L (so the two definitions of L coincide), and Λt(L) = 0 for t > k. For every positive integer t, one may define a linear map Λt(L) → Λt−1(L) by
Gi1 ∧ ⋯ ∧ Git ↦ Σj (−1)^(j+1) gij · Gi1 ∧ ⋯ ∧ Ĝij ∧ ⋯ ∧ Git,
where the hat means that the factor is omitted. A straightforward computation shows that the composition of two consecutive such maps is zero, and thus that one has a complex
0 → Λk(L) → Λk−1(L) → ⋯ → Λ1(L) → Λ0(L).
This is the Koszul complex. In general the Koszul complex is not an exact sequence, but it is an exact sequence if one works with a polynomial ring and an ideal generated by a regular sequence of homogeneous polynomials.
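A hedged sketch (not from the source) of the claim that consecutive Koszul maps compose to zero, for the regular sequence (x, y, z) in k[x, y, z]; the matrices below are the differentials written in the monomial bases described above:

```python
from sympy import Matrix, symbols

x, y, z = symbols("x y z")

d1 = Matrix([[x, y, z]])          # Lambda^1 -> Lambda^0 = R, sends Gi to gi
d2 = Matrix([[-y, -z,  0],        # Lambda^2 -> Lambda^1; columns are the
             [ x,  0, -z],        # images of G1^G2, G1^G3, G2^G3
             [ 0,  x,  y]])
d3 = Matrix([[ z], [-y], [ x]])   # Lambda^3 -> Lambda^2, image of G1^G2^G3

print((d1 * d2).expand())         # Matrix([[0, 0, 0]])
print((d2 * d3).expand())         # Matrix([[0], [0], [0]])
```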
In particular, the sequence x1, ..., xn is regular, and the Koszul complex is thus a projective resolution of k = R/⟨x1, ..., xn⟩. In this case, the nth syzygy module is free of dimension one (generated by the product of all Gi); the (n − 1)th syzygy module is thus the quotient of a free module of dimension n by the submodule generated by (x1, −x2, ..., ±xn). This quotient may not be a projective module, as otherwise, there would exist polynomials pi such that p1x1 + ⋯ + pnxn = 1, which is impossible (substituting 0 for the xi in the latter equality provides 1 = 0). This proves that the projective dimension of k is exactly n.
The same proof applies for proving that the projective dimension of R/⟨g1, ..., gk⟩ is exactly k if the gi form a regular sequence of homogeneous polynomials.
Computation
At Hilbert's time, there was no method available for computing syzygies. It was only known that an algorithm may be deduced from any upper bound of the degree of the generators of the module of syzygies. In fact, the coefficients of the syzygies are unknown polynomials. If the degree of these polynomials is bounded, the number of their monomials is also bounded. Expressing that one has a syzygy provides a system of linear equations whose unknowns are the coefficients of these monomials. Therefore, any algorithm for linear systems implies an algorithm for syzygies, as soon as a bound of the degrees is known.
The first bound for syzygies (as well as for the ideal membership problem) was given in 1926 by Grete Hermann: Let M be a submodule of a free module L of dimension t over k[x1, ..., xn]; if the coefficients over a basis of L of a generating system of M have a total degree at most d, then there is a constant c such that the degrees occurring in a generating system of the first syzygy module are at most (td)^(2^(cn)). The same bound applies for testing the membership to M of an element of L.
On the other hand, there are examples where a double exponential degree necessarily occurs. However, such examples are extremely rare, and this raises the question of an algorithm that is efficient when the output is not too large. At the present time, the best algorithms for computing syzygies are Gröbner basis algorithms. They allow the computation of the first syzygy module, and also, with almost no extra cost, all syzygy modules.
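For illustration (not from the source), SymPy can carry out the Gröbner-basis step that modern syzygy computations start from; extracting the syzygy module itself from the reduction steps is left to dedicated computer algebra systems, and the ideal below is an assumed toy example:

```python
from sympy import groebner, symbols

x, y, z = symbols("x y z")

# Groebner basis of the ideal (x*y - z, x*z - y**2) in lex order
G = groebner([x*y - z, x*z - y**2], x, y, z, order="lex")
print(G)
```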
Syzygies and regularity
One might wonder which ring-theoretic property of A = k[x1, ..., xn] causes the Hilbert syzygy theorem to hold. It turns out that
this is regularity, which is an algebraic formulation of the fact that affine n-space is a variety without singularities. In fact the following generalization holds: Let A be a Noetherian ring. Then A has finite global dimension if and only if A is regular and the Krull dimension of A is finite; in that case the global dimension of A is equal to the Krull dimension. This result may be proven using Serre's theorem on regular local rings.
See also
Quillen–Suslin theorem
Hilbert series and Hilbert polynomial
References
David Eisenbud, Commutative algebra. With a view toward algebraic geometry. Graduate Texts in Mathematics, 150. Springer-Verlag, New York, 1995. xvi+785 pp. ;
Commutative algebra
Homological algebra
Invariant theory
Theorems in ring theory | Hilbert's syzygy theorem | [
"Physics",
"Mathematics"
] | 1,856 | [
"Symmetry",
"Mathematical structures",
"Group actions",
"Fields of abstract algebra",
"Category theory",
"Commutative algebra",
"Invariant theory",
"Homological algebra"
] |
1,281,160 | https://en.wikipedia.org/wiki/Resistance%20thermometer | Resistance thermometers, also called resistance temperature detectors (RTDs), are sensors used to measure temperature. Many RTD elements consist of a length of fine wire wrapped around a heat-resistant ceramic or glass core but other constructions are also used. The RTD wire is a pure material, typically platinum (Pt), nickel (Ni), or copper (Cu). The material has an accurate resistance/temperature relationship which is used to provide an indication of temperature. As RTD elements are fragile, they are often housed in protective probes.
RTDs, which have higher accuracy and repeatability, are slowly replacing thermocouples in industrial applications below 600 °C.
Resistance/temperature relationship of metals
Common RTD sensing elements for biomedical application constructed of platinum (Pt), nickel (Ni), or copper (Cu) have a repeatable, resistance versus temperature relationship (R vs T) and operating temperature range. The R vs T relationship is defined as the amount of resistance change of the sensor per degree of temperature change. The relative change in resistance (temperature coefficient of resistance) varies only slightly over the useful range of the sensor.
Platinum was proposed by Sir William Siemens as an element for a resistance temperature detector at the Bakerian lecture in 1871: it is a noble metal and has the most stable resistance–temperature relationship over the largest temperature range. Nickel elements have a limited temperature range because the temperature coefficient of resistance changes at temperatures over 300 °C (572 °F). Copper has a very linear resistance–temperature relationship; however, copper oxidizes at moderate temperatures and cannot be used over 150 °C (302 °F).
The significant characteristic of metals used as resistive elements is the linear approximation of the resistance versus temperature relationship between 0 and 100 °C. This temperature coefficient of resistance is denoted by α and is usually given in units of Ω/(Ω·°C):
α = (R100 − R0) / (100 °C × R0),
where
R0 is the resistance of the sensor at 0 °C,
R100 is the resistance of the sensor at 100 °C.
Pure platinum has α = 0.003925 Ω/(Ω·°C) in the 0 to 100 °C range and is used in the construction of laboratory-grade RTDs. Conversely, two widely recognized standards for industrial RTDs IEC 60751 and ASTM E-1137 specify α = 0.00385 Ω/(Ω·°C). Before these standards were widely adopted, several different α values were used. It is still possible to find older probes that are made with platinum that have α = 0.003916 Ω/(Ω·°C) and 0.003902 Ω/(Ω·°C).
These different α values for platinum are achieved by doping – carefully introducing impurities, which become embedded in the lattice structure of the platinum and result in a different R vs. T curve and hence α value.
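A minimal sketch (not from the source) of the linear approximation implied by α for a Pt100; the IEC 60751 value α = 0.00385 reproduces the standard 138.5 Ω reading at 100 °C:

```python
R0 = 100.0       # ohms at 0 degC (Pt100)
ALPHA = 0.00385  # ohm/(ohm*degC), IEC 60751 / ASTM E-1137 value

def resistance_linear(t_c: float) -> float:
    """Linear approximation R(T) = R0*(1 + alpha*T), valid over 0-100 degC."""
    return R0 * (1.0 + ALPHA * t_c)

print(resistance_linear(100.0))  # 138.5 ohms
```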
Calibration
To characterize the R vs T relationship of any RTD over a temperature range that represents the planned range of use, calibration must be performed at temperatures other than 0 °C and 100 °C. This is necessary to meet calibration requirements. Although RTDs are considered to be linear in operation, it must be proven that they are accurate with regard to the temperatures with which they will actually be used (see details in Comparison calibration option). Two common calibration methods are the fixed-point method and the comparison method.
Fixed-point calibration is used for the highest-accuracy calibrations by national metrology laboratories. It uses the triple point, freezing point or melting point of pure substances such as water, zinc, tin, and argon to generate a known and repeatable temperature. These cells allow the user to reproduce actual conditions of the ITS-90 temperature scale. Fixed-point calibrations provide extremely accurate calibrations (within ±0.001 °C). A common fixed-point calibration method for industrial-grade probes is the ice bath. The equipment is inexpensive, easy to use, and can accommodate several sensors at once. The ice point is designated as a secondary standard because its accuracy is ±0.005 °C (±0.009 °F), compared to ±0.001 °C (±0.0018 °F) for primary fixed points.
Comparison calibration is commonly used with secondary standard platinum resistance thermometers and industrial RTDs. The thermometers being calibrated are compared to calibrated thermometers by means of a bath whose temperature is uniformly stable. Unlike fixed-point calibrations, comparisons can be made at any temperature between −100 °C and 500 °C (−148 °F to 932 °F). This method might be more cost-effective, since several sensors can be calibrated simultaneously with automated equipment. These electrically heated and well-stirred baths use silicone oils and molten salts as the medium for the various calibration temperatures.
Element types
The three main categories of RTD sensors are thin-film, wire-wound, and coiled elements. While these types are the ones most widely used in industry, other more exotic shapes are used; for example, carbon resistors are used at ultra-low temperatures (−273 °C to −173 °C).
Carbon resistor elements are cheap and widely used. They have very reproducible results at low temperatures. They are the most reliable over extremely wide range of temperatures. They generally do not suffer from significant hysteresis or strain gauge effects.
Strain-free elements use a wire coil minimally supported within a sealed housing filled with an inert gas. These sensors work up to high temperatures and are used in the SPRTs that define ITS-90. They consist of platinum wire loosely coiled over a support structure, so the element is free to expand and contract with temperature. They are very susceptible to shock and vibration, as the loops of platinum can sway back and forth, causing deformation.
Thin-film elements have a sensing element that is formed by depositing a very thin layer of resistive material, normally platinum, on a ceramic substrate (plating). This layer is usually just 10 to 100 ångströms (1 to 10 nanometers) thick. This film is then coated with an epoxy or glass that helps protect the deposited film and also acts as a strain relief for the external lead wires. Disadvantages of this type are that they are not as stable as their wire-wound or coiled counterparts. They also can only be used over a limited temperature range, due to the different expansion rates of the substrate and the deposited resistive material giving a "strain gauge" effect that can be seen in the resistive temperature coefficient. These elements work over a limited temperature range without further packaging, can operate at higher temperatures when suitably encapsulated in glass or ceramic, and special high-temperature RTD elements can be used at still higher temperatures with the right encapsulation.
Wire-wound elements can have greater accuracy, especially for wide temperature ranges. The coil diameter provides a compromise between mechanical stability and allowing expansion of the wire to minimize strain and consequential drift. The sensing wire is wrapped around an insulating mandrel or core. The winding core can be round or flat, but must be an electrical insulator. The coefficient of thermal expansion of the winding core material is matched to the sensing wire to minimize any mechanical strain. This strain on the element wire will result in a thermal measurement error. The sensing wire is connected to a larger wire, usually referred to as the element lead or wire. This wire is selected to be compatible with the sensing wire, so that the combination does not generate an emf that would distort the thermal measurement. These elements work with temperatures to 660 °C.
Coiled elements have largely replaced wire-wound elements in industry. This design has a wire coil that can expand freely over temperature, held in place by some mechanical support, which lets the coil keep its shape. This “strain free” design allows the sensing wire to expand and contract free of influence from other materials; in this respect it is similar to the SPRT, the primary standard upon which ITS-90 is based, while providing the durability necessary for industrial use. The basis of the sensing element is a small coil of platinum sensing wire. This coil resembles a filament in an incandescent light bulb. The housing or mandrel is a hard fired ceramic oxide tube with equally spaced bores that run transverse to the axes. The coil is inserted in the bores of the mandrel and then packed with a very finely ground ceramic powder. This permits the sensing wire to move, while still remaining in good thermal contact with the process. These elements work with temperatures to 850 °C.
The current international standard that specifies tolerance and the temperature-to-electrical resistance relationship for platinum resistance thermometers (PRTs) is IEC 60751:2008; ASTM E1137 is also used in the United States. By far the most common devices used in industry have a nominal resistance of 100 ohms at 0 °C and are called Pt100 sensors ("Pt" is the symbol for platinum, "100" for the resistance in ohms at 0 °C). It is also possible to get Pt1000 sensors, where 1000 is for the resistance in ohms at 0 °C. The sensitivity of a standard 100 Ω sensor is a nominal 0.385 Ω/°C. RTDs with a sensitivity of 0.375 and 0.392 Ω/°C, as well as a variety of others, are also available.
Function
Resistance thermometers are constructed in a number of forms and offer greater stability, accuracy and repeatability in some cases than thermocouples. While thermocouples use the Seebeck effect to generate a voltage, resistance thermometers use electrical resistance and require a power source to operate. The resistance ideally varies nearly linearly with temperature per the Callendar–Van Dusen equation.
The platinum detecting wire needs to be kept free of contamination to remain stable. A platinum wire or film is supported on a former in such a way that it gets minimal differential expansion or other strains from its former, yet is reasonably resistant to vibration. RTD assemblies made from iron or copper are also used in some applications. Commercial platinum grades exhibit a temperature coefficient of resistance 0.00385/°C (0.385%/°C) (European Fundamental Interval). The sensor is usually made to have a resistance of 100 Ω at 0 °C. This is defined in BS EN 60751:1996 (taken from IEC 60751:1995). The American Fundamental Interval is 0.00392/°C, based on using a purer grade of platinum than the European standard. The American standard is from the Scientific Apparatus Manufacturers Association (SAMA), who are no longer in this standards field. As a result, the "American standard" is hardly the standard even in the US.
Lead-wire resistance can also be a factor; adopting three- and four-wire, instead of two-wire, connections can eliminate connection-lead resistance effects from measurements (see below); three-wire connection is sufficient for most purposes and is an almost universal industrial practice. Four-wire connections are used for the most precise applications.
Advantages and limitations
The advantages of platinum resistance thermometers include:
High accuracy
Low drift
Wide operating range
Suitability for precision applications.
Limitations:
RTDs in industrial applications are rarely used above 660 °C. At temperatures above 660 °C it becomes increasingly difficult to prevent the platinum from becoming contaminated by impurities from the metal sheath of the thermometer. This is why laboratory standard thermometers replace the metal sheath with a glass construction. At very low temperatures, say below −270 °C (3 K), because there are very few phonons, the resistance of an RTD is mainly determined by impurities and boundary scattering and thus basically independent of temperature. As a result, the sensitivity of the RTD is essentially zero and therefore not useful.
Compared to thermistors, platinum RTDs are less sensitive to small temperature changes and have a slower response time. However, thermistors have a smaller temperature range and poorer stability.
RTDs vs thermocouples
The two most common ways of measuring temperatures for industrial applications are with resistance temperature detectors (RTDs) and thermocouples. The choice between them is typically determined by four factors.
Temperature If process temperatures are within the RTD operating range, an industrial RTD is the preferred option. Thermocouples cover a wider range, so for temperatures above the RTD range the thermocouple is the contact temperature measurement device commonly found in physics laboratories.
Response time If the process requires a very fast response to temperature changes (fractions of a second as opposed to seconds), then a thermocouple is the best choice. Time response is measured by immersing the sensor in moving water and timing the response to a 63.2% step change.
Size Sheath diameters for thermocouples can be smaller than those of a standard RTD sheath.
Accuracy and stability requirements If a tolerance of 2 °C is acceptable and the highest level of repeatability is not required, a thermocouple will serve. RTDs are capable of higher accuracy and can maintain stability for many years, while thermocouples can drift within the first few hours of use.
Construction
These elements nearly always require insulated leads attached. PVC, silicone rubber or PTFE insulators are used at temperatures below about 250 °C. Above this, glass fibre or ceramic are used. The measuring point, and usually most of the leads, require a housing or protective sleeve, often made of a metal alloy that is chemically inert to the process being monitored. Selecting and designing protection sheaths can require more care than the actual sensor, as the sheath must withstand chemical or physical attack and provide convenient attachment points.
The RTD construction design may be enhanced to handle shock and vibration by including compacted magnesium oxide (MgO) powder inside the sheath. MgO is used to isolate the conductors from the external sheath and from each other. MgO is used due to its dielectric constant, rounded grain structure, high-temperature capability, and its chemical inertness.
Wiring configurations
Two-wire configuration
The simplest resistance-thermometer configuration uses two wires. It is only used when high accuracy is not required, as the resistance of the connecting wires is added to that of the sensor, leading to errors of measurement. This configuration allows use of 100 meters of cable. This applies equally to balanced bridge and fixed bridge system.
For a balanced bridge the usual setting is with R2 = R1, and R3 around the middle of the range of the RTD. So for example, if we are going to measure between 0 and 100 °C, the RTD resistance will range from 100 Ω to 138.5 Ω. We would choose R3 = 120 Ω. In that way we get a small measured voltage in the bridge.
Three-wire configuration
In order to minimize the effects of the lead resistances, a three-wire configuration can be used. The suggested setting for the configuration shown is with R1 = R2, and R3 around the middle of the range of the RTD. Looking at the Wheatstone bridge circuit shown, the voltage drop on the lower left hand side is V_rtd + V_lead, and on the lower right hand side is V_R3 + V_lead, therefore the bridge voltage (V_b) is the difference, V_rtd − V_R3. The voltage drop due to the lead resistance has been cancelled out. This always applies if R1 = R2, and R1, R2 >> RTD, R3. R1 and R2 can serve to limit the current through the RTD; for example, for a Pt100, limiting the current to 1 mA at 5 V would suggest a limiting resistance of approximately R1 = R2 = 5/0.001 = 5,000 Ohms.
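A hedged numerical sketch of the cancellation argument above; the supply voltage and resistor values are illustrative assumptions, not values from the source:

```python
def bridge_voltage(v_s, r1, r2, r3, r_rtd, r_lead):
    """Wheatstone bridge with one lead resistance in series with each lower arm."""
    v_left = v_s * (r_rtd + r_lead) / (r1 + r_rtd + r_lead)
    v_right = v_s * (r3 + r_lead) / (r2 + r3 + r_lead)
    return v_left - v_right

for r_lead in (0.0, 1.0, 5.0):
    vb = bridge_voltage(5.0, 5000.0, 5000.0, 120.0, 138.5, r_lead)
    print(f"lead {r_lead:3.1f} ohm -> bridge voltage {vb*1000:.4f} mV")
# With R1 = R2 >> RTD, R3, a 5 ohm lead shifts the output by only ~0.03 mV,
# so the lead-resistance contribution is (nearly) cancelled.
```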
Four-wire configuration
The four-wire resistance configuration increases the accuracy of measurement of resistance. Four-terminal sensing eliminates voltage drop in the measuring leads as a contribution to error. To increase accuracy further, any residual thermoelectric voltages generated by different wire types or screwed connections are eliminated by reversal of the direction of the 1 mA current and the leads to the DVM (digital voltmeter). The thermoelectric voltages will be produced in one direction only. By averaging the reversed measurements, the thermoelectric error voltages are cancelled out.
Classifications of RTDs
The highest-accuracy PRTs of all are the Ultra Precise Platinum Resistance Thermometers (UPRTs). This accuracy is achieved at the expense of durability and cost. The UPRT elements are wound from reference-grade platinum wire. Internal lead wires are usually made from platinum, while internal supports are made from quartz or fused silica. The sheaths are usually made from quartz or sometimes Inconel, depending on temperature range. Larger-diameter platinum wire is used, which drives up the cost and results in a lower resistance for the probe (typically 25.5 Ω). UPRTs have a wide temperature range (−200 °C to 1000 °C) and are accurate to approximately ±0.001 °C over the temperature range. UPRTs are only appropriate for laboratory use.
Another classification of laboratory PRTs is Standard Platinum Resistance Thermometers (SPRTs). They are constructed like the UPRT, but the materials are more cost-effective. SPRTs commonly use reference-grade, high-purity smaller-diameter platinum wire, metal sheaths and ceramic-type insulators. Internal lead wires are usually a nickel-based alloy. Standard PRTs are more limited in temperature range (−200 °C to 500 °C) and are accurate to approximately ±0.03 °C over the temperature range.
Industrial PRTs are designed to withstand industrial environments. They can be almost as durable as a thermocouple. Depending on the application, industrial PRTs can use thin-film or coil-wound elements. The internal lead wires can range from PTFE-insulated stranded nickel-plated copper to silver wire, depending on the sensor size and application. Sheath material is typically stainless steel; higher-temperature applications may demand Inconel. Other materials are used for specialized applications.
History
Contemporary with the Seebeck effect, the discovery that the resistivity of metals depends on temperature was announced in 1821 by Sir Humphry Davy.
The practical application of the tendency of electrical conductors to increase their electrical resistance with rising temperature was first described by Sir William Siemens at the Bakerian Lecture of 1871 before the Royal Society of Great Britain, suggesting platinum as a suitable element. The necessary methods of construction were established by Callendar, Griffiths, Holborn and Wein between 1885 and 1900.
In 1871 Carl Wilhelm Siemens invented the Platinum Resistance Temperature Detector and presented a three-term interpolation formula. Siemens’ RTD rapidly fell out of favour due to the instability of the temperature reading. Hugh Longbourne Callendar developed the first commercially successful platinum RTD in 1885.
A 1971 paper by Eriksson, Keuther, and Glatzel identified six noble metal alloys (63Pt37Rh, 37Pd63Rh, 26Pt74Ir, 10Pd90Ir, 34Pt66Au, 14Pd86Au) with approximately linear resistance temperature characteristics. The alloy 63Pt37Rh is similar to the readily available 70Pt30Rh alloy wire used in thermocouples.
The Space Shuttle made extensive use of platinum resistance thermometers. The only in-flight shutdown of a Space Shuttle Main Engine – mission STS-51F – was caused by multiple failures of RTDs which had become brittle and unreliable due to multiple heat-and-cool cycles. (The failures of the sensors falsely suggested that a fuel pump was critically overheating, and the engine was automatically shut down.) Following the engine failure incident, the RTDs were replaced with thermocouples.
Standard resistance thermometer data
Temperature sensors are usually supplied with thin-film elements. The resistance elements are rated in accordance with BS EN 60751:2008 (tolerance class table not reproduced).
Resistance-thermometer elements functioning up to 1000 °C can be supplied. The relation between temperature and resistance is given by the Callendar–Van Dusen equation:
RT = R0[1 + AT + BT² + C(T − 100 °C)T³] for T < 0 °C,
RT = R0[1 + AT + BT²] for T ≥ 0 °C.
Here RT is the resistance at temperature T, R0 is the resistance at 0 °C, and the constants (for an α = 0.00385 platinum RTD) are:
A = 3.9083 × 10⁻³ °C⁻¹
B = −5.775 × 10⁻⁷ °C⁻²
C = −4.183 × 10⁻¹² °C⁻⁴
Since the B and C coefficients are relatively small, the resistance changes almost linearly with the temperature.
For positive temperatures, solution of the quadratic equation yields the following relationship between temperature and resistance:
T = (−A + √(A² − 4B(1 − RT/R0))) / (2B).
Then for a four-wire configuration with a 1 mA precision current source, the relationship between temperature and measured voltage V is
T = (−A + √(A² − 4B(1 − V/(R0 · 1 mA)))) / (2B).
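A minimal Python sketch (not from the source) of the conversions above for an α = 0.00385 Pt100, using the IEC 60751 coefficients quoted in the text; valid for temperatures at or above 0 °C, where the C term drops out:

```python
R0 = 100.0     # ohms at 0 degC
A = 3.9083e-3  # 1/degC
B = -5.775e-7  # 1/degC^2

def r_from_t(t_c: float) -> float:
    """Callendar-Van Dusen resistance for t_c >= 0 degC."""
    return R0 * (1.0 + A * t_c + B * t_c * t_c)

def t_from_r(r: float) -> float:
    """Quadratic inversion quoted above."""
    return (-A + (A * A - 4.0 * B * (1.0 - r / R0)) ** 0.5) / (2.0 * B)

def t_from_v(v: float, i: float = 1e-3) -> float:
    """Four-wire readout: the 1 mA source makes R = V / I."""
    return t_from_r(v / i)

print(r_from_t(100.0))   # ~138.51 ohms
print(t_from_r(138.5))   # ~100.0 degC
print(t_from_v(0.1385))  # same, from 138.5 mV measured at 1 mA
```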
Temperature-dependent resistances for various popular resistance thermometers
See also
Thermowell
Thermistor
Thermostat
Thermocouple
Notes
References
Sensors
Resistive components
Thermometers | Resistance thermometer | [
"Physics",
"Technology",
"Engineering"
] | 4,319 | [
"Physical quantities",
"Measuring instruments",
"Resistive components",
"Thermometers",
"Sensors",
"Electrical resistance and conductance"
] |
1,281,863 | https://en.wikipedia.org/wiki/Tantalum%20carbide | Tantalum carbides (TaC) form a family of binary chemical compounds of tantalum and carbon with the empirical formula TaCx, where x usually varies between 0.4 and 1. They are extremely hard, brittle, refractory ceramic materials with metallic electrical conductivity. They appear as brown-gray powders, which are usually processed by sintering.
Being important cermet materials, tantalum carbides are commercially used in tool bits for cutting applications and are sometimes added to tungsten carbide alloys.
The melting points of tantalum carbides were previously estimated to be among the highest of all binary compounds, depending on the purity and measurement conditions; only tantalum hafnium carbide was estimated to have a higher melting point. However, new tests have conclusively proven that TaC actually has a melting point of 3,768 °C and that both tantalum hafnium carbide and hafnium carbide have higher melting points.
Preparation
TaCx powders of desired composition are prepared by heating a mixture of tantalum and graphite powders in vacuum or an inert-gas atmosphere (argon). The heating is performed at high temperature using a furnace or an arc-melting setup. An alternative technique is reduction of tantalum pentoxide by carbon in vacuum or a hydrogen atmosphere at high temperature. This method was used to obtain tantalum carbide in 1876, but it lacks control over the stoichiometry of the product. Production of TaC directly from the elements has been reported through self-propagating high-temperature synthesis.
Crystal structure
TaCx compounds have a cubic (rock-salt) crystal structure for x = 0.7–1.0; the lattice parameter increases with x. TaC0.5 has two major crystalline forms. The more stable one has an anti-cadmium iodide-type trigonal structure, which transforms upon heating to about 2,000 °C into a hexagonal lattice with no long-range order for the carbon atoms.
In the crystallographic data for these phases, Z is the number of formula units per unit cell and ρ is the density calculated from lattice parameters (table not reproduced).
Properties
The bonding between tantalum and carbon atoms in tantalum carbides is a complex mixture of ionic, metallic and covalent contributions, and because of the strong covalent component, these carbides are very hard and brittle materials. For example, TaC has a microhardness of 1,600–2,000 kg/mm2 (~9 Mohs) and an elastic modulus of 285 GPa, whereas the corresponding values for tantalum are 110 kg/mm2 and 186 GPa.
Tantalum carbides have metallic electrical conductivity, both in terms of its magnitude and temperature dependence. TaC is a superconductor with a relatively high transition temperature of TC = 10.35 K.
The magnetic properties of TaCx change from diamagnetic for x ≤ 0.9 to paramagnetic at larger x. An inverse behavior (para-diamagnetic transition with increasing x) is observed for HfCx, despite that it has the same crystal structure as TaCx.
Application
Tantalum carbide is widely used as sintering additive in ultra-high temperature ceramics (UHTCs) or as a ceramic reinforcement in high-entropy alloys (HEAs) due to its excellent physical properties in melting point, hardness, elastic modulus, thermal conductivity, thermal shock resistance, and chemical stability, which makes it a desirable material for aircraft and rockets in aerospace industries.
Wang et al. have synthesized SiBCN ceramic matrix with TaC addition by mechanical alloying plus reactive hot-pressing sintering methods, in which BN, graphite and TaC powders were mixed by ball-milling and sintered to obtain SiBCN-TaC composites. For the synthesis, the ball-milling process refined the TaC powders down to 5 nm without reacting with other components, allowing the formation of agglomerates composed of spherical clusters with a diameter of 100 nm-200 nm. TEM analysis showed that TaC is distributed either randomly in the form of nanoparticles with sizes of 10-20 nm within the matrix or distributed in BN with a smaller size of 3-5 nm. As a result, the composite with 10 wt% addition of TaC improved the fracture toughness of the matrix, reaching 399.5 MPa compared to 127.9 MPa for pristine SiBCN ceramics. This is mainly due to the mismatch of thermal expansion coefficients between TaC and the SiBCN ceramic matrix. Since TaC has a larger coefficient of thermal expansion than the SiBCN matrix, TaC particles endure tensile stress while the matrix endures tensile stress in the radial direction and compressive stress in the tangential direction. This makes cracks bypass the particles and absorbs some energy, achieving toughening. In addition, the uniform distribution of TaC particles contributes to the yield stress, as described by the Hall-Petch relationship, due to a decrease in grain size.
Wei et al. have synthesized a novel refractory MoNbRe0.5W(TaC)x HEA matrix using vacuum arc melting. XRD patterns showed that the resulting material is mainly composed of a single BCC crystal structure in the base alloy MoNbRe0.5W and a multi-component (MC) type carbide of (Nb, Ta, Mo, W)C, which together form a lamellar eutectic structure, with the amount of MC phase proportional to the TaC addition. TEM analysis showed that the lamellar interface between the BCC and MC phases presents a smooth and curvy morphology and exhibits good bonding with no lattice misfit dislocations. As a result, the grain size decreases with increasing TaC addition, which improves the yield stress as described by the Hall-Petch relationship. The formation of the lamellar structure is because, at elevated temperature, a decomposition reaction occurs in the MoNbRe0.5W(TaC)x composites:
(Mo, Nb, W, Ta)2C → (Mo, Nb, W, Ta) + (Mo, Nb, W, Ta)C
in which Re is dissolved in both components to nucleate BCC phase first and MC phase in the following, according to the phase diagrams. In addition, the MC phase also improves the strength of composites, due to its stiffer and more elastic property compared to BCC phase.
Wu et al. have also synthesized Ti(C, N)-based cermets with TaC addition by ball-milling and sintering. TEM analysis showed that TaC helps dissolution of the carbonitride phase and converts to a TaC-binder phase. The result is the formation of a “black-core-white-rim” structure with decreasing grain size in the region of 3-5 wt% TaC addition and increasing transverse rupture strength (TRS). The 0-3 wt% TaC region showed a decrease in TRS because the TaC addition decreases the wettability between the binder and carbonitride phases and creates pores. Further addition of TaC beyond 5 wt% also decreases TRS because TaC agglomerates during sintering and porosity again forms. The best TRS is found at 5 wt% addition, where fine grains and a homogeneous microstructure are achieved, reducing grain-boundary sliding.
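Since all three studies invoke the Hall-Petch relationship, a short illustrative sketch may help; the sigma0 and k_y values below are made-up placeholders, not measured data for these composites:

```python
def hall_petch(d_um: float, sigma0: float = 200.0, k_y: float = 350.0) -> float:
    """Yield stress (MPa) vs grain size d (micrometres): sigma = sigma0 + k_y/sqrt(d)."""
    return sigma0 + k_y / (d_um ** 0.5)

for d in (10.0, 5.0, 1.0, 0.5):
    print(f"d = {d:4.1f} um -> sigma_y ~ {hall_petch(d):6.1f} MPa")
# Yield stress rises as grain size falls, which is why the TaC-driven grain
# refinement reported above strengthens the composites.
```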
Natural occurrence
Tantalcarbide is a natural form of tantalum carbide. It is a cubic, extremely rare mineral.
See also
Tantalum hafnium carbide
Hafnium carbide
Hafnium carbonitride
References
Carbides
Tantalum compounds
Superhard materials
Refractory materials
Native element minerals
Rock salt crystal structure | Tantalum carbide | [
"Physics"
] | 1,625 | [
"Refractory materials",
"Materials",
"Superhard materials",
"Matter"
] |
1,282,548 | https://en.wikipedia.org/wiki/Jarzynski%20equality | The Jarzynski equality (JE) is an equation in statistical mechanics that relates free energy differences between two states and the irreversible work along an ensemble of trajectories joining the same states. It is named after the physicist Christopher Jarzynski (then at the University of Washington and Los Alamos National Laboratory, currently at the University of Maryland) who derived it in 1996. Fundamentally, the Jarzynski equality points to the fact that the fluctuations in the work satisfy certain constraints separately from the average value of the work that occurs in some process.
Overview
In thermodynamics, the free energy difference ΔF = FB − FA between two states A and B is connected to the work W done on the system through the inequality:
ΔF ≤ W,
with equality holding only in the case of a quasistatic process, i.e. when one takes the system from A to B infinitely slowly (such that all intermediate states are in thermodynamic equilibrium). In contrast to the thermodynamic statement above, the JE remains valid no matter how fast the process happens. The JE states:
⟨exp(−W/kT)⟩ = exp(−ΔF/kT).
Here k is the Boltzmann constant and T is the temperature of the system in the equilibrium state A or, equivalently, the temperature of the heat reservoir with which the system was thermalized before the process took place.
The angle brackets ⟨·⟩ indicate an average over all possible realizations of an external process that takes the system from the equilibrium state A to a new, generally nonequilibrium state under the same external conditions as that of the equilibrium state B. This average over possible realizations is an average over different possible fluctuations that could occur during the process (due to Brownian motion, for example), each of which will cause a slightly different value for the work done on the system. In the limit of an infinitely slow process, the work W performed on the system in each realization is numerically the same, so the average becomes irrelevant and the Jarzynski equality reduces to the thermodynamic equality ΔF = W (see above). Away from the infinitely slow limit, the average value of the work obeys ⟨W⟩ ≥ ΔF, while the distribution of the fluctuations in the work is further constrained such that ⟨exp(−W/kT)⟩ = exp(−ΔF/kT). In this general case, W depends upon the specific initial microstate of the system, though its average ⟨W⟩ can still be related to ΔF through an application of Jensen's inequality in the JE, viz.
ΔF ≤ ⟨W⟩,
in accordance with the second law of thermodynamics.
The Jarzynski equality holds when the initial state is a Boltzmann distribution (e.g. the system is in equilibrium) and the system and environment can be described by a large number of degrees of freedom evolving under arbitrary Hamiltonian dynamics. The final state does not need to be in equilibrium. (For example, in the textbook case of a gas compressed by a piston, the gas is equilibrated at piston position A and compressed to piston position B; in the Jarzynski equality, the final state of the gas does not need to be equilibrated at this new piston position).
Since its original derivation, the Jarzynski equality has been verified in a variety of contexts, ranging from experiments with biomolecules to numerical simulations. The Crooks fluctuation theorem, proved two years later, leads immediately to the Jarzynski equality. Many other theoretical derivations have also appeared, lending further confidence to its generality.
Examples
Fluctuation-dissipation theorem
Taking the logarithm of ⟨exp(−W/kT)⟩ = exp(−ΔF/kT) and using the cumulant expansion up to the second cumulant, we obtain
⟨W⟩ − ΔF = (⟨W²⟩ − ⟨W⟩²)/(2kT).
The left side is the work dissipated into the heat bath, and the right side could be interpreted as the fluctuation in the work due to thermal noise.
Consider dragging an overdamped particle in a viscous fluid with temperature T at constant force f for a time t. Because there is no potential energy for the particle, the change in free energy is zero, so we obtain ⟨W⟩ = (⟨W²⟩ − ⟨W⟩²)/(2kT).
The work expended is W = f·x, where x is the total displacement during the time. The particle's displacement has a mean part due to the external dragging, and a varying part due to its own diffusion, so ⟨W²⟩ − ⟨W⟩² = f²(⟨x²⟩ − ⟨x⟩²) = 2f²Dt, where D is the diffusion coefficient. Together, we obtain f⟨x⟩ = f²Dt/(kT), or γD = kT, where γ = f/(⟨x⟩/t) is the viscous drag coefficient. This is the fluctuation-dissipation theorem.
In fact, for most trajectories, the work is positive, but for some rare trajectories, the work is negative, and those contribute enormously to the expectation, giving us an expectation that is exactly one.
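A hedged numerical sketch (not from the source) of this example family: an overdamped particle dragged in a harmonic trap, for which ΔF = 0, so the JE predicts ⟨exp(−W/kT)⟩ = 1 even though the mean work is positive. All parameter values are arbitrary illustrative choices:

```python
import math
import random

kT, k_trap, gamma = 1.0, 1.0, 1.0      # units chosen so everything is O(1)
v, t_total, dt = 1.0, 1.0, 1e-3        # drag speed, protocol time, time step
n_steps = int(t_total / dt)
noise = math.sqrt(2.0 * kT / gamma * dt)  # overdamped Langevin noise per step

def one_trajectory() -> float:
    x = random.gauss(0.0, math.sqrt(kT / k_trap))  # equilibrium start (state A)
    lam, w = 0.0, 0.0
    for _ in range(n_steps):
        w += -k_trap * (x - lam) * v * dt          # dW = (dU/dlam) * dlam
        lam += v * dt                              # move the trap
        x += -(k_trap / gamma) * (x - lam) * dt + noise * random.gauss(0.0, 1.0)
    return w

works = [one_trajectory() for _ in range(5000)]
mean_w = sum(works) / len(works)
je_avg = sum(math.exp(-w / kT) for w in works) / len(works)
print(f"<W> = {mean_w:.3f} (> 0),  <exp(-W/kT)> = {je_avg:.3f}  (JE predicts 1)")
```

The rare negative-work trajectories mentioned above are exactly what pulls the exponential average back to one.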
History
A question has been raised about who gave the earliest statement of the Jarzynski equality. For example, in 1977 the Russian physicists G.N. Bochkov and Yu. E. Kuzovlev (see Bibliography) proposed a generalized version of the fluctuation-dissipation theorem which holds in the presence of arbitrary external time-dependent forces. Despite its close similarity to the JE, the Bochkov-Kuzovlev result does not relate free energy differences to work measurements, as discussed by Jarzynski himself in 2007.
Another similar statement to the Jarzynski equality is the nonequilibrium partition identity, which can be traced back to Yamada and Kawasaki. (The Nonequilibrium Partition Identity is the Jarzynski equality applied to two systems whose free energy difference is zero - like straining a fluid.) However, these early statements are very limited in their application. Both Bochkov and Kuzovlev as well as Yamada and Kawasaki consider a deterministic time reversible Hamiltonian system. As Kawasaki himself noted this precludes any treatment of nonequilibrium steady states. The fact that these nonequilibrium systems heat up forever because of the lack of any thermostatting mechanism leads to divergent integrals etc. No purely Hamiltonian description is capable of treating the experiments carried out to verify the Crooks fluctuation theorem, Jarzynski equality and the fluctuation theorem. These experiments involve thermostatted systems in contact with heat baths.
See also
Fluctuation theorem - Provides an equality that quantifies fluctuations in time averaged entropy production in a wide variety of nonequilibrium systems.
Crooks fluctuation theorem - Provides a fluctuation theorem between two equilibrium states. Implies Jarzynski equality.
Nonequilibrium partition identity
References
Bibliography
For earlier results dealing with the statistics of work in adiabatic (i.e. Hamiltonian) nonequilibrium processes, see:
; op. cit. 76, 1071 (1979)
; op. cit. 106A, 480 (1981)
For a comparison of such results, see:
For an extension to relativistic Brownian motion, see:
External links
Jarzynski Equality on arxiv.org
"Fluctuation-Dissipation: Response Theory in Statistical Physics" by Umberto Marini Bettolo Marconi, Andrea Puglisi, Lamberto Rondoni, Angelo Vulpiani
Statistical mechanics
Non-equilibrium thermodynamics
Equations | Jarzynski equality | [
"Physics",
"Mathematics"
] | 1,435 | [
"Non-equilibrium thermodynamics",
"Mathematical objects",
"Equations",
"Statistical mechanics",
"Dynamical systems"
] |
1,282,696 | https://en.wikipedia.org/wiki/Red%20eye%20%28medicine%29 | A red eye is an eye that appears red due to illness or injury. It is usually injection and prominence of the superficial blood vessels of the conjunctiva, which may be caused by disorders of these or adjacent structures. Conjunctivitis and subconjunctival hemorrhage are two of the less serious but more common causes.
Management includes assessing whether emergency action (including referral) is needed, or whether treatment can be accomplished without additional resources.
Slit lamp examination is invaluable in diagnosis but initial assessment can be performed using a careful history, testing vision (visual acuity), and carrying out a penlight examination.
Diagnosis
Particular signs and symptoms may indicate that the cause is serious and requires immediate attention.
Seven such signs are:
Reduced visual acuity
Ciliary flush (circumcorneal injection)
Corneal abnormalities including edema or opacities ("corneal haze")
Corneal staining
Pupil abnormalities including abnormal pupil size
Abnormal intraocular pressure
Severe pain
The most useful is a smaller pupil in the red eye than the non-red eye (opposite eye) and sensitivity to bright light.
Reduced visual acuity
A reduction in visual acuity in a 'red eye' is indicative of serious ocular disease, such as keratitis, iridocyclitis, and glaucoma, and never occurs in simple conjunctivitis without accompanying corneal involvement.
Ciliary flush
Ciliary flush is usually present in eyes with corneal inflammation, iridocyclitis or acute glaucoma, though not simple conjunctivitis.
A ciliary flush is a ring of red or violet spreading out from around the cornea of the eye.
Corneal abnormalities
The cornea is required to be transparent to transmit light to the retina. Because of injury, infection or inflammation, an area of opacity may develop which can be seen with a penlight or slit lamp. In rare instances, this opacity is congenital. In some, there is a family history of corneal growth disorders which may be progressive with age. Much more commonly, misuse of contact lenses may be a precipitating factor. Whatever the cause, it is always potentially serious and sometimes necessitates urgent treatment; corneal opacities are the fourth leading cause of blindness.
Opacities may be keratic, that is, due to the deposition of inflammatory cells, hazy, usually from corneal edema, or they may be localized in the case of corneal ulcer or keratitis.
Corneal epithelial disruptions may be detected with fluorescein staining of the eye, and careful observation with cobalt-blue light.
Corneal epithelial disruptions would stain green, which represents some injury of the corneal epithelium.
These types of disruptions may be due to corneal inflammations or physical trauma to the cornea, such as a foreign body.
Pupillary abnormalities
In an eye with iridocyclitis, (inflammation of both the iris and ciliary body), the involved pupil will be smaller than the uninvolved, due to reflex muscle spasm of the iris sphincter muscle.
Generally, conjunctivitis does not affect the pupils.
With acute angle-closure glaucoma, the pupil is generally fixed in mid-position, oval, and responds sluggishly to light, if at all.
Shallow anterior chamber depth may indicate a predisposition to one form of glaucoma (narrow angle) but requires slit-lamp examination or other special techniques to determine it.
In the presence of a "red eye", a shallow anterior chamber may indicate acute glaucoma, which requires immediate attention.
Abnormal intraocular pressure
Intraocular pressure should be measured as part of a routine eye examination.
It is usually only elevated by iridocyclitis or acute-closure glaucoma, but not by relatively benign conditions.
In iritis and traumatic perforating ocular injuries, the intraocular pressure is usually low.
Severe pain
Those with conjunctivitis may report mild irritation or scratchiness, but never extreme pain, which is an indicator of more serious disease such as keratitis, corneal ulceration, iridocyclitis, or acute glaucoma.
Differential diagnosis
Of the many causes, conjunctivitis is the most common. Others include:
Usually nonurgent
airborne eye irritants
blepharitis – a usually chronic inflammation of the eyelids with scaling, sometimes resolving spontaneously
drugs: medications or recreational drug use
Cannabis
dry eye syndrome – caused by either decreased tear production or increased tear film evaporation which may lead to irritation and redness
subconjunctival hemorrhage – a sometimes dramatic, but usually harmless, bleeding underneath the conjunctiva most often from spontaneous rupture of the small, fragile blood vessels, commonly from a cough or sneeze
inflamed pterygium – a benign, triangular, horizontal growth of the conjunctiva, arising from the inner side, at the level of contact of the upper and lower eyelids, associated with exposure to sunlight, low humidity and dust. It may be more common in occupations such as farming and welding.
inflamed pinguecula – a yellow-white deposit close to the junction between the cornea and sclera, on the conjunctiva. It is most prevalent in tropical climates with much UV exposure. Although harmless, it can occasionally become inflamed.
tiredness
episcleritis – most often a mild, inflammatory disorder of the 'white' of the eye unassociated with eye complications in contrast to scleritis, and responding to topical medications such as anti-inflammatory drops.
Usually urgent
acute closed-angle glaucoma – implies injury to the optic nerve with the potential for vision loss, which may become permanent unless treated quickly, as a result of increased pressure within the eyeball. Not all forms of glaucoma are acute, and not all are associated with increased intraocular pressure.
eye injury
keratitis – a potentially serious inflammation or injury to the cornea (window), often associated with significant pain, light intolerance, and deterioration in vision. Numerous causes include virus infection. Injury from contact lenses can lead to keratitis.
iritis – together with the ciliary body and choroid, the iris makes up the uvea, part of the middle, pigmented, structures of the eye. Inflammation of this layer (uveitis) requires urgent control and is estimated to be responsible for 10% of blindness in the United States.
scleritis – a serious inflammatory condition, often painful, that can result in permanent vision loss, and without an identifiable cause in half of those presenting with it. About 30–40% have an underlying systemic autoimmune condition.
tick-borne illnesses like Rocky Mountain spotted fever – the eye is not primarily involved, but the presence of conjunctivitis, along with fever and rash, may help with the diagnosis in appropriate circumstances.
See also
List of eye diseases and disorders
Ocular straylight
References
External links
Medical emergencies
Eye diseases
External signs of ageing
Disorders of conjunctiva | Red eye (medicine) | [
"Biology"
] | 1,503 | [
"Senescence",
"External signs of ageing"
] |
1,283,240 | https://en.wikipedia.org/wiki/Culvert | A culvert is a structure that channels water past an obstacle or to a subterranean waterway. Typically embedded so as to be surrounded by soil, a culvert may be made from a pipe, reinforced concrete or other material. In the United Kingdom, the word can also be used for a longer artificially buried watercourse.
Culverts are commonly used both as cross-drains to relieve drainage of ditches at the roadside, and to pass water under a road at natural drainage and stream crossings. When they are found beneath roads, they are frequently empty. A culvert may also be a bridge-like structure designed to allow vehicle or pedestrian traffic to cross over the waterway while allowing adequate passage for the water. Dry culverts are used to channel a fire hose beneath a noise barrier for the ease of firefighting along a highway without the need or danger of placing hydrants along the roadway itself.
Culverts come in many sizes and shapes including round, elliptical, flat-bottomed, open-bottomed, pear-shaped, and box-like constructions. The culvert type and shape selection is based on a number of factors including requirements for hydraulic performance, limitations on upstream water surface elevation, and roadway embankment height.
The process of removing culverts to restore an open-air watercourse is known as daylighting. In the UK, the practice is also known as deculverting.
Materials
Culverts can be constructed of a variety of materials including cast-in-place or precast concrete (reinforced or non-reinforced), galvanized steel, aluminum, or plastic (typically high-density polyethylene). Two or more materials may be combined to form composite structures. For example, open-bottom corrugated steel structures are often built on concrete footings.
Design and engineering
Construction or installation at a culvert site generally results in disturbance of the site's soil, stream banks, or stream bed, and can result in the occurrence of unwanted problems such as scour holes or slumping of banks adjacent to the culvert structure.
Culverts must be properly sized and installed, and protected from erosion and scour. Many US agencies such as the Federal Highway Administration, Bureau of Land Management, and Environmental Protection Agency, as well as state or local authorities, require that culverts be designed and engineered to meet specific federal, state, or local regulations and guidelines to ensure proper function and to protect against culvert failures.
Culverts are classified by standards for their load capacities, water flow capacities, life spans, and installation requirements for bedding and backfill. Most agencies adhere to these standards when designing, engineering, and specifying culverts.
Failures
Culvert failures can occur for a wide variety of reasons including maintenance, environmental, and installation-related failures, functional or process failures related to capacity and volume causing the erosion of the soil around or under them, and structural or material failures that cause culverts to fail due to collapse or corrosion of the materials from which they are made.
If the failure is sudden and catastrophic, it can result in injury or loss of life. Sudden road collapses are often the result of poorly designed and engineered culvert crossing sites, or of unexpected changes in the surrounding environment that cause design parameters to be exceeded. Water passing through undersized culverts will scour away the surrounding soil over time. This can cause a sudden failure during medium-sized rain events. Accidents from culvert failure can also occur if a culvert has not been adequately sized and a flood event overwhelms the culvert, or disrupts the road or railway above it.
Ongoing culvert function without failure depends on proper design and engineering considerations being given to load, hydraulic flow, surrounding soil analysis, backfill and bedding compaction, and erosion protection. Improperly designed backfill support around culverts can result in material collapse or failure from inadequate load support.
For existing culverts which have experienced degradation, loss of structural integrity, or the need to meet new codes or standards, rehabilitation using a reline pipe may be preferred to replacement. Sizing of a reline culvert uses the same hydraulic flow design criteria as that of a new culvert. However, because the reline culvert is inserted into an existing culvert or host pipe, reline installation requires grouting of the annular space between the host pipe and the surface of the reline pipe (typically using a low compressive strength grout) to prevent or reduce seepage and soil migration. Grouting also serves to establish a structural connection between the liner, host pipe, and soil. Depending on the size and annular space to be filled, as well as the pipe elevation between the inlet and outlet, it may be necessary to add grout in multiple stages or "lifts". If multiple lifts are required, a grouting plan should define the placement of grout feed tubes, air tubes, the type of grout to be used, and, if injecting or pumping grout, the required injection pressure. As the diameter of the reline pipe will be smaller than that of the host pipe, the cross-sectional flow area will be smaller. By selecting a reline pipe with a very smooth internal surface, with an approximate Hazen–Williams roughness coefficient (C) of 140–150, the decreased flow area can be offset, and hydraulic flow rates potentially increased by way of reduced surface flow resistance. Examples of pipe materials with high C-factors are high-density polyethylene (150) and polyvinyl chloride (140).
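The offsetting effect described above can be made concrete with the Hazen–Williams formula. The sketch below is a minimal, illustrative calculation only: the diameters, slope, and C values are hypothetical assumptions, not design values from any standard, and a real culvert analysis would follow the applicable agency guidelines.

```python
# Minimal sketch: comparing full-flow capacity of a host culvert and a
# smoother reline pipe using the Hazen-Williams formula (SI units):
#   V = 0.849 * C * R**0.63 * S**0.54
# where V is velocity (m/s), C the roughness coefficient, R the hydraulic
# radius (m), and S the slope (m/m). Diameters, slope, and C values below
# are illustrative assumptions only.
import math

def hazen_williams_flow(diameter_m: float, slope: float, c_factor: float) -> float:
    """Discharge (m^3/s) of a circular pipe flowing full."""
    area = math.pi * diameter_m**2 / 4   # cross-sectional flow area
    hydraulic_radius = diameter_m / 4    # R = A / P = D/4 for a full circular pipe
    velocity = 0.849 * c_factor * hydraulic_radius**0.63 * slope**0.54
    return area * velocity

host = hazen_williams_flow(diameter_m=1.2, slope=0.005, c_factor=100)    # aged host pipe
liner = hazen_williams_flow(diameter_m=1.05, slope=0.005, c_factor=150)  # smooth HDPE liner

print(f"host pipe:   {host:.2f} m^3/s")
print(f"reline pipe: {liner:.2f} m^3/s")  # smoother wall can offset the smaller diameter
```

With these invented numbers the smaller but smoother liner slightly out-carries the rougher host pipe, which is exactly the offsetting effect the text describes.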
Environmental impacts
Safe and stable stream crossings can accommodate wildlife and protect stream health, while reducing expensive erosion and structural damage. Undersized and poorly placed culverts can cause problems for water quality and aquatic organisms. Poorly designed culverts can degrade water quality via scour and erosion, as well as restrict the movement of aquatic organisms between upstream and downstream habitat. Fish are a common victim in the loss of habitat due to poorly designed crossing structures.
Culverts that offer adequate aquatic organism passage reduce impediments to movement of fish, wildlife, and other aquatic life that require instream passage. Poorly designed culverts are also more apt to become jammed with sediment and debris during medium to large scale rain events. If the culvert cannot pass the water volume in the stream, then the water may overflow the road embankment. This may cause significant erosion, ultimately washing out the culvert. The embankment material that is washed away can clog other structures downstream, causing them to fail as well. It can also damage crops and property. A properly sized structure and hard bank armoring can help to alleviate this pressure.
Culvert style replacement is a widespread practice in stream restoration. Long-term benefits of this practice include reduced risk of catastrophic failure and improved fish passage. If best management practices are followed, short-term impacts on the aquatic biology are minimal.
Fish passage
While the culvert discharge capacity derives from hydrological and hydraulic engineering considerations, this results often in large velocities in the barrel, creating a possible fish passage barrier. Critical culvert parameters in terms of fish passage are the dimensions of the barrel, particularly its length, cross-sectional shape, and invert slope. The behavioural response by fish species to culvert dimensions, light conditions, and flow turbulence may play a role in their swimming ability and culvert passage rate. There is no simple technical means to ascertain the turbulence characteristics most relevant to fish passage in culverts, but it is understood that the flow turbulence plays a key role in fish behaviour.
The interactions between swimming fish and vortical structures involve a broad range of relevant length and time scales. Recent discussions emphasised the role of secondary flow motion, considerations of fish dimensions in relation to the spectrum of turbulence scales, and the beneficial role of turbulent structures provided that fish are able to exploit them.
The current literature on culvert fish passage focuses mostly on fast-swimming fish species, but a few studies have argued for better guidelines for small-bodied fish, including juveniles. Finally, a solid understanding of turbulence typology is a basic requirement for any successful hydraulic structure design conducive to upstream fish passage.
Minimum energy loss culverts
In the coastal plains of Queensland, Australia, torrential rains during the wet season place a heavy demand on culverts. The natural slope of the flood plains is often very small, and little fall (or head loss) is permissible in the culverts. Researchers developed and patented the design procedure of minimum energy loss culverts which yield small afflux.
A minimum energy loss culvert or waterway is a structure designed with the concept of minimum head loss. The flow in the approach channel is contracted through a streamlined inlet into the barrel where the channel width is minimum, and then it is expanded in a streamlined outlet before being finally released into the downstream natural channel. Both the inlet and the outlet must be streamlined to avoid significant form losses. The barrel invert is often lowered to increase the discharge capacity.
The concept of minimum energy loss culverts was developed by a shire engineer in Victoria and a professor at the University of Queensland during the late 1960s. While a number of small-size structures were designed and built in Victoria, some major structures were designed, tested and built in south-east Queensland.
See also
Notes
References
Oxford English Dictionary,
Culvert Design for Aquatic Organism Passage. US Department of Transportation, Federal Highway Administration
External links
Impact of culverts on salmon
Culvert fact sheet
Culvert analysis tool
Bottomless Culvert Scour Study
Culverts for Fish Passage
Hydraulics of Minimum Energy Loss (MEL)
Hydraulics engineering circular
Culvert use, installation, and sizing
Design guidelines for culverts
Upstream fish passage in box culverts
Bridges
Tunnels | Culvert | [
"Engineering"
] | 2,031 | [
"Structural engineering",
"Bridges"
] |
1,284,618 | https://en.wikipedia.org/wiki/Hemagglutination%20assay | The hemagglutination assay or haemagglutination assay (HA) and the hemagglutination inhibition assay (HI or HAI) were developed in 1941–42 by American virologist George Hirst as methods for quantifying the relative concentration of viruses, bacteria, or antibodies.
HA and HAI apply the process of hemagglutination, in which sialic acid receptors on the surface of red blood cells (RBCs) bind to the hemagglutinin glycoprotein found on the surface of influenza virus (and several other viruses) and create a network, or lattice structure, of interconnected RBCs and virus particles. The agglutinated lattice maintains the RBCs in a suspended distribution, typically viewed as a diffuse reddish solution. The formation of the lattice depends on the concentrations of the virus and RBCs, and when the relative virus concentration is too low, the RBCs are not constrained by the lattice and settle to the bottom of the well. Hemagglutination is observed in the presence of staphylococci, vibrios, and other bacterial species, similar to the mechanism viruses use to cause agglutination of erythrocytes. The RBCs used in HA and HI assays are typically from chickens, turkeys, horses, guinea pigs, or humans depending on the selectivity of the targeted virus or bacterium and the associated surface receptors on the RBC.
Procedure
A general procedure for HA is as follows: a serial dilution of virus is prepared across the rows of a U- or V-bottom 96-well microtiter plate. The most concentrated sample in the first well is often diluted to 1/5 of the stock, and subsequent wells are typically two-fold dilutions (1/10, 1/20, 1/40, etc.). The final well serves as a negative control with no virus. Each row of the plate typically has a different virus and the same pattern of dilutions. After serial dilution, a standardized concentration of RBCs is added to each well and mixed gently. The plate is incubated for 30 minutes at room temperature. Following the incubation period, the assay can be analyzed to distinguish between agglutinated and non-agglutinated wells. The images across a row will typically progress from agglutinated wells with high virus concentration and a diffuse reddish appearance to a series of wells with low virus concentrations containing a dark red pellet, or button, in the center of the well. The low-concentration wells appear nearly identical to the no-virus negative control well. The button appearance occurs because the RBCs are not held in the agglutinated lattice structure and settle into the low point of the U- or V-bottom well. The transition from agglutinated to non-agglutinated wells occurs distinctly, within 1 to 2 wells.
The relative concentration, or titer, of the virus sample is based on the well with the last agglutinated appearance, immediately before a pellet is observed. Relative to the initial viral stock concentration, the virus concentration in this well will be some dilution of the stock, for example, 1/40. The titer value of that sample is the inverse of the dilution, i.e., 40. In some cases, the virus is initially so dilute that agglutinated wells are never observed. In that case, the titer of these samples is commonly assigned as 5, an upper bound on the possible concentration, but the accuracy of that value is clearly low. Alternatively, if the relative concentration of the virus is so high that the wells never transition to a button appearance, the titer value is commonly assigned to be the highest dilution, such as 5120.
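As a concrete illustration of the titer arithmetic, the following minimal sketch converts one hypothetical row of well observations into an HA titer. The dilution scheme and well results are invented for the example and mirror the 1/5 starting dilution and two-fold series described above.

```python
# Minimal sketch of HA titer determination from one plate row.
# Wells hold two-fold serial dilutions (1/5, 1/10, 1/20, ...); the titer is
# the reciprocal of the last dilution that still shows agglutination.
# The observations below are invented for illustration.

def ha_titer(agglutinated: list[bool], start_dilution: int = 5) -> int:
    """Return the HA titer: reciprocal of the last agglutinated dilution."""
    titer = None
    dilution = start_dilution
    for well_is_agglutinated in agglutinated:
        if well_is_agglutinated:
            titer = dilution     # last well with the diffuse (lattice) appearance
        dilution *= 2            # next well is a two-fold dilution
    if titer is None:
        return start_dilution    # no agglutination observed: assign 5 (low accuracy)
    return titer

# Diffuse wells (True) followed by "button" wells (False):
row = [True, True, True, True, False, False, False, False]
print(ha_titer(row))  # 1/5, 1/10, 1/20, 1/40 agglutinated -> titer 40
```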
HI is closely related to the HA assay, but includes anti-viral antibodies as “inhibitors” to interfere with the virus-RBC interaction. The goal is to characterize the concentration of antibodies in the antiserum or other samples containing antibodies. The HI assay is generally performed by creating a dilution series of antiserum across the rows of a 96-well microtiter plate. Each row would usually be a different sample. A standardized amount of virus or bacteria is added to each well, and the mixture is allowed to incubate at room temperature for 30 minutes. The last well in each row would be a negative control with no virus added. During the incubation, antibodies bind to the viral particles, and if the concentration and binding affinity of the antibodies are high enough, the viral particles are effectively blocked from causing hemagglutination. Next, a standardized amount of RBCs is added to each well and allowed to incubate at room temperature for an additional 30 minutes. The resulting HI plate images usually progress from non-agglutinated, “button” wells with high antibody concentration to agglutinated, red diffuse wells with low antibody concentration. The HI titer value is the inverse of the last dilution of serum that completely inhibited hemagglutination.
The preceding descriptions of the HA and HI processes are generalized, and specific details can vary depending on the operator and laboratory. For example, serial dilutions across the rows is described, but some laboratories use an alternate orientation and perform dilutions down the columns instead. Similarly, the starting dilution, serial dilution factor, incubation times, and choice of U or V-bottom plate can depend on the specific laboratory.
Advantages
HA and HI have the advantages that the assays are simple, use relatively inexpensive and available instruments and supplies, and provide results within a few hours. The assays are also well established in many laboratories around the world, allowing some measure of credibility, comparison, and standardization.
Limitations
Optimal and reliable results require controlling several variables, such as incubation times, red blood cell concentration, and type of red blood cell. Non-specific factors in the sample can lead to interference and incorrect titer values. For example, molecules in the sample other than virus-specific antibodies can inhibit agglutination between virus and RBCs, as well as potentially block antibody from binding to virus. Receptor-destroying enzymes (RDE) are commonly used to treat samples prior to analysis to prevent non-specific inhibition. Analysis of the HA or HI results relies on a qualified individual to read the plate and determine the titer values. The manual interpretation method introduces more opportunities for discrepancies in the assay because results can be subjective and the agreement between human readers is inconsistent. Also, there is no digital record of the plate or titer determinations, so the initial interpretation is tedious and commonly done in replicate. The range of potential variables and differences between expert readers can make comparing inter-laboratory results difficult.
See also
Hemagglutination
Virus quantification
References
Diagnostic virology
Microbiology techniques | Hemagglutination assay | [
"Chemistry",
"Biology"
] | 1,422 | [
"Microbiology techniques"
] |
1,284,761 | https://en.wikipedia.org/wiki/Fish%20locomotion | Fish locomotion is the various types of animal locomotion used by fish, principally by swimming. This is achieved in different groups of fish by a variety of mechanisms of propulsion, most often by wave-like lateral flexions of the fish's body and tail in the water, and in various specialised fish by motions of the fins. The major forms of locomotion in fish are:
Anguilliform, in which a wave passes evenly along a long slender body;
Sub-carangiform, in which the wave increases quickly in amplitude towards the tail;
Carangiform, in which the wave is concentrated near the tail, which oscillates rapidly;
Thunniform, rapid swimming with a large powerful crescent-shaped tail; and
Ostraciiform, with almost no oscillation except of the tail fin.
More specialized fish include movement by pectoral fins with a mainly stiff body, opposed sculling with dorsal and anal fins, as in the sunfish; and movement by propagating a wave along the long fins with a motionless body, as in the knifefish or featherbacks.
In addition, some fish can variously "walk" (i.e., crawl over land using the pectoral and pelvic fins), burrow in mud, leap out of the water and even glide temporarily through the air.
Swimming
Mechanism
Fish swim by exerting force against the surrounding water. There are exceptions, but this is normally achieved by the fish contracting muscles on either side of its body in order to generate waves of flexion that travel the length of the body from nose to tail, generally getting larger as they go along. The vector forces exerted on the water by such motion cancel out laterally, but generate a net force backwards which in turn pushes the fish forward through the water. Most fishes generate thrust using lateral movements of their body and caudal fin, but many other species move mainly using their median and paired fins. The latter group swim slowly, but can turn rapidly, as is needed when living in coral reefs, for example. However, they cannot swim as fast as fish using their bodies and caudal fins.
Consider the tilapia shown in the diagram. Like most fish, the tilapia has a streamlined body shape reducing water resistance to movement and enabling the tilapia to cut easily through water. Its head is inflexible, which helps it maintain forward thrust. Its scales overlap and point backwards, allowing water to pass over the fish without unnecessary obstruction. Water friction is further reduced by mucus which tilapia secrete over their body.
The backbone is flexible, allowing muscles to contract and relax rhythmically and bring about undulating movement. A swim bladder provides buoyancy which helps the fish adjust its vertical position in the water column. A lateral line system allows it to detect vibrations and pressure changes in water, helping the fish to respond appropriately to external events.
Well developed fins are used for maintaining balance, braking and changing direction. The pectoral fins act as pivots around which the fish can turn rapidly and steer itself. The paired pectoral and pelvic fins control pitching, while the unpaired dorsal and anal fins reduce yawing and rolling. The caudal fin provides raw power for propelling the fish forward.
Body/caudal fin propulsion
There are five groups that differ in the fraction of their body that is displaced laterally:
Anguilliform
In the anguilliform group, containing some long, slender fish such as eels, there is little increase in the amplitude of the flexion wave as it passes along the body.
Subcarangiform
The subcarangiform group has a more marked increase in wave amplitude along the body with the vast majority of the work being done by the rear half of the fish. In general, the fish body is stiffer, making for higher speed but reduced maneuverability. Trout use sub-carangiform locomotion.
Carangiform
The carangiform group, named for the Carangidae, are stiffer and faster-moving than the previous groups. The vast majority of movement is concentrated in the very rear of the body and tail. Carangiform swimmers generally have rapidly oscillating tails.
Thunniform
The thunniform group contains high-speed long-distance swimmers, and is characteristic of tunas and is also found in several lamnid sharks. Here, virtually all the sideways movement is in the tail and the region connecting the main body to the tail (the peduncle). The tail itself tends to be large and crescent shaped.
Ostraciiform
The ostraciiform group have no appreciable body wave when they employ caudal locomotion. Only the tail fin itself oscillates (often very rapidly) to create thrust. This group includes Ostraciidae.
Median/paired fin propulsion
Not all fish fit comfortably in the above groups. Ocean sunfish, for example, have a completely different system, the tetraodontiform mode, and many small fish use their pectoral fins for swimming as well as for steering and dynamic lift. Fish in the order Gymnotiformes possess electric organs along the length of their bodies and swim by undulating an elongated anal fin while keeping the body still, presumably so as not to disturb the electric field that they generate.
Many fish swim using combined behavior of their two pectoral fins or both their anal and dorsal fins. Different types of median–paired fin propulsion can be achieved by preferentially using one fin pair over the other, and include rajiform, diodontiform, amiiform, gymnotiform, and balistiform modes.
Rajiform
Rajiform locomotion is characteristic of rays and skates, when thrust is produced by vertical undulations along large, well developed pectoral fins.
Diodontiform
Diodontiform locomotion propels the fish by propagating undulations along its large pectoral fins, as seen in the porcupinefish (Diodontidae).
Amiiform
Amiiform locomotion consists of undulations of a long dorsal fin while the body axis is held straight and stable, as seen in the bowfin.
Gymnotiform
Gymnotiform locomotion consists of undulations of a long anal fin, essentially upside-down amiiform locomotion, as seen in the South American knifefishes (Gymnotiformes).
Balistiform
In balistiform locomotion, both anal and dorsal fins undulate. It is characteristic of the family Balistidae (triggerfishes). It may also be seen in the Zeidae.
Oscillatory
Oscillation is viewed as pectoral-fin-based swimming and is best known as mobuliform locomotion. The motion can be described as the production of less than half a wave on the fin, similar to a bird wing flapping. Pelagic stingrays, such as the manta, cownose, eagle and bat rays use oscillatory locomotion.
Tetraodontiform
In tetraodontiform locomotion, the dorsal and anal fins are flapped as a unit, either in phase or exactly opposing one another, as seen in the Tetraodontiformes (boxfishes and pufferfishes). The ocean sunfish displays an extreme example of this mode.
Labriform
In labriform locomotion, seen in the wrasses (Labriformes), oscillatory movements of pectoral fins are either drag based or lift based. Propulsion is generated either as a reaction to drag produced by dragging the fins through the water in a rowing motion, or via lift mechanisms.
Dynamic lift
Bone and muscle tissues of fish are denser than water. To maintain depth, bony fish increase buoyancy by means of a gas bladder. Alternatively, some fish store oils or lipids for this same purpose. Fish without these features use dynamic lift instead. It is done using their pectoral fins in a manner similar to the use of wings by airplanes and birds. As these fish swim, their pectoral fins are positioned to create lift which allows the fish to maintain a certain depth. The two major drawbacks of this method are that these fish must stay moving to stay afloat and that they are incapable of swimming backwards or hovering.
Hydrodynamics
Similarly to the aerodynamics of flight, powered swimming requires animals to overcome drag by producing thrust. Unlike flying, however, swimming animals often do not need to supply much vertical force because the effect of buoyancy can counter the downward pull of gravity, allowing these animals to float without much effort. While there is great diversity in fish locomotion, swimming behavior can be classified into two distinct "modes" based on the body structures involved in thrust production, Median-Paired Fin (MPF) and Body-Caudal Fin (BCF). Within each of these classifications, there are numerous specifications along a spectrum of behaviours from purely undulatory to entirely oscillatory. In undulatory swimming modes, thrust is produced by wave-like movements of the propulsive structure (usually a fin or the whole body). Oscillatory modes, on the other hand, are characterized by thrust produced by swiveling of the propulsive structure on an attachment point without any wave-like motion.
Body-caudal fin
Most fish swim by generating undulatory waves that propagate down the body through the caudal fin. This form of undulatory locomotion is termed body-caudal fin (BCF) swimming on the basis of the body structures used; it includes anguilliform, sub-carangiform, carangiform, and thunniform locomotory modes, as well as the oscillatory ostraciiform mode.
Adaptation
Similar to adaptation in avian flight, swimming behaviors in fish can be thought of as a balance of stability and maneuverability. Because body-caudal fin swimming relies on more caudal body structures that can direct powerful thrust only rearwards, this form of locomotion is particularly effective for accelerating quickly and cruising continuously. Body-caudal fin swimming is, therefore, inherently stable and is often seen in fish with large migration patterns that must maximize efficiency over long periods. Propulsive forces in median-paired fin swimming, on the other hand, are generated by multiple fins located on either side of the body that can be coordinated to execute elaborate turns. As a result, median-paired fin swimming is well adapted for high maneuverability and is often seen in smaller fish that require elaborate escape patterns.
The habitats occupied by fishes are often related to their swimming capabilities. On coral reefs, the faster-swimming fish species typically live in wave-swept habitats subject to fast water flow speeds, while the slower fishes live in sheltered habitats with low levels of water movement.
Fish do not rely exclusively on one locomotor mode, but are rather locomotor generalists, choosing among and combining behaviors from many available behavioral techniques. Predominantly body-caudal fin swimmers often incorporate movement of their pectoral, anal, and dorsal fins as an additional stabilizing mechanism at slower speeds, but hold them close to their body at high speeds to improve streamlining and reduce drag. Zebrafish have even been observed to alter their locomotor behavior in response to changing hydrodynamic influences throughout growth and maturation.
Flight
The transition of predominantly swimming locomotion directly to flight has evolved in a single family of marine fish, the Exocoetidae. Flying fish are not true fliers in the sense that they do not execute powered flight. Instead, these species glide directly over the surface of the ocean water without ever flapping their "wings." Flying fish have evolved abnormally large pectoral fins that act as airfoils and provide lift when the fish launches itself out of the water. Additional forward thrust and steering forces are created by dipping the hypocaudal (i.e. bottom) lobe of their caudal fin into the water and vibrating it very quickly, in contrast to diving birds in which these forces are produced by the same locomotor module used for propulsion. Of the 64 extant species of flying fish, only two distinct body plans exist, each of which optimizes two different behaviors.
Tradeoffs
While most fish have caudal fins with evenly sized lobes (i.e. homocaudal), flying fish have an enlarged ventral lobe (i.e. hypocaudal) which facilitates dipping only a portion of the tail back onto the water for additional thrust production and steering.
Because flying fish are primarily aquatic animals, their body density must be close to that of water for buoyancy stability. This primary requirement for swimming, however, means that flying fish are heavier (have a larger mass) than other habitual fliers, resulting in higher wing loading and lift-to-drag ratios for flying fish compared to a comparably sized bird. Differences in wing area, wing span, wing loading, and aspect ratio have been used to classify flying fish into two distinct classifications based on these different aerodynamic designs.
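The two metrics behind this classification are simple to compute. The sketch below only illustrates the definitions (wing loading = weight/area; aspect ratio = span²/area); the masses, spans, and fin areas are invented, order-of-magnitude placeholders, not measurements of any species.

```python
# Minimal sketch of the two classification metrics for gliding flying fish.
# Mass, wing span, and wing area values below are invented illustrations.
G = 9.81  # gravitational acceleration, m/s^2

def wing_loading(mass_kg: float, wing_area_m2: float) -> float:
    """Weight supported per unit wing area (N/m^2)."""
    return mass_kg * G / wing_area_m2

def aspect_ratio(span_m: float, wing_area_m2: float) -> float:
    """Span squared over area; higher means longer, narrower wings."""
    return span_m**2 / wing_area_m2

# Hypothetical "monoplane" fish: only pectoral fins enlarged (smaller area).
print(wing_loading(0.20, 0.010), aspect_ratio(0.40, 0.010))  # higher loading, higher AR
# Hypothetical "biplane" fish: pectoral + pelvic fins (larger total area).
print(wing_loading(0.20, 0.016), aspect_ratio(0.40, 0.016))  # lower loading, lower AR
```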
Biplane body plan
In the biplane or Cypselurus body plan, both the pectoral and pelvic fins are enlarged to provide lift during flight. These fish also tend to have "flatter" bodies which increase the total lift-producing area, thus allowing them to "hang" in the air better than more streamlined shapes. As a result of this high lift production, these fish are excellent gliders and are well adapted for maximizing flight distance and duration.
Comparatively, Cypselurus flying fish have lower wing loading and smaller aspect ratios (i.e. broader wings) than their Exocoetus monoplane counterparts, which contributes to their ability to fly for longer distances than fish with this alternative body plan. Flying fish with the biplane design take advantage of their high lift production abilities when launching from the water by utilizing a "taxiing glide" in which the hypocaudal lobe remains in the water to generate thrust even after the trunk clears the water's surface and the wings are opened with a small angle of attack for lift generation.
Monoplane body plan
In the Exocoetus or monoplane body plan, only the pectoral fins are enlarged to provide lift. Fish with this body plan tend to have a more streamlined body, higher aspect ratios (long, narrow wings), and higher wing loading than fish with the biplane body plan, making these fish well adapted for higher flying speeds. Flying fish with a monoplane body plan demonstrate different launching behaviors from their biplane counterparts. Instead of extending their duration of thrust production, monoplane fish launch from the water at high speeds at a large angle of attack (sometimes up to 45 degrees). In this way, monoplane fish are taking advantage of their adaptation for high flight speed, while fish with biplane designs exploit their lift production abilities during takeoff.
Walking
A "walking fish" is a fish that is able to travel over land for extended periods of time. Some other cases of nonstandard fish locomotion include fish "walking" along the sea floor, such as the handfish or frogfish.
Most commonly, walking fish are amphibious fish. Able to spend longer times out of water, these fish may use a number of means of locomotion, including springing, snake-like lateral undulation, and tripod-like walking. The mudskippers are probably the best land-adapted of contemporary fish and are able to spend days moving about out of water and can even climb mangroves, although to only modest heights. The climbing gourami is often specifically referred to as a "walking fish", although it does not actually "walk", but rather moves in a jerky way by supporting itself on the extended edges of its gill plates and pushing itself by its fins and tail. Some reports indicate that it can also climb trees.
There are a number of fish that are less adept at actual walking, such as the walking catfish. Despite being known for "walking on land", this fish usually wriggles and may use its pectoral fins to aid in its movement. Walking Catfish have a respiratory system that allows them to live out of water for several days. Some are invasive species. A notorious case in the United States is the Northern snakehead. Polypterids have rudimentary lungs and can also move about on land, though rather clumsily. The Mangrove rivulus can survive for months out of water and can move to places like hollow logs.
There are some species of fish that can "walk" along the sea floor but not on land; one such animal is the flying gurnard (it does not actually fly, and should not be confused with flying fish). The batfishes of the family Ogcocephalidae (not to be confused with batfish of Ephippidae) are also capable of walking along the sea floor. Bathypterois grallator, also known as a "tripodfish", stands on its three fins on the bottom of the ocean and hunts for food. The African lungfish (P. annectens) can use its fins to "walk" along the bottom of its tank in a manner similar to the way amphibians and land vertebrates use their limbs on land.
Burrowing
Many fishes, particularly eel-shaped fishes such as true eels, moray eels, and spiny eels, are capable of burrowing through sand or mud. Ophichthids, the snake eels, are capable of burrowing either forwards or backwards.
In larvae
Swimming
Fish larvae, like many adult fishes, swim by undulating their body. The swimming speed varies proportionally with the size of the animals, in that smaller animals tend to swim at lower speeds than larger animals. The swimming mechanism is controlled by the flow regime of the larvae. Reynolds number (Re) is defined as the ratio of inertial force to viscous force. Smaller organisms are affected more by viscous forces, like friction, and swim at a smaller Reynolds number. Larger organisms use a larger proportion of inertial forces, like pressure, to swim, at a higher Reynolds number.
The larvae of ray-finned fishes, the Actinopterygii, swim across quite a large range of Reynolds numbers (Re ≈ 10 to 900). This puts them in an intermediate flow regime where both inertial and viscous forces play an important role. As the size of the larvae increases, the use of pressure forces to swim at higher Reynolds number increases.
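As a quick numerical check on this intermediate regime, the following minimal sketch evaluates Re = ρvL/μ for two hypothetical larvae; the body lengths and speeds are rough illustrative guesses chosen to span the quoted Re ≈ 10 to 900 range.

```python
# Minimal sketch: Reynolds number Re = rho * v * L / mu for fish larvae in water.
# Lengths and speeds are rough illustrative values, not measurements.
RHO_WATER = 1000.0  # density of water, kg/m^3
MU_WATER = 1.0e-3   # dynamic viscosity of water, Pa*s (~20 degC)

def reynolds(speed_m_s: float, length_m: float) -> float:
    return RHO_WATER * speed_m_s * length_m / MU_WATER

print(reynolds(0.003, 0.004))  # small, slow larva   -> Re = 12 (viscous forces matter)
print(reynolds(0.06, 0.015))   # larger, faster larva -> Re = 900 (inertia dominates)
```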
Undulatory swimmers generally shed at least two types of wake: Carangiform swimmers shed connected vortex loops and Anguilliform swimmers shed individual vortex rings. These vortex rings depend upon the shape and arrangement of the trailing edge from which the vortices are shed. These patterns depend upon the swimming speed, ratio of swimming speed to body wave speed and the shape of body wave.
A spontaneous bout of swimming has three phases. The first phase is the start or acceleration phase: In this phase the larva tends to rotate its body to make a 'C' shape which is termed the preparatory stroke. It then pushes in the opposite direction to straighten its body, which is called a propulsive stroke, or a power stroke, which powers the larva to move forward. The second phase is cyclic swimming. In this phase, the larva swims with an approximately constant speed. The last phase is deceleration. In this phase, the swimming speed of the larva gradually slows down to a complete stop. In the preparatory stroke, due to the bending of the body, the larva creates 4 vortices around its body, and 2 of those are shed in the propulsive stroke. Similar phenomena can be seen in the deceleration phase. However, in the vortices of the deceleration phase, a large area of elevated vorticity can be seen compared to the starting phase.
The swimming abilities of larval fishes are important for survival. This is particularly true for larval fishes with higher metabolic rates and smaller sizes, which make them more susceptible to predators. The swimming ability of a reef fish larva helps it to settle at a suitable reef and to locate its home, as it is often isolated from its home reef in search of food. Hence the swimming speeds of reef fish larvae are quite high (≈ 12–100 cm/s) compared to those of other larvae. The swimming speeds of larvae from the same families at two locations are relatively similar. However, the variation among individuals is quite large. At the species level, length is significantly related to swimming ability. However, at the family level, only 16% of the variation in swimming ability can be explained by length. There is also a negative correlation between the fineness ratio (length of body to maximum width) and the swimming ability of reef fish larvae, suggesting a minimization of overall drag and maximization of volume. Reef fish larvae differ significantly in their critical swimming speed abilities among taxa, which leads to high variability in sustainable swimming speed. This in turn leads to considerable variability in their ability to alter dispersal patterns, overall dispersal distances, and the temporal and spatial patterns of settlement.
Hydrodynamics
Small undulatory swimmers such as fish larvae experience both inertial and viscous forces, the relative importance of which is indicated by the Reynolds number (Re). Reynolds number is proportional to body size and swimming speed. The swimming performance of a larva increases between 2 and 5 days post-fertilization. Compared with adults, larval fish experience relatively high viscous forces. To enhance thrust to a level equal to that of adults, they increase tail-beat frequency and thus amplitude. In zebrafish, tail-beat frequency increases with larval age, from 80 Hz at 2 days post-fertilization to 95 Hz at 3 days post-fertilization. This higher frequency leads to higher swimming speed, thus reducing predation and increasing prey-catching ability when the larvae start feeding at around 5 days post-fertilization. The vortex-shedding mechanics change with the flow regime in an inverse, non-linear way. The Strouhal number is a design parameter for the vortex-shedding mechanism; it is defined as the product of tail-beat frequency and amplitude divided by the mean swimming speed. The Reynolds number (Re) is the main criterion determining the flow regime. It has been observed across different larval experiments that slow larvae swim at higher Strouhal numbers but lower Reynolds numbers, whereas faster larvae swim under distinctly opposite conditions, that is, at lower Strouhal numbers but higher Reynolds numbers. The Strouhal number is constant across adult fishes of similar speed ranges; it depends not only on the small size of the swimmers but also on the flow regime. Fishes that swim in a viscous, high-friction flow regime incur high body drag, which leads to a higher Strouhal number. In a highly viscous regime, by contrast, adults swim with a shorter stride length, which leads to lower tail-beat frequency and lower amplitude; this yields higher thrust for the same displacement, or higher propulsive force, which in turn reduces the Reynolds number.
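The Strouhal number definition above is easy to evaluate directly. In the minimal sketch below, the tail-beat amplitudes and swimming speeds are illustrative placeholders (only the frequencies echo the zebrafish figures quoted above); it simply shows how St = fA/U is high for a slow swimmer and falls as speed rises.

```python
# Minimal sketch: Strouhal number St = f * A / U for undulatory swimmers,
# where f is tail-beat frequency (Hz), A is tail-beat amplitude (m), and
# U is mean swimming speed (m/s). Amplitudes and speeds are illustrative only.

def strouhal(freq_hz: float, amplitude_m: float, speed_m_s: float) -> float:
    return freq_hz * amplitude_m / speed_m_s

# Slow larva: high St at low swimming speed...
print(strouhal(80.0, 0.0008, 0.03))  # ~2.1
# Faster larva: St drops toward the band typical of efficient adult swimmers.
print(strouhal(95.0, 0.0010, 0.30))  # ~0.32
```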
Larval fishes start feeding at 5–7 days post-fertilization, and they experience extreme mortality rates (≈ 99%) in the few days after feeding starts. The reason for this 'critical period' (Hjort, 1914) is mainly hydrodynamic constraints: larval fish fail to eat even when there are enough prey encounters. One of the primary determinants of feeding success is larval body size. Smaller larvae function in a lower Reynolds number (Re) regime. As age increases, the size of the larvae increases, which leads to higher swimming speed and increased Reynolds number. It has been observed through many experiments that the Reynolds number of successful strikes (Re ~ 200) is much higher than that of failed strikes (Re ~ 20). Numerical analysis of suction feeding at low Reynolds number concluded that around 40% of the energy invested in mouth opening is lost to frictional forces rather than contributing to accelerating the fluid towards the mouth. Ontogenetic improvements in the sensory system, coordination, and experience bear no significant relationship to larval feeding success. A successful strike depends positively on the peak flow speed, or the speed of the larva at the time of the strike. The peak flow speed also depends on the gape speed, the speed of opening the buccal cavity to capture food. As a larva ages, its body size increases and its gape speed increases, which cumulatively increase successful strike outcomes.
The ability of larval prey to survive an encounter with a predator depends entirely on its ability to sense and evade the strike. Adult fishes exhibit rapid suction-feeding strikes compared to larval fishes. The sensitivity of larval fish to velocity and flow fields provides them a critical defense against predation. Though many prey use their visual system to detect and evade predators when there is light, it is hard for prey to detect predators at night, which leads to a delayed response to attack. Fishes possess a mechanosensory system, the lateral line, that identifies the different flows generated by motion in the surrounding water and between bodies. After detecting a predator, a larva evades its strike by a 'fast start' or 'C' response. A swimming fish disturbs a volume of water ahead of its body with a flow velocity that increases with proximity to the body; this particular phenomenon is sometimes called a bow wave. The timing of the 'C'-start response inversely affects escape probability. Escape probability increases with the distance from the predator at the time of the strike. In general, prey successfully evade a predator strike from an intermediate distance (3–6 mm) from the predator.
Behavior
Objective quantification of behavior is complicated in higher vertebrates by their complex and diverse locomotor repertoires and neural systems. However, the relative simplicity of the juvenile brain and the simple nervous system of fishes, with its fundamental neuronal pathways, makes zebrafish larvae an apt model for studying the interconnection between the locomotor repertoire and the neuronal system of a vertebrate. Behavior represents the unique interface between intrinsic and extrinsic forces that determine an organism's health and survival. Larval zebrafish perform many locomotor behaviors, such as the escape response, prey tracking, and the optomotor response. These behaviors can be categorized with respect to body position as 'C'-starts, 'J'-turns, slow scoots, routine turns, etc. Fish larvae respond to abrupt changes in illumination with distinct locomotor behavior, showing high locomotor activity during periods of bright light compared to dark. This behavior is consistent with searching for food in the light, whereas the larvae do not feed in the dark. Light exposure also directly modulates the locomotor activity of larvae throughout the circadian cycle of light and dark, with higher locomotor activity in light than in dark conditions, much as seen in mammals. Following the onset of darkness, larvae show hyperactive scoot motion prior to a gradual drop-off; this behavior could be linked to finding shelter before nightfall. Larvae may also interpret this sudden darkness as being under debris, in which case the hyperactivity can be explained as the larvae navigating back to illuminated areas. Prolonged dark periods can reduce the light–dark responsiveness of larvae. Following light extinction, larvae execute large-angle turns towards the vanished light source, reflecting a navigational response. Acute ethanol exposure reduces the visual sensitivity of larvae, causing a delayed response to changes between light and dark periods.
See also
Microswimmer
References
Further reading
Alexander, R. McNeill (2003) Principles of Animal Locomotion. Princeton University Press. .
Videler JJ (1993) Fish Swimming Springer. .
Vogel, Steven (1994) Life in Moving Fluid: The Physical Biology of Flow. Princeton University Press. (particularly pp. 115–117 and pp. 207–216 for specific biological examples swimming and flying respectively)
Wu, Theodore, Y.-T., Brokaw, Charles J., Brennen, Christopher, Eds. (1975) Swimming and Flying in Nature. Volume 2, Plenum Press. (particularly pp. 615–652 for an in depth look at fish swimming)
External links
How fish swim: study solves muscle mystery
Simulated fish locomotion
Basic introduction to the basic principles of biologically inspired swimming robots
The biomechanics of swimming
Ichthyology
Aquatic locomotion
Animal locomotion
Articles containing video clips | Fish locomotion | [
"Physics",
"Biology"
] | 5,875 | [
"Animal locomotion",
"Physical phenomena",
"Animals",
"Behavior",
"Motion (physics)",
"Ethology"
] |
1,284,762 | https://en.wikipedia.org/wiki/Fischer%E2%80%93Tropsch%20process | The Fischer–Tropsch process (FT) is a collection of chemical reactions that converts a mixture of carbon monoxide and hydrogen, known as syngas, into liquid hydrocarbons. These reactions occur in the presence of metal catalysts, typically at temperatures of 150–300 °C (302–572 °F) and pressures of one to several tens of atmospheres. The Fischer–Tropsch process is an important reaction in both coal liquefaction and gas to liquids technology for producing liquid hydrocarbons.
In the usual implementation, carbon monoxide and hydrogen, the feedstocks for FT, are produced from coal, natural gas, or biomass in a process known as gasification. The process then converts these gases into synthetic lubrication oil and synthetic fuel. This process has received intermittent attention as a source of low-sulfur diesel fuel and as a way to address the supply or cost of petroleum-derived hydrocarbons. The Fischer–Tropsch process is also discussed as a step in producing carbon-neutral liquid hydrocarbon fuels from CO2 and hydrogen.
The process was first developed by Franz Fischer and Hans Tropsch at the Kaiser Wilhelm Institute for Coal Research in Mülheim an der Ruhr, Germany, in 1925.
Reaction mechanism
The Fischer–Tropsch process involves a series of chemical reactions that produce a variety of hydrocarbons, ideally having the formula (CnH2n+2). The more useful reactions produce alkanes as follows:
(2n + 1) H2 + n CO → CnH2n+2 + n H2O
where n is typically 10–20. The formation of methane (n = 1) is unwanted. Most of the alkanes produced tend to be straight-chain, suitable as diesel fuel. In addition to alkane formation, competing reactions give small amounts of alkenes, as well as alcohols and other oxygenated hydrocarbons.
The reaction is highly exothermic, with a standard reaction enthalpy (ΔH) of −165 kJ per mole of CO combined.
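A short worked example ties the stoichiometry and the enthalpy together: the alkane equation above implies an H2:CO usage ratio of (2n + 1)/n, and each mole of CO converted releases about 165 kJ. The sketch below is just arithmetic on those stated figures, not process data.

```python
# Minimal sketch: stoichiometric H2:CO usage ratio and heat release for
# (2n + 1) H2 + n CO -> CnH2n+2 + n H2O, using the -165 kJ/mol CO figure above.

DELTA_H_PER_MOL_CO = -165.0  # kJ per mole of CO converted (stated above)

def h2_co_ratio(n: int) -> float:
    """Moles of H2 consumed per mole of CO for an n-carbon alkane."""
    return (2 * n + 1) / n

for n in (1, 10, 20):
    heat_kj = n * abs(DELTA_H_PER_MOL_CO)  # heat released per mole of alkane product
    print(f"n={n:2d}: H2/CO = {h2_co_ratio(n):.2f}, ~{heat_kj:.0f} kJ released per mol product")
# For n = 10-20 the ratio is ~2.05-2.10, consistent with the roughly 2:1
# syngas compositions discussed under "Process conditions" below.
```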
Fischer–Tropsch intermediates and elemental reactions
Converting a mixture of H2 and CO into aliphatic products is a multi-step reaction with several intermediate compounds. The growth of the hydrocarbon chain may be visualized as involving a repeated sequence in which hydrogen atoms are added to carbon and oxygen, the C–O bond is split and a new C–C bond is formed.
For one –CH2– group produced by CO + 2 H2 → (CH2) + H2O, several reactions are necessary:
Associative adsorption of CO
Splitting of the C–O bond
Dissociative adsorption of 2 H2
Transfer of 2 H to the oxygen to yield H2O
Desorption of H2O
Transfer of 2 H to the carbon to yield CH2
The conversion of CO to alkanes involves hydrogenation of CO, the hydrogenolysis (cleavage with H2) of C–O bonds, and the formation of C–C bonds. Such reactions are assumed to proceed via initial formation of surface-bound metal carbonyls. The CO ligand is speculated to undergo dissociation, possibly into oxide and carbide ligands. Other potential intermediates are various C1 fragments including formyl (CHO), hydroxycarbene (HCOH), hydroxymethyl (CH2OH), methyl (CH3), methylene (CH2), methylidyne (CH), and hydroxymethylidyne (COH). Furthermore, and critical to the production of liquid fuels, are reactions that form C–C bonds, such as migratory insertion. Many related stoichiometric reactions have been simulated on discrete metal clusters, but homogeneous Fischer–Tropsch catalysts are of no commercial importance.
Addition of isotopically labelled alcohol to the feed stream results in incorporation of alcohols into product. This observation establishes the facility of C–O bond scission. Using 14C-labelled ethylene and propene over cobalt catalysts results in incorporation of these olefins into the growing chain. Chain growth reaction thus appears to involve both 'olefin insertion' as well as 'CO-insertion'.
8 CO + 17 H2 -> C8H18 + 8 H2O
Feedstocks: gasification
Fischer–Tropsch plants associated with biomass, coal, or related solid feedstocks (sources of carbon) must first convert the solid fuel into gases. These gases include CO, H2, and alkanes. This conversion is called gasification. Synthesis gas ("syngas"), obtained from biomass/coal gasification, is a mixture of hydrogen and carbon monoxide. The H2:CO ratio is adjusted using the water-gas shift reaction. Coal-based FT plants produce varying amounts of CO2, depending upon the energy source of the gasification process. However, most coal-based plants rely on the feed coal to supply all the energy requirements of the process.
Feedstocks: GTL
Carbon monoxide for FT catalysis is derived from hydrocarbons. In gas to liquids (GTL) technology, the hydrocarbons are low-molecular-weight materials that would often be discarded or flared. Stranded gas provides a relatively cheap feedstock. For GTL to be commercially viable, gas must remain relatively cheaper than oil.
Several reactions are required to obtain the gaseous reactants required for FT catalysis. First, reactant gases entering a reactor must be desulfurized. Otherwise, sulfur-containing impurities deactivate ("poison") the catalysts required for FT reactions.
Several reactions are employed to adjust the H2:CO ratio. Most important is the water-gas shift reaction, which provides a source of hydrogen at the expense of carbon monoxide:
H2O + CO -> H2 + CO2
For FT plants that use methane as the feedstock, another important reaction is dry reforming, which converts the methane into CO and H2:
CH4 + CO2 -> 2CO + 2H2
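To illustrate how the water-gas shift raises the H2:CO ratio, the minimal sketch below shifts a fraction of the CO in a CO-rich stream and tracks the composition. The starting composition and the fraction shifted are hypothetical choices for the example.

```python
# Minimal sketch: effect of the water-gas shift (CO + H2O -> CO2 + H2) on the
# H2:CO ratio of a syngas stream. The starting composition is a hypothetical
# coal-derived gas with H2:CO < 1, as mentioned under "Process conditions".

def shift(h2_mol: float, co_mol: float, fraction_shifted: float):
    """Shift the given fraction of CO; each mole shifted consumes one CO
    and produces one H2 (the CO2 co-product is not tracked here)."""
    moles_shifted = co_mol * fraction_shifted
    return h2_mol + moles_shifted, co_mol - moles_shifted

h2, co = 45.0, 55.0  # hypothetical coal syngas, H2:CO ~ 0.82
print(f"before shift: H2/CO = {h2 / co:.2f}")
h2, co = shift(h2, co, fraction_shifted=0.4)
print(f"after shift:  H2/CO = {h2 / co:.2f}")  # rises toward the ~2 needed for FT
```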
Process conditions
Generally, the Fischer–Tropsch process is operated in the temperature range of 150–300 °C (302–572 °F). Higher temperatures lead to faster reactions and higher conversion rates but also tend to favor methane production. For this reason, the temperature is usually maintained in the low to middle part of the range. Increasing the pressure leads to higher conversion rates and also favors the formation of long-chained alkanes, both of which are desirable. Typical pressures range from one to several tens of atmospheres. Even higher pressures would be favorable, but the benefits may not justify the additional costs of high-pressure equipment, and higher pressures can lead to catalyst deactivation via coke formation.
A variety of synthesis-gas compositions can be used. For cobalt-based catalysts the optimal H2:CO ratio is around 1.8–2.1. Iron-based catalysts can tolerate lower ratios, due to the intrinsic water-gas shift reaction activity of the iron catalyst. This reactivity can be important for synthesis gas derived from coal or biomass, which tend to have relatively low H2:CO ratios (< 1).
Design of the Fischer–Tropsch process reactor
Efficient removal of heat from the reactor is the basic need of FT reactors since these reactions are characterized by high exothermicity. Four types of reactors are discussed:
Multi tubular fixed-bed reactor
This type of reactor contains several tubes with small diameters. These tubes contain catalysts and are surrounded by cooling water which removes the heat of the reaction. A fixed-bed reactor is suitable for operation at low temperatures and has an upper-temperature limit of 257 °C (530 K). Excess temperature leads to carbon deposition and hence blockage of the reactor. Since large amounts of the products formed are in liquid state, this type of reactor can also be referred to as a trickle flow reactor system.
Entrained flow reactor
This type of reactor contains two banks of heat exchangers which remove heat; the remainder is removed by the products and recycled in the system. The formation of heavy waxes should be avoided, since they condense on the catalyst and form agglomerations, which leads to defluidization. Hence, risers are operated above 297 °C (570 K).
Slurry reactors
Heat removal is done by internal cooling coils. The synthesis gas is bubbled through the waxy products and the finely divided catalyst, which is suspended in the liquid medium; this also provides agitation of the contents of the reactor. The small catalyst particle size reduces diffusional heat- and mass-transfer limitations. A lower temperature in the reactor leads to a more viscous product, and a higher temperature (> 297 °C, 570 K) gives an undesirable product spectrum. Separation of the product from the catalyst is also a problem.
Fluid-bed and circulating catalyst (riser) reactors
These are used for high-temperature FT synthesis (nearly 340 °C) to produce low-molecular-weight unsaturated hydrocarbons on alkalized fused iron catalysts. The fluid-bed technology (as adapted from the catalytic cracking of heavy petroleum distillates) was introduced by Hydrocarbon Research in 1946–50 and named the 'Hydrocol' process. A large-scale Fischer–Tropsch Hydrocol plant (350,000 tons per annum) operated during 1951–57 in Brownsville, Texas. Due to technical problems, and impractical economics amid increasing petroleum availability, this development was discontinued. Fluid-bed FT synthesis was reinvestigated by Sasol; one reactor with a capacity of 500,000 tons per annum is in operation, and the process has been used for C2 and C7 alkene production. A high-temperature process with a circulating iron catalyst ('circulating fluid bed', 'riser reactor', 'entrained catalyst process') was introduced by the Kellogg Company, and a respective plant was built at Sasol in 1956; it was improved by Sasol for successful operation. At Secunda, South Africa, Sasol operated 16 advanced reactors of this type with a capacity of approximately 330,000 tons per annum each. The circulating catalyst process can be replaced by fluid-bed technology. Early experiments with cobalt catalyst particles suspended in oil were performed by Fischer. The bubble column reactor with a powdered iron slurry catalyst and a CO-rich syngas was developed to pilot-plant scale by Kölbel at the Rheinpreußen Company in 1953. Since 1990, low-temperature FT slurry processes have been under investigation for the use of iron and cobalt catalysts, particularly for the production of a hydrocarbon wax, or wax to be hydrocracked and isomerized to produce diesel fuel, by Exxon and Sasol. Slurry-phase (bubble column) low-temperature FT synthesis is efficient. This technology is also under development by the Statoil Company (Norway) for use on a vessel to convert associated gas at offshore oil fields into a hydrocarbon liquid.
Product distribution
In general the product distribution of hydrocarbons formed during the Fischer–Tropsch process follows an Anderson–Schulz–Flory distribution, which can be expressed as:
Wn / n = (1 − α)^2 α^(n−1)
where Wn is the weight fraction of hydrocarbons containing n carbon atoms, and α is the chain growth probability or the probability that a molecule will continue reacting to form a longer chain. In general, α is largely determined by the catalyst and the specific process conditions.
Examination of the above equation reveals that methane will always be the largest single product so long as α is less than 0.5; however, by increasing α close to one, the total amount of methane formed can be minimized compared to the sum of all of the various long-chained products. Increasing α increases the formation of long-chained hydrocarbons. The very long-chained hydrocarbons are waxes, which are solid at room temperature. Therefore, for production of liquid transportation fuels it may be necessary to crack some of the FT products. In order to avoid this, some researchers have proposed using zeolites or other catalyst substrates with fixed sized pores that can restrict the formation of hydrocarbons longer than some characteristic size (usually n < 10). This way they can drive the reaction so as to minimize methane formation without producing many long-chained hydrocarbons. Such efforts have had only limited success.
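Evaluating the Anderson–Schulz–Flory expression for a few values of α makes the trade-off above concrete: methane dominates individual products at low α, while high α pushes mass toward waxes. The sketch below is a plain evaluation of the stated formula; the α values and carbon-number cut-offs are arbitrary illustrative choices.

```python
# Minimal sketch: Anderson-Schulz-Flory weight fractions
#   W_n = n * (1 - alpha)**2 * alpha**(n - 1)
# evaluated for a few chain-growth probabilities alpha (chosen arbitrarily).

def asf_weight_fraction(n: int, alpha: float) -> float:
    return n * (1 - alpha) ** 2 * alpha ** (n - 1)

for alpha in (0.5, 0.8, 0.95):
    ch4 = asf_weight_fraction(1, alpha)                                  # methane
    diesel = sum(asf_weight_fraction(n, alpha) for n in range(10, 21))   # ~C10-C20
    wax = 1.0 - sum(asf_weight_fraction(n, alpha) for n in range(1, 21)) # C21+
    print(f"alpha={alpha}: CH4 {ch4:.1%}, C10-C20 {diesel:.1%}, C21+ {wax:.1%}")
```

Running this shows the methane weight fraction collapsing from 25% at α = 0.5 to well under 1% at α = 0.95, while the C21+ (wax) share grows, which is the behavior the paragraph above describes.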
Catalysts
Four metals are active as catalysts for the Fischer–Tropsch process: iron, cobalt, nickel, and ruthenium. Since the FT process typically transforms inexpensive precursors into complex mixtures that require further refining, FT catalysts are based on inexpensive metals, especially iron and cobalt. Nickel generates too much methane, so it is not used.
Typically, such heterogeneous catalysts are obtained through precipitation from iron nitrate solutions. Such solutions can be used to deposit the metal salt onto the catalyst support (see below). Such treated materials transform into active catalysts by heating under CO, H2, or the feedstock to be treated, i.e., the catalysts are generated in situ. Owing to the multistep nature of the FT process, analysis of the catalytically active species is challenging. Furthermore, as is known for iron catalysts, a number of phases may coexist and may participate in diverse steps in the reaction. Such phases include various oxides and carbides as well as polymorphs of the metals. Control of these constituents may be relevant to product distributions. Aside from iron and cobalt, nickel and ruthenium are active for converting the CO/H2 mixture to hydrocarbons. Although expensive, ruthenium is the most active of the Fischer–Tropsch catalysts, in the sense that it works at the lowest reaction temperatures and produces higher molecular weight hydrocarbons. Ruthenium catalysts consist of the metal alone, without any promoters, providing a relatively simple system suitable for mechanistic analysis. Its high price precludes industrial applications. Cobalt catalysts are more active for FT synthesis when the feedstock is natural gas. Natural gas has a high hydrogen-to-carbon ratio, so the water-gas shift is not needed for cobalt catalysts. Cobalt-based catalysts are more sensitive than their iron counterparts.
Illustrative of real-world catalyst selection, high-temperature Fischer–Tropsch (HTFT), which operates at 330–350 °C, uses an iron-based catalyst. This process was used extensively by Sasol in its coal-to-liquid (CTL) plants. Low-temperature Fischer–Tropsch (LTFT) uses an iron- or cobalt-based catalyst. This process is best known for being used in the first integrated GTL plant operated and built by Shell in Bintulu, Malaysia.
Promoters and supports
In addition to the active metal (usually Fe or Co), two other components make up the catalyst: promoters and the catalyst support. Promoters are additives that enhance the behavior of the catalyst. For F-T catalysts, typical promoters include potassium and copper, which are usually added as salts. The choice of promoters depends on the primary metal: while group 1 alkali metals (e.g., potassium) help iron catalysts, they poison cobalt catalysts. Iron catalysts need alkali promotion to attain high activity and stability (e.g. 0.5 wt% ). Potassium-doped α-Fe2O3 is synthesized under variable calcination temperatures (400–800 °C). Addition of Cu for reduction promotion, addition of for structural promotion, and maybe some manganese can be applied for selectivity control (e.g. high olefinicity).
Catalysts are supported on high-surface-area binders/supports such as silica, alumina, or zeolites.
History
The F-T process attracted attention as a means for Nazi Germany to produce liquid hydrocarbons. The original process was developed by Franz Fischer and Hans Tropsch, working at the Kaiser Wilhelm Institute for Coal Research, in 1926. They filed a number of patents, e.g., , applied 1926, published 1930. It was commercialized by Brabag in Germany in 1936. Being petroleum-poor but coal-rich, Germany used the process during World War II to produce ersatz (replacement) fuels. FT production accounted for an estimated 9% of German war production of fuels and 25% of the automobile fuel. Many refinements and adjustments have been made to the process since Fischer and Tropsch's time.
The United States Bureau of Mines, in a program initiated by the Synthetic Liquid Fuels Act, employed seven Operation Paperclip synthetic fuel scientists in a Fischer–Tropsch plant in Louisiana, Missouri in 1946.
In Britain, Alfred August Aicher obtained several patents for improvements to the process in the 1930s and 1940s. Aicher's company was named Synthetic Oils Ltd (not related to a company of the same name in Canada).
Around the 1930s and 1940s, Arthur Imhausen developed and implemented an industrial process for producing edible fats from these synthetic oils through oxidation. The products were fractionally distilled, and the edible fats were obtained from the - fraction, which was reacted with glycerol such as that synthesized from propylene. "Coal butter" margarine made from synthetic oils was found to be nutritious and of agreeable taste, and it was incorporated into diets, contributing as much as 700 calories per day. The process required at least 60 kg of coal per kg of synthetic butter.
Commercialization
Uzbekistan GTL
Ras Laffan, Qatar
The LTFT facility Pearl GTL at Ras Laffan, Qatar, is the second largest FT plant in the world after Sasol's Secunda plant in South Africa. It uses cobalt catalysts at 230 °C, converting natural gas to petroleum liquids at a rate of , with additional production of of oil equivalent in natural gas liquids and ethane.
Another plant in Ras Laffan, called Oryx GTL, was commissioned in 2007 with a capacity of . The plant utilizes the Sasol slurry phase distillate process, which uses a cobalt catalyst. Oryx GTL is a joint venture between QatarEnergy and Sasol.
Sasol
The world's largest scale implementation of Fischer–Tropsch technology is a series of plants operated by Sasol in South Africa, a country with large coal reserves but little oil, with a capacity of 165,000 bpd at its Secunda CTL plant. The first commercial plant opened in 1952. Sasol uses coal and natural gas as feedstocks and produces a variety of synthetic petroleum products, including most of the country's diesel fuel.
PetroSA
PetroSA, another South African company, operates a refinery with a 36,000 barrel-per-day plant that completed semi-commercial demonstration in 2011, paving the way to begin commercial preparation. The technology can be used to convert natural gas, biomass, or coal into synthetic fuels.
Shell middle distillate synthesis
One of the largest implementations of Fischer–Tropsch technology is in Bintulu, Malaysia. This Shell facility converts natural gas into low-sulfur diesel fuels and food-grade wax. The scale is .
Velocys
Construction is underway for Velocys' commercial reference plant incorporating its microchannel Fischer–Tropsch technology: ENVIA Energy's Oklahoma City GTL project, being built adjacent to Waste Management's East Oak landfill site. The project is being financed by a joint venture between Waste Management, NRG Energy, Ventech and Velocys. The feedstock for this plant will be a combination of landfill gas and pipeline natural gas.
SGCE
SGC Energia (SGCE) started as a biomass technology licensor. In the summer of 2012 it successfully commissioned a pilot multi-tubular Fischer–Tropsch process unit and associated product-upgrading units at its technology center in Pasadena, Texas. The technology center focused on the development and operation of the company's XTLH solution, which optimized the processing of low-value carbon waste streams into advanced fuels and wax products. This unit also serves as an operations training environment for the 1100 BPD Juniper GTL facility constructed in Westlake, Louisiana.
UPM (Finland)
In October 2006, Finnish paper and pulp manufacturer UPM announced its plans to produce biodiesel by the Fischer–Tropsch process alongside the manufacturing processes at its European paper and pulp plants, using waste biomass resulting from paper and pulp manufacturing processes as source material.
Arcadia eFuels
Texas-based Arcadia eFuels, in conjunction with Sasol and Topsoe, is constructing a sustainable aviation fuel plant in Vordingborg, Denmark, that will use the Fischer–Tropsch process to convert syngas derived from water electrolysis and carbon capture into an e-diesel fuel for aviation. The plant is planned to begin production in 2028, with additional plants in development in Teesside, United Kingdom, and the United States.
Rentech
A demonstration-scale Fischer–Tropsch plant was built and operated by Rentech, Inc., in partnership with ClearFuels, a company specializing in biomass gasification. Located in Commerce City, Colorado, the facility produced about of fuels from natural gas. Commercial-scale facilities were planned for Rialto, California; Natchez, Mississippi; Port St. Joe, Florida; and White River, Ontario. Rentech closed down their pilot plant in 2013 and abandoned work on their FT process as well as the proposed commercial facilities.
INFRA GTL Technology
In 2010, INFRA built a compact pilot plant for the conversion of natural gas into synthetic oil. The plant modeled the full cycle of the GTL chemical process, including the intake of pipeline gas, sulfur removal, steam methane reforming, syngas conditioning, and Fischer–Tropsch synthesis. In 2013 the first pilot plant was acquired by VNIIGAZ Gazprom LLC. In 2014 INFRA commissioned and operated on a continuous basis a new, larger-scale full-cycle pilot plant. It represents the second generation of INFRA's testing facility and is differentiated by a high degree of automation and an extensive data-gathering system. In 2015, INFRA built its own catalyst factory in Troitsk (Moscow, Russia). The catalyst factory has a capacity of over 15 tons per year and produces the proprietary Fischer–Tropsch catalysts developed by the company's R&D division. In 2016, INFRA designed and built a modular, transportable GTL (gas-to-liquid) M100 plant for processing natural and associated gas into synthetic crude oil in Wharton, Texas. The M100 plant is operating as a technology demonstration unit, an R&D platform for catalyst refinement, and an economic model for scaling the INFRA GTL process into larger and more efficient plants.
Other
In the United States and India, some coal-producing states have invested in Fischer–Tropsch plants. In Pennsylvania, Waste Management and Processors, Inc. was funded by the state to implement FT technology licensed from Shell and Sasol to convert so-called waste coal (leftovers from the mining process) into low-sulfur diesel fuel.
Research developments
Choren Industries built a plant in Germany that converted biomass to syngas and fuels using the Shell FT process. The company went bankrupt in 2011 due to impracticalities in the process.
Biomass gasification (BG) and Fischer–Tropsch (FT) synthesis can in principle be combined to produce renewable transportation fuels (biofuels).
In partnership with Sunfire, Audi produces e-diesel at small scale in a two-step process, the second step being FT synthesis.
U.S. Air Force certification
Syntroleum, a publicly traded United States company, has produced over of diesel and jet fuel from the Fischer–Tropsch process using natural gas and coal at its demonstration plant near Tulsa, Oklahoma. Syntroleum is working to commercialize its licensed Fischer–Tropsch technology via coal-to-liquid plants in the United States, China, and Germany, as well as gas-to-liquid plants internationally. Using natural gas as a feedstock, the ultra-clean, low-sulfur fuel has been tested extensively by the United States Department of Energy and the United States Department of Transportation. Syntroleum has worked to develop a synthetic jet fuel blend that will help the Air Force to reduce its dependence on imported petroleum. The Air Force, which is the United States military's largest user of fuel, began exploring alternative fuel sources in 1999. On December 15, 2006, a B-52 took off from Edwards Air Force Base, California, for the first time powered solely by a 50–50 blend of JP-8 and Syntroleum's FT fuel. The seven-hour flight test was considered a success. The goal of the flight test program was to qualify the fuel blend for fleet use on the service's B-52s, followed by flight testing and qualification on other aircraft. The test program concluded in 2007. This program is part of the Department of Defense Assured Fuel Initiative, an effort to develop secure domestic sources for the military's energy needs. The Pentagon hoped to reduce its use of crude oil from foreign producers and obtain about half of its aviation fuel from alternative sources by 2016.
Carbon dioxide reuse
Carbon dioxide is not a typical feedstock for FT catalysis. Hydrogen and carbon dioxide react over a cobalt-based catalyst, producing methane. With iron-based catalysts, unsaturated short-chain hydrocarbons are also produced. Upon introduction to the catalyst's support, ceria functions as a reverse water-gas shift catalyst, further increasing the yield of the reaction. The short-chain hydrocarbons were upgraded to liquid fuels over solid acid catalysts, such as zeolites.
Process efficiency
Using conventional FT technology, the carbon efficiency of the process ranges from 25 to 50 percent. The thermal efficiency is about 50% for CTL facilities (idealised at 60%), while GTL facilities reach about 60% efficiency (idealised to 80%).
Fischer–Tropsch in nature
A Fischer–Tropsch-type process has also been suggested to have produced a few of the building blocks of DNA and RNA within asteroids. Similarly, the hypothetical abiogenic petroleum formation requires some naturally occurring FT-like processes.
Biological Fischer–Tropsch-type chemistry can be carried out by the enzyme nitrogenase at ambient conditions.
See also
, a generic term for this type of process
References
Further reading
External links
Modeling and Integration of Green-Hydrogen-Assisted Carbon Dioxide Utilization for Hydrocarbon Manufacturing
Fischer–Tropsch archives
Fischer–Tropsch fuels from coal and biomass
Abiogenic gas debate (AAPG Explorer Nov. 2002)
Gas origin theories to be studied (AAPG Explorer Nov. 2002)
Unconventional ideas about unconventional gas (Society of Petroleum Engineers)
Process of synthesis of liquid hydrocarbons – Great Britain patent GB309002 – Hermann Plauson
Clean diesel from coal by Kevin Bullis
Implementing the "Hydrogen Economy" with Synfuels (pdf)
Carbon-to-liquids research
Effect of alkali metals on cobalt catalysts
Biofuels technology
Catalysis
Coal
Organometallic chemistry
Petroleum production
Synthetic fuel technologies
German inventions
1925 in science
1925 in Germany
Organic redox reactions
Name reactions | Fischer–Tropsch process | [
"Chemistry",
"Biology"
] | 5,592 | [
"Catalysis",
"Biofuels technology",
"Petroleum technology",
"Organic redox reactions",
"Organic reactions",
"Name reactions",
"Synthetic fuel technologies",
"Chemical kinetics",
"Organometallic chemistry"
] |
1,285,078 | https://en.wikipedia.org/wiki/Esterel | Esterel is a synchronous programming language for the development of complex reactive systems. The imperative programming style of Esterel allows the simple expression of parallelism and preemption. As a consequence, it is well suited for control-dominated model designs.
The development of the language started in the early 1980s and was mainly carried out by a team at the École des Mines de Paris and INRIA in France, led by Gérard Berry. Current compilers take Esterel programs and generate C code or hardware (RTL) implementations (VHDL or Verilog).
The language is still under development, with several compilers available. The commercial development environment of Esterel is Esterel Studio. The company that commercialized it (Synfora) initiated a normalization process with the IEEE in April 2007; however, the working group (P1778) dissolved in March 2011. The reference manual is publicly available.
A provisional version of Esterel has been implemented in Racket.
The multiform notion of time
The notion of time used in Esterel differs from that of non-synchronous languages in the following way: the notion of physical time is replaced with the notion of order. Only the simultaneity and precedence of events are considered. This means that physical time does not play any special role; this is called the multiform notion of time. An Esterel program describes a totally ordered sequence of logical instants. At each instant, an arbitrary number of events occur (possibly zero). Event occurrences that happen at the same logical instant are considered simultaneous; other events are ordered according to their instants of occurrence. There are two types of statements: those that take zero time (execute and terminate in the same instant) and those that delay for a prescribed number of cycles.
Signals
Signals are the only means of communication. There are valued and non-valued signals, further categorized as input, output, or local signals. A signal has the property of being either present or absent in an instant. Valued signals also contain a value. Signals are broadcast across the program, which means that any process can read or write a signal. The value of a valued signal can be determined in any instant, even if the signal is absent. The default status of a signal is absent; signals remain absent until they are explicitly set to present using the emit statement.
Communication is instantaneous, meaning that a signal emitted in a cycle is visible immediately. Note that one can communicate back and forth within the same cycle.
Signal coherence rules
Each signal is only present or absent in a cycle, never both.
All writers run before any readers do.
Thus
present A else
emit A
end
is an erroneous program, since the writer "emit A" should run before the reader "present A", whereas this program requires "present A" to be performed first.
The language statements
Primitive Esterel statements
Pure Esterel has eleven primitive statements.
Derived Esterel statements
Esterel has several derived constructions:
Other Esterel statements
The full Esterel language also has statements for declaring and instantiating modules, for variables, for calling external procedures, and for valued signals.
Example (ABRO)
The following program emits the output O as soon as both inputs A and B have been received. Reset the behaviour whenever the input R is received.
module ABRO:
input A, B, R;
output O;
loop
[ await A || await B ];
emit O
each R
end module
Advantages of Esterel
Model of time gives programmer precise control
Concurrency convenient for specifying control systems
Completely deterministic
Finite-state language
Execution time predictable
Much easier to verify formally
Can be implemented in hardware as well as in software
Disadvantages of Esterel
Finite-state nature of the language limits flexibility (but expressivity is sufficient for the chosen application field)
Semantic challenges
Avoiding causality violations is often difficult
Difficult to compile in the general case, but simple correctness criteria exist
See also
Lustre, a cousin programming language
SIGNAL, a dataflow-oriented synchronous language enabling multi-clock specifications
Esterel Technologies, developer of Esterel Studio and other tools
Parallel programming model
References
External links
The Esterel Language at Inria
The Columbia Esterel Compiler an open-source compiler
Synchronous programming languages
Hardware description languages | Esterel | [
"Technology",
"Engineering"
] | 870 | [
"Electronic engineering",
"Real-time computing",
"Hardware description languages",
"Synchronous programming languages"
] |
828,001 | https://en.wikipedia.org/wiki/Vortex%20ring | A vortex ring, also called a toroidal vortex, is a torus-shaped vortex in a fluid; that is, a region where the fluid mostly spins around an imaginary axis line that forms a closed loop. The dominant flow in a vortex ring is said to be toroidal, more precisely poloidal.
Vortex rings are plentiful in turbulent flows of liquids and gases, but are rarely noticed unless the motion of the fluid is revealed by suspended particles, as in the smoke rings which are often produced intentionally or accidentally by smokers. Fiery vortex rings are also a trick commonly produced by fire eaters. Visible vortex rings can also be formed by the firing of certain artillery, in mushroom clouds, in microbursts, and rarely in volcanic eruptions.
A vortex ring usually tends to move in a direction that is perpendicular to the plane of the ring and such that the inner edge of the ring moves forward faster than the outer edge. Within a stationary body of fluid, a vortex ring can travel a relatively long distance, carrying the spinning fluid with it.
Structure
In a typical vortex ring, the fluid particles move in roughly circular paths around an imaginary circle (the core) that is perpendicular to those paths. As in any vortex, the velocity of the fluid is roughly constant except near the core, so that the angular velocity increases towards the core, and most of the vorticity (and hence most of the energy dissipation) is concentrated near it.
Unlike a sea wave, whose motion is only apparent, a moving vortex ring actually carries the spinning fluid along. Just as a rotating wheel lessens friction between a car and the ground, the poloidal flow of the vortex lessens the friction between the core and the surrounding stationary fluid, allowing it to travel a long distance with relatively little loss of mass and kinetic energy, and little change in size or shape. Thus, a vortex ring can carry mass much further and with less dispersion than a jet of fluid. That explains, for instance, why a smoke ring keeps traveling long after any extra smoke blown out with it has stopped and dispersed. These properties of vortex rings are exploited in the vortex ring gun for riot control, and vortex ring toys such as the air vortex cannons.
Formation
Formation process
The formation of vortex rings has fascinated the scientific community for more than a century, starting with William Barton Rogers who made sounding observations of the formation process of air vortex rings in air, air rings in liquids, and liquid rings in liquids. In particular, William Barton Rogers made use of the simple experimental method of letting a drop of liquid fall on a free liquid surface; a falling colored drop of liquid, such as milk or dyed water, will inevitably form a vortex ring at the interface due to the surface tension.
A method proposed by G. I. Taylor to generate a vortex ring is to impulsively start a disk from rest. The flow separates to form a cylindrical vortex sheet and by artificially dissolving the disk, one is left with an isolated vortex ring. This is the case when someone is stirring their cup of coffee with a spoon and observing the propagation of a half-vortex in the cup.
In a laboratory, vortex rings are formed by impulsively discharging fluid through a sharp-edged nozzle or orifice. The impulsive motion of the piston/cylinder system is triggered either by an electric actuator or by a pressurized vessel connected to a control valve. For a nozzle geometry, and to a first approximation, the exhaust speed is uniform and equal to the piston speed; this is referred to as a parallel starting jet. It is possible to have a conical nozzle in which the streamlines at the exhaust are directed toward the centerline; this is referred to as a converging starting jet. The orifice geometry, which consists of an orifice plate covering the straight tube exhaust, can be considered as an infinitely converging nozzle, but the vortex formation differs considerably from the converging nozzle, principally due to the absence of a boundary layer in the thickness of the orifice plate throughout the formation process. The fast-moving fluid (A) is therefore discharged into a quiescent fluid (B). The shear imposed at the interface between the two fluids slows down the outer layer of fluid (A) relative to the centerline fluid. In order to satisfy the Kutta condition, the flow is forced to detach, curl and roll up in the form of a vortex sheet. Later, the vortex sheet detaches from the feeding jet and propagates freely downstream due to its self-induced kinematics. This is the process commonly observed when a smoker forms smoke rings from their mouth, and how vortex ring toys work.
Secondary effects are likely to modify the formation process of vortex rings. Firstly, at the very first instants, the velocity profile at the exhaust exhibits extrema near the edge, causing a large vorticity flux into the vortex ring. Secondly, as the ring grows in size at the edge of the exhaust, negative vorticity is generated on the outer wall of the generator, which considerably reduces the circulation accumulated by the primary ring. Thirdly, as the boundary layer inside the pipe, or nozzle, thickens, the velocity profile approaches that of a Poiseuille flow, and the centerline velocity at the exhaust is measured to be larger than the prescribed piston speed. Last but not least, in the event the piston-generated vortex ring is pushed through the exhaust, it may interact or even merge with the primary vortex, hence modifying its characteristics, such as circulation, and potentially forcing the transition of the vortex ring to turbulence.
Vortex ring structures are easily observable in nature. For instance, a mushroom cloud formed by a nuclear explosion or volcanic eruption has a vortex ring-like structure. Vortex rings are also seen in many different biological flows: blood is discharged into the left ventricle of the human heart in the form of a vortex ring, and jellyfish and squid were shown to propel themselves in water by periodically discharging vortex rings into the surroundings. Finally, for more industrial applications, the synthetic jet, which consists of periodically formed vortex rings, has proved to be an appealing technology for flow control, heat and mass transfer, and thrust generation.
Vortex formation number
Prior to Gharib et al. (1998), few studies had focused on the formation of vortex rings generated with long stroke-to-diameter ratios L/D, where L is the length of the column of fluid discharged through the exhaust and D is the diameter of the exhaust. For short stroke ratios, only one isolated vortex ring is generated and no fluid is left behind in the formation process. For long stroke ratios, however, the vortex ring is followed by some energetic fluid, referred to as the trailing jet. In addition to experimental evidence of the phenomenon, an explanation was provided in terms of energy maximisation, invoking a variational principle first reported by Kelvin and later proven by Benjamin (1976) or Friedman & Turkington (1981). Ultimately, Gharib et al. (1998) observed the transition between these two states to occur at a non-dimensional time, or equivalently a stroke ratio, of about 4. The robustness of this number with respect to initial and boundary conditions suggested the quantity to be a universal constant, and it was thus named the formation number.
The phenomenon of 'pinch-off', or detachment, from the feeding starting jet is observed in a wide range of flows in nature. For instance, it was shown that biological systems such as the human heart or swimming and flying animals generate vortex rings with a stroke-to-diameter ratio close to the formation number of about 4, hence giving ground to the existence of an optimal vortex ring formation process in terms of propulsion, thrust generation and mass transport. In particular, the squid Lolliguncula brevis was shown to propel itself by periodically emitting vortex rings at a stroke ratio close to 4. Moreover, in another study by Gharib et al. (2006), the formation number was used as an indicator to monitor the health of the human heart and identify patients with dilated cardiomyopathy.
Other examples
Vortex ring state in helicopters
Air vortices can form around the main rotor of a helicopter, causing a dangerous condition known as vortex ring state (VRS) or "settling with power". In this condition, air that moves down through the rotor turns outward, then up, inward, and then down through the rotor again. This re-circulation of flow can negate much of the lifting force and cause a catastrophic loss of altitude. Applying more power (increasing collective pitch) serves to further accelerate the downwash through which the main-rotor is descending, exacerbating the condition.
In the human heart
A vortex ring is formed in the left ventricle of the human heart during cardiac relaxation (diastole), as a jet of blood enters through the mitral valve. This phenomenon was initially observed in vitro and subsequently strengthened by analyses based on color Doppler mapping and magnetic resonance imaging. Some recent studies have also confirmed the presence of a vortex ring during rapid filling phase of diastole and implied that the process of vortex ring formation can influence mitral annulus dynamics.
Bubble rings
Releasing air underwater forms bubble rings, which are vortex rings of water with bubbles (or even a single donut-shaped bubble) trapped along its axis line. Such rings are often produced by scuba divers and dolphins.
Volcanoes
Under particular conditions, some volcanic vents can produce large visible vortex rings. Though the phenomenon is rare, several volcanoes have been observed emitting massive vortex rings as erupting steam and gas condense, forming visible toroidal clouds:
Mount Etna, Italy (Sicily)
Stromboli, Italy (Aeolian Islands)
Eyjafjallajökull, Iceland
Hekla, Iceland
Tungurahua, Ecuador
Pacaya, Guatemala
Mount Redoubt, United States (Alaska)
Mount Aso, Japan (Kyushu)
Whakaari (White Island), New Zealand
Gunung Slamet, Indonesia (Central Java)
Momotombo, Nicaragua (León)
Separated vortex rings
There has been research and experiments on the existence of separated vortex rings (SVR) such as those formed in the wake of the pappus of a dandelion. This special type of vortex ring effectively stabilizes the seed as it travels through the air and increases the lift generated by the seed. Compared to a standard vortex ring, which is propelled downstream, the axially symmetric SVR remains attached to the pappus for the duration of its flight and uses drag to enhance the travel. These dandelion seed structures have been used to create tiny battery-free wireless sensors that can float in the wind and be dispersed across a large area.
Theory
Historical studies
Vortex rings were first mathematically analyzed by the German physicist Hermann von Helmholtz, in his 1858 paper On Integrals of the Hydrodynamical Equations which Express Vortex-motion.
Circular vortex lines
For a single zero-thickness vortex ring, the vorticity is represented by a Dirac delta function as ω(r, z) = Γ δ(r − R) δ(z − Z), where (R, Z) denotes the coordinates of the vortex filament of strength Γ in a constant-azimuth half-plane. The Stokes stream function is:

ψ = (Γ / 2π) (r1 + r2) [K(λ) − E(λ)]

with

λ = (r2 − r1) / (r2 + r1)

where r1 and r2 are respectively the least and the greatest distance from the point to the vortex line, and where K is the complete elliptic integral of the first kind and E is the complete elliptic integral of the second kind.
A circular vortex line is the limiting case of a thin vortex ring. Because there is no core thickness, the speed of the ring is infinite, as is the kinetic energy. The hydrodynamic impulse can be expressed in terms of the strength, or 'circulation' Γ, of the vortex ring as P = ρ π R² Γ.
Thin-core vortex rings
The discontinuity introduced by the Dirac delta function prevents the computation of the speed and the kinetic energy of a circular vortex line. It is however possible to estimate these quantities for a vortex ring having a finite small thickness. For a thin vortex ring, the core can be approximated by a disk of radius a which is assumed to be infinitesimal compared to the radius of the ring R, i.e. a ≪ R. As a consequence, inside and in the vicinity of the core ring, one may write r1 ≪ r2 and r2 ≈ 2R, so that, in the limit λ → 1, the elliptic integrals can be approximated by K(λ) ≈ ln(4 / √(1 − λ²)) and E(λ) ≈ 1.
For a uniform vorticity distribution in the disk, the Stokes stream function can therefore be approximated by
The resulting circulation Γ, hydrodynamic impulse P and kinetic energy E are

Γ = π a² ω,
P = ρ π R² Γ,
E = ½ ρ R Γ² [ln(8R/a) − 7/4].
It is also possible to find the translational ring speed (which is finite) of such an isolated thin-core vortex ring, which finally results in the well-known expression found by Kelvin and published in the English translation by Tait of von Helmholtz's paper:

U = (Γ / 4πR) [ln(8R/a) − 1/4]
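As a numerical illustration, here is a short Python sketch of Kelvin's formula above; the circulation, ring radius and core radius are made-up values chosen only to respect the thin-core assumption a ≪ R:

import math

def kelvin_ring_speed(gamma: float, R: float, a: float) -> float:
    # U = Gamma / (4*pi*R) * (ln(8R/a) - 1/4), valid for a << R
    return gamma / (4 * math.pi * R) * (math.log(8 * R / a) - 0.25)

# Example: Gamma = 0.05 m^2/s, ring radius R = 2 cm, core radius a = 2 mm
print(kelvin_ring_speed(0.05, 0.02, 0.002))  # about 0.8 m/s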
Spherical vortices
Hill's spherical vortex is an example of steady vortex flow and may be used to model vortex rings having a vorticity distribution extending to the centerline. More precisely, the model supposes a linearly distributed vorticity in the radial direction starting from the centerline and bounded by a sphere of radius a:
where U is the constant translational speed of the vortex.
Finally, the Stokes stream function of Hill's spherical vortex can be computed and is given by:
The above expressions correspond to the stream function describing a steady flow. In a fixed frame of reference, the stream function of the bulk flow having a speed should be added.
The circulation, the hydrodynamic impulse and the kinetic energy can also be calculated in terms of the translational speed and radius :
Such a structure or an electromagnetic equivalent has been suggested as an explanation for the internal structure of ball lightning. For example, Shafranov used a magnetohydrodynamic (MHD) analogy to Hill's stationary fluid mechanical vortex to consider the equilibrium conditions of axially symmetric MHD configurations, reducing the problem to the theory of stationary flow of an incompressible fluid. In axial symmetry, he considered general equilibrium for distributed currents and concluded under the Virial Theorem that if there were no gravitation, a bounded equilibrium configuration could exist only in the presence of an azimuthal current.
Fraenkel-Norbury model
The Fraenkel–Norbury model of the isolated vortex ring, sometimes referred to as the standard model, refers to the class of steady vortex rings having a linear distribution of vorticity in the core and parametrised by the mean core radius ε = √(A/πR²), where A is the area of the vortex core and R is the radius of the ring. Approximate solutions were found for thin-core rings, i.e. ε ≪ 1, and thick Hill's-like vortex rings, i.e. ε → √2, Hill's spherical vortex having a mean core radius of precisely ε = √2. For mean core radii in between, one must rely on numerical methods. Norbury (1973) found numerically the resulting steady vortex ring of given mean core radius, for a set of 14 mean core radii ranging from 0.1 to 1.35. The resulting streamlines defining the core of the ring were tabulated, as well as the translational speed. In addition, the circulation, the hydrodynamic impulse and the kinetic energy of such steady vortex rings were computed and presented in non-dimensional form.
Instabilities
A kind of azimuthal radiant-symmetric structure was observed by Maxworthy when the vortex ring traveled around a critical velocity, which lies between the turbulent and laminar states. Later, Huang and Chan reported that if the initial state of the vortex ring is not perfectly circular, another kind of instability occurs. An elliptical vortex ring undergoes an oscillation in which it is first stretched in the vertical direction and squeezed in the horizontal direction, then passes through an intermediate state where it is circular, then is deformed in the opposite way (stretched in the horizontal direction and squeezed in the vertical) before reversing the process and returning to the original state.
See also
Air vortex cannon
Bubble ring – underwater vortex ring
Mushroom cloud
Toroidal moment
Vortex ring gun
Vortex ring toy
References
External links
YouTube video of Vortex ring cannon
Fluid dynamics lecture covering vortices
An animation of a vortex ring
Giant vortex ring generator
Toy Box Physics: Vortices, Air Cannons, and Mushroom Clouds
Thesis on vortex ring formation and interactions
Vortex half-ring in a pool, Dianna Cowern (Physics Girl), YouTube
More experiments with vortex rings in a pool, Dianna Cowern (Physics Girl), YouTube
Aerodynamics
Aviation risks
Helicopter aerodynamics
Vortices | Vortex ring | [
"Chemistry",
"Mathematics",
"Engineering"
] | 3,481 | [
"Vortices",
"Aerodynamics",
"Aerospace engineering",
"Fluid dynamics",
"Dynamical systems"
] |
828,061 | https://en.wikipedia.org/wiki/A%20Mathematical%20Theory%20of%20Communication | "A Mathematical Theory of Communication" is an article by mathematician Claude E. Shannon published in Bell System Technical Journal in 1948. It was renamed The Mathematical Theory of Communication in the 1949 book of the same name, a small but significant title change after realizing the generality of this work. It has tens of thousands of citations, being one of the most influential and cited scientific papers of all time, as it gave rise to the field of information theory, with Scientific American referring to the paper as the "Magna Carta of the Information Age", while the electrical engineer Robert G. Gallager called the paper a "blueprint for the digital era". Historian James Gleick rated the paper as the most important development of 1948, placing the transistor second in the same time period, with Gleick emphasizing that the paper by Shannon was "even more profound and more fundamental" than the transistor.
It is also noted that "as did relativity and quantum theory, information theory radically changed the way scientists look at the universe". The paper also formally introduced the term "bit" and serves as its theoretical foundation.
Publication
The article was the founding work of the field of information theory. It was later published in 1949 as a book titled The Mathematical Theory of Communication, which was published as a paperback in 1963. The book contains an additional article by Warren Weaver, providing an overview of the theory for a more general audience.
Contents
This work is known for introducing the concepts of channel capacity as well as the noisy channel coding theorem.
Shannon's article laid out the basic elements of communication:
An information source that produces a message
A transmitter that operates on the message to create a signal which can be sent through a channel
A channel, which is the medium over which the signal, carrying the information that composes the message, is sent
A receiver, which transforms the signal back into the message intended for delivery
A destination, which can be a person or a machine, for whom or which the message is intended
It also developed the concepts of information entropy, redundancy and the source coding theorem, and introduced the term bit (which Shannon credited to John Tukey) as a unit of information. It was also in this paper that the Shannon–Fano coding technique was proposed – a technique developed in conjunction with Robert Fano.
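To illustrate the unit the paper introduced, here is a minimal Python sketch (our own, not from the paper) of the entropy of a discrete source, H = −Σ p_i log2(p_i), measured in bits:

import math

def entropy_bits(probs):
    # Shannon entropy in bits; terms with p = 0 contribute nothing
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit: one fair coin flip
print(entropy_bits([0.9, 0.1]))   # ~0.47 bits: a biased, redundant source
print(entropy_bits([0.25] * 4))   # 2.0 bits: four equally likely symbols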
References
External links
(PDF) "A Mathematical Theory of Communication" by C. E. Shannon (reprint with corrections) hosted by the Harvard Mathematics Department, at Harvard University
Original publications: ,
Khan Academy video about "A Mathematical Theory of Communication"
1963 non-fiction books
Information theory
Computer science books
Mathematics books
Mathematics papers
Works originally published in American magazines
1948 documents
Works originally published in science and technology magazines
Texts related to the history of the Internet
Claude Shannon | A Mathematical Theory of Communication | [
"Mathematics",
"Technology",
"Engineering"
] | 563 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
829,697 | https://en.wikipedia.org/wiki/Spinthariscope | A spinthariscope () is a device for observing individual nuclear disintegrations caused by the interaction of ionizing radiation with a phosphor (see radioluminescence) or scintillator.
Invention
The spinthariscope was invented by William Crookes in 1903. While observing the apparently uniform fluorescence on a zinc sulfide screen created by the radioactive emissions (mostly alpha radiation) of a sample of radium bromide, he spilled some of the sample, and, owing to its extreme rarity and cost, he was eager to find and recover it. Upon inspecting the zinc sulfide screen under a microscope, he noticed separate flashes of light created by individual alpha particle collisions with the screen. Crookes took his discovery a step further and invented a device specifically intended to view these scintillations. It consisted of a small screen coated with zinc sulfide affixed to the end of a tube, with a tiny amount of radium salt suspended a short distance from the screen and a lens on the other end of the tube for viewing the screen. Crookes named his device from the Greek σπινθήρ ("spark").
Crookes debuted the spinthariscope at a meeting of the Royal Society, London on 15 May 1903.
Toy spinthariscopes
Spinthariscopes were quickly replaced with more accurate and quantitative devices for measuring radiation in scientific experiments, but enjoyed a modest revival in the mid 20th century as children's educational toys. In 1947, Kix cereal offered a Lone Ranger atomic bomb ring that contained a small one, in exchange for a box top and US$0.15. Spinthariscopes can still be bought today as instructional novelties, but they now use americium or thorium. Looking into a properly focused toy spinthariscope, one can see many flashes of light spread randomly across the screen. Almost all are circular, with a very bright pinpoint centre surrounded by a dimmer circle of emission.
In museums
The American History Museum of the Smithsonian has several spinthariscopes in its collections, and an article discussing them. However, none are currently on display.
References
External links
Modern spinthariscope
Elements of electricity: a practical discussion of the fundamental laws and ... by Robert Andrews Millikan, Edwin Sherwood Bishop, American Technical Society
Particle detectors
Radioactivity
1903 introductions
Ionising radiation detectors
Educational toys | Spinthariscope | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 488 | [
"Radioactive contamination",
"Measuring instruments",
"Particle detectors",
"Ionising radiation detectors",
"Nuclear physics",
"Radioactivity"
] |
831,261 | https://en.wikipedia.org/wiki/Bhattacharyya%20distance | In statistics, the Bhattacharyya distance is a quantity which represents a notion of similarity between two probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations.
It is not a metric, despite being named a "distance", since it does not obey the triangle inequality.
History
Both the Bhattacharyya distance and the Bhattacharyya coefficient are named after Anil Kumar Bhattacharyya, a statistician who worked in the 1930s at the Indian Statistical Institute. He developed the measure through a series of papers. He developed the method to measure the distance between two non-normal distributions and illustrated it with the classical multinomial populations; this work, despite being submitted for publication in 1941, appeared almost five years later in Sankhya. Consequently, Professor Bhattacharyya started working toward developing a distance metric for probability distributions that are absolutely continuous with respect to the Lebesgue measure; he published his progress in 1942 in the Proceedings of the Indian Science Congress, and the final work appeared in 1943 in the Bulletin of the Calcutta Mathematical Society.
Definition
For probability distributions P and Q on the same domain X, the Bhattacharyya distance is defined as

D_B(P, Q) = −ln(BC(P, Q))

where

BC(P, Q) = Σ_{x∈X} √(P(x) Q(x))

is the Bhattacharyya coefficient for discrete probability distributions.
For continuous probability distributions, with probability density functions p(x) and q(x), the Bhattacharyya coefficient is defined as

BC(P, Q) = ∫ √(p(x) q(x)) dx.
More generally, given two probability measures P and Q on a measurable space (X, B), let λ be a (sigma-finite) measure such that P and Q are absolutely continuous with respect to λ, i.e. such that dP = p(x) dλ(x) and dQ = q(x) dλ(x) for probability density functions p and q with respect to λ defined λ-almost everywhere. Such a measure, even such a probability measure, always exists, e.g. λ = (P + Q)/2. Then define the Bhattacharyya measure on (X, B) by

bc(dx) = √(p(x) q(x)) λ(dx).

It does not depend on the measure λ, for if we choose a measure μ such that λ and another measure choice λ′ are absolutely continuous with respect to μ, i.e. λ = l(x) dμ and λ′ = l′(x) dμ, then

dP = p dλ = p l dμ = p′ dλ′ = p′ l′ dμ,

and similarly for Q. We then have

bc(dx) = √(p(x) q(x)) λ(dx) = √(p l · q l) dμ = √(p′ l′ · q′ l′) dμ = √(p′(x) q′(x)) λ′(dx).

We finally define the Bhattacharyya coefficient

BC(P, Q) = ∫_X bc(dx) = ∫_X √(p(x) q(x)) dλ(x).

By the above, the quantity BC(P, Q) does not depend on λ, and by the Cauchy inequality 0 ≤ BC(P, Q) ≤ 1. Using dP = p dλ and dQ = q dλ,

BC(P, Q) = ∫_X √(dP dQ).
Gaussian case
Let p ~ N(μ_p, σ_p²), q ~ N(μ_q, σ_q²), where N(μ, σ²) is the normal distribution with mean μ and variance σ²; then

D_B(p, q) = (1/4) ln( (1/4) (σ_p²/σ_q² + σ_q²/σ_p² + 2) ) + (1/4) (μ_p − μ_q)² / (σ_p² + σ_q²).

And in general, given two multivariate normal distributions p_i = N(μ_i, Σ_i),

D_B = (1/8) (μ_1 − μ_2)ᵀ Σ^(−1) (μ_1 − μ_2) + (1/2) ln( det Σ / √(det Σ_1 det Σ_2) ),

where Σ = (Σ_1 + Σ_2)/2. Note that the first term is a squared Mahalanobis distance.
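A short Python sketch of the univariate formula above (the function name and test values are illustrative):

import math

def bhattacharyya_gaussian(mu_p, var_p, mu_q, var_q):
    # D_B between N(mu_p, var_p) and N(mu_q, var_q)
    return (0.25 * math.log(0.25 * (var_p / var_q + var_q / var_p + 2))
            + 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q))

print(bhattacharyya_gaussian(0.0, 1.0, 0.0, 1.0))  # 0.0 for identical distributions
print(bhattacharyya_gaussian(0.0, 1.0, 0.0, 9.0))  # ~0.255: equal means, unequal variances

The second call illustrates the point made under Applications below: with equal means the Mahalanobis distance vanishes while the Bhattacharyya distance does not.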
Properties
0 ≤ BC(P, Q) ≤ 1 and 0 ≤ D_B(P, Q) ≤ ∞.
D_B(P, Q) does not obey the triangle inequality, though the Hellinger distance √(1 − BC(P, Q)) does.
Bounds on Bayes error
The Bhattacharyya distance can be used to upper and lower bound the Bayes error rate:
where and is the posterior probability.
Applications
The Bhattacharyya coefficient quantifies the "closeness" of two random statistical samples.
Given two sequences drawn from distributions P and Q, bin them into n buckets, and let the relative frequency of samples from P in bucket i be p_i, and similarly q_i for Q; then the sample Bhattacharyya coefficient is

BC = Σ_{i=1}^{n} √(p_i q_i),

which is an estimator of BC(P, Q). The quality of estimation depends on the choice of buckets: too few buckets would overestimate BC(P, Q), while too many would underestimate it.
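A minimal Python sketch of this estimator, assuming two histograms over the same buckets (the bucket counts are invented):

import math

def bhattacharyya_coefficient(p_counts, q_counts):
    # BC = sum_i sqrt(p_i * q_i) over relative frequencies p_i, q_i
    n_p, n_q = sum(p_counts), sum(q_counts)
    return sum(math.sqrt((cp / n_p) * (cq / n_q))
               for cp, cq in zip(p_counts, q_counts))

p = [10, 20, 40, 20, 10]  # histogram of sample 1
q = [5, 15, 40, 25, 15]   # histogram of sample 2
bc = bhattacharyya_coefficient(p, q)
print(bc, -math.log(bc))  # coefficient near 1, hence a small distance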
A common task in classification is estimating the separability of classes. Up to a multiplicative factor, the squared Mahalanobis distance is a special case of the Bhattacharyya distance when the two classes are normally distributed with the same variances. When two classes have similar means but significantly different variances, the Mahalanobis distance would be close to zero, while the Bhattacharyya distance would not be.
The Bhattacharyya coefficient is used in the construction of polar codes.
The Bhattacharyya distance is used in feature extraction and selection, image processing, speaker recognition, phone clustering, and in genetics.
See also
Bhattacharyya angle
Kullback–Leibler divergence
Hellinger distance
Mahalanobis distance
Chernoff bound
Rényi entropy
F-divergence
Fidelity of quantum states
References
External links
Statistical Intuition of Bhattacharyya's distance
Some of the properties of Bhattacharyya Distance
Nielsen, F.; Boltz, S. (2010). "The Burbea–Rao and Bhattacharyya centroids". IEEE Transactions on Information Theory. 57 (8): 5455–5466.
Kailath, T. (1967). "The Divergence and Bhattacharyya Distance Measures in Signal Selection". IEEE Transactions on Communication Technology. 15 (1): 52–60.
Djouadi, A.; Snorrason, O.; Garber, F. (1990). "The quality of Training-Sample estimates of the Bhattacharyya coefficient". IEEE Transactions on Pattern Analysis and Machine Intelligence. 12 (1): 92–97.
Statistical distance
Statistical deviation and dispersion
Anil Kumar Bhattacharya | Bhattacharyya distance | [
"Physics"
] | 1,029 | [
"Physical quantities",
"Statistical distance",
"Distance"
] |
831,350 | https://en.wikipedia.org/wiki/Distance%20matrix | In mathematics, computer science and especially graph theory, a distance matrix is a square matrix (two-dimensional array) containing the distances, taken pairwise, between the elements of a set. Depending upon the application involved, the distance being used to define this matrix may or may not be a metric. If there are elements, this matrix will have size . In graph-theoretic applications, the elements are more often referred to as points, nodes or vertices.
Non-metric distance matrix
In general, a distance matrix is a weighted adjacency matrix of some graph. In a network, a directed graph with weights assigned to the arcs, the distance between two nodes of the network can be defined as the minimum of the sums of the weights on the shortest paths joining the two nodes (where the number of steps in the path is bounded). This distance function, while well defined, is not a metric. There need be no restrictions on the weights other than the need to be able to combine and compare them, so negative weights are used in some applications. Since paths are directed, symmetry can not be guaranteed, and if negative-weight cycles exist the distance matrix may not be hollow (and in the absence of a bound on the step count, the matrix may be undefined).
An algebraic formulation of the above can be obtained by using the min-plus algebra. Matrix multiplication in this system is defined as follows: given two n × n matrices A = (a_ij) and B = (b_ij), their distance product C = (c_ij) = A ⋆ B is defined as an n × n matrix such that

c_ij = min over k of (a_ik + b_kj).
Note that the off-diagonal elements that are not connected directly will need to be set to infinity or a suitable large value for the min-plus operations to work correctly. A zero in these locations will be incorrectly interpreted as an edge with no distance, cost, etc.
If W is an n × n matrix containing the edge weights of a graph, then W^k (using this distance product) gives the distances between vertices using paths of length at most k edges, and so W^k is the distance matrix of the graph when the step count bound is set to k. If there are no loops of negative weight, W^(n−1) gives the true distance matrix, with no bound, because removing repeated vertices from a path cannot lower its weight, so no shortest path need use more than n − 1 edges. On the other hand, if i and j are on a negative-weight loop, the (i, j) entry of W^k will decrease without bound as k increases.
An arbitrary graph G on n vertices can be modeled as a weighted complete graph on n vertices by assigning a weight of one to each edge of the complete graph that corresponds to an edge of G and infinity to all other edges; the resulting matrix W plays the role of the adjacency matrix of G. The distance matrix of G can be computed from W as above; by contrast, if normal matrix multiplication is used, and unlinked vertices are represented with 0, the n-th power would instead encode the number of paths between any two vertices of length exactly n.
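A minimal Python sketch of the distance product, with missing edges represented by infinity as noted above (the three-vertex example graph is made up):

INF = float("inf")

def distance_product(A, B):
    # C[i][j] = min over k of (A[i][k] + B[k][j])
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 4, INF],   # weighted graph; A[i][i] = 0, absent edges = INF
     [4, 0, 1],
     [INF, 1, 0]]
A2 = distance_product(A, A)  # shortest paths using at most two edges
print(A2[0][2])              # 5, via the middle vertex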
Metric distance matrix
The value of a distance matrix formalism in many applications is in how the distance matrix can manifestly encode the metric axioms and in how it lends itself to the use of linear algebra techniques. That is, if M = (d_ij) with 1 ≤ i, j ≤ N is a distance matrix for a metric distance, then
the entries on the main diagonal are all zero (that is, the matrix is a hollow matrix), i.e. d_ii = 0 for all 1 ≤ i ≤ N,
all the off-diagonal entries are positive (d_ij > 0 if i ≠ j), (that is, a non-negative matrix),
the matrix is a symmetric matrix (d_ij = d_ji), and
for any i and j, d_ij ≤ d_ik + d_kj for all k (the triangle inequality). This can be stated in terms of tropical matrix multiplication: M ⋆ M = M.
When a distance matrix satisfies the first three axioms (making it a semi-metric) it is sometimes referred to as a pre-distance matrix. A pre-distance matrix that can be embedded in a Euclidean space is called a Euclidean distance matrix. For mixed-type data that contain numerical as well as categorical descriptors, Gower's distance is a common alternative.
Another common example of a metric distance matrix arises in coding theory when in a block code the elements are strings of fixed length over an alphabet and the distance between them is given by the Hamming distance metric. The smallest non-zero entry in the distance matrix measures the error correcting and error detecting capability of the code.
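For illustration, here is a short Python sketch computing the minimum Hamming distance of a toy block code (the four codewords are invented for the example):

from itertools import combinations

def hamming(u: str, v: str) -> int:
    # number of positions at which two equal-length strings differ
    return sum(a != b for a, b in zip(u, v))

code = ["00000", "01011", "10101", "11110"]
d_min = min(hamming(u, v) for u, v in combinations(code, 2))
print(d_min)  # 3: detects up to 2 errors, corrects (3 - 1) // 2 = 1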
Additive distance matrix
An additive distance matrix is a special type of matrix used in bioinformatics to build a phylogenetic tree. Let x be the lowest common ancestor between two species i and j; we expect M_ij = d(i, x) + d(x, j), i.e. tree distances add along paths. This is where the additive metric comes from. A distance matrix M for a set of species S is said to be additive if and only if there exists a phylogeny T for S such that:
Every edge (u, v) in T is associated with a positive weight d_uv
For every i, j ∈ S, M_ij equals the sum of the edge weights along the path from i to j in T
For this case, M is called an additive matrix and T is called an additive tree. Below we can see an example of an additive distance matrix and its corresponding tree:
Ultrametric distance matrix
The ultrametric distance matrix is defined as an additive matrix which models the constant molecular clock. It is used to build a phylogenetic tree. A matrix M is said to be ultrametric if there exists a tree T such that:
M_ij equals the sum of the edge weights along the path from i to j in T
A root of the tree can be identified with the distance to all the leaves being the same
Here is an example of an ultrametric distance matrix with its corresponding tree:
Bioinformatics
The distance matrix is widely used in the bioinformatics field, and it is present in several methods, algorithms and programs. Distance matrices are used to represent protein structures in a coordinate-independent manner, as well as the pairwise distances between two sequences in sequence space. They are used in structural and sequential alignment, and for the determination of protein structures from NMR or X-ray crystallography.
Sometimes it is more convenient to express data as a similarity matrix.
It is also used to define the distance correlation.
Sequence alignment
An alignment of two sequences is formed by inserting spaces in arbitrary locations along the sequences so that they end up with the same length and there are no two spaces at the same position of the two augmented sequences. One of the primary methods for sequence alignment is dynamic programming. The method is used to fill the distance matrix and then obtain the alignment. In typical usage, for sequence alignment a matrix is used to assign scores to amino-acid matches or mismatches, and a gap penalty for matching an amino-acid in one sequence with a gap in the other.
Global alignment
The Needleman–Wunsch algorithm used to calculate global alignment uses dynamic programming to obtain the distance matrix.
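A minimal Python sketch of the dynamic-programming fill; the scoring parameters (match +1, mismatch −1, gap −1) are one common textbook choice, not the only one:

def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-1):
    # F[i][j] = best score aligning s[:i] with t[:j]
    m, n = len(s), len(t)
    F = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        F[i][0] = i * gap
    for j in range(1, n + 1):
        F[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = F[i - 1][j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            F[i][j] = max(diag, F[i - 1][j] + gap, F[i][j - 1] + gap)
    return F[m][n]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # 0 with these parameters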
Local alignment
The Smith–Waterman algorithm is also dynamic programming based which consists also in obtaining the distance matrix and then obtain the local alignment.
Multiple sequence alignment
Multiple sequence alignment is an extension of pairwise alignment to align several sequences at a time. Different MSA methods are based on the same idea of the distance matrix as global and local alignments.
Center star method. This method defines a center sequence S_c which minimizes the distance between S_c and any other sequence S_i. Then it generates a multiple alignment for the set of sequences so that for every S_i the alignment distance between S_c and S_i is the optimal pairwise alignment. This method has the characteristic that the computed alignment has a sum-of-pairs distance at most twice that of the optimal multiple alignment.
Progressive alignment method. This heuristic method to create MSA first aligns the two most related sequences, and then it progressively aligns the next two most related sequences until all sequences are aligned.
There are other methods that have their own program due to their popularity:
ClustalW
MUSCLE
MAFFT
MANGO
And many more
MAFFT
Multiple alignment using fast Fourier transform (MAFFT) is a program with an algorithm based on progressive alignment, and it offers various multiple alignment strategies. First, MAFFT constructs a distance matrix based on the number of shared 6-tuples. Second, it builds the guide tree based on that matrix. Third, it clusters the sequences with the help of the fast Fourier transform and starts the alignment. Based on the new alignment, it reconstructs the guide tree and aligns again.
Phylogenetic analysis
To perform phylogenetic analysis, the first step is to reconstruct the phylogenetic tree: given a collection of species, the problem is to reconstruct or infer the ancestral relationships among the species, i.e., the phylogenetic tree among the species. Distance matrix methods perform this activity.
Distance matrix methods
Distance matrix methods of phylogenetic analysis explicitly rely on a measure of "genetic distance" between the sequences being classified, and therefore require multiple sequences as an input. Distance methods attempt to construct an all-to-all matrix from the sequence query set describing the distance between each sequence pair. From this is constructed a phylogenetic tree that places closely related sequences under the same interior node and whose branch lengths closely reproduce the observed distances between sequences. Distance-matrix methods may produce either rooted or unrooted trees, depending on the algorithm used to calculate them. Given n species, the input is an n × n distance matrix M where M_ij is the mutation distance between species i and j. The aim is to output a tree of degree 3 which is consistent with the distance matrix.
They are frequently used as the basis for progressive and iterative types of multiple sequence alignment. The main disadvantage of distance-matrix methods is their inability to efficiently use information about local high-variation regions that appear across multiple subtrees. Despite potential problems, distance methods are extremely fast, and they often produce a reasonable estimate of phylogeny. They also have certain benefits over the methods that use characters directly. Notably, distance methods allow use of data that may not be easily converted to character data, such as DNA–DNA hybridization assays.
The following are distance based methods for phylogeny reconstruction:
Additive tree reconstruction
UPGMA
Neighbor joining
Fitch–Margoliash
Additive tree reconstruction
Additive tree reconstruction is based on additive and ultrametric distance matrices. These matrices have a special characteristic:
Consider an additive matrix M. For any three species i, j and k, the corresponding tree is unique. Every ultrametric distance matrix is an additive matrix. We can observe this property for the tree below, which consists of the species i, j, k.

The additive tree reconstruction technique starts with this tree. It then adds one more species each time, based on the distance matrix combined with the property mentioned above. For example, consider an additive matrix M and five species a, b, c, d and e. First we form an additive tree for two species a and b. Then we choose a third one, say c, and attach it to a point v on the edge between a and b. The edge weights are computed with the property above. Next we add the fourth species d to any of the edges. If we apply the property, then we identify that d should be attached to only one specific edge. Finally, we add e following the same procedure as before.
UPGMA
The basic principle of UPGMA (Unweighted Pair Group Method with Arithmetic Mean) is that similar species should be closer in the phylogenetic tree. Hence, it builds the tree by clustering similar sequences iteratively. The method works by building the phylogenetic tree bottom up from its leaves. Initially, we have n leaves (or n singleton trees), each representing a species in the input set S. Those leaves are referred to as clusters. Then, we perform n − 1 iterations. In each iteration, we identify the two clusters C1 and C2 with the smallest average distance and merge them to form a bigger cluster C. If we suppose the distance matrix M is ultrametric, then for any cluster C created by the UPGMA algorithm, C is a valid ultrametric tree.
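A minimal sketch of UPGMA using SciPy, whose "average" linkage method implements exactly this average-distance merging; the toy distance matrix is invented for illustration.

import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

# Toy ultrametric distance matrix for four species A, B, C, D.
D = np.array([[0, 2, 6, 6],
              [2, 0, 6, 6],
              [6, 6, 0, 4],
              [6, 6, 4, 0]], dtype=float)

# linkage(..., method="average") is UPGMA: at each step it merges the two
# clusters with the smallest average inter-cluster distance.
Z = linkage(squareform(D), method="average")
print(Z)  # each row: [cluster_i, cluster_j, merge_distance, new_cluster_size]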
Neighbor joining
Neighbor joining is a bottom-up clustering method. It takes a distance matrix specifying the distance between each pair of sequences. The algorithm starts with a completely unresolved tree, whose topology corresponds to that of a star network, and iterates over the following steps until the tree is completely resolved and all branch lengths are known:
Based on the current distance matrix, calculate the matrix Q: for taxa i and j, Q(i, j) = (n − 2)d(i, j) − Σk d(i, k) − Σk d(j, k), where the sums run over all taxa k, n is the number of taxa and d is the current distance matrix.
Find the pair of distinct taxa i and j (i.e. with i ≠ j) for which Q(i, j) has its lowest value. These taxa are joined to a newly created node, which is connected to the central node.
Calculate the distance from each of the taxa in the pair to this new node.
Calculate the distance from each of the taxa outside of this pair to the new node.
Start the algorithm again, replacing the pair of joined neighbors with the new node and using the distances calculated in the previous step.
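A sketch of the first two steps in NumPy, computing Q and picking the pair of taxa to join; the example distance matrix is illustrative.

import numpy as np

def q_matrix(D):
    """Neighbor-joining Q matrix: Q[i,j] = (n-2)*D[i,j] - row_i - row_j."""
    n = D.shape[0]
    row_sums = D.sum(axis=1)
    Q = (n - 2) * D - row_sums[:, None] - row_sums[None, :]
    np.fill_diagonal(Q, np.inf)  # exclude i == j from the minimum
    return Q

D = np.array([[0, 5, 9, 9, 8],
              [5, 0, 10, 10, 9],
              [9, 10, 0, 8, 7],
              [9, 10, 8, 0, 3],
              [8, 9, 7, 3, 0]], dtype=float)

Q = q_matrix(D)
i, j = np.unravel_index(np.argmin(Q), Q.shape)
print(i, j)  # the pair of taxa joined first (0 and 1 for this matrix)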
Fitch–Margoliash
The Fitch–Margoliash method uses a weighted least squares method for clustering based on genetic distance. Closely related sequences are given more weight in the tree construction process to correct for the increased inaccuracy in measuring distances between distantly related sequences. The least-squares criterion applied to these distances is more accurate but less efficient than the neighbor-joining methods. An additional improvement that corrects for correlations between distances that arise from many closely related sequences in the data set can also be applied at increased computational cost.
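A minimal sketch of the weighted least-squares fit for a fixed four-taxon topology ((A,B),(C,D)); the incidence matrix, the 1/d² weighting and the example distances are assumptions made for illustration.

import numpy as np

# Unrooted topology ((A,B),(C,D)) has 5 branches: the A, B, C, D leaf
# branches plus the internal branch. Rows = taxon pairs AB, AC, AD, BC,
# BD, CD; X[p, b] = 1 if branch b lies on the path between pair p.
X = np.array([[1, 1, 0, 0, 0],   # A-B
              [1, 0, 1, 0, 1],   # A-C
              [1, 0, 0, 1, 1],   # A-D
              [0, 1, 1, 0, 1],   # B-C
              [0, 1, 0, 1, 1],   # B-D
              [0, 0, 1, 1, 0]])  # C-D

d = np.array([4.0, 9.0, 10.0, 9.0, 10.0, 5.0])  # observed pairwise distances
w = 1.0 / d**2   # Fitch-Margoliash weighting: short distances count more

# Solve the weighted least-squares problem min sum(w * (X b - d)^2).
Wsqrt = np.sqrt(w)
b, *_ = np.linalg.lstsq(X * Wsqrt[:, None], d * Wsqrt, rcond=None)
print(b)  # fitted branch lengths, here [2, 2, 2, 3, 5]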
Data Mining and Machine Learning
Data Mining
A common task in data mining is applying cluster analysis to a given set of data, grouping data points so that points within a group are more similar to one another than to points in other groups. Cluster analysis relies heavily on distance matrices, since similarity can be measured with a distance metric. The distance matrix thus becomes the representation of the similarity measure between all the different pairs of data in the set.
Hierarchical clustering
A distance matrix is necessary for traditional hierarchical clustering algorithms, which are often heuristic methods employed in biological sciences such as phylogeny reconstruction. When implementing any of the hierarchical clustering algorithms in data mining, the distance matrix contains all pairwise distances between the points, and clusters are built by repeatedly merging the two closest points or clusters according to the distances in the matrix.
If N is the number of points, the complexity of hierarchical clustering is:
Time complexity: O(N³), due to the repetitive calculations done after every merge to update the distance matrix
Space complexity: O(N²), to store the distance matrix
Machine Learning
Distance metrics are a key part of several machine learning algorithms, which are used in both supervised and unsupervised learning. They are generally used to calculate the similarity between data points: this is where the distance matrix is an essential element. The use of an effective distance matrix improves the performance of the machine learning model, whether it is for classification tasks or for clustering.
K-Nearest Neighbors
A distance matrix is utilized in the k-NN algorithm, one of the slowest but simplest and most used instance-based machine learning algorithms, which can be used in both classification and regression tasks. It is one of the slowest machine learning algorithms because predicting the result for each test sample requires a fully computed distance matrix between that test sample and every training sample in the training set. Once the distance matrix is computed, the algorithm selects the K training samples that are closest to the test sample and predicts the test sample's result from the selected set's majority (classification) or average (regression) value.
Prediction time complexity is O(k · n · d), to compute the distance between each test sample and every training sample to construct the distance matrix, where:
k = number of nearest neighbors selected
n = size of the training set
d = number of dimensions being used for the data
This classification-focused model predicts the label of the target based on the distance matrix between the target and each of the training samples, determining the K samples that are the closest/nearest to the target.
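A compact sketch of distance-matrix-based k-NN classification in NumPy; the dataset and function names are illustrative.

import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Predict labels by majority vote among the k nearest training samples."""
    # Full test-by-train Euclidean distance matrix, shape (n_test, n_train).
    diff = X_test[:, None, :] - X_train[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=2))
    # Indices of the k smallest distances in each row.
    nearest = np.argpartition(D, k, axis=1)[:, :k]
    # Majority vote (classification); use a mean instead for regression.
    votes = y_train[nearest]
    return np.array([np.bincount(row).argmax() for row in votes])

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([[0.2, 0.1], [5.1, 5.0]])))  # [0 1]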
Computer Vision
A distance matrix can be used in neural networks for 2D to 3D regression in image predicting machine learning models.
Information retrieval
Distance matrices using Gaussian mixture distance
The Gaussian mixture distance is used for performing accurate nearest-neighbor search in information retrieval. Under an established Gaussian finite mixture model for the distribution of the data in the database, the Gaussian mixture distance is formulated by minimizing the Kullback-Leibler divergence between the distribution of the retrieval data and the data in the database. In a comparison of the Gaussian mixture distance with the well-known Euclidean and Mahalanobis distances based on a precision performance measurement, experimental results demonstrated that the Gaussian mixture distance function was superior to the others for different types of testing data.
Another basic algorithm worth noting in information retrieval is the Fish School Search algorithm, which uses distance matrices to model the collective behavior of fish schools. A feeding operator updates the weight of each fish (Eq. A), and a collective-volitive operator then contracts or expands the school (Eq. B). The parameter stepvol defines the size of the maximum volume displacement performed with the distance matrix, specifically a Euclidean distance matrix.
Evaluation of the similarity or dissimilarity of Cosine similarity and Distance matrices
The cosine similarity measure is perhaps the most frequently applied proximity measure in information retrieval, measuring the angles between documents in the search space on the basis of the cosine. (A sampling distribution is generated by repeated sampling from the same population and recording the statistic obtained; this distribution has its own mean and variance.) For data which can be negative as well as positive, the null distribution for cosine similarity is the distribution of the dot product of two independent random unit vectors; this distribution has a mean of zero and a variance of 1/n. Euclidean distance, by contrast, is invariant to mean-correction: centering the data changes cosine similarities but leaves Euclidean distances unchanged.
Clustering Documents
Implementing hierarchical clustering with distance-based metrics to organize and group similar documents together requires a distance matrix. The distance matrix represents the degree of association between each pair of documents and is used to create clusters of closely associated documents, which are then drawn upon when retrieving documents relevant to a user's query.
Isomap
Isomap incorporates distance matrices of geodesic distances to compute lower-dimensional embeddings. This helps to address collections of documents that reside within a massive number of dimensions and makes it possible to perform document clustering.
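A minimal sketch using scikit-learn's Isomap; the random "document" vectors below stand in for real feature vectors.

import numpy as np
from sklearn.manifold import Isomap

# Stand-in "document" vectors in a high-dimensional feature space.
rng = np.random.default_rng(0)
docs = rng.random((100, 300))

# Isomap builds a neighborhood graph, computes geodesic (graph shortest-path)
# distances into a distance matrix, and embeds them in 2 dimensions.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(docs)
print(embedding.shape)  # (100, 2)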
Neighborhood Retrieval Visualizer (NeRV)
An algorithm used for both unsupervised and supervised visualization that uses distance matrices to find similar data points and display them close together on screen.
The distance matrix needed for Unsupervised NeRV can be computed through fixed input pairwise distances.
The distance matrix needed for Supervised NeRV requires formulating a supervised distance metric to be able to compute the distance of the input in a supervised manner.
Chemistry
The distance matrix is a mathematical object widely used in both the graph-theoretical (topological) and geometric (topographic) versions of chemistry. It is used in chemistry in both explicit and implicit forms.
Interconversion mechanisms between two permutational isomers
Distance matrices were used as the main approach to depict and reveal the shortest path sequence needed to determine the rearrangement between the two permutational isomers.
Distance Polynomials and Distance Spectra
Explicit use of distance matrices is required in order to construct the distance polynomials and distance spectra of molecular structures.
Structure-property model
Implicit use of distance matrices is exemplified by the distance-based metric known as the Wiener number (or Wiener index), which was formulated to represent the distances in all chemical structures. The Wiener number is equal to the half-sum of the elements of the distance matrix.
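A small sketch computing the Wiener index directly from a graph-theoretical distance matrix, using NetworkX and the n-hexane carbon skeleton (a path of six atoms) as the example.

import networkx as nx

# Carbon skeleton of n-hexane: a simple path of 6 carbon atoms.
G = nx.path_graph(6)

# Graph-theoretical distance matrix (shortest path lengths between atoms).
D = nx.floyd_warshall_numpy(G)

wiener = D.sum() / 2  # half-sum of the matrix elements
print(wiener)         # 35.0 for the hexane path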
Graph-theoretical Distance matrix
In chemistry, distance matrices are used for the 2-D realization of molecular graphs, which illustrate the main foundational features of a molecule in a myriad of applications.
Creating a labeled tree that represents the carbon skeleton of a molecule based on its distance matrix. The distance matrix is imperative in this application because similar molecules can have a myriad of labeled-tree variants of their carbon skeleton. For example, the labeled tree structure of the hexane (C6H14) carbon skeleton has different variants that affect both the distance matrix and the labeled tree.
Creating a labeled graph with edge weights, used in chemical graph theory, to represent molecules with hetero-atoms.
The Le Verrier-Fadeev-Frame (LVFF) method is a computer-oriented method used to speed up the detection of the graph center in polycyclic graphs. However, LVFF requires the input to be a diagonalized distance matrix, which is easily resolved by implementing the Householder tridiagonal-QL algorithm, which takes in a distance matrix and returns the diagonalized form needed for the LVFF method.
Geometric-Distance Matrix
While the graph-theoretical distance matrix in 2-D captures the constitutional features of the molecule, its three-dimensional (3-D) character is encoded in the geometric-distance matrix. The geometric-distance matrix is a different type of distance matrix, related to the graph-theoretical distance matrix of a molecule, used to represent and graph the 3-D molecular structure. The geometric-distance matrix of a molecular structure is a real symmetric n × n matrix, defined in the same way as the 2-D matrix; however, its elements hold the shortest Cartesian distances between atoms i and j in the molecule. Also known as the topographic matrix, the geometric-distance matrix can be constructed from the known geometry of the molecule; for example, it can be written down for the carbon skeleton of 2,4-dimethylhexane.
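A sketch of constructing a geometric-distance matrix from 3-D atomic coordinates in NumPy; the coordinates below are placeholders, not the actual geometry of any particular molecule.

import numpy as np

# Placeholder 3-D coordinates for a few atoms (rows = atoms; columns = x, y, z).
coords = np.array([[0.00, 0.00, 0.00],
                   [1.54, 0.00, 0.00],
                   [2.31, 1.33, 0.00],
                   [3.85, 1.33, 0.00]])

# Pairwise Cartesian distances: G[i, j] = ||coords[i] - coords[j]||.
diff = coords[:, None, :] - coords[None, :, :]
G = np.sqrt((diff ** 2).sum(axis=2))
print(np.round(G, 2))  # real symmetric matrix with zero diagonal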
Other Applications
Time Series Analysis
Dynamic time warping (DTW) distance matrices are utilized with clustering and classification algorithms for collections of time-series objects.
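A minimal dynamic-programming sketch of the DTW distance between two series, which could then fill the pairwise distance matrix fed to a clustering algorithm; the series are invented.

import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

series = [np.array([0, 1, 2, 1, 0]), np.array([0, 0, 1, 2, 1]), np.array([3, 3, 3])]
D = np.array([[dtw_distance(s, t) for t in series] for s in series])
print(D)  # DTW distance matrix for the collection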
Examples
For example, suppose a set of pixel coordinates is to be analyzed, where pixel Euclidean distance is the distance metric. The corresponding distance matrix can then be viewed in graphic form as a heat map, in which black denotes a distance of 0 and white denotes the maximal distance.
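A sketch reproducing this kind of example end to end; the pixel coordinates below are invented stand-ins for the example data.

import numpy as np
import matplotlib.pyplot as plt

# Invented pixel coordinates standing in for the example data.
points = np.array([[0, 0], [1, 1], [4, 5], [9, 7], [2, 8]])

# All-to-all Euclidean distance matrix.
diff = points[:, None, :] - points[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=2))

# Heat map: black = distance 0, white = maximal distance.
plt.imshow(D, cmap="gray")
plt.colorbar(label="Euclidean distance")
plt.show()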
See also
Computer vision
Data clustering
Distance set
Hollow matrix
Min-plus matrix multiplication
References
Metric geometry
Bioinformatics
Matrices
Graph distance | Distance matrix | [
"Mathematics",
"Engineering",
"Biology"
] | 4,428 | [
"Biological engineering",
"Mathematical objects",
"Graph theory",
"Matrices (mathematics)",
"Bioinformatics",
"Mathematical relations",
"Graph distance"
] |
833,499 | https://en.wikipedia.org/wiki/Prestressed%20concrete | Prestressed concrete is a form of concrete used in construction. It is substantially "prestressed" (compressed) during production, in a manner that strengthens it against tensile forces which will exist when in service. It was patented by Eugène Freyssinet in 1928.
This compression is produced by the tensioning of high-strength "tendons" located within or adjacent to the concrete and is done to improve the performance of the concrete in service. Tendons may consist of single wires, multi-wire strands or threaded bars that are most commonly made from high-tensile steels, carbon fiber or aramid fiber. The essence of prestressed concrete is that once the initial compression has been applied, the resulting material has the characteristics of high-strength concrete when subject to any subsequent compression forces and of ductile high-strength steel when subject to tension forces. This can result in improved structural capacity and/or serviceability compared with conventionally reinforced concrete in many situations. In a prestressed concrete member, the internal stresses are introduced in a planned manner so that the stresses resulting from the imposed loads are counteracted to the desired degree.
Prestressed concrete is used in a wide range of building and civil structures where its improved performance can allow for longer spans, reduced structural thicknesses, and material savings compared with simple reinforced concrete. Typical applications include high-rise buildings, residential concrete slabs, foundation systems, bridge and dam structures, silos and tanks, industrial pavements and nuclear containment structures.
First used in the late nineteenth century, prestressed concrete has developed beyond pre-tensioning to include post-tensioning, which occurs after the concrete is cast. Tensioning systems may be classed as either 'monostrand', where each tendon's strand or wire is stressed individually, or 'multi-strand', where all strands or wires in a tendon are stressed simultaneously. Tendons may be located either within the concrete volume (internal prestressing) or wholly outside of it (external prestressing). While pre-tensioned concrete uses tendons directly bonded to the concrete, post-tensioned concrete can use either bonded or unbonded tendons.
Pre-tensioned concrete
Pre-tensioned concrete is a variant of prestressed concrete where the tendons are tensioned prior to the concrete being cast. The concrete bonds to the tendons as it cures, following which the end-anchoring of the tendons is released, and the tendon tension forces are transferred to the concrete as compression by static friction.
Pre-tensioning is a common prefabrication technique, where the resulting concrete element is manufactured off-site from the final structure location and transported to site once cured. It requires strong, stable end-anchorage points between which the tendons are stretched. These anchorages form the ends of a "casting bed" which may be many times the length of the concrete element being fabricated. This allows multiple elements to be constructed end-to-end in the one pre-tensioning operation, allowing significant productivity benefits and economies of scale to be realized.
The amount of bond (or adhesion) achievable between the freshly set concrete and the surface of the tendons is critical to the pre-tensioning process, as it determines when the tendon anchorages can be safely released. Higher bond strength in early-age concrete will speed production and allow more economical fabrication. To promote this, pre-tensioned tendons are usually composed of isolated single wires or strands, which provides a greater surface area for bonding than bundled-strand tendons.
Unlike those of post-tensioned concrete (see below), the tendons of pre-tensioned concrete elements generally form straight lines between end-anchorages. Where "profiled" or "harped" tendons are required, one or more intermediate deviators are located between the ends of the tendon to hold the tendon to the desired non-linear alignment during tensioning. Such deviators usually act against substantial forces, and hence require a robust casting-bed foundation system. Straight tendons are typically used in "linear" precast concrete elements, such as shallow beams, hollow-core slabs; whereas profiled tendons are more commonly found in deeper precast bridge beams and girders.
Pre-tensioned concrete is most commonly used for the fabrication of structural beams, floor slabs, hollow-core slabs, balconies, lintels, driven piles, water tanks and concrete pipes.
Post-tensioned concrete
Post-tensioned concrete is a variant of prestressed concrete where the tendons are tensioned after the surrounding concrete structure has been cast.
The tendons are not placed in direct contact with the concrete, but are encapsulated within a protective sleeve or duct which is either cast into the concrete structure or placed adjacent to it. At each end of a tendon is an anchorage assembly firmly fixed to the surrounding concrete. Once the concrete has been cast and set, the tendons are tensioned ("stressed") by pulling the tendon ends through the anchorages while pressing against the concrete. The large forces required to tension the tendons result in a significant permanent compression being applied to the concrete once the tendon is "locked-off" at the anchorage. The method of locking the tendon-ends to the anchorage is dependent upon the tendon composition, with the most common systems being "button-head" anchoring (for wire tendons), split-wedge anchoring (for strand tendons), and threaded anchoring (for bar tendons).
Tendon encapsulation systems are constructed from plastic or galvanised steel materials, and are classified into two main types: those where the tendon element is subsequently bonded to the surrounding concrete by internal grouting of the duct after stressing (bonded post-tensioning); and those where the tendon element is permanently debonded from the surrounding concrete, usually by means of a greased sheath over the tendon strands (unbonded post-tensioning).
Casting the tendon ducts/sleeves into the concrete before any tensioning occurs allows them to be readily "profiled" to any desired shape including incorporating vertical and/or horizontal curvature. When the tendons are tensioned, this profiling results in reaction forces being imparted onto the hardened concrete, and these can be beneficially used to counter any loadings subsequently applied to the structure.
Bonded post-tensioning
In bonded post-tensioning, tendons are permanently bonded to the surrounding concrete by the in situ grouting of their encapsulating ducting (after tendon tensioning). This grouting is undertaken for three main purposes: to protect the tendons against corrosion; to permanently "lock-in" the tendon pre-tension, thereby removing the long-term reliance upon the end-anchorage systems; and to improve certain structural behaviors of the final concrete structure.
Bonded post-tensioning characteristically uses tendons each comprising bundles of elements (e.g., strands or wires) placed inside a single tendon duct, with the exception of bars which are mostly used unbundled. This bundling makes for more efficient tendon installation and grouting processes, since each complete tendon requires only one set of end-anchorages and one grouting operation. Ducting is fabricated from a durable and corrosion-resistant material such as plastic (e.g., polyethylene) or galvanised steel, and can be either round or rectangular/oval in cross-section. The tendon sizes used are highly dependent upon the application, ranging from building works typically using between 2 and 6 strands per tendon, to specialized dam works using up to 91 strands per tendon.
Fabrication of bonded tendons is generally undertaken on-site, commencing with the fitting of end-anchorages to formwork, placing the tendon ducting to the required curvature profiles, and reeving (or threading) the strands or wires through the ducting. Following concreting and tensioning, the ducts are pressure-grouted and the tendon stressing-ends sealed against corrosion.
Unbonded post-tensioning
Unbonded post-tensioning differs from bonded post-tensioning by allowing the tendons permanent freedom of longitudinal movement relative to the concrete. This is most commonly achieved by encasing each individual tendon element within a plastic sheathing filled with a corrosion-inhibiting grease, usually lithium based. Anchorages at each end of the tendon transfer the tensioning force to the concrete, and are required to reliably perform this role for the life of the structure.
Unbonded post-tensioning can take the form of:
Individual strand tendons placed directly into the concreted structure (e.g., buildings, ground slabs)
Bundled strands, individually greased-and-sheathed, forming a single tendon within an encapsulating duct that is placed either within or adjacent to the concrete (e.g., restressable anchors, external post-tensioning)
For individual strand tendons, no additional tendon ducting is used and no post-stressing grouting operation is required, unlike for bonded post-tensioning. Permanent corrosion protection of the strands is provided by the combined layers of grease, plastic sheathing, and surrounding concrete. Where strands are bundled to form a single unbonded tendon, an enveloping duct of plastic or galvanised steel is used and its interior free-spaces grouted after stressing. In this way, additional corrosion protection is provided via the grease, plastic sheathing, grout, external sheathing, and surrounding concrete layers.
Individually greased-and-sheathed tendons are usually fabricated off-site by an extrusion process. The bare steel strand is fed into a greasing chamber and then passed to an extrusion unit where molten plastic forms a continuous outer coating. Finished strands can be cut-to-length and fitted with "dead-end" anchor assemblies as required for the project.
Comparison between bonded and unbonded post-tensioning
Both bonded and unbonded post-tensioning technologies are widely used around the world, and the choice of system is often dictated by regional preferences, contractor experience, or the availability of alternative systems. Either one is capable of delivering code-compliant, durable structures meeting the structural strength and serviceability requirements of the designer.
The benefits that bonded post-tensioning can offer over unbonded systems are:
Reduced reliance on end-anchorage integrity. Following tensioning and grouting, bonded tendons are connected to the surrounding concrete along their full length by high-strength grout. Once cured, this grout can transfer the full tendon tension force to the concrete within a very short distance (approximately 1 metre). As a result, any inadvertent severing of the tendon or failure of an end anchorage has only a very localised impact on tendon performance, and almost never results in tendon ejection from the anchorage.
Increased ultimate strength in flexure. With bonded post-tensioning, any flexure of the structure is directly resisted by tendon strains at that same location (i.e. no strain re-distribution occurs). This results in significantly higher tensile strains in the tendons than if they were unbonded, allowing their full yield strength to be realised, and producing a higher ultimate load capacity.
Improved crack-control. In the presence of concrete cracking, bonded tendons respond similarly to conventional reinforcement (rebar). With the tendons fixed to the concrete at each side of the crack, greater resistance to crack expansion is offered than with unbonded tendons, allowing many design codes to specify reduced reinforcement requirements for bonded post-tensioning.
Improved fire performance. The absence of strain redistribution in bonded tendons may limit the impact that any localised overheating has on the overall structure. As a result, bonded structures may display a higher capacity to resist fire conditions than unbonded ones.
The benefits that unbonded post-tensioning can offer over bonded systems are:
Ability to be prefabricated. Unbonded tendons can be readily prefabricated off-site complete with end-anchorages, facilitating faster installation during construction. Additional lead time may need to be allowed for this fabrication process.
Improved site productivity. The elimination of the post-stressing grouting process required in bonded structures improves the site-labour productivity of unbonded post-tensioning.
Improved installation flexibility. Unbonded single-strand tendons have greater handling flexibility than bonded ducting during installation, allowing them a greater ability to be deviated around service penetrations or obstructions.
Reduced concrete cover. Unbonded tendons may allow some reduction in concrete element thickness, as their smaller size and increased corrosion protection may allow them to be placed closer to the concrete surface.
Simpler replacement and/or adjustment. Being permanently isolated from the concrete, unbonded tendons are able to be readily de-stressed, re-stressed and/or replaced should they become damaged or need their force levels to be modified in-service.
Superior overload performance. Although having a lower ultimate strength than bonded tendons, unbonded tendons' ability to redistribute strains over their full length can give them superior pre-collapse ductility. In extremes, unbonded tendons can resort to a catenary-type action instead of pure flexure, allowing significantly greater deformation before structural failure.
Tendon durability and corrosion protection
Long-term durability is an essential requirement for prestressed concrete given its widespread use.
Research on the durability performance of in-service prestressed structures has been undertaken since the 1960s, and anti-corrosion technologies for tendon protection have been continually improved since the earliest systems were developed.
The durability of prestressed concrete is principally determined by the level of corrosion protection provided to any high-strength steel elements within the prestressing tendons. Also critical is the protection afforded to the end-anchorage assemblies of unbonded tendons or cable-stay systems, as the anchorages of both of these are required to retain the prestressing forces. Failure of any of these components can result in the release of prestressing forces, or the physical rupture of stressing tendons.
Modern prestressing systems deliver long-term durability by addressing the following areas:
Tendon grouting (bonded tendons): Bonded tendons consist of bundled strands placed inside ducts located within the surrounding concrete. To ensure full protection to the bundled strands, the ducts must be pressure-filled with a corrosion-inhibiting grout, without leaving any voids, following strand-tensioning.
Tendon coating (unbonded tendons): Unbonded tendons comprise individual strands coated in an anti-corrosion grease or wax, and fitted with a durable plastic-based full-length sleeve or sheath. The sleeving is required to be undamaged over the tendon length, and it must extend fully into the anchorage fittings at each end of the tendon.
Double-layer encapsulation: Prestressing tendons requiring permanent monitoring and/or force adjustment, such as stay-cables and re-stressable dam anchors, will typically employ double-layer corrosion protection. Such tendons are composed of individual strands, grease-coated and sleeved, collected into a strand-bundle and placed inside encapsulating polyethylene outer ducting. The remaining void space within the duct is pressure-grouted, providing a multi-layer polythene-grout-plastic-grease protection barrier system for each strand.
Anchorage protection: In all post-tensioned installations, protection of the end-anchorages against corrosion is essential, and critically so for unbonded systems.
Several durability-related events are listed below:
Ynys-y-Gwas bridge, West Glamorgan, Wales, 1985: A single-span, precast-segmental structure constructed in 1953 with longitudinal and transverse post-tensioning. Corrosion attacked the under-protected tendons where they crossed the in-situ joints between the segments, leading to sudden collapse.
Scheldt River bridge, Melle, Belgium, 1991: A three-span prestressed cantilever structure constructed in the 1950s. Inadequate concrete cover in the side abutments resulted in tie-down cable corrosion, leading to a progressive failure of the main bridge span and the death of one person.
UK Highways Agency, 1992: Following discovery of tendon corrosion in several bridges in England, the Highways Agency issued a moratorium on the construction of new internally grouted post-tensioned bridges and embarked on a 5-year programme of inspections on its existing post-tensioned bridge stock. The moratorium was lifted in 1996.
Pedestrian bridge, Charlotte Motor Speedway, North Carolina, US, 2000: A multi-span steel and concrete structure constructed in 1995. An unauthorised chemical was added to the tendon grout to speed construction, leading to corrosion of the prestressing strands and the sudden collapse of one span, injuring many spectators.
Hammersmith Flyover, London, England, 2011: Sixteen-span prestressed structure constructed in 1961. Corrosion from road de-icing salts was detected in some of the prestressing tendons, necessitating initial closure of the road while additional investigations were done. Subsequent repairs and strengthening using external post-tensioning were carried out and completed in 2015.
Petrulla Viaduct ("Viadotto Petrulla"), Sicily, Italy, 2014: One span of a 12-span viaduct collapsed on 7 July 2014, causing 4 injuries, due to corrosion of the post-tensioning tendons.
Genoa bridge collapse, 2018: The Ponte Morandi was a cable-stayed bridge characterised by a prestressed concrete structure for the piers, pylons and deck, very few stays (as few as two per span), and a hybrid system for the stays, constructed from steel cables with prestressed concrete shells poured on. The concrete was only prestressed to 10 MPa, making it prone to cracks and water intrusion, which caused corrosion of the embedded steel.
Churchill Way flyovers, Liverpool, England: The flyovers were closed in September 2018 after inspections revealed poor quality concrete, tendon corrosion and signs of structural distress. They were demolished in 2019.
Applications
Prestressed concrete is a highly versatile construction material as a result of it being an almost ideal combination of its two main constituents: high-strength steel, pre-stretched to allow its full strength to be easily realised; and modern concrete, pre-compressed to minimise cracking under tensile forces. Its wide range of application is reflected in its incorporation into the major design codes covering most areas of structural and civil engineering, including buildings, bridges, dams, foundations, pavements, piles, stadiums, silos, and tanks.
Building structures
Building structures are typically required to satisfy a broad range of structural, aesthetic and economic requirements. Significant among these include: a minimum number of (intrusive) supporting walls or columns; low structural thickness (depth), allowing space for services, or for additional floors in high-rise construction; fast construction cycles, especially for multi-storey buildings; and a low cost-per-unit-area, to maximise the building owner's return on investment.
The prestressing of concrete allows "load-balancing" forces to be introduced into the structure to counter in-service loadings. This provides many benefits to building structures:
Longer spans for the same structural depth: Load balancing results in lower in-service deflections, which allows spans to be increased (and the number of supports reduced) without adding to structural depth (a numerical sketch of this load-balancing effect follows this list).
Reduced structural thickness: For a given span, lower in-service deflections allow thinner structural sections to be used, in turn resulting in lower floor-to-floor heights, or more room for building services.
Faster stripping time: Typically, prestressed concrete building elements are fully stressed and self-supporting within five days. At this point they can have their formwork stripped and re-deployed to the next section of the building, accelerating construction "cycle-times".
Reduced material costs: The combination of reduced structural thickness, reduced conventional reinforcement quantities, and fast construction often results in prestressed concrete showing significant cost benefits in building structures compared to alternative structural materials.
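As a rough numerical illustration of load balancing: a parabolic tendon with drape e carrying an effective prestress force P over span L exerts an equivalent upward load of w = 8Pe/L² (the standard result for a parabolic profile). All values in the sketch below are invented.

# Minimal load-balancing check for a parabolic post-tensioned tendon.
P = 400e3        # effective prestress force, N (invented)
e = 0.12         # tendon drape (sag), m (invented)
L = 9.0          # span, m (invented)
w_self = 7.2e3   # slab self-weight per metre of strip, N/m (invented)

w_balanced = 8 * P * e / L**2   # equivalent upward load from the tendon, N/m
print(f"balanced load: {w_balanced/1e3:.1f} kN/m "
      f"({100 * w_balanced / w_self:.0f}% of self-weight)")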
Some notable building structures constructed from prestressed concrete include: Sydney Opera House and World Tower, Sydney; St George Wharf Tower, London; CN Tower, Toronto; Kai Tak Cruise Terminal and International Commerce Centre, Hong Kong; Ocean Heights 2, Dubai; Eureka Tower, Melbourne; Torre Espacio, Madrid; Guoco Tower (Tanjong Pagar Centre), Singapore; Zagreb International Airport, Croatia; and Capital Gate, Abu Dhabi UAE.
Civil structures
Bridges
Concrete is the most popular structural material for bridges, and prestressed concrete is frequently adopted. When investigated in the 1940s for use on heavy-duty bridges, the advantages of this type of bridge over more traditional designs were that it was quicker to install, more economical and longer-lasting, with the bridge being less lively. One of the first bridges built in this way was the Adam Viaduct, a railway bridge constructed in 1946 in the UK. By the 1960s, prestressed concrete had largely superseded reinforced concrete bridges in the UK, with box girders being the dominant form.
In short-span bridges, prestressing is commonly employed in the form of precast pre-tensioned girders or planks. Medium-length structures typically use precast-segmental, in-situ balanced-cantilever and incrementally-launched designs. For the longest bridges, prestressed concrete deck structures often form an integral part of cable-stayed designs.
Dams
Concrete dams have used prestressing to counter uplift and increase their overall stability since the mid-1930s. Prestressing is also frequently retro-fitted as part of dam remediation works, such as for structural strengthening, or when raising crest or spillway heights.
Most commonly, dam prestressing takes the form of post-tensioned anchors drilled into the dam's concrete structure and/or the underlying rock strata. Such anchors typically comprise tendons of high-tensile bundled steel strands or individual threaded bars. Tendons are grouted to the concrete or rock at their far (internal) end, and have a significant "de-bonded" free-length at their external end which allows the tendon to stretch during tensioning. Tendons may be full-length bonded to the surrounding concrete or rock once tensioned, or (more commonly) have strands permanently encapsulated in corrosion-inhibiting grease over the free-length to permit long-term load monitoring and re-stressability.
Silos and tanks
Circular storage structures such as silos and tanks can use prestressing forces to directly resist the outward pressures generated by stored liquids or bulk-solids.
Horizontally curved tendons are installed within the concrete wall to form a series of hoops, spaced vertically up the structure. When tensioned, these tendons exert both axial (compressive) and radial (inward) forces onto the structure, which can directly oppose the subsequent storage loadings. If the magnitude of the prestress is designed to always exceed the tensile stresses produced by the loadings, a permanent residual compression will exist in the wall concrete, assisting in maintaining a watertight crack-free structure.
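As a rough numerical illustration: in the thin-shell approximation, an internal pressure p acting at radius R produces a ring tension of T = pR per unit height of wall, which the hoop prestress must exceed to maintain residual compression. All values in the sketch below are invented.

# Hoop prestress check for a circular tank wall (thin-shell approximation).
rho_g = 9.81e3   # unit weight of water, N/m^3
depth = 6.0      # liquid depth at the level considered, m (invented)
R = 10.0         # tank radius, m (invented)

p = rho_g * depth   # hydrostatic pressure at this depth, Pa
T = p * R           # ring tension per metre of wall height, N/m

# Residual compression requires the hoop prestress to exceed T.
P_strand = 200e3    # effective force per tendon, N (invented)
spacing = 0.3       # vertical tendon spacing, m (invented)
prestress = P_strand / spacing
print(f"ring tension {T/1e3:.0f} kN/m, hoop prestress {prestress/1e3:.0f} kN/m, "
      f"residual compression: {prestress > T}")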
Nuclear and blast
Prestressed concrete has been established as a reliable construction material for high-pressure containment structures such as nuclear reactor vessels and containment buildings, and petrochemical tank blast-containment walls. Using pre-stressing to place such structures into an initial state of bi-axial or tri-axial compression increases their resistance to concrete cracking and leakage, while providing a proof-loaded, redundant and monitorable pressure-containment system.
Nuclear reactor and containment vessels will commonly employ separate sets of post-tensioned tendons curved horizontally or vertically to completely envelop the reactor core. Blast containment walls, such as for liquid natural gas (LNG) tanks, will normally utilize layers of horizontally-curved hoop tendons for containment in combination with vertically looped tendons for axial wall pre-stressing.
Hardstands and pavements
Heavily loaded concrete ground-slabs and pavements can be sensitive to cracking and subsequent traffic-driven deterioration. As a result, prestressed concrete is regularly used in such structures as its pre-compression provides the concrete with the ability to resist the crack-inducing tensile stresses generated by in-service loading. This crack-resistance also allows individual slab sections to be constructed in larger pours than for conventionally reinforced concrete, resulting in wider joint spacings, reduced jointing costs and less long-term joint maintenance issues. Initial works have also been successfully conducted on the use of precast prestressed concrete for road pavements, where the speed and quality of the construction has been noted as being beneficial for this technique.
Some notable civil structures constructed using prestressed concrete include: Gateway Bridge, Brisbane Australia; Incheon Bridge, South Korea; Roseires Dam, Sudan; Wanapum Dam, Washington, US; LNG tanks, South Hook, Wales; Cement silos, Brevik Norway; Autobahn A73 bridge, Itz Valley, Germany; Ostankino Tower, Moscow, Russia; CN Tower, Toronto, Canada; and Ringhals nuclear reactor, Videbergshamn Sweden.
Design agencies and regulations
Worldwide, many professional organizations exist to promote best practices in the design and construction of prestressed concrete structures. In the United States, such organizations include the Post-Tensioning Institute (PTI) and the Precast/Prestressed Concrete Institute (PCI). Similar bodies include the Canadian Precast/Prestressed Concrete Institute (CPCI), the UK's Post-Tensioning Association, the Post Tensioning Institute of Australia and the South African Post Tensioning Association. Europe has similar country-based associations and institutions.
These organizations are not the authorities of building codes or standards, but rather exist to promote the understanding and development of prestressed concrete design, codes and best practices.
Rules and requirements for the detailing of reinforcement and prestressing tendons are specified by individual national codes and standards such as:
European Standard EN 1992-2:2005 – Eurocode 2: Design of Concrete Structures;
US Standard ACI318: Building Code Requirements for Reinforced Concrete; and
Australian Standard AS 3600-2009: Concrete Structures.
See also
References
External links
The story of prestressed concrete from 1930 to 1945: A step towards the European Union
Guidelines for Sampling, Assessing, and Restoring Defective Grout in Prestressed Concrete Bridge Post-Tensioning Ducts Federal Highway Administration
Historical Patents and the Evolution of Twentieth Century Architectural Construction with Reinforced and Pre-stressed Concrete
Building materials
Concrete buildings and structures
Reinforced concrete
Structural engineering
fr:Béton#Béton précontraint
sv:Armerad betong#Spännarmerad betong | Prestressed concrete | [
"Physics",
"Engineering"
] | 5,589 | [
"Structural engineering",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Civil engineering",
"Matter",
"Building materials"
] |
835,157 | https://en.wikipedia.org/wiki/Adhesion | Adhesion is the tendency of dissimilar particles or surfaces to cling to one another. (Cohesion refers to the tendency of similar or identical particles and surfaces to cling to one another.)
The forces that cause adhesion and cohesion can be divided into several types. The intermolecular forces responsible for the function of various kinds of stickers and sticky tape fall into the categories of chemical adhesion, dispersive adhesion, and diffusive adhesion. In addition to the cumulative magnitudes of these intermolecular forces, there are also certain emergent mechanical effects.
Surface energy
Surface energy is conventionally defined as the work that is required to build an area of a particular surface. Another way to view the surface energy is to relate it to the work required to cleave a bulk sample, creating two surfaces. If the new surfaces are identical, the surface energy γ of each surface is equal to half the work of cleavage, W: γ = (1/2)W11.
If the surfaces are unequal, the Young-Dupré equation applies:
W12 = γ1 + γ2 – γ12, where γ1 and γ2 are the surface energies of the two new surfaces, and γ12 is the interfacial energy.
This methodology can also be used to discuss cleavage that happens in another medium: γ12 = (1/2)W121 = (1/2)W212. These two energy quantities refer to the energy that is needed to cleave one species into two pieces while it is contained in a medium of the other species. Likewise for a three species system: γ13 + γ23 – γ12 = W12 + W33 – W13 – W23 = W132, where W132 is the energy of cleaving species 1 from species 2 in a medium of species 3.
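The last identity can be verified by substituting the Young-Dupré relation γij = γi + γj − Wij for each interfacial term:
γ13 + γ23 − γ12 = (γ1 + γ3 − W13) + (γ2 + γ3 − W23) − (γ1 + γ2 − W12)
                = 2γ3 + W12 − W13 − W23
                = W33 + W12 − W13 − W23 = W132,
since W33 = 2γ3 is the work of cohesion of species 3.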
A basic understanding of the terminology of cleavage energy, surface energy, and surface tension is very helpful for understanding the physical state and the events that happen at a given surface, but as discussed below, the theory of these variables also yields some interesting effects that concern the practicality of adhesive surfaces in relation to their surroundings.
Mechanisms
There is no single theory covering adhesion, and particular mechanisms are specific to particular material scenarios.
Five mechanisms of adhesion have been proposed to explain why one material sticks to another:
Mechanical
Adhesive materials fill the voids or pores of the surfaces and hold surfaces together by interlocking. Other interlocking phenomena are observed on different length scales. Sewing is an example of two materials forming a large scale mechanical bond, velcro forms one on a medium scale, and some textile adhesives (glue) form one at a small scale.
Chemical
Two materials may form a compound at the joint. The strongest joints are where atoms of the two materials share or swap electrons (known respectively as covalent bonding or ionic bonding). A weaker bond is formed if a hydrogen atom in one molecule is attracted to an atom of nitrogen, oxygen, or fluorine in another molecule, a phenomenon called hydrogen bonding.
Chemical adhesion occurs when the surface atoms of two separate surfaces form ionic, covalent, or hydrogen bonds. The engineering principle behind chemical adhesion in this sense is fairly straightforward: if surface molecules can bond, then the surfaces will be bonded together by a network of these bonds. It bears mentioning that these attractive ionic and covalent forces are effective over only very small distances – less than a nanometer. This means in general not only that surfaces with the potential for chemical bonding need to be brought very close together, but also that these bonds are fairly brittle, since the surfaces then need to be kept close together.
Dispersive
In dispersive adhesion, also known as physisorption, two materials are held together by van der Waals forces: the attraction between two molecules, each of which has a region of slight positive and negative charge. In the simple case, such molecules are therefore polar with respect to average charge density, although in larger or more complex molecules, there may be multiple "poles" or regions of greater positive or negative charge. These positive and negative poles may be a permanent property of a molecule (Keesom forces) or a transient effect which can occur in any molecule, as the random movement of electrons within the molecules may result in a temporary concentration of electrons in one region (London forces).
In surface science, the term adhesion almost always refers to dispersive adhesion. In a typical solid-liquid-gas system (such as a drop of liquid on a solid surrounded by air) the contact angle is used to evaluate adhesiveness indirectly, while a Centrifugal Adhesion Balance allows for direct quantitative adhesion measurements. Generally, cases where the contact angle is low are considered of higher adhesion per unit area. This approach assumes that the lower contact angle corresponds to a higher surface energy. Theoretically, the more exact relation between contact angle and work of adhesion is more involved and is given by the Young-Dupre equation. The contact angle of the three-phase system is a function not only of dispersive adhesion (interaction between the molecules in the liquid and the molecules in the solid) but also cohesion (interaction between the liquid molecules themselves). Strong adhesion and weak cohesion results in a high degree of wetting, a lyophilic condition with low measured contact angles. Conversely, weak adhesion and strong cohesion results in lyophobic conditions with high measured contact angles and poor wetting.
London dispersion forces are particularly useful for the function of adhesive devices, because they do not require either surface to have any permanent polarity. They were described in the 1930s by Fritz London, and have been observed by many researchers. Dispersive forces are a consequence of statistical quantum mechanics. London theorized that attractive forces between molecules that cannot be explained by ionic or covalent interaction can be caused by polar moments within molecules. Multipoles could account for attraction between molecules having permanent multipole moments that participate in electrostatic interaction. However, experimental data showed that many of the compounds observed to experience van der Waals forces had no multipoles at all. London suggested that momentary dipoles are induced purely by virtue of molecules being in proximity to one another. By solving the quantum mechanical system of two electrons as harmonic oscillators at some finite distance from one another, being displaced about their respective rest positions and interacting with each other's fields, London showed that the energy of this system is given by: E = ħω0 − ħω0α²/(2z⁶), where ħω0 is the combined zero-point energy of the two oscillators, α is their polarizability, and z is their separation.
While the first term is simply the zero-point energy, the negative second term describes an attractive force between neighboring oscillators. The same argument can also be extended to a large number of coupled oscillators, and thus skirts issues that would negate the large scale attractive effects of permanent dipoles cancelling through symmetry, in particular.
The additive nature of the dispersion effect has another useful consequence. Consider a single such dispersive dipole, referred to as the origin dipole. Since any origin dipole is inherently oriented so as to be attracted to the adjacent dipoles it induces, while the other, more distant dipoles are not correlated with the original dipole by any phase relation (thus on average contributing nothing), there is a net attractive force in a bulk of such particles. When considering identical particles, this is called cohesive force.
When discussing adhesion, this theory needs to be converted into terms relating to surfaces. If there is a net attractive energy of cohesion in a bulk of similar molecules, then cleaving this bulk to produce two surfaces will yield surfaces with a dispersive surface energy, since the form of the energy remain the same. This theory provides a basis for the existence of van der Waals forces at the surface, which exist between any molecules having electrons. These forces are easily observed through the spontaneous jumping of smooth surfaces into contact. Smooth surfaces of mica, gold, various polymers and solid gelatin solutions do not stay apart when their separating becomes small enough – on the order of 1–10 nm. The equation describing these attractions was predicted in the 1930s by De Boer and Hamaker:
P = −A/(6πz³) (for two flat surfaces), where P is the force per unit area (negative for attraction), z is the separation distance, and A is a material-specific constant called the Hamaker constant.
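A quick numerical sketch of this relation; the Hamaker constant used (1e-19 J) is only a typical order of magnitude, and the flat-surface form of the equation is assumed.

import math

A = 1e-19  # Hamaker constant, J (typical order of magnitude only)
for z_nm in (1, 2, 5, 10):
    z = z_nm * 1e-9                  # separation in metres
    P = -A / (6 * math.pi * z**3)    # attractive pressure between flat surfaces, Pa
    print(f"z = {z_nm:2d} nm -> P = {P:.2e} Pa")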
The effect is also apparent in experiments where a polydimethylsiloxane (PDMS) stamp is made with small periodic post structures. The surface with the posts is placed face down on a smooth surface, such that the surface area in between each post is elevated above the smooth surface, like a roof supported by columns. Because of these attractive dispersive forces between the PDMS and the smooth substrate, the elevated surface – or "roof" – collapses down onto the substrate without any external force aside from the van der Waals attraction. Simple smooth polymer surfaces – without any microstructures – are commonly used for these dispersive adhesive properties. Decals and stickers that adhere to glass without using any chemical adhesives are fairly common as toys and decorations and useful as removable labels because they do not rapidly lose their adhesive properties, as do sticky tapes that use adhesive chemical compounds.
These forces also act over very small distances – 99% of the work necessary to break van der Waals bonds is done once surfaces are pulled more than a nanometer apart. As a result of this limited motion in both the van der Waals and ionic/covalent bonding situations, practical effectiveness of adhesion due to either or both of these interactions leaves much to be desired. Once a crack is initiated, it propagates easily along the interface because of the brittle nature of the interfacial bonds.
As an additional consequence, increasing surface area often does little to enhance the strength of the adhesion in this situation. This follows from the aforementioned crack failure – the stress at the interface is not uniformly distributed, but rather concentrated at the area of failure.
Electrostatic
Some conducting materials may pass electrons to form a difference in electrical charge at the joint. This results in a structure similar to a capacitor and creates an attractive electrostatic force between the materials.
Diffusive
Some materials may merge at the joint by diffusion. This may occur when the molecules of both materials are mobile and soluble in each other. This would be particularly effective with polymer chains where one end of the molecule diffuses into the other material. It is also the mechanism involved in sintering. When metal or ceramic powders are pressed together and heated, atoms diffuse from one particle to the next. This joins the particles into one.
Diffusive forces are somewhat like mechanical tethering at the molecular level. Diffusive bonding occurs when species from one surface penetrate into an adjacent surface while still being bound to the phase of their surface of origin. One instructive example is that of polymer-on-polymer surfaces. Diffusive bonding in polymer-on-polymer surfaces is the result of sections of polymer chains from one surface interdigitating with those of an adjacent surface. The freedom of movement of the polymers has a strong effect on their ability to interdigitate, and hence, on diffusive bonding. For example, cross-linked polymers are less capable of diffusion and interdigitation because they are bonded together at many points of contact, and are not free to twist into the adjacent surface. Uncrosslinked polymers (thermoplastics), on the other hand are freer to wander into the adjacent phase by extending tails and loops across the interface.
Another circumstance under which diffusive bonding occurs is "scission". Chain scission is the cutting up of polymer chains, resulting in a higher concentration of distal tails. The heightened concentration of these chain ends gives rise to a heightened concentration of polymer tails extending across the interface. Scission is easily achieved by ultraviolet irradiation in the presence of oxygen gas, which suggests that adhesive devices employing diffusive bonding actually benefit from prolonged exposure to heat/light and air. The longer such a device is exposed to these conditions, the more tails are scissed and branch out across the interface.
Once across the interface, the tails and loops form whatever bonds are favorable. In the case of polymer-on-polymer surfaces, this means more van der Waals forces. While these may be brittle, they are quite strong when a large network of these bonds is formed. The outermost layer of each surface plays a crucial role in the adhesive properties of such interfaces, as even a tiny amount of interdigitation – as little as one or two tails of 1.25 angstrom length – can increase the van der Waals bonds by an order of magnitude.
Strength
The strength of the adhesion between two materials depends on which of the above mechanisms occur between the two materials, and the surface area over which the two materials contact. Materials that wet against each other tend to have a larger contact area than those that do not. Wetting depends on the surface energy of the materials.
Low surface energy materials such as polyethylene, polypropylene, polytetrafluoroethylene and polyoxymethylene are difficult to bond without special surface preparation.
Another factor determining the strength of an adhesive contact is its shape. Adhesive contacts of complex shape begin to detach at the "edges" of the contact area, from where the destruction of the contact then proceeds.
Other effects
In concert with the primary surface forces described above, there are several circumstantial effects in play. While the forces themselves each contribute to the magnitude of the adhesion between the surfaces, the following play a crucial role in the overall strength and reliability of an adhesive device.
Stringing
Stringing is perhaps the most crucial of these effects, and is often seen on adhesive tapes. Stringing occurs when a separation of two surfaces is beginning and molecules at the interface bridge out across the gap, rather than cracking like the interface itself. The most significant consequence of this effect is the restraint of the crack. By providing the otherwise brittle interfacial bonds with some flexibility, the molecules that are stringing across the gap can stop the crack from propagating. Another way to understand this phenomenon is by comparing it to the stress concentration at the point of failure mentioned earlier. Since the stress is now spread out over some area, the stress at any given point has less of a chance of overwhelming the total adhesive force between the surfaces. If failure does occur at an interface containing a viscoelastic adhesive agent, and a crack does propagate, it happens by a gradual process called "fingering", rather than a rapid, brittle fracture.
Stringing can apply to both the diffusive bonding regime and the chemical bonding regime. The strings of molecules bridging across the gap would either be the molecules that had earlier diffused across the interface or the viscoelastic adhesive, provided that there was a significant volume of it at the interface.
Microstructures
The interplay of molecular scale mechanisms and hierarchical surface structures is known to result in high levels of static friction and bonding between pairs of surfaces. Technologically advanced adhesive devices sometimes make use of microstructures on surfaces, such as tightly packed periodic posts. These are biomimetic technologies inspired by the adhesive abilities of the feet of various arthropods and vertebrates (most notably, geckos). By intermixing periodic breaks into smooth, adhesive surfaces, the interface acquires valuable crack-arresting properties. Because crack initiation requires much greater stress than does crack propagation, surfaces like these are much harder to separate, as a new crack has to be restarted every time the next individual microstructure is reached.
Hysteresis
Hysteresis, in this case, refers to the restructuring of the adhesive interface over some period of time, with the result being that the work needed to separate two surfaces is greater than the work that was gained by bringing them together (W > γ1 + γ2). For the most part, this is a phenomenon associated with diffusive bonding. The more time is given for a pair of surfaces exhibiting diffusive bonding to restructure, the more diffusion will occur, the stronger the adhesion will become. The aforementioned reaction of certain polymer-on-polymer surfaces to ultraviolet radiation and oxygen gas is an instance of hysteresis, but it will also happen over time without those factors.
In addition to being able to observe hysteresis by determining if W > γ1 + γ2 is true, one can also find evidence of it by performing "stop-start" measurements. In these experiments, two surfaces are slid against one another continuously and occasionally stopped for some measured amount of time. Results from experiments on polymer-on-polymer surfaces show that if the stopping time is short enough, resumption of smooth sliding is easy. If, however, the stopping time exceeds some limit, there is an initial increase of resistance to motion, indicating that the stopping time was sufficient for the surfaces to restructure.
Wettability and absorption
Some atmospheric effects on the functionality of adhesive devices can be characterized by following the theory of surface energy and interfacial tension. It is known that γ12 = (1/2)W121 = (1/2)W212. If γ12 is high, then each species finds it favorable to cohere while in contact with a foreign species, rather than dissociate and mix with the other. If this is true, then it follows that when the interfacial tension is high, the force of adhesion is weak, since each species does not find it favorable to bond to the other. The interfacial tension of a liquid and a solid is directly related to the liquid's wettability (relative to the solid), and thus one can extrapolate that cohesion increases in non-wetting liquids and decreases in wetting liquids. One example that verifies this is polydimethylsiloxane rubber, which has a work of self-adhesion of 43.6 mJ/m2 in air, 74 mJ/m2 in water (a nonwetting liquid) and 6 mJ/m2 in methanol (a wetting liquid).
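A minimal sketch of the arithmetic in this paragraph, using the relation γ12 = (1/2)W121 stated above and the quoted polydimethylsiloxane (PDMS) values; it simply converts each work of self-adhesion into the implied interfacial tension. The dictionary keys are labels introduced here for illustration.

```python
# Interfacial tension from work of self-adhesion: gamma_12 = W_121 / 2.
# W_121 values (mJ/m^2) for PDMS rubber are taken from the text above.
w_self_adhesion = {
    "air": 43.6,
    "water (non-wetting)": 74.0,
    "methanol (wetting)": 6.0,
}

for medium, w121 in w_self_adhesion.items():
    gamma_12 = 0.5 * w121  # implied PDMS-medium interfacial tension
    print(f"PDMS in {medium}: W_121 = {w121} mJ/m^2 -> gamma_12 = {gamma_12} mJ/m^2")
```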
This argument can be extended to the idea that when a surface is in a medium with which binding is favorable, it will be less likely to adhere to another surface, since the medium is taking up the potential sites on the surface that would otherwise be available to adhere to another surface. Naturally this applies very strongly to wetting liquids, but also to gas molecules that could adsorb onto the surface in question, thereby occupying potential adhesion sites. This last point is fairly intuitive: leaving an adhesive exposed to air for too long gets it dirty, and its adhesive strength will decrease. This is observed experimentally: when mica is cleaved in air, its cleavage energy, W121 or Wmica/air/mica, is smaller than the cleavage energy in vacuum, Wmica/vac/mica, by a factor of 13.
Lateral adhesion
Lateral adhesion is associated with sliding one object on a substrate, such as sliding a drop on a surface. When the two objects are solids, either with or without a liquid between them, the lateral adhesion is described as friction. However, the behavior of lateral adhesion between a drop and a surface is tribologically very different from friction between solids, and the naturally adhesive contact between a flat surface and a liquid drop makes the lateral adhesion in this case, an individual field. Lateral adhesion can be measured using the centrifugal adhesion balance (CAB), which uses a combination of centrifugal and gravitational forces to decouple the normal and lateral forces in the problem.
See also
Adhesive
Adhesive bonding
Bacterial adhesin
Capillary action
Cell adhesion
Contact mechanics
Electroadhesion
Fracture mechanics
Galling
Insect adhesion
Meniscus
Mucoadhesion
Pressure-sensitive adhesive
Rail adhesion
Synthetic setae
Wetting
Cohesion number
References
Further reading
John Comyn, Adhesion Science, Royal Society of Chemistry Paperbacks, 1997
A.J. Kinloch, Adhesion and Adhesives: Science and Technology, Chapman and Hall, 1987
Materials science
Chemical properties
Intermolecular forces
Articles containing video clips | Adhesion | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,204 | [
"Molecular physics",
"Applied and interdisciplinary physics",
"Materials science",
"Intermolecular forces",
"nan"
] |
18,693,930 | https://en.wikipedia.org/wiki/Kurosh%20subgroup%20theorem | In the mathematical field of group theory, the Kurosh subgroup theorem describes the algebraic structure of subgroups of free products of groups. The theorem was obtained by Alexander Kurosh, a Russian mathematician, in 1934. Informally, the theorem says that every subgroup of a free product is itself a free product of a free group and of its intersections with the conjugates of the factors of the original free product.
History and generalizations
After the original 1934 proof of Kurosh, there were many subsequent proofs of the Kurosh subgroup theorem, including proofs of Harold W. Kuhn (1952), Saunders Mac Lane (1958) and others. The theorem was also generalized for describing subgroups of amalgamated free products and HNN extensions. Other generalizations include considering subgroups of free pro-finite products and a version of the Kurosh subgroup theorem for topological groups.
In modern terms, the Kurosh subgroup theorem is a straightforward corollary of the basic structural results of Bass–Serre theory about groups acting on trees.
Statement of the theorem
Let G = A ∗ B be the free product of the groups A and B and let H be a subgroup of G. Then there exist a family (Ai)i∈I of subgroups Ai ≤ A, a family (Bj)j∈J of subgroups Bj ≤ B, families (gi)i∈I and (fj)j∈J of elements of G, and a subset X ⊆ G such that

H = F(X) ∗ (∗i∈I giAigi−1) ∗ (∗j∈J fjBjfj−1)
This means that X freely generates a subgroup of G isomorphic to the free group F(X) with free basis X and that, moreover, giAigi−1, fjBjfj−1 and X generate H in G as a free product of the above form.
There is a generalization of this to the case of free products with arbitrarily many factors. Its formulation is:
If H is a subgroup of ∗i∈I Gi = G, then

H = F(X) ∗ (∗j∈J gjHjgj−1)

where X ⊆ G, J is some index set, gj ∈ G, and each Hj is a subgroup of some Gi.
Proof using Bass–Serre theory
The Kurosh subgroup theorem easily follows from the basic structural results in Bass–Serre theory, as explained, for example in the book of Cohen (1987):
Let G = A∗B and consider G as the fundamental group of a graph of groups Y consisting of a single non-loop edge with the vertex groups A and B and with the trivial edge group. Let X be the Bass–Serre universal covering tree for the graph of groups Y. Since H ≤ G also acts on X, consider the quotient graph of groups Z for the action of H on X. The vertex groups of Z are subgroups of G-stabilizers of vertices of X, that is, they are conjugate in G to subgroups of A and B. The edge groups of Z are trivial since the G-stabilizers of edges of X were trivial. By the fundamental theorem of Bass–Serre theory, H is canonically isomorphic to the fundamental group of the graph of groups Z. Since the edge groups of Z are trivial, it follows that H is equal to the free product of the vertex groups of Z and the free group F(X) which is the fundamental group (in the standard topological sense) of the underlying graph Z of Z. This implies the conclusion of the Kurosh subgroup theorem.
Extension
The result extends to the case that G is the amalgamated product along a common subgroup C, under the condition that H meets every conjugate of C only in the identity element.
See also
HNN extension
Geometric group theory
Nielsen–Schreier theorem
References
Geometric group theory
Theorems in group theory | Kurosh subgroup theorem | [
"Physics"
] | 720 | [
"Geometric group theory",
"Group actions",
"Symmetry"
] |
18,694,928 | https://en.wikipedia.org/wiki/Ambient%20pressure | The ambient pressure on an object is the pressure of the surrounding medium, such as a gas or liquid, in contact with the object.
Atmosphere
Within the atmosphere, the ambient pressure decreases as elevation increases. By measuring ambient atmospheric pressure, a pilot may determine altitude (see pitot-static system). Near sea level, a change in ambient pressure of 1 millibar is taken to represent a change in height of about 8 metres (27 feet).
Underwater
The ambient pressure in water with a free surface is a combination of the hydrostatic pressure due to the weight of the water column and the atmospheric pressure on the free surface. This increases approximately linearly with depth. Since water is much denser than air, much greater changes in ambient pressure can be experienced under water. Each 10 metres (33 feet) of depth adds another bar to the ambient pressure.
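A minimal sketch of the hydrostatic arithmetic behind both rules of thumb above. The density and gravity values are typical figures assumed here, not taken from the text; real seawater density varies with salinity and temperature.

```python
# Hydrostatic relations: dP = rho * g * dh, so dh = dP / (rho * g).
RHO_AIR = 1.225      # kg/m^3, air near sea level (assumed)
RHO_SEAWATER = 1025  # kg/m^3, typical seawater (assumed)
G = 9.81             # m/s^2

# Altitude change corresponding to a 1 millibar (100 Pa) pressure change near sea level:
dh_per_mbar = 100.0 / (RHO_AIR * G)  # ~8.3 m, consistent with the figure quoted above

def ambient_pressure_underwater(depth_m, surface_pressure_pa=101_325):
    """Ambient pressure (Pa) at depth: atmospheric pressure plus the water column."""
    return surface_pressure_pa + RHO_SEAWATER * G * depth_m

print(f"1 mbar of air pressure ~ {dh_per_mbar:.1f} m of altitude")
print(f"Ambient pressure at 10 m depth: {ambient_pressure_underwater(10) / 1e5:.2f} bar")
```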
Ambient-pressure diving is underwater diving exposed to the water pressure at depth, rather than in a pressure-excluding atmospheric diving suit or a submersible.
Other environments
The concept is not limited to environments frequented by people. Almost any place in the universe will have an ambient pressure, from the hard vacuum of deep space to the interior of an exploding supernova. At extremely small scales the concept of pressure becomes irrelevant, and it is undefined at a gravitational singularity.
Units of pressure
The SI unit of pressure is the pascal (Pa), which is a very small unit relative to atmospheric pressure on Earth, so kilopascals (kPa) are more commonly used in this context. The ambient atmospheric pressure at sea level is not constant: it varies with the weather, but averages around 100 kPa. In fields such as meteorology and underwater diving, it is common to see ambient pressure expressed in bar or millibar. One bar is 100 kPa, or approximately the ambient pressure at sea level. Ambient pressure may in other circumstances be measured in pounds per square inch (psi) or in standard atmospheres (atm). The ambient pressure at sea level is approximately one atmosphere, which is equal to 101.325 kPa (1.01325 bar), which is close enough for bar and atm to be used interchangeably in many applications. In underwater diving, the industry convention is to measure ambient pressure in terms of water column. The metric unit is the metre sea water, which is defined as 1/10 bar.
Examples of ambient pressure in various environments
Pressures are given in terms of the normal ambient pressure experienced by humans – standard atmospheric pressure at sea level on earth.
References
Further reading
Pressure
Underwater diving physics | Ambient pressure | [
"Physics"
] | 491 | [
"Scalar physical quantities",
"Mechanical quantities",
"Applied and interdisciplinary physics",
"Physical quantities",
"Underwater diving physics",
"Pressure",
"Wikipedia categories named after physical quantities"
] |
18,695,732 | https://en.wikipedia.org/wiki/Molecular%20scale%20electronics | Molecular scale electronics, also called single-molecule electronics, is a branch of nanotechnology that uses single molecules, or nanoscale collections of single molecules, as electronic components. Because single molecules constitute the smallest stable structures imaginable, this miniaturization is the ultimate goal for shrinking electrical circuits.
The field is often termed simply as "molecular electronics", but this term is also used to refer to the distantly related field of conductive polymers and organic electronics, which uses the properties of molecules to affect the bulk properties of a material. A nomenclature distinction has been suggested so that molecular materials for electronics refers to this latter field of bulk applications, while molecular scale electronics refers to the nanoscale single-molecule applications treated here.
Fundamental concepts
Conventional electronics have traditionally been made from bulk materials. Ever since their invention in 1958, the performance and complexity of integrated circuits have undergone exponential growth, a trend named Moore's law, as the feature sizes of the embedded components have shrunk accordingly. As the structures shrink, the sensitivity to deviations increases. In a few technology generations, the composition of the devices must be controlled to a precision of a few atoms for the devices to work. With bulk methods growing increasingly demanding and costly as they near inherent limits, the idea was born that the components could instead be built up atom by atom in a chemistry lab (bottom up) versus carving them out of bulk material (top down). This is the idea behind molecular electronics, with the ultimate miniaturization being components contained in single molecules.
In single-molecule electronics, the bulk material is replaced by single molecules. Instead of forming structures by removing or applying material after a pattern scaffold, the atoms are put together in a chemistry lab. In this way, billions of billions of copies are made simultaneously (typically more than 10²⁰ molecules are made at once) while the composition of the molecules is controlled down to the last atom. The molecules used have properties that resemble traditional electronic components such as a wire, transistor or rectifier.
Single-molecule electronics is an emerging field, and entire electronic circuits consisting exclusively of molecular-sized compounds are still very far from being realized. However, the unceasing demand for more computing power, together with the inherent limits of lithographic methods, makes the transition seem unavoidable. Currently, the focus is on discovering molecules with interesting properties and on finding ways to obtain reliable and reproducible contacts between the molecular components and the bulk material of the electrodes.
Theoretical basis
Molecular electronics operates at distances of less than 100 nanometers. The miniaturization down to single molecules brings the scale down to a regime where quantum mechanics effects are important. In conventional electronic components, electrons can be filled in or drawn out more or less like a continuous flow of electric charge. In contrast, in molecular electronics the transfer of one electron alters the system significantly. For example, when an electron has been transferred from a source electrode to a molecule, the molecule gets charged up, which makes it far harder for the next electron to transfer (see also Coulomb blockade). The significant amount of energy due to charging must be accounted for when making calculations about the electronic properties of the setup, and is highly sensitive to distances to conducting surfaces nearby.
The theory of single-molecule devices is especially interesting since the system under consideration is an open quantum system in nonequilibrium (driven by voltage). In the low bias voltage regime, the nonequilibrium nature of the molecular junction can be ignored, and the current-voltage traits of the device can be calculated using the equilibrium electronic structure of the system. However, in stronger bias regimes a more sophisticated treatment is required, as there is no longer a variational principle. In the elastic tunneling case (where the passing electron does not exchange energy with the system), the formalism of Rolf Landauer can be used to calculate the transmission through the system as a function of bias voltage, and hence the current. In inelastic tunneling, an elegant formalism based on the non-equilibrium Green's functions of Leo Kadanoff and Gordon Baym, and independently by Leonid Keldysh was advanced by Ned Wingreen and Yigal Meir. This Meir-Wingreen formulation has been used to great success in the molecular electronics community to examine the more difficult and interesting cases where the transient electron exchanges energy with the molecular system (for example through electron-phonon coupling or electronic excitations).
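The paragraph above names the Landauer picture of elastic tunneling. The following is a hedged sketch, not from the source, of the zero-temperature Landauer current through a single molecular level with an assumed Breit-Wigner (Lorentzian) transmission; the level position eps0 and broadening gamma are illustrative parameters chosen here, and a real calculation would use the molecule's computed transmission function.

```python
# Zero-temperature Landauer current: I = (2e/h) * integral of T(E) over the bias window.
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge, C
H_PLANCK = 6.62607015e-34   # Planck constant, J*s

def transmission(energy_ev, eps0=0.3, gamma=0.05):
    """Assumed Breit-Wigner transmission of a single level (all energies in eV)."""
    return gamma**2 / ((energy_ev - eps0)**2 + gamma**2)

def landauer_current(bias_v, n=4001):
    """Integrate T(E) over a symmetric bias window and convert to amperes."""
    energies = np.linspace(-bias_v / 2, bias_v / 2, n)      # bias window, eV
    integral_ev = np.trapz(transmission(energies), energies)
    return (2 * E_CHARGE / H_PLANCK) * integral_ev * E_CHARGE  # eV -> J conversion

for v in (0.2, 0.6, 1.0):
    print(f"V = {v:.1f} V -> I = {landauer_current(v) * 1e6:.3f} uA")
```

As the bias window opens past the level at eps0, the current rises sharply, which is the nearly binary on/off behavior described in the transistor section below.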
Further, connecting single molecules reliably to a larger scale circuit has proven a great challenge, and constitutes a significant hindrance to commercialization.
Examples
Common for molecules used in molecular electronics is that the structures contain many alternating double and single bonds (see also Conjugated system). This is done because such patterns delocalize the molecular orbitals, making it possible for electrons to move freely over the conjugated area.
Wires
The sole purpose of molecular wires is to electrically connect different parts of a molecular electrical circuit. As the assembly of these and their connection to a macroscopic circuit is still not mastered, the focus of research in single-molecule electronics is primarily on the functionalized molecules: molecular wires are characterized by containing no functional groups and are hence composed of plain repetitions of a conjugated building block. Among these are the carbon nanotubes that are quite large compared to the other suggestions but have shown very promising electrical properties.
The main problem with the molecular wires is to obtain good electrical contact with the electrodes so that electrons can move freely in and out of the wire.
Transistors
Single-molecule transistors are fundamentally different from the ones known from bulk electronics. The gate in a conventional (field-effect) transistor determines the conductance between the source and drain electrode by controlling the density of charge carriers between them, whereas the gate in a single-molecule transistor controls the possibility of a single electron to jump on and off the molecule by modifying the energy of the molecular orbitals. One of the effects of this difference is that the single-molecule transistor is almost binary: it is either on or off. This contrasts with its bulk counterparts, which have quadratic responses to gate voltage.
It is the quantization of charge into electrons that is responsible for the markedly different behavior compared to bulk electronics. Because of the size of a single molecule, the charging due to a single electron is significant and provides means to turn a transistor on or off (see Coulomb blockade). For this to work, the electronic orbitals on the transistor molecule cannot be too well integrated with the orbitals on the electrodes. If they are, an electron cannot be said to be located on the molecule or the electrodes and the molecule will function as a wire.
A popular group of molecules that can work as the semiconducting channel material in a molecular transistor is the oligopolyphenylenevinylenes (OPVs), which work by the Coulomb blockade mechanism when placed between the source and drain electrodes in an appropriate way. Fullerenes work by the same mechanism and have also been commonly used.
Semiconducting carbon nanotubes have also been demonstrated to work as channel material but although molecular, these molecules are sufficiently large to behave almost as bulk semiconductors.
The size of the molecules, and the low temperature at which the measurements are conducted, make the quantum mechanical states well defined. Thus, it is being researched whether the quantum mechanical properties can be used for more advanced purposes than simple transistors (e.g. spintronics).
Physicists at the University of Arizona, in collaboration with chemists from the University of Madrid, have designed a single-molecule transistor using a ring-shaped molecule similar to benzene. Physicists at Canada's National Institute for Nanotechnology have designed a single-molecule transistor using styrene. Both groups expected their respective devices to function at room temperature and to be controlled by a single electron, although the designs were experimentally unverified.
Rectifiers (diodes)
Molecular rectifiers are mimics of their bulk counterparts and have an asymmetric construction so that the molecule can accept electrons in one end but not the other. The molecules have an electron donor (D) in one end and an electron acceptor (A) in the other. This way, the unstable state D+ – A− will be more readily made than D− – A+. The result is that an electric current can be drawn through the molecule if the electrons are added through the acceptor end, but less easily if the reverse is attempted.
Methods
One of the biggest problems with measuring on single molecules is to establish reproducible electrical contact with only one molecule and doing so without shortcutting the electrodes. Because the current photolithographic technology is unable to produce electrode gaps small enough to contact both ends of the molecules tested (on the order of nanometers), alternative strategies are applied.
Molecular gaps
One way to produce electrodes with a molecular sized gap between them is break junctions, in which a thin electrode is stretched until it breaks. Another is electromigration. Here a current is led through a thin wire until it melts and the atoms migrate to produce the gap. Further, the reach of conventional photolithography can be enhanced by chemically etching or depositing metal on the electrodes.
Probably the easiest way to conduct measurements on several molecules is to use the tip of a scanning tunneling microscope (STM) to contact molecules adhered at the other end to a metal substrate.
Anchoring
A popular way to anchor molecules to the electrodes is to make use of sulfur's high chemical affinity to gold. In these setups, the molecules are synthesized so that sulfur atoms are placed strategically to function as crocodile clips connecting the molecules to the gold electrodes. Though useful, the anchoring is non-specific and thus anchors the molecules randomly to all gold surfaces. Further, the contact resistance is highly dependent on the precise atomic geometry around the site of anchoring and thereby inherently compromises the reproducibility of the connection.
To circumvent the latter issue, experiments have shown that fullerenes could be a good candidate for use instead of sulfur because of the large conjugated π-system that can electrically contact many more atoms at once than a single atom of sulfur.
Fullerene nanoelectronics
In polymers, classical organic molecules are composed of both carbon and hydrogen (and sometimes additional elements such as nitrogen, chlorine or sulfur). They are obtained from petroleum and can often be synthesized in large amounts. Most of these molecules are insulating when their length exceeds a few nanometers. However, naturally occurring carbon is conducting, especially graphite recovered from coal or encountered otherwise. From a theoretical viewpoint, graphite is a semi-metal, a category in between metals and semiconductors. It has a layered structure, each sheet being one atom thick. Between each sheet, the interactions are weak enough to allow easy manual cleavage.
Tailoring the graphite sheet to obtain well defined nanometer-sized objects remains a challenge. However, by the close of the twentieth century, chemists were exploring methods to fabricate extremely small graphitic objects that could be considered single molecules. After studying the interstellar conditions under which carbon is known to form clusters, Richard Smalley's group (Rice University, Texas) set up an experiment in which graphite was vaporized via laser irradiation. Mass spectrometry revealed that clusters containing specific magic numbers of atoms were stable, especially those clusters of 60 atoms. Harry Kroto, an English chemist who assisted in the experiment, suggested a possible geometry for these clusters – atoms covalently bound with the exact symmetry of a soccer ball. Coined buckminsterfullerenes, buckyballs, or C60, the clusters retained some properties of graphite, such as conductivity. These objects were rapidly envisioned as possible building blocks for molecular electronics.
Problems
Artifacts
When trying to measure the electronic traits of molecules, artificial phenomena can occur that are hard to distinguish from truly molecular behavior. Before they were discovered, these artifacts were mistakenly published as features pertaining to the molecules in question.
Applying a voltage drop on the order of volts across a nanometer sized junction results in a very strong electrical field. The field can cause metal atoms to migrate and eventually close the gap by a thin filament, which can be broken again when carrying a current. The two levels of conductance imitate molecular switching between a conductive and an isolating state of a molecule.
Another encountered artifact is when the electrodes undergo chemical reactions due to the high field strength in the gap. When the voltage bias is reversed, the reaction will cause hysteresis in the measurements that can be interpreted as being of molecular origin.
A metallic grain between the electrodes can act as a single electron transistor by the mechanism described above, thus resembling the traits of a molecular transistor. This artifact is especially common with nanogaps produced by the electromigration method.
History and progress
In their treatment of so-called donor-acceptor complexes in the 1940s, Robert Mulliken and Albert Szent-Györgyi advanced the concept of charge transfer in molecules. They subsequently further refined the study of both charge transfer and energy transfer in molecules. Likewise, a 1974 paper from Mark Ratner and Ari Aviram illustrated a theoretical molecular rectifier.
In 1988, Aviram described in detail a theoretical single-molecule field-effect transistor. Further concepts were proposed by Forrest Carter of the Naval Research Laboratory, including single-molecule logic gates. A wide range of ideas were presented, under his aegis, at a conference entitled Molecular Electronic Devices in 1988. These were theoretical constructs and not concrete devices. The direct measurement of the electronic traits of individual molecules awaited the development of methods for making molecular-scale electrical contacts. This was no easy task. Thus, the first experiment directly measuring the conductance of a single molecule was only reported in 1995, on a single C60 molecule, by C. Joachim and J. K. Gimzewski in their seminal Physical Review Letters paper, and later in 1997 by Mark Reed and co-workers on a few hundred molecules. Since then, this branch of the field has advanced rapidly. Likewise, as it has grown possible to measure such properties directly, the theoretical predictions of the early workers have been confirmed substantially.
The concept of molecular electronics was published in 1974 when Aviram and Ratner suggested an organic molecule that could work as a rectifier. Having both huge commercial and fundamental interest, much effort was put into proving its feasibility, and 16 years later in 1990, the first demonstration of an intrinsic molecular rectifier was realized by Ashwell and coworkers for a thin film of molecules.
The first measurement of the conductance of a single molecule was realised in 1994 by C. Joachim and J. K. Gimzewski and published in 1995 (see the corresponding Phys. Rev. Lett. paper). This was the conclusion of 10 years of research started at IBM TJ Watson, using the scanning tunnelling microscope tip apex to switch a single molecule as already explored by A. Aviram, C. Joachim and M. Pomerantz at the end of the 1980s (see their seminal Chem. Phys. Lett. paper during this period). The trick was to use a UHV Scanning Tunneling microscope to allow the tip apex to gently touch the top of a single molecule adsorbed on an Au(110) surface. A resistance of 55 MOhms was recorded along with a low voltage linear I-V. The contact was certified by recording the I-z current distance property, which allows measurement of the deformation of the cage under contact. This first experiment was followed by the reported result using a mechanical break junction method to connect two gold electrodes to a sulfur-terminated molecular wire by Mark Reed and James Tour in 1997.
The scanning tunneling microscope (STM) and later the atomic force microscope (AFM) have facilitated manipulating single-molecule electronics. Also, theoretical advances in molecular electronics have facilitated further understanding of non-adiabatic charge transfer events at electrode-electrolyte interfaces.
A single-molecule amplifier was implemented by C. Joachim and J.K. Gimzewski in IBM Zurich. This experiment, involving one molecule, demonstrated that one such molecule can provide gain in a circuit via intramolecular quantum interference effects alone.
A collaboration of researchers at Hewlett-Packard (HP) and University of California, Los Angeles (UCLA), led by James Heath, Fraser Stoddart, R. Stanley Williams, and Philip Kuekes, has developed molecular electronics based on rotaxanes and catenanes.
Work is also occurring on the use of single-wall carbon nanotubes as field-effect transistors. Most of this work is being done by International Business Machines (IBM).
Some specific reports of a field-effect transistor based on molecular self-assembled monolayers were shown to be fraudulent in 2002 as part of the Schön scandal.
The Aviram-Ratner model for a unimolecular rectifier has been confirmed experimentally. Many rectifying molecules have so far been identified, and the number and efficiency of these systems is growing rapidly.
Supramolecular electronics is a new field involving electronics at a supramolecular level.
An important issue in molecular electronics is the determination of the resistance of a single molecule (both theoretical and experimental). For example, Bumm, et al. used STM to analyze a single molecular switch in a self-assembled monolayer to determine how conductive such a molecule can be. Another problem faced by this field is the difficulty of performing direct characterization since imaging at the molecular scale is often difficult in many experimental devices.
See also
Molecular electronics
Single-molecule magnet
Stereoelectronics
Organic semiconductor
Conductive polymer
Molecular conductance
Comparison of software for molecular mechanics modeling
Unconventional computing
References
Molecular electronics | Molecular scale electronics | [
"Chemistry",
"Materials_science"
] | 3,717 | [
"Nanotechnology",
"Molecular physics",
"Molecular electronics"
] |
18,696,066 | https://en.wikipedia.org/wiki/Diphenylethylenediamine | 1,2-Diphenyl-1,2-ethylenediamine, DPEN, is an organic compound with the formula H2NCHPhCHPhNH2, where Ph is phenyl (C6H5). DPEN exists as three stereoisomers: meso and two enantiomers S,S- and R,R-. The chiral diastereomers are used in asymmetric hydrogenation. Both diastereomers are bidentate ligands.
Preparation and optical resolution
1,2-Diphenyl-1,2-ethylenediamine can be prepared from benzil by reductive amination.
DPEN can be obtained as both the chiral and meso diastereomers, depending on the relative stereochemistry of the two CHPhNH2 subunits. The chiral diastereomer, which is of greater value, can be resolved into the R,R- and S,S- enantiomers using tartaric acid as the resolving agent. In methanol, the R,R enantiomer has a specific rotation of [α]23 +106±1°.
Asymmetric catalysis
The N-tosylated derivative, TsDPEN, is a ligand precursor for catalysts for asymmetric transfer hydrogenation. For example, (cymene)Ru(S,S-TsDPEN) catalyzes the hydrogenation of benzil into (R,R)-hydrobenzoin. In this reaction, formate serves as the source of H2:
PhC(O)C(O)Ph + 2 H2 → PhCH(OH)CH(OH)Ph (R,R isomer)
This transformation is an example of desymmetrization: the symmetric molecule benzil is converted to the dissymmetric product.
DPEN is a key ingredient of Ryōji Noyori's second-generation ruthenium-based chiral hydrogenation catalysts, for which he earned the Nobel Prize in Chemistry in 2001.
References
Diamines
Chelating agents
Phenyl compounds | Diphenylethylenediamine | [
"Chemistry"
] | 440 | [
"Chelating agents",
"Process chemicals"
] |
18,698,592 | https://en.wikipedia.org/wiki/Haloform%20reaction | In chemistry, the haloform reaction (also referred to as the Lieben haloform reaction) is a chemical reaction in which a haloform (, where X is a halogen) is produced by the exhaustive halogenation of an acetyl group (, where R can be either a hydrogen atom, an alkyl or an aryl group), in the presence of a base. The reaction can be used to transform acetyl groups into carboxyl groups () or to produce chloroform (), bromoform (), or iodoform (). Note that fluoroform () can't be prepared in this way.
Mechanism
In the first step, the halogen disproportionates in the presence of hydroxide to give halide and hypohalite ions:
Br2 + 2 OH− → Br− + BrO− + H2O
If a secondary alcohol is present, it is oxidized to a ketone by the hypohalite:

R2CHOH + BrO− → R2C=O + Br− + H2O
If a methyl ketone is present, it reacts with the hypohalite in a three-step process:
1. Under basic conditions, the ketone undergoes keto-enol tautomerisation. The enolate undergoes electrophilic attack by the hypohalite (containing a halogen with a formal +1 charge).
2. When the α (alpha) position has been exhaustively halogenated, the molecule reacts with hydroxide, with the trihalomethyl carbanion −CX3 being the leaving group, stabilized by its three electron-withdrawing halogens; this cleavage gives the carboxylic acid. 3. In the third step, the −CX3 anion abstracts a proton from either the solvent or the carboxylic acid formed in the previous step, and forms the haloform. At least in some cases (chloral hydrate) the reaction may stop and the intermediate product can be isolated if conditions are acidic and hypohalite is used.
Scope
Substrates are broadly limited to methyl ketones and secondary alcohols oxidizable to methyl ketones, such as isopropanol. The only primary alcohol and aldehyde to undergo this reaction are ethanol and acetaldehyde, respectively. 1,3-Diketones such as acetylacetone also undergo this reaction. β-ketoacids such as acetoacetic acid will also give the test upon heating. Acetyl chloride and acetamide do not undergo this reaction. The halogen used may be chlorine, bromine, iodine or sodium hypochlorite. Fluoroform (CHF3) cannot be prepared by this method as it would require the presence of the highly unstable hypofluorite ion. However ketones with the structure RCOCF3 do cleave upon treatment with base to produce fluoroform; this is equivalent to the second and third steps in the process shown above.
Applications
Laboratory scale
This reaction forms the basis of the iodoform test, which was historically used as a chemical test to determine the presence of a methyl ketone, or of a secondary alcohol oxidizable to a methyl ketone. When iodine and sodium hydroxide are used as the reagents, a positive reaction gives iodoform, which is a solid at room temperature and tends to precipitate out of solution, causing a distinctive cloudiness.
In organic chemistry, this reaction may be used to convert a terminal methyl ketone into the analogous carboxylic acid.
Industrially
It was formerly used to produce iodoform, bromoform, and even chloroform industrially.
A variant of this reaction is used to manufacture deuterated chloroform through the base-catalysed reaction of hexachloroacetone with heavy water:

(CCl3)2CO + D2O → CDCl3 + CCl3CO2D

A further variant uses the decomposition of calcium trichloroacetate in heavy water.
As a by-product of water chlorination
Water chlorination can result in the formation of haloforms if the water contains suitable reactive impurities (e.g. humic acid). There is a concern that such reactions may lead to the presence of carcinogenic compounds in drinking water.
History
The haloform reaction is one of the oldest organic reactions known. In 1822, Georges-Simon Serullas added potassium metal to a solution of iodine in ethanol and water to form potassium formate and iodoform, called in the language of that time hydroiodide of carbon. In 1832, Justus von Liebig reported the reaction of chloral with calcium hydroxide to form chloroform and calcium formate. The reaction was rediscovered by Adolf Lieben in 1870. The iodoform test is also called the Lieben iodoform reaction. A review of the haloform reaction with a history section was published in 1934.
References
Organic redox reactions
Carbon-heteroatom bond forming reactions
Halogenation reactions | Haloform reaction | [
"Chemistry"
] | 999 | [
"Carbon-heteroatom bond forming reactions",
"Organic redox reactions",
"Organic reactions"
] |
18,699,474 | https://en.wikipedia.org/wiki/Faujasite | Faujasite (FAU-type zeolite) is a mineral group in the zeolite family of silicate minerals. The group consists of faujasite-Na, faujasite-Mg and faujasite-Ca. They all share the same basic formula by varying the amounts of sodium, magnesium and calcium. Faujasite occurs as a rare mineral in several locations worldwide.
Faujasite materials are widely synthesized industrially. The relatively low-silica (Si/Al<2) synthetic faujasite is called Zeolite X and the high-silica (Si/Al>2) one is called Zeolite Y. In addition, the aluminum component in zeolite Y can be removed by acid-treatment and/or steam-treatment, and the resulting faujasite is called USY (Ultrastable zeolite Y). USY is used in fluid catalytic cracking process as a catalyst.
Discovery and occurrence
Faujasite was first described in 1842 from an occurrence in the Limberg Quarries, Sasbach, Kaiserstuhl, Baden-Württemberg, Germany. The sodium modifier faujasite-Na was added following the discovery of the magnesium and calcium rich phases in the 1990s. It was named for Barthélemy Faujas de Saint-Fond (1741–1819), French geologist and volcanologist.
Faujasite occurs in vesicles within basalt and phonolite lava and tuff as an alteration or authigenic mineral. It occurs with other zeolites, olivine, augite and nepheline.
Structure
The faujasite framework has been attributed the code FAU by the International Zeolite Association. It consists of sodalite cages which are connected through hexagonal prisms. The pore, which is formed by a 12-membered ring, has a relatively large diameter of 7.4 Å. The inner cavity has a diameter of 12 Å and is surrounded by 10 sodalite cages. The unit cell is cubic; Pearson symbol cF576, symmetry Fd3m, No. 227, lattice constant 24.7 Å. Of the two types (X and Y) of zeolites coded with FAU, zeolite Y, which is the one with the higher range of silica-to-alumina content, has a void fraction of 48% and a Si/Al ratio of 2.43. It thermally decomposes at 793 °C.
Synthesis
Faujasite is synthesized, as are other zeolites, from alumina sources such as sodium aluminate and silica sources such as sodium silicate. Other aluminosilicates such as kaolin are used as well. The ingredients are dissolved in a basic environment such as sodium hydroxide aqueous solution and crystallized at 70 to 300 °C (usually at 100 °C). After crystallization the faujasite is in its sodium form and must be ion exchanged with ammonium to improve stability. The ammonium ion is removed later by calcination which renders the zeolite in its acid form. Depending on the silica-to-alumina ratio of their framework, synthetic faujasite zeolites are divided into X and Y zeolites. In X zeolites that ratio is between 2 and 3, while in Y zeolites it is 3 or higher. The negative charges of the framework are balanced by the positive charges of cations (usually either sodium from the NaOH solution, or ammonium or H+ after exchanges) in non-framework positions. Such zeolites have ion-exchange, catalytic and adsorptive properties. The stability of the zeolite increases with the silica-to-alumina ratio of the framework (Lowenstein's rule). It is also affected by the type and amount of cations located in non-framework positions. For catalytic cracking, the Y zeolite is often used in a rare earth-hydrogen exchanged form.
By using thermal, hydrothermal or chemical methods, some of the alumina can be removed from the Y zeolite framework, resulting in high-silica Y zeolites. Such zeolites are used in cracking and hydrocracking catalysts. Complete dealumination results in faujasite-silica.
Use
Faujasite is used above all as a catalyst in fluid catalytic cracking to convert high-boiling fractions of petroleum crude to more valuable gasoline, diesel and other products. Zeolite Y has superseded zeolite X in this use because it is both more active and more stable at high temperatures due to the higher Si/Al ratio. It is also used in the hydrocracking units as a platinum/palladium support to increase aromatic content of reformulated refinery products.
Type X zeolite can be used to selectively adsorb CO2 from gas streams and is used in the prepurification of air for industrial air separation.
Due to its widely known structure, behaviour and properties, faujasite is often used as a standard in catalytic and (ad/de)sorption studies on zeolites, along with the MFI, FER and CHA framework types.
See also
Cu Y Zeolite, copper-containing high-silica derivatives of the faujasite mineral group
References
Literature
Subhash Bhatia, Zeolite Catalysis: Principles and Applications, CRC Press, Inc., Boca Raton, Florida, 1990.
Ribeiro, F. R., et al., ed., Zeolites: Science and Technology, Martinus Nijhoff Publishers, The Hague, 1984.
External links
Structure type FAU
Catalysts
Zeolites
Cubic minerals
Minerals in space group 227 | Faujasite | [
"Chemistry"
] | 1,199 | [
"Catalysis",
"Catalysts",
"Chemical kinetics"
] |
18,699,610 | https://en.wikipedia.org/wiki/Ammonium%20hexafluorophosphate | Ammonium hexafluorophosphate is the inorganic compound with the formula NH4PF6. It is a white water-soluble, hygroscopic solid. The compound is a salt consisting of the ammonium cation and hexafluorophosphate anion. It is commonly used as a source of the hexafluorophosphate anion, a weakly coordinating anion. It is prepared by combining neat ammonium fluoride and phosphorus pentachloride. Alternatively it can also be produced from phosphonitrilic chloride:
PCl5 + 6 NH4F → NH4PF6 + 5 NH4Cl
PNCl2 + 6 HF → NH4PF6 + 2 HCl
References
Ammonium compounds
Hexafluorophosphates | Ammonium hexafluorophosphate | [
"Chemistry"
] | 167 | [
"Ammonium compounds",
"Inorganic compounds",
"Inorganic compound stubs",
"Salts"
] |
3,600,345 | https://en.wikipedia.org/wiki/Optical%20frequency%20multiplier | An optical frequency multiplier is a nonlinear optical device in which photons interacting with a nonlinear material are effectively "combined" to form new photons with greater energy, and thus higher frequency (and shorter wavelength). Two types of devices are currently common: frequency doublers, often based on lithium niobate (LN), lithium tantalate (LT), potassium titanyl phosphate (KTP) or lithium triborate (LBO), and frequency triplers typically made of potassium dihydrogen phosphate (KDP). Both are widely used in optical experiments that use lasers as a light source.
Harmonic generation
There are two processes that are commonly used to achieve the conversion: second-harmonic generation (SHG, also called frequency doubling), or sum-frequency generation which sums two non-similar frequencies. Direct third-harmonic generation (THG, also called frequency tripling) also exists and can be used to detect an interface between materials of different excitability. For example, it has been used to extract the outline of cells in embryos, where the cells are separated by water.
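Since frequencies add in these processes, inverse wavelengths add as well: for sum-frequency generation, 1/λ3 = 1/λ1 + 1/λ2, with doubling and tripling as special cases. The sketch below evaluates this arithmetic; the 1064 nm input is an illustrative Nd:YAG laser line chosen here, not a value from the text.

```python
# Frequency combination: frequencies add, so inverse wavelengths add.
def sum_frequency(lambda1_nm, lambda2_nm):
    """Wavelength produced when photons at lambda1 and lambda2 are summed."""
    return 1.0 / (1.0 / lambda1_nm + 1.0 / lambda2_nm)

fundamental = 1064.0                                          # nm (assumed Nd:YAG line)
second_harmonic = sum_frequency(fundamental, fundamental)     # doubling: 532 nm
third_harmonic = sum_frequency(second_harmonic, fundamental)  # tripling: ~355 nm

print(f"2nd harmonic: {second_harmonic:.0f} nm, 3rd harmonic: {third_harmonic:.0f} nm")
```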
Lasers
Optical frequency multipliers are common in high-power lasers, notably those used for inertial confinement fusion (ICF) experiments. ICF attempts to use a laser to heat and compress a target containing fusion fuel, and it was found in experiments with the Shiva laser that the infrared light generated by the laser lost most of its energy to the hot electrons generated early in the heating process. To avoid this problem, much shorter wavelengths were needed, and experiments on the OMEGA laser and Novette laser validated the use of frequency-tripling KDP crystals to convert the laser light into the ultraviolet, a process that has been used on almost every laser-driven ICF experiment since then, including the National Ignition Facility.
References
Second-harmonic generation
Frequency multiplier
Nonlinear optics
Laser science | Optical frequency multiplier | [
"Materials_science",
"Engineering"
] | 392 | [
"Glass engineering and science",
"Optical devices"
] |
3,600,408 | https://en.wikipedia.org/wiki/Volume%20fraction | In chemistry and fluid mechanics, the volume fraction is defined as the volume of a constituent Vi divided by the volume of all constituents of the mixture V prior to mixing:
Being dimensionless, its unit is 1; it is expressed as a number, e.g., 0.18. It is the same concept as volume percent (vol%) except that the latter is expressed with a denominator of 100, e.g., 18%.
The volume fraction coincides with the volume concentration in ideal solutions where the volumes of the constituents are additive (the volume of the solution is equal to the sum of the volumes of its ingredients).
The sum of all volume fractions of a mixture is equal to 1:

Σi ϕi = 1
The volume fraction (percentage by volume, vol%) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others.
Volume concentration and volume percent
Volume percent is the concentration of a certain solute, measured by volume, in a solution. It has as a denominator the volume of the mixture itself, as usual for expressions of concentration, rather than the total of all the individual components' volumes prior to mixing:

volume percent = (volume of solute / volume of solution) × 100%
Volume percent is usually used when the solution is made by mixing two fluids, such as liquids or gases. However, percentages are only additive for ideal gases.
The percentage by volume (vol%, % v/v) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others.
In the case of a mixture of ethanol and water, which are miscible in all proportions, the designation of solvent and solute is arbitrary. The volume of such a mixture is slightly less than the sum of the volumes of the components. Thus, by the above definition, the term "40% alcohol by volume" refers to a mixture of 40 volume units of ethanol with enough water to make a final volume of 100 units, rather than a mixture of 40 units of ethanol with 60 units of water. The "enough water" is actually slightly more than 60 volume units, since the water–ethanol mixture loses volume due to intermolecular attraction.
Relation to mass fraction
Volume fraction is related to mass fraction wi by

ϕi = wi ρ / ρi

where ρi is the constituent density and ρ is the mixture density.
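A minimal sketch demonstrating this relation, assuming an ideal (volume-additive) ethanol–water mixture; the densities below are typical room-temperature values assumed here, not from the text, and real mixtures deviate slightly as noted above.

```python
# Volume fractions from mass fractions: phi_i = w_i * rho / rho_i,
# with the ideal-mixture density rho = 1 / sum(w_i / rho_i).
densities = {"ethanol": 789.0, "water": 997.0}   # kg/m^3 (assumed typical values)
mass_fractions = {"ethanol": 0.40, "water": 0.60}

rho_mix = 1.0 / sum(w / densities[c] for c, w in mass_fractions.items())
volume_fractions = {c: w * rho_mix / densities[c] for c, w in mass_fractions.items()}

print(f"ideal mixture density ~ {rho_mix:.0f} kg/m^3")
print(volume_fractions)  # sums to 1 by construction
```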
See also
Alcohol by volume
Breathalyzer
Alcohol proof
Apparent molar property
For non-ideal mixtures, see Partial molar volume and Excess molar quantity
Percentage
Mass fraction (chemistry)
References
Dimensionless quantities of chemistry
Physical chemistry
Thermodynamics
Analytical chemistry | Volume fraction | [
"Physics",
"Chemistry",
"Mathematics"
] | 558 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Dimensionless quantities of chemistry",
"Thermodynamics",
"nan",
"Dimensionless quantities",
"Physical chemistry",
"Dimensionless numbers of chemistry",
"Dynamical systems"
] |
3,601,462 | https://en.wikipedia.org/wiki/Shock%20diamond | Shock diamonds (also known as Mach diamonds or thrust diamonds, and less commonly Mach disks) are a formation of standing wave patterns that appear in the supersonic exhaust plume of an aerospace propulsion system, such as a supersonic jet engine, rocket, ramjet, or scramjet, when it is operated in an atmosphere. The "diamonds" are actually a complex flow field made visible by abrupt changes in local density and pressure as the exhaust passes through a series of standing shock waves and expansion fans. The physicist Ernst Mach was the first to describe a strong shock normal to the direction of fluid flow, the presence of which causes the diamond pattern.
Mechanism
Shock diamonds form when the supersonic exhaust from a propelling nozzle is slightly over-expanded, meaning that the static pressure of the gases exiting the nozzle is less than the ambient air pressure. The higher ambient pressure compresses the flow, and since the resulting pressure increase in the exhaust gas stream is adiabatic, a reduction in velocity causes its static temperature to be substantially increased. The exhaust is typically over-expanded at low altitudes, where air pressure is higher.
As the flow exits the nozzle, ambient air pressure will compress the flow. The external compression is caused by oblique shock waves inclined at an angle to the flow. The compressed flow is alternately expanded by Prandtl-Meyer expansion fans, and each "diamond" is formed by the pairing of an oblique shock with an expansion fan. When the compressed flow becomes parallel to the center line, a shock wave perpendicular to the flow forms, called a normal shock wave or Mach disk. This locates the first shock diamond, and the space between it and the nozzle is called the "zone of silence". The distance from the nozzle to the first shock diamond can be approximated by

x = 0.67 D0 √(P0/P1)

where x is the distance, D0 is the nozzle diameter, P0 is the flow pressure, and P1 is the atmospheric pressure.
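A small sketch evaluating this spacing formula; the nozzle diameter and flow pressure below are illustrative assumptions, not values from the text.

```python
# First shock-diamond distance: x = 0.67 * D0 * sqrt(P0 / P1).
import math

def first_diamond_distance(d0_m, p0_pa, p1_pa):
    """Approximate distance from the nozzle exit to the first shock diamond."""
    return 0.67 * d0_m * math.sqrt(p0_pa / p1_pa)

d0 = 0.30        # m, nozzle exit diameter (assumed)
p0 = 250_000.0   # Pa, flow pressure (assumed)
p1 = 101_325.0   # Pa, sea-level ambient pressure
print(f"x ~ {first_diamond_distance(d0, p0, p1):.2f} m")  # ~0.32 m for these inputs
```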
As the exhaust passes through the normal shock wave, its temperature increases, igniting excess fuel and causing the glow that makes the shock diamonds visible. The illuminated regions either appear as disks or diamonds, giving them their name.
Eventually the flow expands enough so that its pressure is again below ambient, at which point the expansion fan reflects from the contact discontinuity (the outer edge of the flow). The reflected waves, called the compression fan, cause the flow to compress. If the compression fan is strong enough, another oblique shock wave will form, creating a second Mach disk and shock diamond. The pattern of disks and diamonds would repeat indefinitely if the gases were ideal and frictionless; however, turbulent shear at the contact discontinuity causes the wave pattern to dissipate with distance.
Diamond patterns can similarly form when a nozzle is under-expanded (exit pressure higher than ambient) in lower atmospheric pressure at higher altitudes. In this case, the expansion fan is first to form, followed by the oblique shock.
Alternative sources
Shock diamonds are most commonly associated with jet and rocket propulsion, but they can form in other systems.
Artillery
When artillery pieces are fired, gas exits the cannon muzzle at supersonic speeds and produces a series of shock diamonds. The diamonds cause a bright muzzle flash which can expose the location of gun emplacements to the enemy. It was found that when the ratio between the flow pressure and atmospheric pressure is close to one, which can be achieved with a flash suppressor, the shock diamonds are greatly minimized. Adding a muzzle brake to the end of the muzzle balances the pressures and prevents shock diamonds.
Radio jets
Some radio jets, powerful jets of plasma that emanate from quasars and radio galaxies, are observed to have regularly-spaced knots of enhanced radio emissions. The jets travel at supersonic speed through a thin "atmosphere" of gas in space, so it is hypothesized that these knots are shock diamonds.
See also
Index of aviation articles
Plume (hydrodynamics)
Rocket engine nozzle
References
External links
"Methane blast" - shock diamonds forming in NASA's methane engine built by XCOR Aerospace, NASA website, 4 May 2007
"Shock Diamonds and Mach Disks" - This link has useful diagrams. Aerospaceweb.org is a non-profit site operated by engineers and scientists in the aerospace field.
Physical phenomena
Shock waves
Aerospace
Aerodynamics
Ernst Mach | Shock diamond | [
"Physics",
"Chemistry",
"Engineering"
] | 881 | [
"Physical phenomena",
"Aerospace",
"Shock waves",
"Aerodynamics",
"Waves",
"Space",
"Aerospace engineering",
"Spacetime",
"Fluid dynamics"
] |
3,602,830 | https://en.wikipedia.org/wiki/T-cell%20vaccination | T-cell vaccination is immunization with inactivated autoreactive T cells. The concept of T-cell vaccination is, at least partially, analogous to classical vaccination against infectious disease. However, the agents to be eliminated or neutralized are not foreign microbial agents but a pathogenic autoreactive T-cell population. Research on T-cell vaccination so far has focused mostly on multiple sclerosis and to a lesser extent on rheumatoid arthritis, Crohn's disease and AIDS.
References
Immunology | T-cell vaccination | [
"Biology"
] | 117 | [
"Immunology"
] |
3,602,834 | https://en.wikipedia.org/wiki/T-cell%20vaccine | A T-cell vaccine is a vaccine designed to induce protective T-cells. It is not a vaccine whereby T-cells are administered to the patient.
T-cell vaccines are designed to induce cellular immunity. They are also referred to as cell-mediated immune (CMI) vaccines.
It is believed that CMI vaccines can be more effective than conventional B-cell vaccines for yielding protection against microbes which tend to hide within the host cell, and rapidly mutating microbes (such as HIV or the influenza virus).
T-cell vaccines underwent clinical trials for HIV/AIDS in about 2009, but none had been approved. However, as of December 2020, the Pfizer–BioNTech SARS-CoV-2 vaccine was authorised pursuant to the US FDA's emergency use authorization, becoming the first FDA-authorized T-cell vaccine.
References
Vaccines | T-cell vaccine | [
"Biology"
] | 175 | [
"Vaccination",
"Vaccines"
] |
3,603,035 | https://en.wikipedia.org/wiki/Nielsen%20theory | Nielsen theory is a branch of mathematical research with its origins in topological fixed-point theory. Its central ideas were developed by Danish mathematician Jakob Nielsen, and bear his name.
The theory developed in the study of the so-called minimal number of a map f from a compact space to itself, denoted MF[f]. This is defined as:

MF[f] = min {#Fix(g) : g ~ f}
where ~ indicates homotopy of mappings, and #Fix(g) indicates the number of fixed points of g. The minimal number was very difficult to compute in Nielsen's time, and remains so today. Nielsen's approach is to group the fixed-point set into classes, which are judged "essential" or "nonessential" according to whether or not they can be "removed" by a homotopy.
Nielsen's original formulation is equivalent to the following:
We define an equivalence relation on the set of fixed points of a self-map f on a space X. We say that x is equivalent to y if and only if there exists a path c from x to y with f(c) homotopic to c as paths. The equivalence classes with respect to this relation are called the Nielsen classes of f, and the Nielsen number N(f) is defined as the number of Nielsen classes having non-zero fixed-point index sum.
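As a compact restatement, the definitions just given can be written symbolically as follows; this is a sketch, and the notation (writing paths as maps from [0,1] and ind for the fixed-point index) is introduced here rather than taken from the source.

```latex
% Nielsen equivalence of fixed points, and the Nielsen number.
\[
  x \sim y \iff \exists\, c\colon [0,1]\to X,\ c(0)=x,\ c(1)=y,\ f\circ c \simeq c \ \text{rel}\ \{0,1\},
\]
\[
  N(f) = \#\bigl\{\,\text{Nielsen classes } F \subseteq \operatorname{Fix}(f) : \operatorname{ind}(f,F) \neq 0 \,\bigr\}.
\]
```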
Nielsen proved that

N(f) ≤ MF[f]
making his invariant a good tool for estimating the much more difficult MF[f]. This leads immediately to what is now known as the Nielsen fixed-point theorem: Any map f has at least N(f) fixed points.
Because of its definition in terms of the fixed-point index, the Nielsen number is closely related to the Lefschetz number. Indeed, shortly after Nielsen's initial work, the two invariants were combined into a single "generalized Lefschetz number" (more recently called the Reidemeister trace) by Wecken and Reidemeister.
Bibliography
External links
Survey article on Nielsen theory by Robert F. Brown at Topology Atlas
Fixed-point theorems
Fixed points (mathematics)
Topology | Nielsen theory | [
"Physics",
"Mathematics"
] | 419 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Fixed points (mathematics)",
"Fixed-point theorems",
"Theorems in topology",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Dynamical systems"
] |
3,605,571 | https://en.wikipedia.org/wiki/Relativistic%20particle | In particle physics, a relativistic particle is an elementary particle with kinetic energy greater than or equal to its rest-mass energy given by Einstein's relation, , or specifically, of which the velocity is comparable to the speed of light .
Photons satisfy this condition by definition, since the effects described by special relativity are required to describe their behaviour. Several approaches exist as a means of describing the motion of single and multiple relativistic particles, with a prominent example being postulations through the Dirac equation of single particle motion.
Since the energy-momentum relation of a particle can be written as

E² = (pc)² + (m0c²)²

where E is the energy, p is the momentum, and m0 is the rest mass, the relation collapses into a linear dispersion when the rest mass tends to zero, e.g. for a photon, or when the momentum is very large, e.g. for a fast proton:

E = pc
This is different from the parabolic energy-momentum relation for classical particles. Thus, in practice, the linearity or the non-parabolicity of the energy-momentum relation is considered as a key feature for relativistic particles. These two types of relativistic particles are remarked as massless and massive, respectively.
In experiments, massive particles are relativistic when their kinetic energy is comparable to or greater than the energy corresponding to their rest mass. In other words, a massive particle is relativistic when its total mass-energy is at least twice its rest energy. This condition implies that the speed of the particle is close to the speed of light. According to the Lorentz factor formula, this requires the particle to move at roughly 87% of the speed of light. Such relativistic particles are generated in particle accelerators, and also occur naturally in cosmic radiation. In astrophysics, jets of relativistic plasma are produced by the centers of active galaxies and quasars.
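A minimal sketch of the Lorentz-factor arithmetic in this paragraph: if the total mass-energy is twice the rest energy, then γ = 2 and the speed follows from β = √(1 − 1/γ²).

```python
# Speed (as a fraction of c) implied by a given Lorentz factor gamma.
import math

def beta_from_gamma(gamma):
    """beta = v/c for total energy gamma times the rest energy."""
    return math.sqrt(1.0 - 1.0 / gamma**2)

gamma = 2.0  # total energy = 2 x rest energy (kinetic energy equals rest energy)
print(f"beta = {beta_from_gamma(gamma):.3f}")  # ~0.866, i.e. about 87% of c
```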
A charged relativistic particle crossing the interface of two media with different dielectric constants emits transition radiation. This is exploited in the transition radiation detectors of high-velocity particles.
Desktop relativistic particles
Relativistic electrons can also exist in some solid-state materials, including semimetals such as graphene, topological insulators and bismuth–antimony alloys, and semiconductors such as transition metal dichalcogenides and black phosphorus (phosphorene) layers. These lattice-confined electrons, whose relativistic effects can be described using the Dirac equation, are also called desktop relativistic electrons or Dirac electrons.
See also
Ultrarelativistic particle
Special relativity
Relativistic wave equations
Lorentz factor
Relativistic mass
Relativistic plasma
Relativistic jet
Relativistic beaming
Notes
References
Quantum mechanics
Particle
Accelerator physics | Relativistic particle | [
"Physics"
] | 584 | [
"Applied and interdisciplinary physics",
"Theoretical physics",
"Quantum mechanics",
"Special relativity",
"Experimental physics",
"Theory of relativity",
"Accelerator physics"
] |
27,162,282 | https://en.wikipedia.org/wiki/Laboratory%20for%20Energy%20Conversion | The Laboratory for Energy Conversion (LEC), formerly known as the Turbomachinery Laboratory (LSM), was founded in 1892 by Aurel Boleslav Stodola as part of the Federal Institute of Technology Zurich (ETH). The laboratory has been headed by some of the most prominent mechanical engineers in the history of turbomachinery.
Areas of research
The current research projects at LEC cover the fields of:
energy economics and policy
performance and reliability of wind energy
minimizing high-cycle fatigue failure of compressors
efficiency improvements of turbomachines
aircraft noise suppression
cooling and thermal management
laser produced plasma source (EUV) and debris mitigation
development of a mobile power pack
novel measurement techniques
biomedical diagnostics
Awards
Amongst many noted achievements, LEC has recently developed the FENT probe, which for the first time enables measurement of entropy generation in turbomachinery. The peer-reviewed journal Measurement Science and Technology recognised the development of this probe as the most outstanding contribution in the field of fluid mechanics in 2008.
Professors since 1892
1892 - 1929 Aurel Boleslav Stodola
1929 - 1954 Henri Quiby
1954 - 1983 Walter Traupel
1983 - 1998 George Gyarmathy
1998 - Reza Abhari
Industry partners
ABB Group, Switzerland
BKW FMB Energie AG, Switzerland
EOS Holding, Switzerland
General Electric, US
MAN Turbo AG, Switzerland
Mitsubishi Heavy Industries, Japan
MTU, Germany
Siemens, US, Germany
Swisselectric Research, Switzerland
Toshiba, Japan
See also
Brown, Boveri & Cie
Charles Algernon Parsons
Gustaf de Laval
References
External links
ETH Zurich
Laboratory for Energy Conversion
Adlyte
New Enterprise for Engineers
ETH Zurich
Mechanical engineering organizations
Turbines | Laboratory for Energy Conversion | [
"Chemistry",
"Engineering"
] | 357 | [
"Mechanical engineering",
"Mechanical engineering organizations",
"Turbines",
"Turbomachinery"
] |
27,170,659 | https://en.wikipedia.org/wiki/Metamaterials%3A%20Physics%20and%20Engineering%20Explorations | Metamaterials: Physics and Engineering Explorations is a book-length introduction to the fundamental research and advancements in electromagnetic composite substances known as electromagnetic metamaterials. The discussion encompasses the physics of metamaterial interactions, metamaterial designs, and engineering perspectives on these materials. Potential applications are also discussed at various points in each chapter. The book encompasses a variety of theoretical, numerical, and experimental perspectives.
This book has been cited by several hundred peer-reviewed research publications, most of them scientific journal articles.
Authors
Nader Engheta received his Ph.D. in Electrical Engineering (with a minor in Physics), in 1982 from the California Institute of Technology. Currently he is a Professor of Electrical and Systems Engineering, and Professor of Bioengineering at the University of Pennsylvania. His current research activities include metamaterials, plasmonics, nano-optics, nanophotonics, bio-inspired sensing and imaging, miniaturized antennas and nanoantennas.
Richard W. Ziolkowski received both his M.S. and Ph.D. in physics, in 1975 and 1980, respectively from the University of Illinois at Urbana-Champaign. Currently he has a dual appointment at the University of Arizona. He is a Professor of Electrical and Computer Engineering, and a Professor of the Optical Sciences. His current research includes metamaterial physics and engineering related to low frequency and high frequency antenna systems, and includes nanoparticle lasers.
Through their respective research, both Engheta and Ziolkowski have each contributed significantly to advancing metamaterials. Ziolkowski has been described as being at the leading edge of metamaterials research since a Defense Advanced Research Projects Agency (DARPA) workshop, in November, 1999.
Research
Nader Engheta and Richard W. Ziolkowski, are also the editors of this book. They have compiled the published research related to metamaterials at the end of each chapter of this book. The content of each chapter describes the path the current research is taking in its respective domain. Included are descriptions of basic research (physics), and how it is applied (engineering). The chapters are written by contributors who are carrying out the actual research and applications, including some chapter contributions by Engheta and Ziolkowski.
Hence, the content of the book also consists of original research papers by researchers in the field, who are knowledgeable about metamaterials, and who have made significant contributions, to the advancement and understanding of metamaterials. These persons were invited to present their discoveries and some conclusions, while researching metamaterials. Included in their findings are the state of the art developments in applications for antennas, waveguides, and related devices, and components.
Scope
The first chapter opens with a very brief overview of the history of metamaterials. Afterwards, a history treatment is interspersed throughout the book, which frames the discussion of the related section or chapter.
The organizational structure of the book begins by dividing the subject, electromagnetic metamaterials, into two major classes. The first major class comprises the SNG (single-negative) and DNG (double-negative) metamaterials, and the second major class comprises the EBG (electromagnetic band gap) structured metamaterials.
The organizational format relates the SNG and DNG metamaterials into one class. This class is described by its common structure, namely the subwavelength size of the inclusions and the periodicity of the structure. The inclusions, or cells, are artificially arrayed into an ordered, repeating pattern of equal dimensions and equidistant spacing. Such structures are then conceptually described as homogeneous and as effective media.
EBG metamaterials, on the other hand, can be described by other periodic media concepts.
These classes are sub-divided further into their three-dimensional (3D volumetric) and two-dimensional (2D planar or surface) realizations. Examples of the aforementioned types of metamaterials are provided and their known and anticipated properties are described.
In all, there are 14 chapters, along with a preface by the authors.
Coverage
The book presents broad coverage of electromagnetic metamaterials. Coverage also includes theoretical, numerical, and experimental perspectives of the contributors, along with current and intended applications. The extensive peer reviewed article reference lists, at the end of each chapter, are noteworthy.
See also
Metamaterials Handbook
Victor Veselago (Originator of metamaterials)
History of metamaterials
Metamaterials (journal)
References
Notes
External links
What Nature Cannot Provide, Engineers Invent – author interview article, University of Arizona ('a best selling book').
JOM Book Review Program – review of this book, 05/6/2008.
Metamaterials
Engineering textbooks
IEEE publications
Physics books | Metamaterials: Physics and Engineering Explorations | [
"Materials_science",
"Engineering"
] | 998 | [
"Metamaterials",
"Materials science"
] |
12,986,940 | https://en.wikipedia.org/wiki/IAPWS | The International Association for the Properties of Water and Steam (IAPWS) is an international non-profit association of national organizations concerned with the properties of water and steam, particularly thermophysical properties and other aspects of high-temperature steam, water and aqueous mixtures that are relevant to thermal power cycles and other industrial applications.
The organization publishes a range of 'releases', which relate in particular to the thermal and expansion properties of steam.
Both free software and commercial software implementations of the IAPWS correlations are available.
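As an example of such an implementation, the following minimal sketch assumes the third-party open-source iapws Python package; the class and property names follow that package's documentation and should be verified against the installed version:

```python
# Minimal sketch using the open-source `iapws` package (pip install iapws).
# API names below follow the package documentation; treat them as assumptions.
from iapws import IAPWS97

steam = IAPWS97(P=1.0, x=1.0)    # saturated steam at 1 MPa (quality x = 1)
water = IAPWS97(T=300.0, P=0.1)  # liquid water at 300 K and 0.1 MPa

print(steam.T, steam.h)  # saturation temperature (K), enthalpy (kJ/kg)
print(water.rho)         # density (kg/m^3)
```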
References
External links
Official Website
Thermodynamics | IAPWS | [
"Physics",
"Chemistry",
"Mathematics"
] | 122 | [
"Thermodynamics",
"Dynamical systems"
] |
12,987,040 | https://en.wikipedia.org/wiki/CLAW%20hypothesis | The CLAW hypothesis proposes a negative feedback loop that operates between ocean ecosystems and the Earth's climate. The hypothesis specifically proposes that particular phytoplankton that produce dimethyl sulfide are responsive to variations in climate forcing, and that these responses act to stabilise the temperature of the Earth's atmosphere. The CLAW hypothesis was originally proposed by Robert Jay Charlson, James Lovelock, Meinrat Andreae and Stephen G. Warren, and takes its acronym from the first letter of their surnames.
CLAW hypothesis
The hypothesis describes a feedback loop that begins with an increase in the available energy from the sun acting to increase the growth rates of phytoplankton by either a physiological effect (due to elevated temperature) or enhanced photosynthesis (due to increased irradiance). Certain phytoplankton, such as coccolithophorids, synthesise dimethylsulfoniopropionate (DMSP), and their enhanced growth increases the production of this osmolyte. In turn, this leads to an increase in the concentration of its breakdown product, dimethyl sulfide (DMS), first in seawater, and then in the atmosphere. DMS is oxidised in the atmosphere to form sulfur dioxide, and this leads to the production of sulfate aerosols. These aerosols act as cloud condensation nuclei and increase cloud droplet number, which in turn elevates the liquid water content of clouds and cloud area. This acts to increase cloud albedo, leading to greater reflection of incident sunlight, and a decrease in the forcing that initiated this chain of events. Note that the feedback loop can operate in the reverse direction, such that a decline in solar energy leads to reduced cloud cover and thus to an increase in the amount of solar energy reaching the Earth's surface.
A significant feature of the chain of interactions described above is that it creates a negative feedback loop, whereby a change to the climate system (increased/decreased solar input) is ultimately counteracted and damped by the loop. As such, the CLAW hypothesis posits an example of planetary-scale homeostasis or complex adaptive system, consistent with the Gaia hypothesis framed by one of the original authors of the CLAW hypothesis, James Lovelock.
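To see how such a loop damps perturbations, the mechanism can be caricatured as a toy difference-equation model. Everything below — the functional forms, parameter values, and the linear DMS and albedo responses — is invented purely for illustration and is not drawn from the CLAW literature:

```python
# Toy caricature of the CLAW negative feedback: warmer ocean -> more DMS ->
# more cloud condensation nuclei -> higher cloud albedo -> less absorbed
# sunlight -> warming is damped. All numbers are arbitrary.
def step(T, forcing, dt=0.1, k_dms=0.5, k_albedo=0.3, T0=15.0):
    dms = max(0.0, k_dms * (T - T0))        # DMS production rises with temperature
    albedo = 0.3 + k_albedo * dms           # aerosols brighten clouds
    dT = forcing * (1.0 - albedo) - 0.1 * (T - T0)  # absorbed sunlight minus relaxation
    return T + dt * dT

T = 15.0
for _ in range(400):
    T = step(T, forcing=1.0)
print(f"temperature settles near {T:.2f} (arbitrary units)")
```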
Some subsequent studies of the CLAW hypothesis have uncovered evidence to support its mechanism, although this is not unequivocal. Other researchers have suggested that a CLAW-like mechanism may operate in the Earth's sulfur cycle without the requirement of an active biological component. A 2014 review article criticised the hypothesis for being an oversimplification and that the effect might be much weaker than proposed.
Anti-CLAW hypothesis
In his 2006 book The Revenge of Gaia, Lovelock proposed that instead of providing negative feedback in the climate system, the components of the CLAW hypothesis may act to create a positive feedback loop.
Under future global warming, increasing temperature may stratify the world ocean, decreasing the supply of nutrients from the deep ocean to its productive euphotic zone. Consequently, phytoplankton activity will decline with a concomitant fall in the production of DMS. In a reverse of the CLAW hypothesis, this decline in DMS production will lead to a decrease in cloud condensation nuclei and a fall in cloud albedo. The consequence of this will be further climate warming which may lead to even less DMS production (and further climate warming).
Evidence for the anti-CLAW hypothesis is constrained by similar uncertainties as those of the sulfur cycle feedback loop of the CLAW hypothesis. However, researchers simulating future oceanic primary production have found evidence of declining production with increasing ocean stratification, leaving open the possibility that such a mechanism may exist.
See also
Earth science
Geophysiology
Iron fertilization
References
External links
Gaia and CLAW, Max Planck Institute for Chemistry, Mainz
DMS and climate, Pacific Marine Environmental Laboratory, Seattle
Atmospheric radiation
Climate change feedbacks
Particulates
Satellite meteorology | CLAW hypothesis | [
"Chemistry"
] | 844 | [
"Particulates",
"Particle technology"
] |
12,987,835 | https://en.wikipedia.org/wiki/The%20Ambidextrous%20Universe | The Ambidextrous Universe is a popular science book by Martin Gardner, covering aspects of symmetry and asymmetry in human culture, science and the wider universe. It culminates in a discussion of whether nature's conservation of parity (the symmetry of mirrored quantum systems) is ever violated, which had been proven experimentally in 1956.
The book was originally published in 1964 with the subtitle Left, Right, and the Fall of Parity, with a revised version following in 1969. A second edition was released in 1979 with the new subtitle Mirror Asymmetry and Time-Reversed Worlds. The third edition was released in 1990 under the title The New Ambidextrous Universe: Symmetry and Asymmetry from Mirror Reflections to Superstrings; it was reissued with minor revisions in 2005.
Content
The book begins with the subject of mirror reflection, and from there passes through symmetry in geometry, poetry, art, music, galaxies, stars, planets and living organisms. It then moves down into the molecular scale and looks at how symmetry and asymmetry have evolved from the beginning of life on Earth. There is a chapter on carbon and its versatility and on chirality in biochemistry.
The last several chapters deal with a conundrum called the Ozma Problem, which examines whether there is any fundamental asymmetry to the universe. This discussion concerns various aspects of atomic and subatomic physics and how they relate to mirror asymmetry and the related concepts of chirality, antimatter, magnetic and electrical polarity, parity, charge and spin. Time invariance (and reversal) is discussed. Implications for particle physics, theoretical physics and cosmology are covered and brought up to date (in later editions of the book) with regard to Grand Unified Theories, theories of everything, and superstring theory.
The Ozma Problem
The 18th chapter, "The Ozma Problem", poses a problem that Gardner claims would arise if Earth should ever enter into communication with life on another planet through Project Ozma. This is the problem of how to communicate the meaning of left and right, where the two communicants are conditionally not allowed to view any one object in common.
The problem was first implied in Immanuel Kant's discussion of a hand isolated in space, which would have no meaning as left or right by itself; Gardner posits that Kant would today explain his problem using the reversibility of objects through a higher dimension. A three-dimensional hand can be reversed in a mirror or a hypothetical fourth dimension. In more easily visualizable terms, an outline of a hand in Flatland could be flipped over; the meaning of left or right would not apply until a being missing a corresponding hand came along. Charles Howard Hinton expressed the essential problem in 1888, as did William James in his The Principles of Psychology (1890). Gardner follows the thread of several false leads on the road to the solution of the problem, such as the magnetic poles of astronomical bodies and the chirality of life molecules, which could be arbitrary based on how life locally originated.
The solution to the Ozma Problem was finally realized in the famous Wu experiment, conducted in 1956 by Chinese-American physicist Chien-Shiung Wu (1912–1997), involving the beta decay of cobalt-60. At a conference earlier that year, Richard Feynman had asked (on behalf of Martin M. Block) whether parity was sometimes violated, leading Tsung-Dao Lee and Chen-Ning Yang to propose Wu's experiment, for which Lee and Yang were awarded the 1957 Nobel Prize in Physics. It was the first experiment to disprove the conservation of parity, and according to Gardner, one could use it to convey the meaning of left and right to remote extraterrestrials. An earlier example of asymmetry had actually been detected as early as 1928 in the decay of a radionuclide of radium, but its significance was not then realized.
Literary references
The Ambidextrous Universe references several physics-themed poems and certain works of literature which help to illustrate various points. Additionally, some other works have referenced Gardner's book.
W. H. Auden
W. H. Auden alludes to The Ambidextrous Universe in his poem "Josef Weinheber" (1965).
Vladimir Nabokov
Pale Fire
In the original 1964 edition of The Ambidextrous Universe, Gardner quoted two lines of poetry from Vladimir Nabokov's 1962 novel Pale Fire which are supposed to have been written by a poet, "John Shade", who is actually fictional. As a joke, Gardner credited the lines only to Shade and put Shade's name in the index as if he were a real person. In his 1969 novel Ada or Ardor: A Family Chronicle, Nabokov returned the favor by having the character Van Veen "quote" the Gardner book along with the two lines of verse:
"Space is a swarming in the eyes, and Time a singing in the ears," says John Shade, a modern poet, as quoted by an invented philosopher ("Martin Gardiner" ) in The Ambidextrous Universe, page 165 .
Look at the Harlequins!
Nabokov's 1974 novel Look at the Harlequins!, about a man who cannot distinguish left from right, was heavily influenced by his reading of The Ambidextrous Universe.
Reviews
Games
References
Bibliography
1964 non-fiction books
Asymmetry
Science books
Symmetry
Works by Martin Gardner | The Ambidextrous Universe | [
"Physics",
"Mathematics"
] | 1,140 | [
"Geometry",
"Symmetry",
"Asymmetry"
] |
12,989,754 | https://en.wikipedia.org/wiki/GAL4/UAS%20system | The GAL4-UAS system is a biochemical method used to study gene expression and function in organisms such as the fruit fly. It is based on the finding by Hitoshi Kakidani and Mark Ptashne, and Nicholas Webster and Pierre Chambon in 1988 that Gal4 binding to UAS sequences activates gene expression. The method was introduced into flies by Andrea Brand and Norbert Perrimon in 1993 and is considered a powerful technique for studying the expression of genes. The system has two parts: the Gal4 gene, encoding the yeast transcription activator protein Gal4, and the UAS (Upstream Activation Sequence), an enhancer to which GAL4 specifically binds to activate gene transcription.
Overview
The Gal4 system allows separation of the problems of defining which cells express a gene or protein and what the experimenter wants to do with this knowledge. Geneticists have created genetic variants of model organisms (typically fruit flies), called GAL4 lines, each of which expresses GAL4 in some subset of the animal's tissues. For example, some lines might express GAL4 only in muscle cells, or only in nerves, or only in the antennae, and so on. For fruit flies in particular, there are tens of thousands of such lines, with the most useful expressing GAL4 in only a very specific subset of the animal—perhaps, for example, only those neurons that connect two specific compartments of the fly's brain. The presence of GAL4, by itself, in these cells has little or no effect, since GAL4's main effect is to bind to a UAS region, and most cells have no (or innocuous) UAS regions.
Since Gal4 by itself is not visible, and has little effect on cells, the other necessary part of this system are the "reporter lines". These are strains of flies with the special UAS region next to a desired gene. These genetic instructions occur in every cell of the animal, but in most cells nothing happens since that cell is not producing GAL4. In the cells that are producing GAL4, however, the UAS is activated, the gene next to it is turned on, and it starts producing its resulting protein. This may report to the investigator which cells are expressing GAL4, hence the term "reporter line", but genes intended to manipulate the cell behavior are often used as well.
Typical reporter genes include:
Fluorescent proteins like green (GFP) or red fluorescent proteins (RFP), which allow scientists to see which cells express Gal4
Channelrhodopsin, which allows light-sensitive triggering of nerve cells
Halorhodopsin, which conversely allows light to suppress the firing of neurons
Shibire, which shuts neurons off, but only at higher temperatures (30 °C and above). Flies with this gene can be raised and tested at lower temperatures where their neurons will behave normally. Then the body temperature of the flies can be raised (since they are cold-blooded), and these neurons turn off. If the fly's behavior changes, this gives a strong clue to what those neurons do.
GECI (Genetically Encoded Calcium Indicator), often a member of the GCaMP family of proteins. These proteins fluoresce when exposed to calcium, which, in most cells, happens when the neuron fires. This allows scientists to take pictures, or movies, that show the nervous system in operation.
For example, scientists can first visualize a class of neurons by choosing a fly from a GAL4 line that expresses GAL4 in the desired set of neurons, and crossing it with a reporter line that express GFP. In the offspring, the desired subset of cells will make GAL4, and in these cells the GAL4 will bind to the UAS, and enable the production of GFP. So the desired subset of cells will now fluoresce green and can be followed with a fluorescence microscope. Next, to figure out what these cells might do, the experimenter might express channelrhodopsin in each of these cells, by crossing the same GAL4 line with a channelrhodopsin reporter line. In the offspring the selected cells, and only those cells, will contain channelrhodopsin and can be triggered by a bright light. Now the scientist can trigger these particular cells at will, and examine the resulting behavior to see what these cells might do.
Operation
Gal4 is a modular protein consisting broadly of a DNA-binding domain and an activation domain. The UAS to which GAL4 binds is CGG-N11-CCG, where N can be any base. Although GAL4 is a yeast protein not normally present in other organisms it has been shown to work as a transcription activator in a variety of organisms such as Drosophila, and human cells, highlighting that the same mechanisms for gene expression have been conserved over the course of evolution.
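The canonical binding motif lends itself to a simple sequence scan; the following minimal Python sketch uses a made-up DNA string to find CGG-N11-CCG sites:

```python
import re

# Scan a DNA sequence for the canonical GAL4 binding motif CGG-N11-CCG
# (17 bp in total). The example sequence below is made up.
seq = "AACGGACGTACGTACGCCGTT"
motif = re.compile(r"CGG[ACGT]{11}CCG")
for hit in motif.finditer(seq):
    print(hit.start(), hit.group())  # 2 CGGACGTACGTACGCCG
```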
For study in Drosophila, the GAL4 gene is placed under the control of a native gene promoter, or driver gene, while the UAS controls expression of a target gene. GAL4 is then only expressed in cells where the driver gene is usually active. In turn, GAL4 should only activate gene transcription where a UAS has been introduced. For example, by fusing a gene encoding a visible marker like GFP (Green Fluorescent Protein) the expression pattern of the driver genes can be determined. GAL4 and the UAS are very useful for studying gene expression in Drosophila as they are not normally present and their expression does not interfere with other processes in the cell. For example, GAL4/UAS-regulated transgenes in Drosophila have been used to alter glial expression to produce arrhythmic behavior in a known rhythmic circadian output called pigment dispersing factor (PDF). However, some research has indicated that over-expression of GAL4 in Drosophila can have side-effects, probably relating to immune and stress responses to what is essentially an alien protein.
The GAL4-UAS system has also been employed to study gene expression in organisms besides Drosophila such as the African clawed frog Xenopus and zebrafish.
The GAL4/UAS system is also utilized in Two-Hybrid Screening, a method of identifying interactions between two proteins or a protein with DNA.
Extensions
Gal4 expression can be made even more specific by means of "intersectional strategies". These can combine two different GAL4 lines—say, A and B—in a way that GAL4 is only expressed in the cells that are in line A but not line B, or those that are in both lines A and B. When combined with intrinsically sparse GAL4 lines, this offers very specific selection, often limited to a single cell type. The disadvantage is that at least three independent insertion sites are required, so the lines must use different and independent insertion sites, and creating the desired final organisms needs more than a single cross. This is a very active field of research, and there are many such intersectional strategies, of which two are discussed below.
One way to create GAL4 expression in the cells that are in line A but not line B, requires line A to be made to express GAL4, and line B made to express Gal80, which is a GAL4 inhibitor. Therefore, only the cells that are in A but not B will have active GAL4, which can then drive the reporter gene.
To express GAL4 in only the cells contained in both A and B, a technique called "split-GAL4" can be used. Line A is made to express half of the GAL4 protein, which is inactive by itself. Similarly, line B is made to express the other half of GAL4, also inactive by itself. Only the cells that are in both lines make both halves, which self-assemble by leucine zipper into GAL4 and activate the reporter gene.
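The two intersectional strategies just described amount to simple set logic on the cells of the two lines; the toy Python sketch below (with invented cell names) makes this explicit:

```python
# Toy model of intersectional GAL4 strategies; cell identifiers are invented.
line_A = {"neuron_1", "neuron_2", "muscle_1"}  # cells active in line A
line_B = {"neuron_2", "muscle_1", "gland_1"}   # cells active in line B

# GAL80 suppression: GAL4 is active in cells of A that are NOT in B.
print(sorted(line_A - line_B))   # ['neuron_1']

# Split-GAL4: both halves needed, so GAL4 assembles only in cells in A AND B.
print(sorted(line_A & line_B))   # ['muscle_1', 'neuron_2']
```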
References
Genetics | GAL4/UAS system | [
"Engineering",
"Biology"
] | 1,611 | [
"Genetics techniques",
"Genetic engineering"
] |
12,989,803 | https://en.wikipedia.org/wiki/Non-positive%20curvature | In mathematics, spaces of non-positive curvature occur in many contexts and form a generalization of hyperbolic geometry. In the category of Riemannian manifolds, one can consider the sectional curvature of the manifold and require that this curvature be everywhere less than or equal to zero. The notion of curvature extends to the category of geodesic metric spaces, where one can use comparison triangles to quantify the curvature of a space; in this context, non-positively curved spaces are known as (locally) CAT(0) spaces.
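For reference, the comparison-triangle condition mentioned above can be stated explicitly. The following is the standard CAT(0) inequality, included here as background rather than taken from this article's sources:

```latex
% CAT(0) condition via comparison triangles.
% Let \Delta be a geodesic triangle in a geodesic metric space (X, d_X), and
% let \bar\Delta \subset \mathbb{E}^2 be a Euclidean comparison triangle with
% the same side lengths. For points x, y \in \Delta with comparison points
% \bar{x}, \bar{y} \in \bar\Delta:
d_X(x, y) \;\le\; d_{\mathbb{E}^2}(\bar{x}, \bar{y}).
% X is non-positively curved (locally CAT(0)) if every point has a
% neighbourhood in which all geodesic triangles satisfy this inequality.
```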
Riemann surfaces
If $M$ is a closed, orientable Riemann surface then it follows from the uniformization theorem that $M$ may be endowed with a complete Riemannian metric with constant Gaussian curvature of either $1$, $0$ or $-1$. As a result of the Gauss–Bonnet theorem, one can determine that the surfaces which have a complete Riemannian metric of non-positive constant curvature (that is, constant curvature $0$ or $-1$) are exactly those whose genus is at least $1$. The uniformization theorem and the Gauss–Bonnet theorem can both be applied to orientable Riemann surfaces with boundary to show that those surfaces which have a non-positive Euler characteristic are exactly those which admit a Riemannian metric of non-positive curvature. There is therefore an infinite family of homeomorphism types of such surfaces, whereas the Riemann sphere is the only closed, orientable Riemann surface of constant Gaussian curvature $1$.
The definition of curvature above depends upon the existence of a Riemannian metric and therefore lies in the field of geometry. However, the Gauss–Bonnet theorem ensures that the topology of a surface places constraints on the complete Riemannian metrics which may be imposed on a surface, so the study of metric spaces of non-positive curvature is of vital interest in both the mathematical fields of geometry and topology. Classical examples of surfaces of non-positive curvature are the Euclidean plane and the flat torus (for curvature $0$) and the hyperbolic plane and the pseudosphere (for curvature $-1$). For this reason these metrics, as well as the Riemann surfaces on which they lie as complete metrics, are referred to as Euclidean and hyperbolic respectively.
Generalizations
The characteristic features of the geometry of non-positively curved Riemann surfaces are used to generalize the notion of non-positive curvature beyond the study of Riemann surfaces. In the study of manifolds or orbifolds of higher dimension, the notion of sectional curvature is used, wherein one restricts one's attention to two-dimensional subspaces of the tangent space at a given point. In dimensions greater than $2$, the Mostow–Prasad rigidity theorem ensures that a hyperbolic manifold of finite volume has a unique complete hyperbolic metric, so the study of hyperbolic geometry in this setting is integral to the study of topology.
In an arbitrary geodesic metric space the notions of being Gromov hyperbolic or of being a CAT(0) space generalise the observation that, on a Riemann surface of non-positive curvature, triangles whose sides are geodesics appear thin, whereas in settings of positive curvature they appear fat. This notion of non-positive curvature can also be applied to graphs and is therefore of great use in the fields of combinatorics and geometric group theory.
See also
Margulis lemma
References
Curvature (mathematics)
Hyperbolic geometry
Metric geometry | Non-positive curvature | [
"Physics"
] | 689 | [
"Geometric measurement",
"Physical quantities",
"Curvature (mathematics)"
] |
12,993,112 | https://en.wikipedia.org/wiki/Magic%20angle%20%28EELS%29 | The magic angle is a particular value of the collection angle of an electron microscope at which the measured energy-loss spectrum "magically" becomes independent of the tilt angle of the sample with respect to the beam direction. The magic angle is not uniquely defined for isotropic samples, but the definition is unique in the (typical) case of small angle scattering on materials with a "c-axis", such as graphite.
The "magic" angle depends on both the incoming electron energy (which is typically fixed) and the energy loss suffered by the electron. The ratio of the magic angle to the characteristic angle is roughly independent of the energy loss and roughly independent of the particular type of sample considered.
Mathematical definition
For the case of a relativistic incident electron, the "magic" angle is defined by the equality of two different functions of the collection angle. Both functions are integrals over the momentum transfer: the first collects the contributions of momentum transfers perpendicular to the beam direction, while the second collects those parallel to the beam direction. Here $\beta$ denotes the speed of the incoming electron divided by the speed of light (N.B., in the older literature the symbol $\beta$ is also often used to denote the collection angle instead). Both integrals may easily be evaluated in terms of elementary functions, but the integral forms make the perpendicular and parallel contributions explicit.
Using the above definition, the magic angle can then be evaluated in closed form.
References
Materials science
Spectroscopy
Chemical physics | Magic angle (EELS) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 302 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Molecular physics",
"Instrumental analysis",
"Materials science",
"nan",
"Spectroscopy",
"Chemical physics"
] |
12,994,056 | https://en.wikipedia.org/wiki/Evolutionary%20invasion%20analysis | Evolutionary invasion analysis, also known as adaptive dynamics, is a set of mathematical modeling techniques that use differential equations to study the long-term evolution of traits in asexually and sexually reproducing populations. It rests on the following three assumptions about mutation and population dynamics:
Mutations are infrequent. The population can be assumed to be at equilibrium when a new mutant arises.
The number of individuals with the mutant trait is initially negligible in the large, established resident population.
Mutant phenotypes are only slightly different from the resident phenotype.
Evolutionary invasion analysis makes it possible to identify conditions on model parameters for which the mutant population dies out, replaces the resident population, and/or coexists with the resident population. Long-term coexistence of the two phenotypes is known as evolutionary branching. When branching occurs, the mutant establishes itself as a second resident in the environment.
Central to evolutionary invasion analysis is the mutant's invasion fitness. This is a mathematical expression for the long-term exponential growth rate of the mutant subpopulation when it is introduced into the resident population in small numbers. If the invasion fitness is positive (in continuous time), the mutant population can grow in the environment set by the resident phenotype. If the invasion fitness is negative, the mutant population swiftly goes extinct.
Introduction and background
The basic principle of evolution via natural selection was outlined by Charles Darwin in his 1859 book, On the Origin of Species. Though controversial at the time, the central ideas remain largely unchanged to this date, even though much more is now known about the biological basis of inheritance. Darwin expressed his arguments verbally, but many attempts have since then been made to formalise the theory of evolution. The best known are population genetics which models inheritance at the expense of ecological detail, quantitative genetics which incorporates quantitative traits influenced by genes at many loci, and evolutionary game theory which ignores genetic detail but incorporates a high degree of ecological realism, in particular that the success of any given strategy depends on the frequency at which strategies are played in the population, a concept known as frequency dependence.
Adaptive dynamics is a set of techniques developed during the 1990s for understanding the long-term consequences of small mutations in the traits expressing the phenotype. They link population dynamics to evolutionary dynamics and incorporate and generalise the fundamental idea of frequency-dependent selection from game theory.
Fundamental ideas
Two fundamental ideas of adaptive dynamics are that the resident population is in a dynamical equilibrium when new mutants appear, and that the eventual fate of such mutants can be inferred from their initial growth rate when rare in the environment consisting of the resident. This rate is known as the invasion exponent when measured as the initial exponential growth rate of mutants, and as the basic reproductive number when it measures the expected total number of offspring that a mutant individual produces in a lifetime. It is sometimes called the invasion fitness of mutants.
To make use of these ideas, a mathematical model must explicitly incorporate the traits undergoing evolutionary change. The model should describe both the environment and the population dynamics given the environment, even if the variable part of the environment consists only of the demography of the current population. The invasion exponent can then be determined. This can be difficult, but once determined, the adaptive dynamics techniques can be applied independent of the model structure.
Monomorphic evolution
A population consisting of individuals with the same trait is called monomorphic. If not explicitly stated otherwise, the trait is assumed to be a real number, and r and m are the trait value of the monomorphic resident population and that of an invading mutant, respectively.
Invasion exponent and selection gradient
The invasion exponent, written here as $s(r, m)$ for a mutant trait $m$ in a resident population with trait $r$, is defined as the expected growth rate of an initially rare mutant in the environment set by the resident ($r$), where the environment means the frequency of each phenotype (trait value) whenever this suffices to infer all other aspects of the equilibrium environment, such as the demographic composition and the availability of resources. For each $r$, the invasion exponent can be thought of as the fitness landscape experienced by an initially rare mutant. The landscape changes with each successful invasion, as is the case in evolutionary game theory, but in contrast with the classical view of evolution as an optimisation process towards ever higher fitness.
We will always assume that the resident is at its demographic attractor, and as a consequence $s(r, r) = 0$ for all $r$, as otherwise the population would grow indefinitely.
The selection gradient is defined as the slope of the invasion exponent at $m = r$:
$g(r) = \left.\frac{\partial s(r, m)}{\partial m}\right|_{m = r}.$
If the sign of the selection gradient is positive (negative), mutants with slightly higher (lower) trait values may successfully invade. This follows from the linear approximation
$s(r, m) \approx s(r, r) + (m - r)\,g(r) = (m - r)\,g(r),$
which holds whenever $|m - r|$ is small.
Pairwise-invasibility plots
The invasion exponent represents the fitness landscape as experienced by a rare mutant. In a large (infinite) population only mutants with trait values $m$ for which $s(r, m)$ is positive are able to successfully invade. The generic outcome of an invasion is that the mutant replaces the resident, and the fitness landscape as experienced by a rare mutant changes. To determine the outcome of the resulting series of invasions, pairwise-invasibility plots (PIPs) are often used. These show, for each resident trait value $r$, all mutant trait values $m$ for which $s(r, m)$ is positive. Note that $s(r, m)$ is zero on the diagonal $m = r$. In PIPs, the fitness landscape as experienced by a rare mutant corresponds to a vertical line along which the resident trait value is constant.
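A pairwise-invasibility plot is easy to generate numerically once an invasion exponent is specified. The following Python sketch uses a hypothetical invasion fitness (not from the text), chosen only so that s(r, r) = 0, and shades the region where s(r, m) > 0:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical invasion exponent s(r, m), constructed so that s(r, r) = 0.
def s(r, m):
    return (m - r) * (1.0 - r) - 0.5 * (m - r) ** 2

r = np.linspace(0.0, 2.0, 400)
m = np.linspace(0.0, 2.0, 400)
R, M = np.meshgrid(r, m)

# Shade the region where a rare mutant m can invade resident r (s > 0).
plt.contourf(R, M, s(R, M) > 0, levels=[0.5, 1.5], colors=["0.7"])
plt.plot(r, r, "k--", lw=1)  # the diagonal m = r, where s = 0
plt.xlabel("resident trait r")
plt.ylabel("mutant trait m")
plt.title("Pairwise-invasibility plot (toy model)")
plt.show()
```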
Evolutionarily singular strategies
The selection gradient determines the direction of evolutionary change. If it is positive (negative), a mutant with a slightly higher (lower) trait value will generically invade and replace the resident. But what will happen if $g(r)$ vanishes? Seemingly evolution should come to a halt at such a point. While this is a possible outcome, the general situation is more complex. Traits or strategies $r^*$ for which $g(r^*) = 0$ are known as evolutionarily singular strategies. Near such points the fitness landscape as experienced by a rare mutant is locally 'flat'. There are three qualitatively different ways in which this can occur. First, a degenerate case similar to the saddle point of a cubic function, where finite evolutionary steps would lead past the local 'flatness'. Second, a fitness maximum, which is known as an evolutionarily stable strategy (ESS) and which, once established, cannot be invaded by nearby mutants. Third, a fitness minimum, where disruptive selection occurs and the population branches into two morphs. This process is known as evolutionary branching. In a pairwise-invasibility plot the singular strategies are found where the boundary of the region of positive invasion fitness intersects the diagonal.
Singular strategies can be located and classified once the selection gradient is known. To locate singular strategies, it is sufficient to find the points at which the selection gradient vanishes, i.e. to find $r^*$ such that $g(r^*) = 0$. These can then be classified using the second derivative test from basic calculus. If the second derivative of the invasion exponent with respect to the mutant trait, evaluated at $r^*$, is negative (positive), the strategy represents a local fitness maximum (minimum). Hence, for an evolutionarily stable strategy we have
$\left.\frac{\partial^2 s(r, m)}{\partial m^2}\right|_{m = r = r^*} < 0.$
If this does not hold, the strategy is evolutionarily unstable and, provided that it is also convergence stable, evolutionary branching will eventually occur. For a singular strategy to be convergence stable, monomorphic populations with slightly lower or slightly higher trait values must be invadable by mutants with trait values closer to $r^*$. For this to happen, the selection gradient in a neighbourhood of $r^*$ must be positive for $r < r^*$ and negative for $r > r^*$. This means that the slope of $g$ as a function of $r$ at $r^*$ is negative, or equivalently
$g'(r^*) < 0.$
The criterion for convergence stability given above can also be expressed using second derivatives of the invasion exponent (at the singular point it amounts to $\partial^2 s/\partial m^2 < \partial^2 s/\partial r^2$), and the classification can be refined to span more than the simple cases considered here.
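The location and classification of singular strategies translate directly into a few lines of numerics. The sketch below reuses the hypothetical invasion fitness from the plot above and applies the criteria just stated; the finite-difference step and bracketing interval are arbitrary choices:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical invasion fitness with s(r, r) = 0, as in the PIP sketch above.
def s(r, m):
    return (m - r) * (1.0 - r) - 0.5 * (m - r) ** 2

h = 1e-4  # finite-difference step

def g(r):
    """Selection gradient g(r) = ds/dm at m = r (central difference)."""
    return (s(r, r + h) - s(r, r - h)) / (2 * h)

r_star = brentq(g, 0.0, 2.0)  # singular strategy: g(r*) = 0

# Second derivatives of s at m = r = r*.
d2s_dm2 = (s(r_star, r_star + h) - 2 * s(r_star, r_star) + s(r_star, r_star - h)) / h**2
d2s_dr2 = (s(r_star + h, r_star) - 2 * s(r_star, r_star) + s(r_star - h, r_star)) / h**2

is_ess = d2s_dm2 < 0                        # local fitness maximum: uninvadable
is_convergence_stable = d2s_dm2 < d2s_dr2   # gradual evolution approaches r*

print(f"r* = {r_star:.3f}, ESS: {is_ess}, convergence stable: {is_convergence_stable}")
# A convergence-stable non-ESS would instead signal evolutionary branching.
```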
Polymorphic evolution
The normal outcome of a successful invasion is that the mutant replaces the resident. However, other outcomes are also possible; in particular both the resident and the mutant may persist, and the population then becomes dimorphic. Assuming that a trait persists in the population if and only if its expected growth rate when rare is positive, the condition for coexistence of two traits $r_1$ and $r_2$ is
$s(r_1, r_2) > 0$
and
$s(r_2, r_1) > 0,$
where $r_1$ and $r_2$ are often referred to as morphs. Such a pair is a protected dimorphism. The set of all protected dimorphisms is known as the region of coexistence. Graphically, the region consists of the overlapping parts when a pairwise-invasibility plot is mirrored over the diagonal.
Invasion exponent and selection gradients in polymorphic populations
The invasion exponent is generalised to dimorphic populations straightforwardly, as the expected growth rate $s(r_1, r_2, m)$ of a rare mutant $m$ in the environment set by the two morphs $r_1$ and $r_2$. The slope of the local fitness landscape for a mutant close to $r_1$ or $r_2$ is now given by the selection gradients
$g_1(r_1, r_2) = \left.\frac{\partial s(r_1, r_2, m)}{\partial m}\right|_{m = r_1}$
and
$g_2(r_1, r_2) = \left.\frac{\partial s(r_1, r_2, m)}{\partial m}\right|_{m = r_2}.$
In practice, it is often difficult to determine the dimorphic selection gradient and invasion exponent analytically, and one often has to resort to numerical computations.
Evolutionary branching
The emergence of protected dimorphisms near singular points during the course of evolution is not unusual, but its significance depends on whether selection is stabilising or disruptive. In the latter case, the traits of the two morphs will diverge in a process often referred to as evolutionary branching. Geritz et al. (1998) present a compelling argument that disruptive selection only occurs near fitness minima. To understand this heuristically, consider a dimorphic population with traits $r_1$ and $r_2$ near a singular point $r^*$. By continuity, $s(r_1, r_2, m)$ is close to $s(r^*, m)$, and since
$s(r_1, r_2, r_1) = s(r_1, r_2, r_2) = 0,$
the fitness landscape for the dimorphic population must be a perturbation of that for a monomorphic resident near the singular strategy.
Trait evolution plots
Evolution after branching is illustrated using trait evolution plots. These show the region of coexistence, the direction of evolutionary change and whether points where the selection gradient vanishes are fitness maxima or minima. Evolution may well lead the dimorphic population outside the region of coexistence, in which case one morph is extinct and the population once again becomes monomorphic.
Other uses
Adaptive dynamics effectively combines game theory and population dynamics. As such, it can be very useful in investigating how evolution affects the dynamics of populations. One interesting finding to come out of this is that individual-level adaptation can sometimes result in the extinction of the whole population/species, a phenomenon known as evolutionary suicide.
References
External links
Diekmann, Odo (2004). A beginner's guide to adaptive dynamics.
Metz, J.A.J.; Geritz, S.A.H.; Meszéna, G.; Jacobs, F.J.A.; van Heerwaarden, J.S. (September 1995). Adaptive Dynamics: A Geometrical Study of the Consequences of Nearly Faithful Reproduction.
Brännström, Åke; Johansson, Jacob; von Festenberg, Niels (24 June 2013). The Hitchhiker's guide to adaptive dynamics.
Adaptive Dynamics Papers, a list of academic papers about adaptive dynamics.
Hauert, Christof (2004). The origin of cooperators and defectors, an interactive tutorial introducing adaptive dynamics from a game theoretical perspective.
Evolutionary dynamics
Differential equations | Evolutionary invasion analysis | [
"Mathematics"
] | 2,253 | [
"Mathematical objects",
"Differential equations",
"Equations"
] |