| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
2,853,911 | https://en.wikipedia.org/wiki/Neuraminidase%20inhibitor | Neuraminidase inhibitors (NAIs) are a class of drugs which block the neuraminidase enzyme. They are a commonly used antiviral drug type against influenza. Viral neuraminidases are essential for influenza reproduction, facilitating viral budding from the host cell. Oseltamivir (Tamiflu), zanamivir (Relenza), laninamivir (Inavir), and peramivir belong to this class. Unlike the M2 inhibitors, which work only against the influenza A virus, NAIs act against both influenza A and influenza B.
The NAIs oseltamivir and zanamivir were approved in the US and Europe for treatment and prevention of influenza A and B. Peramivir acts by binding strongly to the neuraminidase of influenza viruses and inhibits neuraminidase activity for much longer than oseltamivir or zanamivir. Laninamivir, by contrast, is taken up by cells and slowly released into the respiratory tract, resulting in long-lasting anti-influenza virus activity; the mechanism of laninamivir's long-lasting activity is thus fundamentally different from that of peramivir.
The efficacy of NAIs was highly debated in recent years. However, after the pandemic caused by H1N1 in 2009, the effectiveness of early treatment with neuraminidase inhibitors in reducing serious cases and deaths was reported in various countries.
In countries where influenza-like illness is treated with NAIs at a national level, statistical reports show low fatality rates for symptomatic illness because of the universal implementation of early treatment using this class of drugs. Although oseltamivir is widely used in these countries, there have been no outbreaks caused by oseltamivir-resistant viruses, and no serious illness caused by such resistant viruses has been reported. The United States Centers for Disease Control and Prevention continues to recommend oseltamivir treatment for people at high risk of complications, for the elderly, and for those at lower risk who present within 48 hours of the first symptoms of infection.
Common side effects include nausea and vomiting. The abnormal behaviors reported in children after taking oseltamivir may be an extension of the delirium or hallucinations caused by influenza itself; they occur in the early stages of the illness, typically within 48 hours after onset. Children with influenza are therefore advised to be observed by their parents until 48 hours after the onset of the illness, regardless of whether the child is treated with NAIs.
Specific neuraminidase inhibitors
Laninamivir
Oseltamivir (Tamiflu)
Peramivir (Rapivab)
Zanamivir (Relenza)
Structures of the viral neuraminidase inhibitors in use
Natural products
Cyanidin-3-sambubioside (extracted from black elderberry)
Coptisine
Berberine
See also
Discovery and development of neuraminidase inhibitors
References
External links
Carbohydrate chemistry | Neuraminidase inhibitor | [
"Chemistry"
] | 617 | [
"Neuraminidase inhibitors",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
2,854,628 | https://en.wikipedia.org/wiki/Ambient%20space%20%28mathematics%29 | In mathematics, especially in geometry and topology, an ambient space is the space surrounding a mathematical object along with the object itself. For example, a 1-dimensional line l may be studied in isolation, in which case the ambient space of l is l; or it may be studied as an object embedded in 2-dimensional Euclidean space R², in which case the ambient space of l is R²; or as an object embedded in 2-dimensional hyperbolic space H², in which case the ambient space of l is H². To see why this makes a difference, consider the statement "Parallel lines never intersect." This is true if the ambient space is R², but false if the ambient space is H², because the geometric properties of R² are different from the geometric properties of H². All spaces are subsets of their ambient space.
See also
Configuration space
Geometric space
Manifold and ambient manifold
Submanifolds and Hypersurfaces
Riemannian manifolds
Ricci curvature
Differential form
Further reading
Geometry
Topology | Ambient space (mathematics) | [
"Physics",
"Mathematics"
] | 188 | [
"Topology",
"Space",
"Geometry",
"Geometry stubs",
"Spacetime"
] |
2,854,670 | https://en.wikipedia.org/wiki/Genetic%20use%20restriction%20technology | Genetic use restriction technology (GURT), also known as terminator technology or suicide seeds, is designed to restrict access to "genetic materials and their associated phenotypic traits." The technology works by activating (or deactivating) specific genes using a controlled stimulus in order to cause second generation seeds to be either infertile or to not have one or more of the desired traits of the first generation plant. GURTs can be used by agricultural firms to enhance protection of their innovations in genetically modified organisms by making it impossible for farmers to reproduce the desired traits on their own. Another possible use is to prevent the escape of genes from genetically modified organisms into the surrounding environment.
The technology was originally developed under a cooperative research and development agreement between the Agricultural Research Service of the United States Department of Agriculture and Delta & Pine Land Company in the 1990s. The purpose of the development was to protect the intellectual property of biotechnology firms that the United States Department of Agriculture viewed as being a specifically American technological competence. The technology, while still being developed, is not yet commercially available due to the political and scientific controversies that accompanied its development.
GURT was first reported on by the Subsidiary Body on Scientific, Technical and Technological Advice (SBSTTA) to the UN Convention on Biological Diversity and discussed during the 8th Conference of the Parties to the United Nations Convention on Biological Diversity in Curitiba, Brazil, March 20–31, 2006.
Process
The GURT process is typically composed of four genetic components: a target gene, a promoter, a trait switch, and a genetic switch, sometimes with slightly different names given in different papers. A typical GURT involves engineering a plant whose DNA contains a target gene that is expressed when activated by a promoter gene. The promoter, however, is separated from the target gene by a blocker sequence that prevents it from accessing the target. When the plant receives a given external input, a genetic switch in the plant takes the input, amplifies it, and converts it into a biological signal. When a trait switch receives the amplified signal, it creates an enzyme that cuts the blocker sequence out. With the blocker sequence eliminated, the promoter gene allows the target gene to express itself in the plant.
In other versions of the process, an operator must bind to the trait switch in order for it to make the enzymes that cut out the blocker sequence. However, there are repressors that bind to the trait switch and prevent it from doing so. In this case, when the external input is applied, the repressors bond to the input instead of to the trait switch, allowing the enzymes to be created that cut the blocker sequence, thereby allowing the trait to be expressed.
Other GURTs embody alternative approaches, such as letting the genetic switch directly affect the blocker sequence and bypass the need for a trait switch.
Variants
There are two broad categories of GURTs: Variety-specific genetic use restriction technologies (V-GURTs) and Trait-specific genetic use restriction technologies (T-GURTs). The two variants have been described as follows: V-GURTs are designed to restrict the use of all genetic materials contained in an entire plant variety. Prior to being sold to growers, the seeds of V-GURTs are activated by the seed company. The seeds can germinate, and the plants grow and reproduce normally, but their offspring will be sterile... Thus, farmers could not save seed from year-to-year to replant. In contrast, T-GURTs only restrict the use of particular traits conferred by a transgene, but seeds are fertile. Growers could replant seed from the previous harvest, but they would not contain the transgenic trait.
Variety specific GURTs or V-GURTs
Variety-specific genetic use restriction technologies destroy seed development and plant fertility by means of a "genetic process triggered by a chemical inducer that will allow the plant to grow and to form seeds, but will cause the embryo of each of those seeds to produce a cell toxin that will prevent its germination if replanted, thus causing second generation seeds to be sterile... ." The toxin degrades the DNA or RNA of the plant. Thus, the seed from the crop is not viable and cannot be used as seeds to produce subsequent crops, but only for sale as food or fodder.
Trait specific GURTs or T-GURTs
Trait specific genetic use restriction technologies modify a crop in such a way that the genetic enhancement engineered into the crop does not function until the plant is treated with a specific chemical. The chemical acts as the external input, activating the target gene. One difference in T-GURTs is the possibility that the gene could be toggled on and off with different chemical inputs, resulting in the same toggling on or off an associated trait. With T-GURTs, seeds could possibly be saved for planting with a condition that the new plants do not get any enhanced traits unless the external input is added.
Benefits and risks
GURTs have a number of potential uses, though they have not yet been used in commercial agricultural products available on the market or in pharmaceutical applications. These uses include protection of intellectual property for biotechnological innovations, and bio-confinement (preventing escape of genetically engineered genes into nature).
Intellectual property protection
The original aim of the developers of GURTs was the protection of intellectual property in agricultural biotechnology. That is, the developers sought to prevent farmers from reusing patented seeds in cases where patents for biological innovations did not exist or could not be easily enforced. This problem does not generally arise for farmers using hybrid seeds, which in any case are not fertile or do not breed true and thus cannot be used to grow subsequent crops. However, V-GURTs make it impossible for farmers to use seeds they have produced to grow crops in subsequent seasons because the entire genome of the targeted cells is destroyed. The T-GURTs could be used by seed companies to allow for the commercialisation of seeds that are fertile, but that develop into plants with desired traits only when sprayed with an activator chemical sold by the company.
Bio-confinement
An ongoing fear raised by GURTs and other biotechnologies is that the genes of genetically modified plants might escape into nature via sexual reproduction with compatible wild plants or with other cultivated plants. This is known as 'transgene escape' and is among the highest priority risks posed by genetic engineering of plants. This risk of escape is one of the reasons that the GURT process has not yet been used in commercial applications (indeed, the main producing companies have vowed to not commercialise these products, though they still have related research programs). Ironically, GURTs – themselves a process for the genetic modification of plants – may also be used to secure the 'bio-confinement' of the transgenes of genetically modified plants. GURTs, because they control plant fertility in various ways, could be used to prevent the escape of transgenes into wild relatives and help reduce risks of deleterious impacts on biodiversity. For bio-confinement, both "V- and T GURTs could be targeted to reproductive tissues, most typically pollen and seed (or embryo)." Crops modified to produce non-food products (e.g. in pharmacology, therapeutic proteins, monoclonal antibodies and vaccines) could be armed with GURTs to prevent accidental transmission of these traits into crops meant for foods.
Other uses
Another possible advantage is that non-viable seeds produced on V-GURT plants may reduce the propagation of volunteer plants. Volunteer plants can become an economic problem for larger-scale mechanized farming systems that incorporate crop rotation. Furthermore, under warm, wet harvest conditions non-V-GURT grain can sprout, lowering the quality of grain produced. It is likely that this problem would not occur with the use of V-GURT grain varieties.
Another proposed use is in synthetic biology, where a restricted activator chemical must be added to the fermentation medium to produce a desired output chemical.
Controversy
As of 2006, GURT seeds have not been commercialized anywhere in the world due to opposition from farmers, consumers, indigenous peoples, NGOs, and some governments. Companies that manufacture genetic use restriction technologies could potentially acquire an advantageous position vis-à-vis farmers, because the seeds sold could not be resown. V-GURTs would not have an immediate impact on the many farmers who use hybrid seeds, as they do not produce their own planting seeds, buying instead specialized hybrid seeds from seed production companies. However, approximately 80 percent of farmers in Brazil and Pakistan grow crops using seeds saved from previous harvests. Another concern is that farmers purchasing the seeds would be greatly impacted, given they would have to buy new seeds every year. It has been argued that this would result in higher food prices.
Some analysts have expressed concerns that GURT seeds might adversely impact biodiversity and threaten native species of plants. However, proponents of the technology dispute these claims, arguing that because non-GMO hybrid plants are used in the same way and GURT seeds could help farmers deal with cross pollination, the benefits outweigh the potential negatives.
In 2000, the United Nations Convention on Biological Diversity recommended a de facto moratorium on field-testing and commercial sale of terminator seeds; the moratorium was re-affirmed and the language strengthened in March 2006, at the COP8 meeting of the UNCBD. Specifically, the moratorium recommended that, due to a lack of research on the technology's potential risks, no field testing of GURTs nor products using them should be allowed until there was a sufficiently justified reason to do so. India and Brazil have passed national laws to prohibit the technology.
See also
Cartagena Protocol on Biosafety
Diamond v. Chakrabarty
Digital rights management
Genetic pollution
Genetically modified organism
Seed saving
Transgenic maize
References
External links
- UNEP/CBD/COP/5/2 - 11 November 1999 - Mention of genetic use restriction tech on pages 22, 42
UN Convention on Biological Diversity - Cartagena Protocol on Biosafety
USPTO Patent Number 5,723,765 - method for producing a seed incapable of germination, (claim no. 10)
Genetic engineering
Genetics techniques | Genetic use restriction technology | [
"Chemistry",
"Engineering",
"Biology"
] | 2,099 | [
"Genetics techniques",
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
2,855,255 | https://en.wikipedia.org/wiki/Limits%20of%20computation | The limits of computation are governed by a number of different factors. In particular, there are several physical and practical limits to the amount of computation or data storage that can be performed with a given amount of mass, volume, or energy.
Hardware limits or physical limits
Processing and memory density
The Bekenstein bound limits the amount of information that can be stored within a spherical volume to the entropy of a black hole with the same surface area.
Thermodynamics limits the data storage of a system based on its energy, number of particles, and particle modes. In practice, it is a stronger bound than the Bekenstein bound.
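For a sense of scale of the Bekenstein bound mentioned above, here is a small numerical sketch (added for illustration, not part of the article); it uses the standard form of the bound, I ≤ 2πRE/(ħc ln 2), with R the radius of the enclosing sphere and E the total energy:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def bekenstein_bound_bits(radius_m, energy_J):
    """Upper bound (in bits) on the information inside a sphere of given radius and total energy."""
    return 2 * math.pi * radius_m * energy_J / (hbar * c * math.log(2))

# Example: 1 kg of mass-energy (E = m c^2) confined within a sphere of radius 1 m
E = 1.0 * c**2
print(f"{bekenstein_bound_bits(1.0, E):.2e} bits")  # ~2.6e43 bits
```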
Processing speed
Bremermann's limit is the maximum computational speed of a self-contained system in the material universe, and is based on mass–energy versus quantum uncertainty constraints.
Communication delays
The Margolus–Levitin theorem sets a bound on the maximum computational speed per unit of energy: 6 × 10^33 operations per second per joule. This bound, however, can be avoided if there is access to quantum memory. Computational algorithms can then be designed that require arbitrarily small amounts of energy/time per one elementary computation step.
Energy supply
Landauer's principle defines a lower theoretical limit for energy consumption: kT ln 2 is consumed per irreversible state change, where k is the Boltzmann constant and T is the operating temperature of the computer. Reversible computing is not subject to this lower bound. T cannot, even in theory, be made lower than 3 kelvins, the approximate temperature of the cosmic microwave background radiation, without spending more energy on cooling than is saved in computation. However, on a timescale of 10^9–10^10 years, the cosmic microwave background radiation will be decreasing exponentially, which has been argued to eventually enable 10^30 times as many computations per unit of energy. Important parts of this argument have been disputed.
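As a quick numerical illustration of the Landauer bound (a sketch added here, not part of the original article; it simply evaluates the kT ln 2 expression above at two temperatures):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(T_kelvin):
    """Minimum energy (J) dissipated per irreversible bit operation at temperature T."""
    return k_B * T_kelvin * math.log(2)

for T in (300.0, 3.0):  # room temperature and roughly the CMB temperature
    print(f"T = {T:5.1f} K -> {landauer_limit(T):.2e} J per irreversible operation")
# ~2.9e-21 J at 300 K and ~2.9e-23 J at 3 K
```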
Building devices that approach physical limits
Several methods have been proposed for producing computing devices or data storage devices that approach physical and practical limits:
A cold degenerate star could conceivably be used as a giant data storage device, by carefully perturbing it to various excited states, in the same manner as an atom or quantum well used for these purposes. Such a star would have to be artificially constructed, as no natural degenerate stars will cool to this temperature for an extremely long time. It is also possible that nucleons on the surface of neutron stars could form complex "molecules", which some have suggested might be used for computing purposes, creating a type of computronium based on femtotechnology, which would be faster and denser than computronium based on nanotechnology.
It may be possible to use a black hole as a data storage or computing device, if a practical mechanism for extraction of contained information can be found. Such extraction may in principle be possible (Stephen Hawking's proposed resolution to the black hole information paradox). This would achieve storage density exactly equal to the Bekenstein bound. Seth Lloyd calculated the computational abilities of an "ultimate laptop" formed by compressing a kilogram of matter into a black hole of radius 1.485 × 10^-27 meters, concluding that it would only last about 10^-19 seconds before evaporating due to Hawking radiation, but that during this brief time it could compute at a rate of about 5 × 10^50 operations per second, ultimately performing about 10^32 operations on 10^16 bits (~1 PB). Lloyd notes that "Interestingly, although this hypothetical computation is performed at ultra-high densities and speeds, the total number of bits available to be processed is not far from the number available to current computers operating in more familiar surroundings."
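As a rough cross-check of these figures (a sketch added here, not from the article), the Margolus–Levitin rate 2E/(πħ) applied to the mass–energy of one kilogram reproduces both the ~6 × 10^33 operations per second per joule quoted earlier and Lloyd's ~5 × 10^50 operations per second:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

# Margolus–Levitin bound: at most 2E/(pi*hbar) elementary operations per second.
print(f"{2 / (math.pi * hbar):.2e} ops/s per joule")       # ~6.0e33

E = 1.0 * c**2  # mass-energy of 1 kg, E = m c^2, in joules
rate = 2 * E / (math.pi * hbar)
print(f"{rate:.2e} ops/s for the 1 kg 'ultimate laptop'")   # ~5.4e50

# Over the ~1e-19 s evaporation time quoted above:
print(f"{rate * 1e-19:.2e} total operations")               # ~5e31, i.e. of order 10^32
```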
In The Singularity Is Near, Ray Kurzweil cites the calculations of Seth Lloyd that a universal-scale computer is capable of 10^90 operations per second. The mass of the universe can be estimated at 3 × 10^52 kilograms. If all matter in the universe was turned into a black hole, it would have a lifetime of 2.8 × 10^139 seconds before evaporating due to Hawking radiation. During that lifetime such a universal-scale black hole computer would perform 2.8 × 10^229 operations.
Abstract limits in computer science
In the field of theoretical computer science, the computability and complexity of computational problems are central concerns. Computability theory describes the degree to which problems are computable, whereas complexity theory describes the asymptotic degree of resource consumption. Computational problems are therefore confined to complexity classes. The arithmetical hierarchy and the polynomial hierarchy classify the degree to which problems are, respectively, computable and computable in polynomial time. For instance, one level of the arithmetical hierarchy classifies the computable partial functions; moreover, the hierarchy is strict, so classes at higher levels of the arithmetical hierarchy classify functions that are strictly uncomputable.
Loose and tight limits
Many limits derived in terms of physical constants and abstract models of computation in computer science are loose. Very few known limits directly obstruct leading-edge technologies, but many engineering obstacles currently cannot be explained by closed-form limits.
See also
Digital physics
Hypercomputation
Matrioshka brain
Physics of computation
Programmable matter
Quantum computing
Supertask
Transcomputational problem
References
Theory of computation | Limits of computation | [
"Physics"
] | 1,087 | [
"Physical phenomena",
"Limits of computation"
] |
2,856,006 | https://en.wikipedia.org/wiki/Melt%20flow%20index | The Melt Flow Index (MFI) is a measure of the ease of flow of the melt of a thermoplastic polymer. It is defined as the mass of polymer, in grams, flowing in ten minutes through a capillary of a specific diameter and length by a pressure applied via prescribed alternative gravimetric weights for alternative prescribed temperatures. Polymer processors usually correlate the value of MFI with the polymer grade that they have to choose for different processes, and most often this value is not accompanied by the units, because it is taken for granted to be g/10min. Similarly, the test conditions of MFI measurement are normally expressed in kilograms rather than any other units. The method is described in the similar standards ASTM D1238 and ISO 1133.
Melt flow rate is a measure of the ability of the material's melt to flow under pressure, and is an indirect measure of molecular weight, with high melt flow rate corresponding to low molecular weight. Melt flow rate is inversely proportional to viscosity of the melt at the conditions of the test, though it should be borne in mind that the viscosity for any such material depends on the applied force. Ratios between two melt flow rate values for one material at different gravimetric weights are often used as a measure for the broadness of the molecular weight distribution.
Melt flow rate is very commonly used for polyolefins, polyethylene being measured at 190 °C and polypropylene at 230 °C. The plastics engineer should choose a material with a melt index high enough that the molten polymer can be easily formed into the article intended, but low enough that the mechanical strength of the final article will be sufficient for its use.
Measurement
ISO standard 1133-1 governs the procedure for measurement of the melt flow rate. The procedure for determining MFI is as follows:
A small amount of the polymer sample (around 4 to 5 grams) is taken in the specially designed MFI apparatus. A die with an opening of typically around 2 mm diameter is inserted into the apparatus.
The material is packed properly inside the barrel to avoid formation of air pockets.
A piston is introduced which acts as the medium that causes extrusion of the molten polymer.
The sample is preheated for a specified amount of time: 5 min at 190 °C for polyethylene and 6 min at 230 °C for polypropylene.
After the preheating a specified weight is introduced onto the piston. Examples of standard weights are 2.16 kg, 5 kg, etc.
The weight exerts a force on the molten polymer and it immediately starts flowing through the die.
A sample of the melt is taken after the desired period of time and is weighed accurately.
MFI is expressed in grams of polymer per 10 minutes of duration of the test.
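A minimal sketch of the arithmetic behind this procedure (added here for illustration; the cut masses, cut interval, and test conditions below are hypothetical): since MFI is reported in g/10 min, timed cut-offs of the extrudate are simply scaled up to a 600-second basis.

```python
def melt_flow_index(cut_masses_g, cut_interval_s):
    """Convert timed extrudate cut-offs into a melt flow rate in g/10 min (600 s)."""
    mean_mass_g = sum(cut_masses_g) / len(cut_masses_g)
    return mean_mass_g * 600.0 / cut_interval_s

# Hypothetical run: five cut-offs taken every 30 s (e.g. under a 2.16 kg load at 190 °C)
cuts = [0.181, 0.179, 0.183, 0.180, 0.182]
print(f"MFI = {melt_flow_index(cuts, 30.0):.2f} g/10 min")  # ~3.62 g/10 min
```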
Synonyms of Melt Flow Index are Melt Flow Rate and Melt Index. More commonly used are their abbreviations: MFI, MFR and MI.
Confusingly, MFR may also indicate "melt flow ratio", the ratio between two melt flow rates at different gravimetric weights. More accurately, this should be reported as FRR (flow rate ratio), or simply flow ratio. FRR is commonly used as an indication of the way in which rheological behavior is influenced by the molecular weight distribution of the material.
formerly: (MFI = Melt Flow Index) → currently: (MFR = Melt mass-Flow Rate)
formerly: (MVI = Melt Volume Index) → currently: (MVR = Melt Volume-flow Rate)
formerly: (MFR = Melt Flow Ratio) → currently: (FRR = Flow Rate Ratio)
The flow parameter that is readily accessible to most processors is the MFI. MFI is often used to determine how a polymer will process. However, MFI takes no account of the shear, shear rate or shear history and as such is not a good measure of the processing window of a polymer. It is a single-point viscosity measurement at a relatively low shear rate and temperature. Earlier, it was often said that MFI gives a ‘dot’ when what polymer processors actually need is a ‘plot’. However, this is no longer entirely true, because of an approach developed for estimating the rheogram merely from knowledge of the MFI.
The MFI device is not an extruder in the conventional polymer processing sense in that there is no screw to compress, heat and shear the polymer. MFI additionally does not take account of long chain branching nor the differences between shear and elongational rheology. Therefore, two polymers with the same MFI will not behave the same under any given processing conditions.
The relationship between MFI and temperature can be used to obtain flow activation energies for polymers. Activation energies derived from MFI values have the advantage of simplicity and easy availability. The concept of obtaining activation energy from MFI can be extended to copolymers as well, wherein there exists an anomalous temperature dependence of melt viscosity leading to two distinct values of activation energy for each copolymer.
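As a sketch of how such an activation energy can be extracted (added for illustration; the Arrhenius-type form MFI ∝ exp(−Ea/RT) and the two data points below are assumptions, not taken from the article):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def flow_activation_energy(mfi1, T1_K, mfi2, T2_K):
    """Ea (J/mol) from MFI measured at two absolute temperatures, assuming MFI ~ exp(-Ea/(R*T))."""
    return R * math.log(mfi2 / mfi1) / (1.0 / T1_K - 1.0 / T2_K)

# Hypothetical data: 2.0 g/10 min at 190 °C and 4.5 g/10 min at 230 °C
Ea = flow_activation_energy(2.0, 190 + 273.15, 4.5, 230 + 273.15)
print(f"Ea ≈ {Ea / 1000:.0f} kJ/mol")  # ≈ 39 kJ/mol for this made-up example
```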
For a detailed numerical simulation of the melt flow index, see the references.
Melt Flow Index Formula
MFR (formerly MFI) = mass of polymer melt, in grams, extruded in 10 minutes
Melt Flow Index Tester
References
Polymer chemistry
Polymer physics
Viscosity | Melt flow index | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,097 | [
"Polymer physics",
"Physical phenomena",
"Physical quantities",
"Materials science",
"Polymer chemistry",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties"
] |
2,856,466 | https://en.wikipedia.org/wiki/Strongly%20correlated%20material | Strongly correlated materials are a wide class of compounds that include insulators and electronic materials, and show unusual (often technologically useful) electronic and magnetic properties, such as metal-insulator transitions, heavy fermion behavior, half-metallicity, and spin-charge separation. The essential feature that defines these materials is that the behavior of their electrons or spinons cannot be described effectively in terms of non-interacting entities. Theoretical models of the electronic (fermionic) structure of strongly correlated materials must include electronic (fermionic) correlation to be accurate. As of recently, the label quantum materials is also used to refer to strongly correlated materials, among others.
Transition metal oxides
Many transition metal oxides belong to this class which may be subdivided according to their behavior, e.g. high-Tc, spintronic materials, multiferroics, Mott insulators, spin Peierls materials, heavy fermion materials, quasi-low-dimensional materials, etc. The single most intensively studied effect is probably high-temperature superconductivity in doped cuprates, e.g. La2−xSrxCuO4. Other ordering or magnetic phenomena and temperature-induced phase transitions in many transition-metal oxides are also gathered under the term "strongly correlated materials."
Electronic structures
Typically, strongly correlated materials have incompletely filled d- or f-electron shells with narrow energy bands. One can no longer consider any electron in the material as being in a "sea" of the averaged motion of the others (also known as mean field theory). Each single electron has a complex influence on its neighbors.
The term strong correlation refers to behavior of electrons in solids that is not well-described (often not even in a qualitatively correct manner) by simple one-electron theories such as the local-density approximation (LDA) of density-functional theory or Hartree–Fock theory. For instance, the seemingly simple material NiO has a partially filled 3d band (the Ni atom has 8 of 10 possible 3d-electrons) and therefore would be expected to be a good conductor. However, strong Coulomb repulsion (a correlation effect) between d electrons makes NiO instead a wide-band gap insulator. Thus, strongly correlated materials have electronic structures that are neither simply free-electron-like nor completely ionic, but a mixture of both.
Theories
Extensions to the LDA (LDA+U, GGA, SIC, GW, etc.) as well as simplified model Hamiltonians (e.g. Hubbard-like models) have been proposed and developed in order to describe phenomena that are due to strong electron correlation. Among them, dynamical mean field theory (DMFT) successfully captures the main features of correlated materials. Schemes that use both LDA and DMFT explain many experimental results in the field of correlated electrons.
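To make the role of such model Hamiltonians concrete, here is a minimal sketch (added for illustration, not from the article): exact diagonalization of a two-site Hubbard model with two electrons in the Sz = 0 sector. As the on-site repulsion U grows, doubly occupied ("charge") states are pushed up by roughly U while the ground state approaches the superexchange energy −4t²/U, which is the essence of the correlation-driven (Mott) insulating behavior described above for NiO.

```python
import numpy as np

def two_site_hubbard_spectrum(t, U):
    """
    Two-site Hubbard model, two electrons, Sz = 0 sector.
    Basis: |up,down on site 1>, |up,down on site 2>, |up on 1, down on 2>, |down on 1, up on 2>.
    Signs correspond to one fixed ordering of the fermion operators.
    """
    H = np.array([[U,   0.0, -t,  -t ],
                  [0.0, U,   -t,  -t ],
                  [-t,  -t,  0.0, 0.0],
                  [-t,  -t,  0.0, 0.0]])
    return np.linalg.eigvalsh(H)  # eigenvalues in ascending order

for U in (0.0, 4.0, 12.0):  # in units of the hopping t = 1
    print(f"U = {U:4.1f}:", np.round(two_site_hubbard_spectrum(1.0, U), 3))
# Exact spectrum: {0, U, U/2 +/- sqrt((U/2)^2 + 4 t^2)}; for large U the ground
# state tends to -4 t^2 / U while charge excitations sit roughly U higher.
```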
Structural studies
Experimentally, optical spectroscopy, high-energy electron spectroscopies, resonant photoemission, and more recently resonant inelastic (hard and soft) X-ray scattering (RIXS) and neutron spectroscopy have been used to study the electronic and magnetic structure of strongly correlated materials. Spectral signatures seen by these techniques that are not explained by one-electron density of states are often related to strong correlation effects. The experimentally obtained spectra can be compared to predictions of certain models or may be used to establish constraints on the parameter sets. One has for instance established a classification scheme of transition metal oxides within the so-called Zaanen–Sawatzky–Allen diagram.
Applications
The manipulation and use of correlated phenomena has applications like superconducting magnets and in magnetic storage (CMR) technologies. Other phenomena like the metal-insulator transition in VO2 have been explored as a means to make smart windows to reduce the heating/cooling requirements of a room. Furthermore, metal-insulator transitions in Mott insulating materials like LaTiO3 can be tuned through adjustments in band filling to potentially be used to make transistors that would use conventional field effect transistor configurations to take advantage of the material's sharp change in conductivity. Transistors using metal-insulator transitions in Mott insulators are often referred to as Mott transistors, and have been successfully fabricated using VO2 before, but they have required the larger electric fields induced by ionic liquids as a gate material to operate.
See also
Electronic correlation
Emergent behavior
References
Further reading
External links
Materials science
Condensed matter physics
Quantum mechanics
Magnetism | Strongly correlated material | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 932 | [
"Applied and interdisciplinary physics",
"Theoretical physics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"nan",
"Matter"
] |
34,516,380 | https://en.wikipedia.org/wiki/Piezotronics | Piezotronics effect is using the piezoelectric potential (piezopotential) created in materials with piezoelectricity as a “gate” voltage to tune/control the charge carrier transport properties for fabricating new devices.
Neil A Downie showed how simple it was to build simple demonstrations on a macro-scale using a sandwich of piezoelectric material and carbon piezoresistive material to make an FET-like amplifying device and put it in a book of science projects for students in 2006.
The fundamental principle of piezotronics was introduced by Prof. Zhong Lin Wang at Georgia Institute of Technology in 2007.
From 2006, a series of electronic devices have been demonstrated based on this effect, including piezopotential gated field-effect transistor, piezopotential gated diode, strain sensors, force/flow sensors, hybrid field-effect transistor, piezotronic logic gates, electromechanical memories, etc.
Piezotronic devices are regarded as a new semiconductor-device category. Piezotronics is likely to have important applications in sensor, human-silicon technology interfacing, MEMS, nanorobotics and active flexible electronics.
Mechanism
Due to the lack of central symmetry in materials such as the wurtzite-structured ZnO, GaN and InN, a piezopotential is created in the crystal by applying a stress. Owing to the simultaneous possession of piezoelectricity and semiconductor properties, the piezopotential created in the crystal has a strong effect on the carrier transport process.
Generally, the construction of basic piezotronic devices falls into two categories; nanowires are used as the example here. In the first kind, the piezoelectric nanowire is placed on a flexible substrate with its two ends fixed by electrodes. In this case, when the substrate is bent, the nanowire is purely stretched or compressed, and a piezopotential is introduced along its axis. This modifies the electric field, or the Schottky barrier (SB) height, at the contact areas: the positive piezopotential induced at one end reduces the SB height, while the negative piezopotential at the other end increases it, and the electric transport properties are changed accordingly. In the second kind of piezotronic device, one end of the nanowire is fixed to an electrode while the other end is free. In this case, when a force is applied at the free end to bend the nanowire, the piezopotential distribution is perpendicular to the axis of the nanowire. The induced piezoelectric field is perpendicular to the electron transport direction, just like applying a gate voltage in a traditional field-effect transistor, so the electron transport properties are also changed.
The materials for piezotronics should be piezoelectric semiconductors, such as ZnO, GaN and InN. Three-way coupling among piezoelectricity, photoexcitation and semiconductor properties is the basis of piezotronics (piezoelectricity–semiconductor coupling), piezophotonics (piezoelectricity–photon excitation coupling), optoelectronics, and piezophototronics (piezoelectricity–semiconductor–photoexcitation coupling). The core of these couplings relies on the piezopotential created by the piezoelectric materials.
See also
Piezoelectricity
Non linear piezoelectric effects in polar semiconductors
Wurtzite
- EU project
References
Microtechnology
Nanoelectronics
Semiconductor devices | Piezotronics | [
"Materials_science",
"Engineering"
] | 752 | [
"Nanotechnology",
"Nanoelectronics",
"Microtechnology",
"Materials science"
] |
34,518,260 | https://en.wikipedia.org/wiki/Piezophototronics | Piezo-phototronic effect is a three-way coupling effect of piezoelectric, semiconductor and photonic properties in non-central symmetric semiconductor materials, using the piezoelectric potential (piezopotential) that is generated by applying a strain to a semiconductor with piezoelectricity to control the carrier generation, transport, separation and/or recombination at metal–semiconductor junction or p–n junction for improving the performance of optoelectronic devices, such as photodetector, solar cell and light-emitting diode. Prof. Zhong Lin Wang at Georgia Institute of Technology proposed the fundamental principle of this effect in 2010.
Mechanism
When a p-type semiconductor and an n-type semiconductor form a junction, the holes on the p-type side and the electrons on the n-type side tend to redistribute around the interface to balance the local electric field, which results in a charge depletion layer. The diffusion and recombination of electrons and holes in the junction region are closely related to the optoelectronic properties of the device and are greatly affected by the local electric field distribution. The existence of piezo-charges at the interface introduces three effects: a shift in the local electronic band structure due to the introduced local potential, a tilt of the electronic band structure across the junction region owing to the polarization existing in the piezoelectric semiconductor, and a change in the charge depletion layer due to the redistribution of local charge carriers to balance the local piezo-charges. Positive piezoelectric charges at the junction lower the energy bands, and negative piezoelectric charges raise them, in the n-type semiconductor region near the junction. A modification of the local bands by the piezopotential may be effective for trapping charges, so that the electron–hole recombination rate can be largely enhanced, which is very beneficial for improving the efficiency of a light-emitting diode. Furthermore, the tilted bands tend to change the mobility of the carriers moving toward the junction.
The materials for piezo-phototronics should have three basic properties: piezoelectricity, semiconductor behavior, and photon excitation [5]. Typical materials are the wurtzite structures, such as ZnO, GaN and InN. The three-way coupling among piezoelectricity, photoexcitation and semiconductor properties is the basis of piezotronics (piezoelectricity–semiconductor coupling), piezophotonics (piezoelectricity–photon excitation coupling), optoelectronics, and piezo-phototronics (piezoelectricity–semiconductor–photoexcitation coupling). The core of these couplings relies on the piezopotential created by the piezoelectric materials.
Experimental realization
Van der Waals heterostructures based on graphene and transition metal dichalcogenides (TMDs) are promising for the realization of the piezophototronic effect. It has been shown that the photo-response of a graphene/MoS2 junction can be tuned by means of tensile stress, manifesting the piezophototronic effect in TMD devices.
References
Condensed matter physics
Electrical phenomena
Energy harvesting | Piezophototronics | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 680 | [
"Physical phenomena",
"Phases of matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Matter"
] |
33,034,771 | https://en.wikipedia.org/wiki/Defining%20equation%20%28physical%20chemistry%29 | In physical chemistry, there are numerous quantities associated with chemical compounds and reactions; notably in terms of amounts of substance, activity or concentration of a substance, and the rate of reaction. This article uses SI units.
Introduction
Theoretical chemistry requires quantities from core physics, such as time, volume, temperature, and pressure. But the highly quantitative nature of physical chemistry, in a more specialized way than core physics, uses molar amounts of substance rather than simply counting numbers; this leads to the specialized definitions in this article. Core physics itself rarely uses the mole, except in areas overlapping thermodynamics and chemistry.
Notes on nomenclature
Entity refers to the type of particle/s in question, such as atoms, molecules, complexes, radicals, ions, electrons etc.
Conventionally for concentrations and activities, square brackets [ ] are used around the chemical molecular formula. For an arbitrary atom, generic letters in upright non-bold typeface such as A, B, R, X or Y etc. are often used.
No standard symbols are used for the following quantities, as specifically applied to a substance:
the mass of a substance m,
the number of moles of the substance n,
partial pressure of a gas in a gaseous mixture p (or P),
some form of energy of a substance (for chemistry enthalpy H is common),
entropy of a substance S
the electronegativity of an atom or chemical bond χ.
Usually the symbol for the quantity with a subscript of some reference to the quantity is used, or the quantity is written with the reference to the chemical in round brackets. For example, the mass of water might be written in subscripts as mH2O, mwater, maq, mw (if clear from context) etc., or simply as m(H2O). Another example could be the electronegativity of the fluorine-fluorine covalent bond, which might be written with subscripts χF-F, χFF or χF-F etc., or brackets χ(F-F), χ(FF) etc.
Neither is standard. For the purpose of this article, the nomenclature is as follows, closely (but not exactly) matching standard use.
For general equations with no specific reference to an entity, quantities are written as their symbols with an index to label the component of the mixture - i.e. qi. The labeling is arbitrary in initial choice, but once chosen fixed for the calculation.
If any reference to an actual entity (say hydrogen ions H+) or any entity at all (say X) is made, the quantity symbol q is followed by curved ( ) brackets enclosing the molecular formula of X, i.e. q(X), or for a component i of a mixture q(Xi). No confusion should arise with the notation for a mathematical function.
Quantification
General basic quantities
General derived quantities
Kinetics and equilibria
The defining formulae for the equilibrium constants Kc (all reactions) and Kp (gaseous reactions) apply to the general chemical reaction:
ν₁X₁ + ν₂X₂ + ⋯ + νᵣXᵣ ⇌ η₁Y₁ + η₂Y₂ + ⋯ + ηₚYₚ
and the defining equation for the rate constant k applies to the simpler synthesis reaction (one product only):
ν₁X₁ + ν₂X₂ + ⋯ + νᵣXᵣ → ηY
where:
i = dummy index labelling component i of reactant mixture,
j = dummy index labelling component j of product mixture,
Xi = component i of the reactant mixture,
Yj = reactant component j of the product mixture,
r (as an index) = number of reactant components,
p (as an index) = number of product components,
νi = stoichiometry number for component i in reactant mixture,
ηj = stoichiometry number for component j in product mixture,
σi = order of reaction for component i in reactant mixture.
The dummy indices on the substances X and Y label the components (arbitrary but fixed for calculation); they are not the numbers of each component molecules as in usual chemistry notation.
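As a concrete illustration of the defining formula for Kc (added here; the reaction and concentrations below are hypothetical), the constant is the product of the product concentrations raised to their stoichiometry numbers ηj, divided by the product of the reactant concentrations raised to their stoichiometry numbers νi:

```python
def equilibrium_constant_Kc(reactants, products):
    """
    Kc for a general reaction.
    reactants/products: dicts mapping species -> (concentration in mol dm^-3, stoichiometry number)
    """
    numerator = 1.0
    for conc, eta in products.values():
        numerator *= conc ** eta
    denominator = 1.0
    for conc, nu in reactants.values():
        denominator *= conc ** nu
    return numerator / denominator

# Hypothetical equilibrium concentrations for N2 + 3 H2 <=> 2 NH3
Kc = equilibrium_constant_Kc(
    reactants={"N2": (0.40, 1), "H2": (0.20, 3)},
    products={"NH3": (0.60, 2)},
)
print(Kc)  # 0.36 / (0.40 * 0.008) = 112.5, with units mol^-2 dm^6 for this reaction
```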
The units for the chemical constants are unusual since they can vary depending on the stoichiometry of the reaction, and the number of reactant and product components. The general units for equilibrium constants can be determined by usual methods of dimensional analysis. For the generality of the kinetics and equilibria units below, let the indices for the units be;
For the constant Kc:
Substitute the concentration units into the defining equation and simplify: the units of Kc are (mol dm⁻³) raised to the power (Σj ηj − Σi νi).
The procedure is exactly identical for Kp.
For the constant k
Electrochemistry
Notation for half-reaction standard electrode potentials is as follows. The redox reaction
A + BX ⇌ B + AX
split into:
a reduction reaction: B⁺ + e⁻ ⇌ B
and an oxidation reaction: A⁺ + e⁻ ⇌ A
(written this way by convention) the electrode potentials for the half reactions are written as E°(B⁺/B) and E°(A⁺/A), respectively.
For the case of a metal-metal half electrode, letting M represent the metal and z be its valency, the half reaction takes the form of a reduction reaction:
Mᶻ⁺ + z e⁻ ⇌ M
Quantum chemistry
References
Sources
Physical chemistry, P.W. Atkins, Oxford University Press, 1978,
Chemistry, Matter and the Universe, R.E. Dickerson, I. Geis, W.A. Benjamin Inc. (USA), 1976,
Chemical thermodynamics, D.J.G. Ives, University Chemistry Series, Macdonald Technical and Scientific co. .
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974,
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008,
Further reading
Quanta: A handbook of concepts, P.W. Atkins, Oxford University Press, 1974,
Molecular Quantum Mechanics Parts I and II: An Introduction to QUANTUM CHEMISTRY (Volume 1), P.W. Atkins, Oxford University Press, 1977,
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009,
Properties of matter, B.H. Flowers, E. Mendoza, Manchester Physics Series, J. Wiley and Sons, 1970,
Measurement
Mathematical chemistry
Chemical properties
Physical chemistry
Equations | Defining equation (physical chemistry) | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,440 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Drug discovery",
"Applied mathematics",
"Quantity",
"Mathematical objects",
"Measurement",
"Size",
"Equations",
"Theoretical chemistry",
"Mathematical chemistry",
"Molecular modelling",
"nan",
"Physical chemistry"
] |
33,034,775 | https://en.wikipedia.org/wiki/Rashba%20effect | The Rashba effect, also called Bychkov–Rashba effect, is a momentum-dependent splitting of spin bands in bulk crystals and low-dimensional condensed matter systems (such as heterostructures and surface states) similar to the splitting of particles and anti-particles in the Dirac Hamiltonian. The splitting is a combined effect of spin–orbit interaction and asymmetry of the crystal potential, in particular in the direction perpendicular to the two-dimensional plane (as applied to surfaces and heterostructures). This effect is named in honour of Emmanuel Rashba, who discovered it with Valentin I. Sheka in 1959 for three-dimensional systems and afterward with
Yurii A. Bychkov in 1984 for two-dimensional systems.
Remarkably, this effect can drive a wide variety of novel physical phenomena, especially the operation of electron spins by electric fields, even when it is only a small correction to the band structure of the two-dimensional metallic state. An example of a physical phenomenon that can be explained by the Rashba model is the anisotropic magnetoresistance (AMR).
Additionally, superconductors with large Rashba splitting are suggested as possible realizations of the elusive Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) state, Majorana fermions and topological p-wave superconductors.
Lately, a momentum dependent pseudospin-orbit coupling has been realized in cold atom systems.
Hamiltonian
The Rashba effect is most easily seen in the simple model Hamiltonian known as the Rashba Hamiltonian
H_R = α (σ × p) · ẑ,
where α is the Rashba coupling, p is the momentum, σ is the Pauli matrix vector, and ẑ is the unit vector perpendicular to the two-dimensional plane.
This is nothing but a two-dimensional version of the Dirac Hamiltonian (with a 90 degree rotation of the spins).
The Rashba model in solids can be derived in the framework of the k·p perturbation theory or from the point of view of a tight binding approximation. However, the specifics of these methods are considered tedious and many prefer an intuitive toy model that gives qualitatively the same physics (quantitatively it gives a poor estimation of the coupling α). Here we will introduce the intuitive toy model approach followed by a sketch of a more accurate derivation.
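A minimal numerical sketch of the resulting band structure (added for illustration, not from the article): with ħ = 1 and a free-electron kinetic term k²/(2m) added to the Rashba term, diagonalizing the 2×2 Hamiltonian shows the two spin bands at k²/(2m) ∓ α|k|, i.e. a splitting that grows linearly with momentum.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rashba_bands(kx, ky, alpha, m=1.0):
    """Eigenvalues of H(k) = k^2/(2m)*I + alpha*(sigma_x*k_y - sigma_y*k_x), with hbar = 1."""
    H = (kx**2 + ky**2) / (2 * m) * np.eye(2) + alpha * (sx * ky - sy * kx)
    return np.linalg.eigvalsh(H)  # ascending order

for k in (0.0, 0.5, 1.0):
    print(f"|k| = {k}:", np.round(rashba_bands(k, 0.0, alpha=0.3), 4))
# Each pair matches k^2/(2m) -/+ alpha*|k|: the momentum-dependent spin splitting.
```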
Naive derivation
The Rashba effect is a direct result of inversion symmetry breaking in the direction perpendicular to the two-dimensional plane. Therefore, let us add to the Hamiltonian a term that breaks this symmetry in the form of an electric field
E = E₀ ẑ.
Due to relativistic corrections, an electron moving with velocity v in the electric field will experience an effective magnetic field B
B = −(v × E)/c²,
where c is the speed of light. This magnetic field couples to the electron spin in a spin-orbit term
H_SO = (gμ_B/2) σ · B,
where gμ_B/2 is the magnitude of the electron magnetic moment (μ_B the Bohr magneton, g the electron g-factor).
Within this toy model, the Rashba Hamiltonian is given by
H_R = α (σ × p) · ẑ,
where α = gμ_B E₀/(2mc²). However, while this "toy model" is superficially attractive, the Ehrenfest theorem seems to suggest that since the electronic motion in the ẑ direction is that of a bound state that confines it to the 2D surface, the space-averaged electric field (i.e., including that of the potential that binds it to the 2D surface) that the electron experiences must be zero, given the connection between the time derivative of the spatially averaged momentum, which vanishes for a bound state, and the spatial derivative of the potential, which gives the electric field! When applied to the toy model, this argument seems to rule out the Rashba effect (and caused much controversy prior to its experimental confirmation), but it turns out to be subtly incorrect when applied to a more realistic model. While the above naive derivation provides the correct analytical form of the Rashba Hamiltonian, it is inconsistent because the effect comes from mixing energy bands (interband matrix elements) rather than from the intraband term of the naive model. A consistent approach explains the large magnitude of the effect by using a different denominator: instead of the Dirac gap mc² of the naive model, which is of the order of MeV, the consistent approach includes a combination of splittings in the energy bands in a crystal that have an energy scale of eV, as described in the next section.
Estimation of the Rashba coupling in a realistic system – the tight binding approach
In this section we will sketch a method to estimate the coupling constant from microscopics using a tight-binding model. Typically, the itinerant electrons that form the two-dimensional electron gas (2DEG) originate in atomic and orbitals. For the sake of simplicity consider holes in the band. In this picture electrons fill all the states except for a few holes near the point.
The necessary ingredients to get Rashba splitting are atomic spin-orbit coupling
,
and an asymmetric potential in the direction perpendicular to the 2D surface
.
The main effect of the symmetry breaking potential is to open a band gap between the isotropic and the , bands. The secondary effect of this potential is that it hybridizes the with the and bands. This hybridization can be understood within a tight-binding approximation. The hopping element from a state at site with spin to a or state at site j with spin is given by
,
where is the total Hamiltonian. In the absence of a symmetry breaking field, i.e. , the hopping element vanishes due to symmetry. However, if then the hopping element is finite. For example, the nearest neighbor hopping element is
,
where stands for unit distance in the direction respectively and is Kronecker's delta.
The Rashba effect can be understood as a second order perturbation theory in which a spin-up hole, for example, jumps from a state to a with amplitude then uses the spin–orbit coupling to flip spin and go back down to the with amplitude .
Note that overall the hole hopped one site and flipped spin.
The energy denominator in this perturbative picture is of course such that all together we have
,
where is the interionic distance. This result is typically several orders of magnitude larger than the naive result derived in the previous section.
Application
Spintronics - Electronic devices are based on the ability to manipulate the position of electrons by means of electric fields. Similarly, devices can be based on the manipulation of the spin degree of freedom. The Rashba effect allows the spin to be manipulated by the same means, that is, without the aid of a magnetic field. Such devices have many advantages over their electronic counterparts.
Topological quantum computation - Lately it has been suggested that the Rashba effect can be used to realize a p-wave superconductor. Such a superconductor has very special edge-states which are known as Majorana bound states. The non-locality immunizes them to local scattering and hence they are predicted to have long coherence times. Decoherence is one of the largest barriers on the way to realize a full scale quantum computer and these immune states are therefore considered good candidates for a quantum bit.
Discovery of the giant Rashba effect, with α of about 5 eV·Å, in bulk crystals such as BiTeI, ferroelectric GeTe, and in a number of low-dimensional systems bears the promise of creating devices that operate electron spins at the nanoscale and possess short operational times.
Comparison with Dresselhaus spin–orbit coupling
The Rashba spin-orbit coupling is typical for systems with uniaxial symmetry, e.g., for hexagonal crystals of CdS and CdSe for which it was originally found and perovskites, and also for heterostructures where it develops as a result of a symmetry breaking field in the direction perpendicular to the 2D surface. All these systems lack inversion symmetry. A similar effect, known as the Dresselhaus spin orbit coupling arises in cubic crystals of AIIIBV type lacking inversion symmetry and in quantum wells manufactured from them.
See also
Electric dipole spin resonance
Footnotes
References
Further reading
A. Manchon, H. C. Koo, J. Nitta, S. M. Frolov, and R. A. Duine, New perspectives for Rashba spin–orbit coupling, Nature Materials 14, 871-882 (2015), http://www.nature.com/nmat/journal/v14/n9/pdf/nmat4360.pdf, stacks.iop.org/NJP/17/050202/mmedia
http://blog.physicsworld.com/2015/06/02/breathing-new-life-into-the-rashba-effect/
E. I. Rashba and V. I. Sheka, Electric-Dipole Spin-Resonances, in: Landau Level Spectroscopy, (North Holland, Amsterdam) 1991, p. 131; https://arxiv.org/abs/1812.01721
External links
Semiconductors
Quantum magnetism
Spintronics | Rashba effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,798 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Spintronics",
"Quantum mechanics",
"Quantum magnetism",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
31,526,903 | https://en.wikipedia.org/wiki/Bethanization | Bethanization is a process patented by the Bethlehem Steel Company to protect steel from corrosion by plating it with zinc, a process similar to electrogalvanization. In advertising materials, Bethlehem Steel claimed the process was more effective than hot dip galvanization, the most common means of using zinc to protect steel.
The process is similar to electrolytic zinc plating from a sulfuric acid bath, with only a few differences. The electrolytic sulfuric acid zinc plating process uses zinc anodes, while bethanization uses inert mild steel anodes instead. The electrolyte is manufactured by dissolving zinc oxide dross in sulfuric acid.
In 1936, Bethlehem Steel spent $30 million (1936 dollars) to build a factory in Johnstown, Pennsylvania capable of creating large amounts of Bethanized wire.
Material Properties
Uniformity - The zinc coating surrounding the wire is tightly bonded to the steel and uniformly distributed; weak spots on the wire will not be found.
Ductility - Bethanizing steel with 99.9 percent pure zinc bonds the coating tightly to the steel, leaving no room for layers of zinc-iron alloy. Zinc-iron alloy is a brittle substance that induces cracking, leaving the steel exposed at critical points. The pure zinc coating is more ductile and less brittle.
Resistance to corrosion and fatigue - Bethanized wire is claimed to have protection against corrosion and corrosion fatigue equal to that of hot-galvanized steel wire.
Strength - Wire that has been bethanized has breaking strengths claimed to be around 90 percent of bright wire strength.
Use in fencing
Bethanization was applied to outdoor mesh-wire fence by the Bethlehem Steel company in the 1930s. Because the wire is coated with purer zinc, it is not contaminated with iron, unlike wire treated by earlier processes such as galvanization; the company claimed this gives the wire more durability while still retaining ductility, as well as resistance to the sulfur gases present in Earth's atmosphere. Other fencing applications bethanized by the Bethlehem Steel Company include:
Barbed wire
Gates
Fence posts
Bale ties
References
Corrosion prevention
Metal plating
Zinc | Bethanization | [
"Chemistry"
] | 431 | [
"Corrosion prevention",
"Metallurgical processes",
"Coatings",
"Corrosion",
"Metal plating"
] |
31,531,183 | https://en.wikipedia.org/wiki/Bush%20pump | The bush pump, also known as the Zimbabwe bush pump, is a positive displacement pump based on lever action used to extract water from a bore hole well. It is the standard hand pump in Zimbabwe, and is used in Zimbabwe and Namibia. There are approximately 40 000 pumps (2009) in Zimbabwe, and annually about 3000 pumps are installed.
History
The original version was designed in 1933 as a closed-top cylinder pump. Around 1960 the design was modernized. The base of the well was from then on bolted to the well casing. It was at this point the pump got its current name, and this design became the national standard for hand pumps in Zimbabwe. After Zimbabwe's independence in 1980, the government created its own modernized version of the pump, the B-type Zimbabwe Bush Pump. The new pump integrated the features from the earlier pumps. This is the current standard (2009).
The pump is today regarded by many as a national treasure, and was pictured on its own postage stamp in 1997.
Technical description
Like other positive displacement pumps, the Bush Pump is constructed around a down-hole cylinder containing a piston that lifts water with every lever stroke. The pump's distinguishing features are the hardwood block that acts as a bearing and its lever mechanism. Besides this, the pump has an unconventional construction in which the well base is bolted directly to the top of the well casing, sticking out of the bore hole. It extracts water from depths of 18 to 100 m and can support use by up to 250 people.
Installation and maintenance
The bush pump is easy and inexpensive to build, modify and maintain; still, the pump has to be installed by experienced mechanics. Lifting tackle and other special equipment are used to install the components that are inserted into the bore hole. Installation is mostly carried out by the government of Zimbabwe and various NGOs.
The pump is maintained through a system consisting of three levels of responsibility:
1. At the direct user level, a committee and a caretaker is chosen by the users. They are responsible for minor functional maintenance, for instance oiling the pump and tightening bolts.
2. At the ward level there is a trained pump mechanic who repairs broken pumps, and keeps maintenance records.
3. At the district level a water supply operative is in charge of larger maintenance operations, and has access to vehicles and equipment for transporting larger pump components.
Analysis
The bush pump has been the subject of scholarly studies. In their article, “The Zimbabwe Bush Pump: Mechanics of a Fluid Technology", Marianne de Laet and Annemarie Mol analyze the pump's role as an appropriate technology. They focus on the fluidity of the pump, meaning its flexibility both in how it was invented, how it is used, and what function it has. They make their case by showing that the pump was not invented by one human actor (the opposite of what they call heroic actorship), but through a slow process of fluid actorship, in which there is no single clear human creator. The use of the pump is also fluid: it does not only give water to a community, but its construction can serve an important ceremonial function both locally and nationally. The pump can also be said to be fluid in how it reacts to decay. Even if seemingly essential parts of its mechanics are broken, it can serve functions not intended by any of its creators. The pump is not only its physical appearance or its technical description, de Laet and Mol argue:
The first aspect of the pump’s fluidity is that its boundaries are not solid and sharp. The Pump is a mechanical object, it is a hydraulic system, but it is also a device installed by the community, a health promoter and a nation-building apparatus. It has each of these identities – and each comes with its own different boundaries. ... In each of its identities the Bush Pump contains a variant of its environment.
Laet and Mol argue that their way of analyzing the bush pump can be helpful when trying to understand a wide variety of objects and practices.
See also
Appropriate technology
Drinking water
Hand pump
Pump
References
De Laet M, Mol A (2000) The Zimbabwe Bush Pump: Mechanics of a fluid technology. Social Studies of Science 30(2): 225–263.
Pumps
Australian outback
Rural culture in Oceania | Bush pump | [
"Physics",
"Chemistry"
] | 871 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
21,423,678 | https://en.wikipedia.org/wiki/ConsensusPathDB | The ConsensusPathDB is a molecular functional interaction database, integrating information on protein interactions, genetic interactions signaling, metabolism, gene regulation, and drug-target interactions in humans. ConsensusPathDB currently (release 30) includes such interactions from 32 databases. ConsensusPathDB is freely available for academic use under http://ConsensusPathDB.org.
Integrated Databases
Reactome (metabolic and signaling pathways)
KEGG (only metabolic pathways have been integrated in ConsensusPathDB)
HumanCyc (metabolic pathways)
PID - Pathway Interaction Database (signaling pathways)
BioCarta (signaling pathways)
Netpath (signaling pathways)
IntAct (protein interactions)
DIP (protein interactions)
MINT (protein interactions)
HPRD (protein interactions)
BioGRID (protein interactions)
SPIKE (protein interactions, signaling reactions)
WikiPathways (metabolic and signaling pathways)
and many more.
Functionalities
The ConsensusPathDB is accessible via a web interface providing a variety of functions.
Search and visualization
Using the web interface, users can search for physical entities (e.g. proteins, metabolites) or pathways using common names or accession numbers (e.g. UniProt identifiers). Selected interactions can be visualized in an interactive environment as expandable networks. ConsensusPathDB currently allows users to export their models in BioPAX format or as images in several formats.
Shortest path
Users can search for shortest paths of functional interactions between physical entities, based on all interactions in the database. The path search can be constrained by forbidding paths that pass through certain physical entities.
Data upload
Users can upload their own interaction networks in BioPAX, PSI-MI or SBML files in order to validate and/or extend those networks in the context of the interactions in ConsensusPathDB.
Over-representation analysis
Using the web-interface of the database, one can perform overrepresentation analysis, based on biochemical pathways or on neighbourhood-based entity sets (NESTs) that constitute sub-networks of the overall interaction network containing all physical entities around a central one within a "radius" (number of interactions from the center). For each predefined set (pathway / NEST), a P-value is computed based on the hypergeometric distribution. It reflects the significance of the observed overlap between the user-specific input gene list and the members of the predefined set.
Over-representation analyses can be performed with user-specified genes or metabolites.
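For illustration, the hypergeometric p-value described above can be computed with a few lines of Python. This is only a sketch of the statistic, not ConsensusPathDB's own code; the background size, pathway size, and overlap counts below are made-up numbers used solely to show how the test works.

```python
from scipy.stats import hypergeom

# Hypothetical numbers, for illustration only:
N_background = 20000   # genes in the background (all genes covered by the database)
K_pathway = 150        # genes annotated to the pathway (or NEST) being tested
n_input = 300          # genes in the user's input list
k_overlap = 12         # input genes that fall inside the pathway

# P-value: probability of observing an overlap of k_overlap or more genes
# by chance, under the hypergeometric distribution.
p_value = hypergeom.sf(k_overlap - 1, N_background, K_pathway, n_input)
print(f"over-representation p-value = {p_value:.3g}")
```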
References
External links
Biological databases
Systems biology | ConsensusPathDB | [
"Biology"
] | 508 | [
"Bioinformatics",
"Biological databases",
"Systems biology"
] |
21,424,248 | https://en.wikipedia.org/wiki/Methemalbumin | Methemalbumin (MHA) is an albumin complex consisting of albumin and heme.
This complex gives brown color to plasma and occurs in hemolytic and hemorrhagic disorders.
Its presence in plasma is used to differentiate between hemorrhagic and edematous pancreatitis.
The Schumm test is used to differentiate intravascular haemolysis from extravascular haemolysis, as in haemolytic anaemias. A positive result is indicative of intravascular haemolysis.
References
Proteins | Methemalbumin | [
"Chemistry"
] | 120 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
21,429,738 | https://en.wikipedia.org/wiki/Concentration%20dimension | In mathematics — specifically, in probability theory — the concentration dimension of a Banach space-valued random variable is a numerical measure of how "spread out" the random variable is compared to the norm on the space.
Definition
Let (B, || ||) be a Banach space and let X be a Gaussian random variable taking values in B. That is, for every linear functional ℓ in the dual space B∗, the real-valued random variable 〈ℓ, X〉 has a normal distribution. Define

σ(X) = sup { (E[〈ℓ, X〉²])^(1/2) : ℓ ∈ B∗, ||ℓ|| ≤ 1 }.

Then the concentration dimension d(X) of X is defined by

d(X) = E[||X||²] / σ(X)².
Examples
If B is n-dimensional Euclidean space Rⁿ with its usual Euclidean norm, and X is a standard Gaussian random variable, then σ(X) = 1 and E[||X||²] = n, so d(X) = n.
If B is Rⁿ with the supremum norm, then σ(X) = 1 but E[||X||²] (and hence d(X)) is of the order of log(n).
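A rough numerical check of these two examples can be made by Monte Carlo simulation. The sketch below is not from the cited references; it estimates E[||X||²] for a standard Gaussian vector under both norms (for which σ(X) = 1), so the printed values approximate d(X).

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 1000, 5000
X = rng.standard_normal((samples, n))   # standard Gaussian vectors in R^n

# For a standard Gaussian, sigma(X) = 1, so d(X) = E[||X||^2] / sigma(X)^2 = E[||X||^2]
d_euclidean = np.mean(np.sum(X**2, axis=1))        # Euclidean norm: should be close to n
d_sup = np.mean(np.max(np.abs(X), axis=1) ** 2)    # supremum norm: of the order of log(n)

print(f"Euclidean norm: d(X) ≈ {d_euclidean:.1f} (n = {n})")
print(f"Supremum norm:  d(X) ≈ {d_sup:.1f} (log(n) ≈ {np.log(n):.1f})")
```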
References
Dimension
Statistical randomness | Concentration dimension | [
"Physics"
] | 228 | [
"Geometric measurement",
"Dimension",
"Physical quantities",
"Theory of relativity"
] |
21,433,197 | https://en.wikipedia.org/wiki/Acoustic%20radiation%20force | Acoustic radiation force (ARF) is a physical phenomenon resulting from the interaction of an acoustic wave with an obstacle placed along its path. Generally, the force exerted on the obstacle is evaluated by integrating the acoustic radiation pressure (due to the presence of the sonic wave) over its time-varying surface.
The magnitude of the force exerted by an acoustic plane wave at any given location can be calculated as

F = 2αI/c

where
F is the force per unit volume, here expressed in kg/(s²·cm²);
α is the absorption coefficient in Np/cm (nepers per cm);
I is the temporal average intensity of the acoustic wave at the given location in W/cm²; and
c is the speed of sound in the medium in cm/s.
The effect of frequency on acoustic radiation force is taken into account via intensity (higher pressures are more difficult to attain at higher frequencies) and absorption (higher frequencies have a higher absorption rate). As a reference, water has an acoustic absorption of 0.002 dB/(MHz²·cm). Acoustic radiation forces on compressible particles such as bubbles are also known as Bjerknes forces, and are generated through a different mechanism, which does not require sound absorption or reflection. Acoustic radiation forces can also be controlled through sub-wavelength patterning of the surface of the object.
When a particle is exposed to an acoustic standing wave it will experience a time-averaged force known as the primary acoustic radiation force (F_rad). In a rectangular microfluidic channel with coplanar walls, which acts as a resonance chamber, the incoming acoustic wave can be approximated as a resonant, standing pressure wave of the form

p(y, t) = p_a cos(ky) cos(ωt),

where k is the wave number and p_a the pressure amplitude.

For a compressible, spherical and micrometre-sized particle (of radius a) suspended in an inviscid fluid in a rectangular micro-channel with a 1D planar standing ultrasonic wave of wavelength λ, the expression for the primary radiation force (valid in the regime where the particle radius is much smaller than the wavelength, a ≪ λ) then becomes

F_rad = 4π Φ(κ̃, ρ̃) k a³ E_ac sin(2ky)

where
Φ = (1/3) [ (5ρ̃ − 2)/(2ρ̃ + 1) − κ̃ ] is the acoustic contrast factor
κ̃ = κ_p/κ_f is the relative compressibility between the particle and the surrounding fluid
ρ̃ = ρ_p/ρ_f is the relative density between the particle and the surrounding fluid
E_ac = p_a²/(4ρ_f c_f²) is the acoustic energy density
the factor sin(2ky) makes the radiation force period doubled and phase shifted relative to the pressure wave
c_f is the speed of sound in the fluid
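A minimal numerical sketch of the 1D standing-wave expression above is given below. All material and field values (a polystyrene-like particle in water, a 2 MHz wave, a 0.2 MPa pressure amplitude) are illustrative assumptions, not values taken from this article.

```python
import numpy as np

# Illustrative material and field parameters (assumptions, not from the article)
rho_f, c_f = 998.0, 1483.0           # water: density [kg/m^3], speed of sound [m/s]
rho_p, c_p = 1050.0, 2350.0          # polystyrene-like particle
kappa_f = 1.0 / (rho_f * c_f**2)     # fluid compressibility
kappa_p = 1.0 / (rho_p * c_p**2)     # particle compressibility

a = 5e-6                             # particle radius [m]
f = 2e6                              # ultrasound frequency [Hz]
p_a = 0.2e6                          # pressure amplitude [Pa]

k = 2 * np.pi * f / c_f              # wave number
E_ac = p_a**2 / (4 * rho_f * c_f**2) # acoustic energy density

kappa_t = kappa_p / kappa_f          # relative compressibility
rho_t = rho_p / rho_f                # relative density
Phi = (1/3) * ((5*rho_t - 2) / (2*rho_t + 1) - kappa_t)   # acoustic contrast factor

y = np.linspace(0, c_f / f, 200)     # positions across one wavelength
F = 4 * np.pi * Phi * k * a**3 * E_ac * np.sin(2 * k * y) # primary radiation force [N]
print(f"contrast factor = {Phi:.3f}, peak force = {F.max():.2e} N")
```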
See also
Acoustic tweezers
Radiation pressure
References
Acoustics | Acoustic radiation force | [
"Physics"
] | 485 | [
"Classical mechanics",
"Acoustics"
] |
25,809,437 | https://en.wikipedia.org/wiki/Kenneth%20B.%20Storey | Kenneth B. Storey (born October 23, 1949) is a Canadian scientist whose work draws from a variety of fields including biochemistry and molecular biology. He is a Professor of Biology, Biochemistry and Chemistry at Carleton University in Ottawa, Canada. Storey has a world-wide reputation for his research on biochemical adaptation - the molecular mechanisms that allow animals to adapt to and endure severe environmental stresses such as deep cold, oxygen deprivation, and desiccation.
Biography
Kenneth Storey studied biochemistry at the University of Calgary (B.Sc. '71) and zoology at the University of British Columbia (Ph.D. '74). Storey is a Professor of Biochemistry, cross-appointed in the Departments of Biology, Chemistry and Neuroscience and holds the Canada Research Chair in Molecular Physiology at Carleton University in Ottawa, Canada.
Storey is an elected fellow of the Royal Society of Canada, of the Society for Cryobiology and of the American Association for the Advancement of Science. He has won fellowships and awards for research excellence including the Fry medal from the Canadian Society of Zoologists (2011), the Flavelle medal from the Royal Society of Canada (2010), Ottawa Life Sciences Council Basic Research Award (1998), a Killam Senior Research Fellowship (1993–1995), the Ayerst Award from the Canadian Society for Molecular Biosciences (1989), an E.W.R. Steacie Memorial Fellowship from the Natural Sciences and Engineering Research Council of Canada (1984–1986), and four Carleton University Research Achievement Awards. Storey is the author of over 1200 research articles, the editor of seven books, has given over 500 talks at conferences and institutes worldwide, and organized numerous international symposia.
Research
Storey's research includes studies of enzyme properties, gene expression, protein phosphorylation, epigenetics, and cellular signal transduction mechanisms to seek out the basic principles of how organisms endure and flourish under extreme conditions. He is particularly known within the field of cryobiology for his studies of animals that can survive freezing, especially the frozen "frog-sicles" (Rana sylvatica) that have made his work popular with multiple TV shows and magazines. Storey's studies of the adaptations that allow frogs, insects, and other animals to survive freezing have made major advances in the understanding of how cells, tissues and organs can endure freezing. Storey was also responsible for the discovery that some turtle species are freeze tolerant: newly hatched painted turtles that spend their first winter on land (Chrysemys picta marginata & C. p. bellii). These turtles are unique as they are the only reptiles, and highest vertebrate life form, known to tolerate prolonged natural freezing of extracellular body fluids during winter hibernation. These advances may aid the development of organ cryopreservation technology. A second area of his research is metabolic rate depression - understanding the mechanisms by which some animals can reduce their metabolism and enter a state of hypometabolism or torpor that allows them to survive prolonged environmental stresses. His studies have identified molecular mechanisms that underlie metabolic arrest across phylogeny and that support phenomena including mammalian hibernation, estivation, and anoxia- and ischemia-tolerance. These studies hold key applications for medical science, particularly for preservation technologies that aim to extend the survival time of excised organs in cold or frozen storage. Additional applications include insights into hyperglycemia in metabolic syndrome and diabetes, and anoxic and ischemic damage caused by heart attack and stroke. Furthermore, Storey's lab has created several web based programs freely available for data management, data plotting, and microRNA analysis.
Publication links
Dr. Kenneth B. Storey is among the top 2% of highly cited scientists in the world.
PubMed
Google Scholar
External links
Storey lab website
Storey lab research tools
Kenneth B. Storey CV
References
1949 births
Canada Research Chairs
Carleton University
Academic staff of Carleton University
Cryobiology
Fellows of the Royal Society of Canada
Living people
Molecular biologists
People from Taber, Alberta
University of Calgary alumni | Kenneth B. Storey | [
"Physics",
"Chemistry",
"Biology"
] | 826 | [
"Physical phenomena",
"Phase transitions",
"Cryobiology",
"Molecular biology",
"Biochemistry",
"Biochemists",
"Molecular biologists"
] |
25,810,137 | https://en.wikipedia.org/wiki/Dymalloy | Dymalloy is a metal matrix composite of 20% copper and 80% silver alloy matrix with type I diamond. It has a very high thermal conductivity of 420 W/(m·K), and its thermal expansion can be adjusted to match other materials, e.g., silicon and gallium arsenide chips. It is chiefly used in microelectronics as a substrate for high-power and high-density multi-chip modules, where it aids with removing waste heat.
Dymalloy was developed as part of CRADA between Sun Microsystems and Lawrence Livermore National Laboratory. It was first researched for use in space-based electronics for the Brilliant Pebbles project.
Dymalloy is prepared from diamond powder of about 25 micrometers in size. The grains are coated by physical vapor deposition with a 10-nanometer-thick layer of a tungsten alloy containing 26% rhenium, forming a tungsten carbide layer that assists bonding, then coated with 100 nanometers of copper to avoid carbide oxidation, and finally compacted in a mold and infiltrated with molten copper-silver alloy. Adding 55 vol.% of diamond yields a material with thermal expansion matching that of gallium arsenide; a slightly higher amount of diamond allows matching to silicon. Copper can be used instead of copper-silver alloy, but the higher melting point may cause a partial transformation of diamond to graphite. The material shows some plasticity. High mechanical strain causes brittle failure in the diamond grains and ductile failure in the matrix. The diamond grains give the alloy a degree of surface texture; when a smooth surface is desired, the alloy can be plated and polished.
In 1996, the price for a 10×10×0.1 cm substrate was quoted as USD 200.
Similar alloys are possible with the metal phase of one or more of silver, copper, gold, aluminium, magnesium, and zinc. The carbide-forming metal can be selected from titanium, zirconium, hafnium, vanadium, niobium, tantalum, and chromium, where Ti, Zr, and Hf are preferable. The amount of carbide-forming metal must be sufficient to coat at least 25% of the diamond grains, as otherwise the bonding is insufficient and the heat transfer between matrix and diamond grains is weak, which leads to a loss of effectiveness towards the level of the matrix metal alone. The material may deform at higher temperatures, and the amount of carbide-forming metal must also be kept low enough to prevent the formation of too thick a carbide layer that would hinder heat transfer. The volume of diamond should be higher than 30 vol.%, as a lower ratio does not provide a significant increase in thermal conductivity, and lower than 70 vol.%, as a higher ratio of diamond makes thermal expansion matching to semiconductors difficult. The grains should also be surrounded with metal to avoid deformation due to different thermal expansion coefficients between diamond and metal; the carbide coating assists with this.
A similar material is AlSiC, with aluminium instead of copper-silver alloy and silicon carbide instead of diamond.
References
Metal matrix composites
Copper alloys
Diamond
Chip carriers | Dymalloy | [
"Chemistry"
] | 650 | [
"Alloys",
"Copper alloys"
] |
25,811,021 | https://en.wikipedia.org/wiki/Palau%27amine | Palau'amine is a toxic chlorinated alkaloid compound synthesized naturally by certain species of sea sponges. The name of the molecule derives from the island nation of Palau, near where the first sponge species discovered to produce it, Stylotella agminata, is found. It has since been isolated in other sponges, including Stylissa massa.
The substance was first isolated from Stylotella agminata, a sponge found in the southwest Pacific Ocean, and described in 1993. Containing nine nitrogen atoms, the molecule is considered highly complex. The precise atomic structure was pinned down in 2007, and two years later, the molecule was synthesized in the lab of Phil Baran at the Scripps Research Institute in La Jolla, California. Early efforts towards its synthesis were directed at a misassigned structure featuring a cis- rather than trans-5/5 ring fusion, an error that was made because the trans-5/5 ring system is some 6 kcal/mol less stable than the cis-configured system.
Biomimetic synthesis
Based on the hypothesized biosynthesis of palau'amine, a proposed pathway to this dimeric pyrrole-imidazole alkaloid includes a key oxidation of a β-ketoester with manganese(III) acetate to initiate a cascade radical cyclization, producing an ageliferin skeleton.
Biological effects
Palau'amine is a proteasome inhibitor.
References
Alkaloids
Organochlorides
Halogen-containing alkaloids
Guanidine alkaloids
Amines
Spiro compounds
Nitrogen heterocycles
Proteasome inhibitors | Palau'amine | [
"Chemistry"
] | 346 | [
"Biomolecules by chemical classification",
"Natural products",
"Halogen-containing alkaloids",
"Guanidine alkaloids",
"Functional groups",
"Organic compounds",
"Alkaloids by chemical classification",
"Amines",
"Bases (chemistry)",
"Alkaloids",
"Spiro compounds"
] |
25,814,044 | https://en.wikipedia.org/wiki/Pipe%20Cutting | Pipe cutting or pipe profiling is a mechanized industrial process that removes material from pipe or tubing to create a desired profile. Typical profiles include straight cuts, mitres, saddles and midsection holes. These complex cuts are usually required to allow a tight fit between two parts that are to be joined via arc welding.
Pipe cutting is used in industries such as offshore operations, pipe processing, shipbuilding and pressure vessel manufacture. This technology is valued for its ability to produce the intricate cuts and profiles often required in these fields. Common applications include pipework, offshore jackets, steel frameworks, cranes, and demolition of structures offshore.
Hot cutting
Hot cutting refers to a process in which materials are cut using a thermal torch. One of the most common techniques is oxy-fuel gas cutting, which is used extensively for cutting carbon and low-alloy steels. However, its efficiency diminishes as the alloy content of the material increases, limiting its applicability for high-alloy steels. Arc-based cutting methods are used for cutting these materials. Among these, plasma arc cutting is the most commonly employed technique, owing to its precision and ability to cut through high-alloy steels efficiently. Thermal cutting creates a shallow region of contaminated material adjacent to the cut surfaces, known as the heat-affected zone. For some applications it is necessary to remove this material by mechanical means, such as grinding or machining, prior to use or further fabrication by welding.
The cutting torch can be integrated into a machine to perform precision cutting operations. In multi-axis machines, the movement of the axes is powered by electric motors and synchronized to guide the torch and the pipe being cut along a programmed path, resulting in the desired cutting profile. The synchronization of axes is accomplished either mechanically, via cams, levers and gears, or electronically with microprocessors, i.e. computer numerical control (CNC).
Cold cutting
Where the high temperatures and sources of ignition required by hot cutting are not desirable, air- or hydraulically-powered pipe cutting machines are used. These comprise a clamshell- or chain-mounted cutting head holding a tool-steel cutter and a feed mechanism which advances the tool a set amount per revolution around the pipe. Tools may be styled to cut and/or prepare the bevel for welding in a single or multiple passes.
High pressure abrasive water jets can be used for cold cutting. This technology is employed for the decommissioning of offshore structures.
References
Piping | Pipe Cutting | [
"Chemistry",
"Engineering"
] | 503 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
25,814,833 | https://en.wikipedia.org/wiki/Projective%20superspace | In supersymmetry, a theory of particle physics, projective superspace is one way of dealing with supersymmetric theories, i.e. with 8 real SUSY generators, in a manifestly covariant manner.
See also
Superspace
Harmonic superspace
References
Supersymmetry | Projective superspace | [
"Physics"
] | 62 | [
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum physics stubs",
"Physics beyond the Standard Model",
"Supersymmetry",
"Symmetry"
] |
1,412,323 | https://en.wikipedia.org/wiki/Trunnion | A trunnion () is a cylindrical protrusion used as a mounting or pivoting point. First associated with cannons, they are an important military development.
In mechanical engineering (see the trunnion bearing section below), it is one part of a rotating joint where a shaft (the trunnion) is inserted into (and turns inside) a full or partial cylinder.
Medieval history
In a cannon, the trunnions are two projections cast just forward of the center of mass of the cannon and fixed to a two-wheeled movable gun carriage.
With the creation of larger and more powerful siege guns in the early 15th century, a new way of mounting them became necessary. Stouter gun carriages were created with reinforced wheels, axles, and “trails” which extended behind the gun. Guns were now as long as in length and they were capable of shooting iron projectiles weighing from . When discharged, these wrought iron balls were comparable in range and accuracy with stone-firing bombards.
Trunnions were mounted near the center of mass to allow the barrel to be elevated to any desired angle, without having to dismount it from the carriage upon which it rested. Some guns had a second set of trunnions placed several feet back from the first pair, which could be used to allow for easier transportation. The gun would recoil causing the carriage to move backwards several feet but men or a team of horses could put it back into firing position. It became easier to rapidly transport these large siege guns, maneuver them from transportation mode to firing position, and they could go wherever a team of men or horses could pull them.
Initial significance
Due to its capabilities, the French- and Burgundy-designed siege gun, equipped with its trunnions, required little significant modification from around 1465 to the 1840s.
King Charles VIII and the French army used this new gun in the 1494 invasion of Italy. Although deemed masters of war and artillery at that time, Italians had not anticipated the innovations in French siege weaponry. Prior to this, field artillery guns were huge, large-caliber bombards: superguns that, along with enormous stones or other projectiles, were dragged from destination to destination. These behemoths could only be used effectively in sieges, and more often than not provided just a psychological effect on the battlefield; owning these giant mortars did not guarantee any army a victory. The French saw the limitations of these massive weapons and focused their efforts on improving their smaller and lighter guns, which used smaller, more manageable projectiles combined with larger amounts of gunpowder. Equipping them with trunnions was key for two reasons. First, teams of horses could now move these cannons fast enough to keep up with their armies and no longer had to stop and dismount them from their carriages to achieve the proper range before firing; second, the capability to adjust firing angle without having to lift the entire weight of the gun allowed tactical selection and reselection of targets rather than being deployed solely on the first target chosen. Francesco Guicciardini, an Italian historian and statesman, wrote that the cannons were placed against town walls so quickly, spaced together so closely and shot so rapidly and with such force that the time for a significant amount of damage to be inflicted went from a matter of days (as with bombards) to a matter of hours. For the first time in history, as seen in the 1512 battle of Ravenna and the 1515 Battle of Marignano, artillery weaponry played a very decisive part in the victory of the invading army over the city under siege. Cities that had proudly withstood sieges for up to seven years fell swiftly with the advent of these new weapons.
Defensive tactics and fortifications had to be altered since these new weapons could be transported so speedily and aimed with much more accuracy at strategic locations. Two significant changes were the additions of a ditch and low, sloping ramparts of packed earth (glacis) that would surround the city and absorb the impact of the cannonballs, and the replacement of round watchtowers with angular bastions. This new style of fortification became known as the trace italienne.
Whoever could afford these new weapons had the tactical advantage over their neighbors and smaller sovereignties, which could not incorporate them into their army. Smaller states, such as the principalities of Italy, began to conglomerate. Preexisting stronger entities, such as France or the Habsburg emperors, were able to expand their territories and maintain a tighter control over the land they already occupied. With the potential threat of their land and castles being seized, the nobility began to pay their taxes and more closely follow their ruler’s mandates. With siege guns mounted on trunnions, stronger and larger states were formed, but because of this, struggles between neighboring governments with consolidated power began to ensue and would continue to plague Europe for the next few centuries.
Usages
In vehicles
In older cars, the trunnion is part of the suspension and either allows free movement of the rear wheel hub in relation to the chassis or allows the front wheel hub to rotate with the steering. On many cars (such as those made by Triumph) the trunnion is machined from a brass or bronze casting and is prone to failure if not greased properly. Between 1962 and 1965 American Motors recommended lubrication of its pre-packed front suspension trunnions on some models using a sodium base grease every or three years. In 1963 it incorporated molded rubber "Clevebloc" bushings on the upper trunnion of others to seal out dirt and retain silicone lubricant for the life of the car.
In aviation, the term refers to the structural component that attaches the undercarriage or landing gear to the airframe. For aircraft equipped with retractable landing gear, the trunnion is pivoted to permit rotation of the entire gear assembly.
In axles, the term refers to the type of suspension used on a multi-axle configurations. It is a "short axle pivoted at or near its mid-point about a horizontal axis transverse to its own centerline, normally used in pairs in conjunction with a walking beam in order to achieve two axis of oscillation." This type of suspension allows to be loaded on an axle group.
In trailers, leveling jacks may have trunnion mounts.
Trunnion bearings
In mechanical engineering, it is one part of a rotating joint where a shaft (the trunnion) is inserted into (and turns inside) a full or partial cylinder. Often used in opposing pairs, this joint allows tight tolerances and strength from a large surface contact area between the trunnion and the cylinder.
See also
Gimbal
References
Hardware (mechanical)
Mechanisms (engineering)
Bridge components
Artillery components | Trunnion | [
"Physics",
"Technology",
"Engineering"
] | 1,395 | [
"Machines",
"Physical systems",
"Construction",
"Artillery components",
"Mechanical engineering",
"Bridge components",
"Hardware (mechanical)",
"Mechanisms (engineering)",
"Components"
] |
1,412,580 | https://en.wikipedia.org/wiki/Science%20of%20photography | The science of photography is the use of chemistry and physics in all aspects of photography. This applies to the camera, its lenses, physical operation of the camera, electronic camera internals, and the process of developing film in order to take and develop pictures properly.
Optics
Camera obscura
The fundamental technology of most photography, whether digital or analog, is the camera obscura effect and its ability to transform a three-dimensional scene into a two-dimensional image. At its most basic, a camera obscura consists of a darkened box, with a very small hole in one side, which projects an image from the outside world onto the opposite side. This form is often referred to as a pinhole camera.
When aided by a lens, the hole in the camera doesn't have to be tiny to create a sharp and distinct image, and the exposure time can be decreased, which allows cameras to be handheld.
Lenses
A photographic lens is usually composed of several lens elements, which combine to reduce the effects of chromatic aberration, coma, spherical aberration, and other aberrations. A simple example is the three-element Cooke triplet, still in use over a century after it was first designed, but many current photographic lenses are much more complex.
Using a smaller aperture can reduce most, but not all, aberrations. They can also be reduced dramatically by using an aspheric element, but these are more complex to grind than spherical or cylindrical lenses. However, with modern manufacturing techniques the extra cost of manufacturing aspherical lenses is decreasing, and small aspherical lenses can now be made by molding, allowing their use in inexpensive consumer cameras. Fresnel lenses are not common in photography but are used in some cases due to their very low weight. The recently developed fiber-coupled monocentric lens consists of spheres constructed of concentric hemispherical shells of different glasses, tied to the focal plane by bundles of optical fibers. Monocentric lenses are not yet used in cameras because the technology only debuted in October 2013 at the Frontiers in Optics Conference in Orlando, Florida.
All lens design is a compromise between numerous factors, including cost. Zoom lenses (i.e. lenses of variable focal length) involve additional compromises and therefore normally do not match the performance of prime lenses.
When a camera lens is focused to project an object some distance away onto the film or detector, objects at nearly the same distance as that object are also approximately in focus. The range of distances that are nearly in focus is called the depth of field. Depth of field generally increases with decreasing aperture diameter (increasing f-number). The unfocused blur outside the depth of field is sometimes used for artistic effect in photography. The subjective appearance of this blur is known as bokeh.
If the camera lens is focused at or beyond its hyperfocal distance, then the depth of field becomes large, covering everything from half the hyperfocal distance to infinity. This effect is used to make "focus free" or fixed-focus cameras.
Aberration
Aberrations are the blurring and distorting properties of an optical system. A high quality lens will produce a smaller amount of aberrations.
Spherical aberration occurs due to the increased refraction of light rays that occurs when rays strike a lens, or a reflection of light rays that occurs when rays strike a mirror near its edge in comparison with those that strike nearer the center. This is dependent on the focal length of a spherical lens and the distance from its center. It is compensated by designing a multi-lens system or by using an aspheric lens.
Chromatic aberration is caused by a lens having a different refractive index for different wavelengths of light and the dependence of the optical properties on color. Blue light will generally bend more than red light. There are higher order chromatic aberrations, such as the dependence of magnification on color. Chromatic aberration is compensated by using a lens made out of materials carefully designed to cancel out chromatic aberrations.
Curved focal surface is the dependence of the first order focus on the position on the film or CCD. This can be compensated with a multiple lens optical design, but curving the film has also been used.
Focus
Focus is the tendency for light rays to reach the same place on the image sensor or film, independent of where they pass through the lens. For clear pictures, the focus is adjusted for distance, because at a different object distance the rays reach different parts of the lens with different angles. In modern photography, focusing is often accomplished automatically.
The autofocus system in modern SLRs uses a sensor in the mirror box to measure contrast. The sensor's signal is analyzed by an application-specific integrated circuit (ASIC), which tries to maximize the contrast pattern by moving lens elements. The ASICs in modern cameras also have special algorithms for predicting motion, and other advanced features.
Diffraction limit
Since light propagates as waves, the patterns it produces on the film are subject to the wave phenomenon known as diffraction, which limits the image resolution to features on the order of several times the wavelength of light. Diffraction is the main effect limiting the sharpness of optical images from lenses that are stopped down to small apertures (high f-numbers), while aberrations are the limiting effect at large apertures (low f-numbers). Since diffraction cannot be eliminated, the best possible lens for a given operating condition (aperture setting) is one that produces an image whose quality is limited only by diffraction. Such a lens is said to be diffraction limited.
The diffraction-limited optical spot size on the CCD or film is proportional to the f-number (about equal to the f-number times the wavelength of light, which is near 0.0005 mm), making the overall detail in a photograph proportional to the size of the film or CCD divided by the f-number. For a 35 mm camera at f/11, this limit corresponds to about 6,500 resolution elements across the width of the film (36 mm / (11 × 0.0005 mm) ≈ 6,500).
The finite spot size caused by diffraction can also be expressed as a criterion for distinguishing distant objects: two distant point sources can only produce separate images on the film or sensor if their angular separation exceeds the wavelength of light divided by the width of the open aperture of the camera lens.
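The estimate above can be reproduced with simple arithmetic. The sketch below uses the same values quoted in the text (an f/11 aperture, a 0.0005 mm wavelength, and a 36 mm frame width); it is only a back-of-the-envelope calculation.

```python
# Values quoted in the text
wavelength_mm = 0.0005   # ~500 nm wavelength of visible light
f_number = 11
film_width_mm = 36       # width of a 35 mm film frame

spot_size_mm = f_number * wavelength_mm            # diffraction-limited spot size
resolution_elements = film_width_mm / spot_size_mm
print(f"spot ≈ {spot_size_mm:.4f} mm, ≈ {resolution_elements:.0f} elements across the frame")
```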
Chemical processes
Gelatin silver
The gelatin silver process is the most commonly used chemical process in black-and-white photography, and is the fundamental chemical process for modern analog color photography. As such, films and printing papers available for analog photography rarely rely on any other chemical process to record an image.
Daguerreotypes
Daguerreotype (; ) was the first publicly available photographic process; it was widely used during the 1840s and 1850s. "Daguerreotype" also refers to an image created through this process.
Collodion process and the ambrotype
The collodion process is an early photographic process. The collodion process, mostly synonymous with the "collodion wet plate process", requires the photographic material to be coated, sensitized, exposed and developed within the span of about fifteen minutes, necessitating a portable darkroom for use in the field. Collodion is normally used in its wet form, but can also be used in dry form, at the cost of greatly increased exposure time. The latter made the dry form unsuitable for the usual portraiture work of most professional photographers of the 19th century. The use of the dry form was therefore mostly confined to landscape photography and other special applications where minutes-long exposure times were tolerable.
Cyanotypes
Cyanotype is a photographic printing process that produces a cyan-blue print. Engineers used the process well into the 20th century as a simple and low-cost process to produce copies of drawings, referred to as blueprints. The process uses two chemicals: ferric ammonium citrate and potassium ferricyanide.
Platinum and palladium processes
Platinum prints, also called platinotypes, are photographic prints made by a monochrome printing process involving platinum.
Gum bichromate
Gum bichromate is a 19th-century photographic printing process based on the light sensitivity of dichromates. It is capable of rendering painterly images from photographic negatives. Gum printing is traditionally a multi-layered printing process, but satisfactory results may be obtained from a single pass. Any color can be used for gum printing, so natural-color photographs are also possible by using this technique in layers.
C-prints and color film
A chromogenic print, also known as a C-print or C-type print, a silver halide print, or a dye coupler print, is a photographic print made from a color negative, transparency or digital image, and developed using a chromogenic process. They are composed of three layers of gelatin, each containing an emulsion of silver halide, which is used as a light-sensitive material, and a different dye coupler of subtractive color which together, when developed, form a full-color image.
Digital sensors
An image sensor or imager is a sensor that detects and conveys information used to make an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off objects) into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.
Practical applications
Law of reciprocity
Exposure ∝ Aperture Area × Exposure Time × Scene Luminance
The law of reciprocity describes how light intensity and duration trade off to make an exposure—it defines the relationship between shutter speed and aperture, for a given total exposure. Changes to any of these elements are often measured in units known as "stops"; a stop is equal to a factor of two.
Halving the amount of light exposing the film can be achieved by:
Closing the aperture by one stop
Decreasing the shutter time (increasing the shutter speed) by one stop
Cutting the scene lighting by half
Likewise, doubling the amount of light exposing the film can be achieved by the opposite of one of these operations.
The luminance of the scene, as measured on a reflected light meter, also affects the exposure proportionately. The amount of light required for proper exposure depends on the film speed; which can be varied in stops or fractions of stops. With either of these changes, the aperture or shutter speed can be adjusted by an equal number of stops to get to a suitable exposure.
Light is most easily controlled through the use of the camera's aperture (measured in f-stops), but it can also be regulated by adjusting the shutter speed. Using faster or slower film is not usually something that can be done quickly, at least with roll film. Large format cameras use individual sheets of film, and each sheet can be a different speed. Also, a large format camera fitted with a Polaroid back allows switching between backs containing Polaroid films of different speeds. Digital cameras can easily adjust the film speed they are simulating by adjusting the exposure index, and many digital cameras can do so automatically in response to exposure measurements.
For example, starting with an exposure of 1/60 at , the depth-of-field could be made shallower by opening up the aperture to , an increase in exposure of 4 stops. To compensate, the shutter speed would need to be increased as well by 4 stops, that is, adjust exposure time down to 1/1000. Closing down the aperture limits the resolution due to the diffraction limit.
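The stop arithmetic in this example can be spelled out in a short sketch. The helper functions below are hypothetical, written only to show the factor-of-two relationship; the text leaves the exact apertures unspecified, so f/11 and f/2.8 (4 stops apart) are assumed here for illustration.

```python
import math

def stops_between_apertures(f_from, f_to):
    # Each stop changes the aperture area by 2x; f-numbers differ by sqrt(2) per stop.
    return 2 * math.log2(f_from / f_to)

def compensate_shutter(shutter_s, stops):
    # Opening the aperture by `stops` requires shortening the exposure by 2**stops.
    return shutter_s / (2 ** stops)

stops = stops_between_apertures(11, 2.8)        # ~4 stops from f/11 to f/2.8
new_shutter = compensate_shutter(1/60, stops)   # roughly 1/1000 s
print(f"{stops:.1f} stops; new shutter time ≈ 1/{round(1/new_shutter)} s")
```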
The reciprocity law specifies the total exposure, but the response of a photographic material to a constant total exposure may not remain constant for very long exposures in very faint light, such as photographing a starry sky, or very short exposures in very bright light, such as photographing the sun. This is known as reciprocity failure of the material (film, paper, or sensor).
Motion blur
Motion blur is caused when either the camera or the subject moves during the exposure. This causes a distinctive streaky appearance to the moving object or the entire picture (in the case of camera shake).
Motion blur can be used artistically to create the feeling of speed or motion, as with running water. An example of this is the technique of "panning", where the camera is moved so it follows the subject, which is usually fast moving, such as a car. Done correctly, this will give an image of a clear subject, but the background will have motion blur, giving the feeling of movement. This is one of the more difficult photographic techniques to master, as the movement must be smooth, and at the correct speed. A subject that gets closer or further away from the camera may further cause focusing difficulties.
Light trails are another photographic effect where motion blur is used. Photographs of the lines of light visible in long exposure photos of roads at night are one example of the effect. This is caused by the cars moving along the road during the exposure. The same principle is used to create star trail photographs.
Generally, motion blur is something that is to be avoided, and this can be done in several different ways. The simplest way is to limit the shutter time so that there is very little movement of the image during the time the shutter is open. At longer focal lengths, the same movement of the camera body will cause more motion of the image, so a shorter shutter time is needed. A commonly cited rule of thumb is that the shutter speed in seconds should be about the reciprocal of the 35 mm equivalent focal length of the lens in millimeters. For example, a 50 mm lens should be used at a minimum speed of 1/50 sec, and a 300 mm lens at 1/300 of a second. This can cause difficulties when used in low light scenarios, since exposure also decreases with shutter time.
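The reciprocal rule of thumb mentioned above can be written as a small helper. This is only an illustrative sketch of the guideline; the function name and the optional crop-factor handling are assumptions, not a standard API.

```python
def min_handheld_shutter(focal_length_mm, crop_factor=1.0):
    """Rule-of-thumb slowest shutter time (in seconds) for handheld shooting."""
    equivalent_focal_length = focal_length_mm * crop_factor
    return 1.0 / equivalent_focal_length

for fl in (50, 300):
    t = min_handheld_shutter(fl)
    print(f"{fl} mm lens: use about 1/{int(1/t)} s or faster")
```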
Motion blur due to subject movement can usually be prevented by using a faster shutter speed. The exact shutter speed will depend on the speed at which the subject is moving. For example, a very fast shutter speed will be needed to "freeze" the rotors of a helicopter, whereas a slower shutter speed will be sufficient to freeze a runner.
A tripod may be used to avoid motion blur due to camera shake. This will stabilize the camera during the exposure. A tripod is recommended for exposure times longer than about 1/15 second. There are additional techniques which, in conjunction with use of a tripod, ensure that the camera remains very still. These may employ a remote actuator, such as a cable release or infrared remote switch, to activate the shutter, so as to avoid the movement normally caused when the shutter release button is pressed directly. The use of a "self timer" (a timed release mechanism that automatically trips the shutter release after an interval of time) can serve the same purpose. Most modern single-lens reflex cameras (SLRs) have a mirror lock-up feature that eliminates the small amount of shake produced by the mirror flipping up.
Film grain resolution
Black-and-white film has a "shiny" side and a "dull" side. The dull side is the emulsion, a gelatin that suspends an array of silver halide crystals. These crystals contain silver grains that determine how sensitive the film is to light exposure, and how fine or grainy the negative, and thus the print, will look. Larger grains mean faster exposure but a grainier appearance; smaller grains are finer looking but take more exposure to activate. The speed and graininess of film are indicated by its ISO rating, generally a multiple of 10 or 100. Lower numbers produce finer grain but slower film, and vice versa.
Contribution to noise (grain)
Quantum efficiency
Light comes in particles and the energy of a light-particle (the photon) is the frequency of the light times the Planck constant. A fundamental property of any photographic method is how it collects the light on its photographic plate or electronic detector.
CCDs and other photodiodes
Photodiodes are back-biased semiconductor diodes, in which an intrinsic layer with very few charge carriers prevents electric currents from flowing. A photon with enough energy (the threshold depends on the material) can raise an electron from the uppermost filled band to the lowest empty band. The electron and the "hole", or the empty space where it was, are then free to move in the electric field and carry current, which can be measured. The fraction of incident photons that produce carrier pairs depends largely on the semiconductor material.
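The band-gap condition described here can be illustrated numerically. In the sketch below, the band-gap value of about 1.1 eV (roughly that of silicon) is an assumption chosen for illustration, not a figure from this article.

```python
PLANCK = 6.626e-34      # Planck constant [J*s]
C = 2.998e8             # speed of light [m/s]
EV = 1.602e-19          # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return PLANCK * C / (wavelength_nm * 1e-9) / EV

band_gap_ev = 1.1       # assumed band gap, roughly silicon
for wl in (450, 550, 700, 1200):
    e = photon_energy_ev(wl)
    outcome = "creates a carrier pair" if e > band_gap_ev else "passes through"
    print(f"{wl} nm photon: {e:.2f} eV -> {outcome}")
```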
Photomultiplier tubes
Photomultiplier tubes are vacuum phototubes that amplify light by accelerating the photoelectrons to knock more electrons free from a series of electrodes. They are among the most sensitive light detectors but are not well suited to photography.
Aliasing
Aliasing can occur in optical and chemical processing, but it is more common and easily understood in digital processing. It occurs whenever an optical or digital image is sampled or re-sampled at a rate which is too low for its resolution. Some digital cameras and scanners have anti-aliasing filters to reduce aliasing by intentionally blurring the image to match the sampling rate. It is common for film developing equipment used to make prints of different sizes to increase the graininess of the smaller size prints by aliasing.
It is usually desirable to suppress both noise, such as grain, and details of the real object that are too small to be represented at the sampling rate.
See also
Astrophotography
Underwater photography
Infrared photography
Ultraviolet photography
Silver bromide
Photographic processing
Image editing
Highlight headroom
References
Photography, science of
Photography | Science of photography | [
"Physics",
"Chemistry"
] | 3,690 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
1,412,703 | https://en.wikipedia.org/wiki/1/N%20expansion | In quantum field theory and statistical mechanics, the 1/N expansion (also known as the "large N" expansion) is a particular perturbative analysis of quantum field theories with an internal symmetry group such as SO(N) or SU(N). It consists in deriving an expansion for the properties of the theory in powers of , which is treated as a small parameter.
This technique is used in QCD (even though is only 3 there) with the gauge group SU(3). Another application in particle physics is to the study of AdS/CFT dualities.
It is also extensively used in condensed matter physics where it can be used to provide a rigorous basis for mean-field theory.
Example
Starting with a simple example — the O(N) φ⁴ theory — the scalar field φ takes on values in the real vector representation of O(N). Using the index notation for the N "flavors" with the Einstein summation convention and because O(N) is orthogonal, no distinction will be made between covariant and contravariant indices. The Lagrangian density is given by

ℒ = ½ ∂_μφ_a ∂^μφ_a − (m²/2) φ_aφ_a − (λ/(8N)) (φ_aφ_a)²

where a runs from 1 to N. Note that N has been absorbed into the coupling strength λ. This is crucial here.
Introducing an auxiliary field F;
In the Feynman diagrams, the graph breaks up into disjoint cycles, each made up of φ edges of the same flavor and the cycles are connected by F edges (which have no propagator line as auxiliary fields do not propagate).
Each 4-point vertex contributes λ/N and hence, 1/N. Each flavor cycle contributes N because there are N such flavors to sum over. Note that not all momentum flow cycles are flavor cycles.
At least perturbatively, the dominant contribution to the 2k-point connected correlation function is of the order (1/N)^(k−1) and the other terms are higher powers of 1/N. Performing a 1/N expansion gets more and more accurate in the large N limit. The vacuum energy density is proportional to N, but can be ignored because gravity is neglected in this setting.
Due to this structure, a different graphical notation to denote the Feynman diagrams can be used. Each flavor cycle can be represented by a vertex. The flavor paths connecting two external vertices are represented by a single vertex. The two external vertices along the same flavor path are naturally paired and can be replaced by a single vertex and an edge (not an F edge) connecting it to the flavor path. The F edges are edges connecting two flavor cycles/paths to each other (or a flavor cycle/path to itself). The interactions along a flavor cycle/path have a definite cyclic order and represent a special kind of graph where the order of the edges incident to a vertex matters, but only up to a cyclic permutation, and since this is a theory of real scalars, also an order reversal (but if we have SU(N) instead of SU(2), order reversals aren't valid). Each F edge is assigned a momentum (the momentum transfer) and there is an internal momentum integral associated with each flavor cycle.
QCD
QCD is an SU(3) gauge theory involving gluons and quarks. The left-handed quarks belong to a triplet representation, the right-handed to an antitriplet representation (after charge-conjugating them) and the gluons to a real adjoint representation. A quark edge is assigned a color and orientation and a gluon edge is assigned a color pair.
In the large N limit, we only consider the dominant term. See AdS/CFT.
References
Quantum field theory
Quantum chromodynamics
String theory
Statistical mechanics | 1/N expansion | [
"Physics",
"Astronomy"
] | 774 | [
"Quantum field theory",
"Astronomical hypotheses",
"String theory",
"Quantum mechanics",
"Statistical mechanics"
] |
1,413,965 | https://en.wikipedia.org/wiki/Energy%20transformation | Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or moving (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because thermal energy represents a particularly disordered form of energy; it is spread out randomly among many available states of a collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low entropy state (like a low-temperature substance, with correspondingly lower energy), without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else.
Thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy between all possible states, and is therefore not entirely convertible to a "useful" form, i.e. one that can do more than just affect temperature. The second law of thermodynamics states that the entropy of a closed system can never decrease. For this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content. Otherwise, only a part of that thermal energy may be converted to other kinds of energy (and thus useful work). This is because the remainder of the heat must be reserved to be transferred to a thermal reservoir at a lower temperature. The increase in entropy for this process is greater than the decrease in entropy associated with the transformation of the rest of the heat into other types of energy.
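As a quantitative illustration of this limit, the maximum fraction of heat convertible to work between two reservoirs is given by the Carnot efficiency, η = 1 − T_cold/T_hot. The sketch below uses illustrative reservoir temperatures; the numbers are assumptions, not figures from this article.

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work between two reservoirs (in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Illustrative: steam at ~570 K rejecting heat to surroundings at ~300 K
eta = carnot_efficiency(570.0, 300.0)
print(f"Carnot limit ≈ {eta:.0%}")   # real plants achieve less than this limit
```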
In order to make energy transformation more efficient, it is desirable to avoid thermal conversion. For example, the efficiency of nuclear reactors, where the kinetic energy of the nuclei is first converted to thermal energy and then to electrical energy, lies at around 35%. By direct conversion of kinetic energy to electric energy, effected by eliminating the intermediate thermal energy transformation, the efficiency of the energy transformation process can be dramatically improved.
History of energy transformation
Energy transformations in the universe over time are usually characterized by various kinds of energy, which have been available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy) by a triggering mechanism.
Release of energy from gravitational potential
A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, in a process during which part of the gravitational potential energy is converted directly into heat. In Jupiter, Saturn, and Neptune, for example, such heat from the continued collapse of the planets' large gas atmospheres continues to drive most of the planets' weather systems. These systems, consisting of atmospheric bands, winds, and powerful storms, are only partly powered by sunlight. However, on Uranus, little of this process occurs.
On Earth, a significant portion of the heat output from the interior of the planet, estimated at a third to half of the total, is caused by the slow collapse of planetary materials to a smaller size, generating heat.
Release of energy from radioactive potential
Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, which releases energy that was originally "stored" in heavy isotopes, such as uranium and thorium. This energy was stored at the time of the nucleosynthesis of these elements. This process uses the gravitational potential energy released from the collapse of Type II supernovae to create these heavy elements before they are incorporated into star systems such as the Solar System and the Earth. The energy locked into uranium is released spontaneously during most types of radioactive decay, and can be suddenly released in nuclear fission bombs. In both cases, a portion of the energy binding the atomic nuclei together is released as heat.
Release of energy from hydrogen fusion potential
In a similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to one theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This resulted in hydrogen representing a store of potential energy which can be released by nuclear fusion. Such a fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into starlight. Considering the solar system, starlight, overwhelmingly from the Sun, may again be stored as gravitational potential energy after it strikes the Earth. This occurs in the case of avalanches, or when water evaporates from oceans and is deposited as precipitation high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity).
Sunlight also drives many weather phenomena on Earth. One example is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as a chemical potential energy via photosynthesis, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. The release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism when these molecules are ingested, and catabolism is triggered by enzyme action.
Through all of these transformation chains, the potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in several different ways for long periods between releases, as more active energy. All of these events involve the conversion of one kind of energy into others, including heat.
Examples
Examples of sets of energy conversions in machines
A coal-fired power plant involves these energy transformations:
Chemical energy in the coal is converted into thermal energy in the exhaust gases of combustion
Thermal energy of the exhaust gases converted into thermal energy of steam through heat exchange
Thermal energy of the steam converted to mechanical energy in the turbine
Mechanical energy of the turbine is converted to electrical energy by the generator, which is the ultimate output
In such a system, the first and fourth steps are highly efficient, but the second and third steps are less efficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency. Oil- and coal-fired stations are less efficient.
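As a rough illustration of why such a chain ends up well below 100% (a sketch with made-up stage efficiencies, not measured figures from the article), the overall efficiency is the product of the efficiencies of the individual stages:

```python
# Minimal sketch: the overall efficiency of a conversion chain is the product
# of its stage efficiencies. The numbers below are illustrative only.
from math import prod

stage_efficiencies = {
    "combustion (chemical -> thermal)": 0.90,
    "heat exchange (flue gas -> steam)": 0.85,
    "turbine (steam -> mechanical)": 0.60,
    "generator (mechanical -> electrical)": 0.98,
}

overall = prod(stage_efficiencies.values())
print(f"overall efficiency: {overall:.0%}")  # roughly 45% for these example values
```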
In a conventional automobile, the following energy transformations occur:
Chemical energy in the fuel is converted into kinetic energy of expanding gas via combustion
Kinetic energy of expanding gas converted to the linear piston movement
Linear piston movement converted to rotary crankshaft movement
Rotary crankshaft movement passed into transmission assembly
Rotary movement passed out of transmission assembly
Rotary movement passed through a differential
Rotary movement passed out of differential to drive wheels
Rotary movement of drive wheels converted to linear motion of the vehicle
Other energy conversions
There are many different machines and transducers that convert one energy form into another. A short list of examples follows:
ATP hydrolysis (chemical energy in adenosine triphosphate → mechanical energy)
Battery (electricity) (chemical energy → electrical energy)
Electric generator (kinetic energy or mechanical work → electrical energy)
Electric heater (electric energy → heat)
Fire (chemical energy → heat and light)
Friction (kinetic energy → heat)
Fuel cell (chemical energy → electrical energy)
Geothermal power (heat→ electrical energy)
Heat engines, such as the internal combustion engine used in cars, or the steam engine (heat → mechanical energy)
Hydroelectric dam (gravitational potential energy → electrical energy)
Electric lamp (electrical energy → heat and light)
Microphone (sound → electrical energy)
Ocean thermal power (heat → electrical energy)
Photosynthesis (electromagnetic radiation → chemical energy)
Piezoelectrics (strain → electrical energy)
Thermoelectric (heat → electrical energy)
Wave power (mechanical energy → electrical energy)
Windmill (wind energy → electrical energy or mechanical energy)
See also
Chaos theory
Conservation law
Conservation of energy
Conservation of mass
Energy accounting
Energy quality
Groundwater energy balance
Laws of thermodynamics
Noether's theorem
Ocean thermal energy conversion
Thermodynamic equilibrium
Thermoeconomics
Uncertainty principle
References
Further reading
Energy Transfer and Transformation | Core knowledge science
Energy (physics) | Energy transformation | [
"Physics",
"Mathematics"
] | 2,099 | [
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities"
] |
1,414,078 | https://en.wikipedia.org/wiki/Machinery%27s%20Handbook | Machinery's Handbook for machine shop and drafting-room; a reference book on machine design and shop practice for the mechanical engineer, draftsman, toolmaker, and machinist (the full title of the 1st edition) is a classic reference work in mechanical engineering and practical workshop mechanics in one volume published by Industrial Press, New York, since 1914. The first edition was created by Erik Oberg (1881–1951) and Franklin D. Jones (1879–1967), who are still mentioned on the title page of the 29th edition (2012). Recent editions of the handbook contain chapters on mathematics, mechanics, materials, measuring, toolmaking, manufacturing, threading, gears, and machine elements, combined with excerpts from ANSI standards. Machinery's Handbook is still regularly revised and updated; the most current revision is Edition 32 (2024). It continues to be the "bible of the metalworking industries" today. The work is available in online and ebook form as well as print.
During the decades from World War I to World War II, McGraw-Hill published a similar handbook, American Machinists' Handbook, which competed directly with Industrial Press's Machinery's Handbook. McGraw-Hill ceased publication of their guide after the 8th edition (1945). Another short-lived spin-off appeared in 1955.
Machinery's Handbook is the inspiration for similar works in other countries, such as Sweden's Karlebo handbok (1st ed. 1936).
Machinery's Encyclopedia
In 1917, Oberg and Jones also published Machinery's Encyclopedia in 7 volumes. The handbook and encyclopedia are named after the monthly magazine Machinery (Industrial Press, 1894–1973), where the two were consulting editors.
See also
Machinist Calculator
Kempe's Engineers Year-Book
References
External links
Machinery's Handbook on the Industrial Press website
1914 non-fiction books
Mechanical engineering
Handbooks and manuals
Metallurgical industry of the United States | Machinery's Handbook | [
"Physics",
"Chemistry",
"Engineering"
] | 400 | [
"Applied and interdisciplinary physics",
"Metallurgical industry of the United States",
"Mechanical engineering",
"Metallurgical industry by country"
] |
1,415,168 | https://en.wikipedia.org/wiki/Nuclear%20density | Nuclear density is the density of the nucleus of an atom. For heavy nuclei, it is close to the nuclear saturation density nucleons/fm3, which minimizes the energy density of an infinite nuclear matter. The nuclear saturation mass density is thus kg/m3, where mu is the atomic mass constant. The descriptive term nuclear density is also applied to situations where similarly high densities occur, such as within neutron stars.
Evaluation
The nuclear density of a typical nucleus can be approximately calculated from the size of the nucleus, which itself can be approximated based on the number of protons and neutrons in it. The radius of a typical nucleus, in terms of number of nucleons, is
$$R = r_0 A^{1/3},$$
where $A$ is the mass number and $r_0$ is 1.25 fm, with typical deviations of up to 0.2 fm from this value. The number density of the nucleus is thus:
$$n = \frac{A}{\tfrac{4}{3}\pi R^{3}}.$$
The density for any typical nucleus, in terms of mass number, is thus constant, not dependent on A or R, theoretically:
$$n = \frac{A}{\tfrac{4}{3}\pi r_0^{3} A} = \frac{3}{4\pi r_0^{3}} \approx 0.122\ \mathrm{fm}^{-3} = 1.22\times 10^{44}\ \mathrm{m}^{-3}.$$
The experimentally determined value for the nuclear saturation density is approximately
$$n_0 \approx 0.15\ \mathrm{fm}^{-3} = 1.5\times 10^{44}\ \mathrm{m}^{-3}.$$
The mass density ρ is the product of the number density n by the particle's mass. The calculated mass density, using a nucleon mass of mn=1.67×10−27 kg, is thus:
$$\rho \approx 2\times 10^{17}\ \mathrm{kg/m^{3}}$$
(using the theoretical estimate)
or
$$\rho \approx 2.5\times 10^{17}\ \mathrm{kg/m^{3}}$$
(using the experimental value).
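These order-of-magnitude estimates can be reproduced with a few lines of code. The sketch below (not part of the original article) uses only the radius constant r0 = 1.25 fm quoted in the text and an approximate nucleon mass:

```python
# Minimal sketch reproducing the order-of-magnitude nuclear density estimates above.
import math

R0 = 1.25e-15         # radius constant r0 in metres (1.25 fm, from the text)
M_NUCLEON = 1.67e-27  # approximate nucleon mass in kg

def nuclear_radius(a: int) -> float:
    """Approximate nuclear radius in metres for mass number A (R = r0 * A**(1/3))."""
    return R0 * a ** (1.0 / 3.0)

def number_density(a: int) -> float:
    """Nucleons per cubic metre; nearly independent of A because R**3 scales with A."""
    volume = 4.0 / 3.0 * math.pi * nuclear_radius(a) ** 3
    return a / volume

def mass_density(a: int) -> float:
    """Approximate mass density in kg/m^3."""
    return number_density(a) * M_NUCLEON

for a in (16, 56, 208):  # oxygen, iron and lead mass numbers
    print(a, f"{number_density(a):.2e} m^-3", f"{mass_density(a):.2e} kg/m^3")
# Each line prints roughly 1.2e44 m^-3 and 2e17 kg/m^3, illustrating the
# near-constancy of nuclear density discussed above.
```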
Applications and extensions
The components of an atom and of a nucleus have varying densities. The proton is not a fundamental particle, being composed of quark–gluon matter. Its size is approximately 10−15 meters and its density 1018 kg/m3. The descriptive term nuclear density is also applied to situations where similarly high densities occur, such as within neutron stars.
Using deep inelastic scattering, it has been estimated that the "size" of an electron, if it is not a point particle, must be less than 10−17 meters. This would correspond to a density of roughly 1021 kg/m3.
There are possibilities for still-higher densities when it comes to quark matter. In the near future, the highest experimentally measurable densities will likely be limited to leptons and quarks.
See also
Electron degeneracy pressure
Nuclear matter
Quark–gluon plasma
References
External links
(derivation of equations and other mathematical descriptions)
Mass density
Atoms | Nuclear density | [
"Physics"
] | 479 | [
"Mechanical quantities",
"Physical quantities",
"Intensive quantities",
"Mass",
"Volume-specific quantities",
"Density",
"Atoms",
"Mass density",
"Matter"
] |
1,416,951 | https://en.wikipedia.org/wiki/Oxoglutarate%20dehydrogenase%20complex | The oxoglutarate dehydrogenase complex (OGDC) or α-ketoglutarate dehydrogenase complex is an enzyme complex, most commonly known for its role in the citric acid cycle.
Units
Much like pyruvate dehydrogenase complex (PDC), this enzyme forms a complex composed of three components: E1 (oxoglutarate dehydrogenase), E2 (dihydrolipoyl succinyltransferase), and E3 (dihydrolipoyl dehydrogenase).
Three classes of these multienzyme complexes have been characterized: one specific for pyruvate, a second specific for 2-oxoglutarate, and a third specific for branched-chain α-keto acids. The oxoglutarate dehydrogenase complex has the same subunit structure and thus uses the same cofactors as the pyruvate dehydrogenase complex and the branched-chain alpha-keto acid dehydrogenase complex (TPP, CoA, lipoate, FAD and NAD). Only the E3 subunit is shared in common between the three enzymes.
Properties
Metabolic pathways
This enzyme participates in three different pathways:
Citric acid cycle (KEGG link: MAP00020)
Lysine degradation (KEGG link: MAP00310)
Tryptophan metabolism (KEGG link: MAP00380)
Kinetic properties
The following values are from Azotobacter vinelandii (1):
KM: 0.14 ± 0.04 mM
Vmax : 9 ± 3 μmol.min−1.mg−1
Citric acid cycle
Reaction
The reaction catalyzed by this enzyme in the citric acid cycle is:
α-ketoglutarate + NAD+ + CoA → Succinyl CoA + CO2 + NADH
This reaction proceeds in three steps:
decarboxylation of α-ketoglutarate,
reduction of NAD+ to NADH,
and subsequent transfer to CoA, which forms the end product, succinyl CoA.
ΔG°' for this reaction is -7.2 kcal mol−1. The energy needed for this oxidation is conserved in the formation of a thioester bond of succinyl CoA.
Regulation
Oxoglutarate dehydrogenase is a key control point in the citric acid cycle. It is inhibited by its products, succinyl CoA and NADH. A high energy charge in the cell will also be inhibitive. ADP and calcium ions are allosteric activators of the enzyme.
By controlling the amount of available reducing equivalents generated by the Krebs cycle, Oxoglutarate dehydrogenase has a downstream regulatory effect on oxidative phosphorylation and ATP production. Reducing equivalents (such as NAD+/NADH) supply the electrons that run through the electron transport chain of oxidative phosphorylation. Increased Oxoglutarate dehydrogenase activation levels serve to increase the concentrations of NADH relative to NAD+. High NADH concentrations stimulate an increase in flux through oxidative phosphorylation.
While an increase in flux through this pathway generates ATP for the cell, the pathway also generates free radical species as a side product, which can cause oxidative stress to the cells if left to accumulate.
Oxoglutarate dehydrogenase is considered to be a redox sensor in the mitochondria, and has an ability to change the functioning level of mitochondria to help prevent oxidative damage. In the presence of a high concentration of free radical species, Oxoglutarate dehydrogenase undergoes fully reversible free radical mediated inhibition. In extreme cases, the enzyme can also undergo complete oxidative inhibition.
When mitochondria are treated with excess hydrogen peroxide, flux through the electron transport chain is reduced, and NADH production is halted. Upon consumption and removal of the free radical source, normal mitochondrial function is restored.
It is believed that the temporary inhibition of mitochondrial function stems from the reversible glutathionylation of the E2 lipoic acid domain of Oxoglutarate dehydrogenase. Glutathionylation, a form of post-translational modification, occurs during times of increased concentrations of free radicals, and can be undone after hydrogen peroxide consumption via glutaredoxin. Glutathionylation "protects" the lipoic acid of the E2 domain from undergoing oxidative damage, which helps spare the Oxoglutarate dehydrogenase complex from oxidative stress.
Oxoglutarate dehydrogenase activity is turned off in the presence of free radicals in order to protect the enzyme from damage. Once free radicals are consumed by the cell, the enzyme's activity is turned back on via glutaredoxin. The reduction in activity of the enzyme under times of oxidative stress also serves to slow the flux through the electron transport chain, which slows production of free radicals.
In addition to free radicals and the mitochondrial redox state, Oxoglutarate dehydrogenase activity is also regulated by ATP/ADP ratios, the ratio of Succinyl-CoA to CoA-SH, and the concentrations of various metal ion cofactors (Mg2+, Ca2+). Many of these allosteric regulators act at the E1 domain of the enzyme complex, but all three domains of the enzyme complex can be allosterically controlled. The activity of the enzyme complex is upregulated with high levels of ADP and Pi, Ca2+, and CoA-SH. The enzyme is inhibited by high ATP levels, high NADH levels, and high Succinyl-CoA concentrations.
Stress response
Oxoglutarate dehydrogenase plays a role in the cellular response to stress. The enzyme complex undergoes a stress-mediated temporary inhibition upon acute exposure to stress. The temporary inhibition period sparks a stronger up-regulation response, allowing an increased level of oxoglutarate dehydrogenase activity to compensate for the acute stress exposure. Acute exposures to stress are usually at lower, tolerable levels for the cell.
Pathophysiologies can arise when the stress becomes cumulative or develops into chronic stress. The up-regulation response that occurs after acute exposure can become exhausted if the inhibition of the enzyme complex becomes too strong. Stress in cells can cause a deregulation in the biosynthesis of the neurotransmitter glutamate. Glutamate toxicity in the brain is caused by a buildup of glutamate under times of stress. If oxoglutarate dehydrogenase activity is dysfunctional (no adaptive stress compensation), the build-up of glutamate cannot be fixed, and brain pathologies can ensue. Dysfunctional oxoglutarate dehydrogenase may also predispose the cell to damage from other toxins that can cause neurodegeneration.
Pathology
2-Oxo-glutarate dehydrogenase is an autoantigen recognized in primary biliary cirrhosis, a chronic autoimmune liver disease. These antibodies appear to recognize oxidized protein that has resulted from inflammatory immune responses. Some of these inflammatory responses are explained by gluten sensitivity. Other mitochondrial autoantigens include pyruvate dehydrogenase and branched-chain alpha-keto acid dehydrogenase complex, which are antigens recognized by anti-mitochondrial antibodies.
Activity of the 2-oxoglutarate dehydrogenase complex is decreased in many neurodegenerative diseases. Alzheimer's disease, Parkinson's disease, Huntington disease, and supranuclear palsy are all associated with an increased oxidative stress level in the brain. Specifically for Alzheimer Disease patients, the activity of oxoglutarate dehydrogenase is significantly diminished. This leads to a possibility that the portion of the TCA cycle responsible for causing the build-up of free radical species in the brain of patients is a malfunctioning oxoglutarate dehydrogenase complex. The mechanism for disease-related inhibition of this enzyme complex remains relatively unknown.
In the metabolic disease combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, mitochondrial fatty acid synthesis (mtFASII) is impaired, which is the precursor reaction of lipoic acid biosynthesis. The result is a reduced lipoylation degree of important mitochondrial enzymes, such as oxoglutarate dehydrogenase complex (OGDC).
References
Further reading
External links
EC 1.2.4
Autoantigens
Citric acid cycle | Oxoglutarate dehydrogenase complex | [
"Chemistry"
] | 1,784 | [
"Carbohydrate metabolism",
"Citric acid cycle"
] |
22,883,373 | https://en.wikipedia.org/wiki/Electron-longitudinal%20acoustic%20phonon%20interaction | The electron-longitudinal acoustic phonon interaction is an interaction that can take place between an electron and a longitudinal acoustic (LA) phonon in a material such as a semiconductor.
Displacement operator of the LA phonon
The equation of motion of an atom of mass M located in the periodic lattice is
,
where is the displacement of the nth atom from its equilibrium position.
Defining the displacement of the nth atom by , where is the coordinate of the nth atom and is the lattice constant,
the displacement is given by
Then using Fourier transform:
and
.
Since the displacement is a Hermitian operator,
From the definition of the creation and annihilation operator
is written as
Then expressed as
Hence, using the continuum model, the displacement operator for the 3-dimensional case is
,
where is the unit vector along the displacement direction.
Interaction Hamiltonian
The electron-longitudinal acoustic phonon interaction Hamiltonian is defined as
,
where is the deformation potential for electron scattering by acoustic phonons.
Inserting the displacement vector to the Hamiltonian results to
Scattering probability
The scattering probability for an electron to scatter from state k to state k′ is
Replace the integral over the whole space with a summation of unit cell integrations
where , is the volume of a unit cell.
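For a sense of the magnitudes involved, the sketch below (not part of the original article) evaluates a commonly quoted textbook closed form for deformation-potential acoustic-phonon scattering of a parabolic-band electron in three dimensions, under the elastic, equipartition approximation; the material parameters are illustrative, roughly silicon-like values:

```python
# Sketch of the standard deformation-potential acoustic scattering rate for a
# parabolic band in 3D (elastic, equipartition approximation):
#   1/tau(E) = sqrt(2) * m*^(3/2) * D^2 * kB*T * sqrt(E) / (pi * hbar^4 * rho * vs^2)
# The material parameters are illustrative, roughly silicon-like values.
import math

HBAR = 1.0546e-34   # J s
KB = 1.3807e-23     # J/K
ME = 9.109e-31      # kg
EV = 1.602e-19      # J

def acoustic_scattering_rate(energy_ev: float, m_eff: float = 0.26 * ME,
                             d_ac_ev: float = 9.0, rho: float = 2329.0,
                             v_s: float = 8433.0, temperature: float = 300.0) -> float:
    """Scattering rate in 1/s for a carrier of kinetic energy `energy_ev` (in eV)."""
    d_ac, energy = d_ac_ev * EV, energy_ev * EV
    prefactor = (math.sqrt(2.0) * m_eff**1.5 * d_ac**2 * KB * temperature
                 / (math.pi * HBAR**4 * rho * v_s**2))
    return prefactor * math.sqrt(energy)

# Thermal carrier (~26 meV) at room temperature: a sub-picosecond scattering time.
print(f"{1.0 / acoustic_scattering_rate(0.026):.1e} s")
```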
See also
Phonon scattering
Umklapp scattering
Notes
References
Atomic physics | Electron-longitudinal acoustic phonon interaction | [
"Physics",
"Chemistry"
] | 261 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
22,884,190 | https://en.wikipedia.org/wiki/Docosatetraenoylethanolamide | Docosatetraenoylethanolamide (DEA) (Adrenoyl-ethanolamide) (Adrenoyl-EA) is an endogenous ethanolamide that has been shown to act on the cannabinoid (CB1) receptor. DEA is similar in structure to anandamide (AEA, a recognized endogenous ligand for the CB1 receptor), containing docosatetraenoic acid in place of arachidonic acid. While DEA has been shown to bind to the CB1 receptor with similar potency and efficacy as AEA, its role as a cannabinergic neurotransmitter is not well understood.
Docosatetraenoylethanolamide (DEA) has been found in Tropaeolum tuberosum (Mashua) and Leonotis leonurus (Wild Dagga / Lion's Tail).
References
Fatty acid amides
Endocannabinoids
Neurotransmitters | Docosatetraenoylethanolamide | [
"Chemistry",
"Biology"
] | 203 | [
"Biotechnology stubs",
"Neurotransmitters",
"Biochemistry stubs",
"Biochemistry",
"Neurochemistry"
] |
22,884,956 | https://en.wikipedia.org/wiki/Baikal%20Deep%20Underwater%20Neutrino%20Telescope | The Baikal Deep Underwater Neutrino Telescope (BDUNT) () is a neutrino detector conducting research below the surface of Lake Baikal (Russia) since 2003. The first detector was started in 1990 and completed in 1998. It was upgraded in 2005 and again starting in 2015 to build the Baikal Gigaton Volume Detector (Baikal-GVD.) BDUNT has studied neutrinos coming through the Earth with results on atmospheric muon flux. BDUNT picks up many atmospheric neutrinos created by cosmic rays interacting with the atmosphere – as opposed to cosmic neutrinos which give clues to cosmic events and are therefore of greater interest to physicists.
Detector history
The start of the Baikal neutrino experiment dates back to 1 October 1980, when a laboratory of high-energy neutrino astrophysics was established at the Institute for Nuclear Research of the former Academy of Sciences of the USSR in Moscow. This laboratory would become the core of the Baikal collaboration.
The original NT-200 design was deployed in stages 3.6 km from shore at a depth of 1.1 km.
The first part, NT-36 with 36 optical modules (OMs) at 3 short strings, was put into operation and took data up to March 1995. NT-72 ran 1995–1996 then was replaced by the four-string NT-96 array. Over its 700 days of operation, 320,000,000 muon events were collected with NT-36, NT-72, and NT-96. Beginning April 1997, NT-144, a six-string array took data. The full NT-200 array with 192 modules was completed April 1998. In 2004–2005 it was updated to NT-200+ with three additional strings around NT-200 at distance of 100 meters, each with 12 modules.
Baikal-GVD
Since 2016, a 1 cubic km telescope, NT-1000 or Baikal-GVD (or just GVD, Gigaton Volume Detector), is being built. The first stage of 3 strings was switched on in April 2013. During 2015 the GVD demonstration cluster (also known as Dubna) with 192 optical modules was successfully operated. This concluded the preparatory phase of the project. In 2016 the construction of the first phase of the telescope began with the demonstration cluster being upgraded to the baseline configuration for a single cluster, with 288 OMs on eight vertical strings. The first phase of the telescope, when completed, is expected to contain 8 such clusters. This first phase was expected to be completed around 2020.
As of 2018, the Baikal telescope continued to operate and to be developed.
On 13 March 2021, the first phase of the telescope, GVD-I, was completed. It consisted of 8 clusters of 288 OMs each, and had a volume of about half a cubic kilometre. In the years to come the telescope will be expanded to measure one cubic kilometre (the full planned size). The project's cost (for the GVD-I phase) was about 2.5 billion Russian rubles (about 34 million $USD).
Results
BDUNT has used its neutrino detector to study astrophysical phenomena. Searches for relic dark matter in the Sun and high-energy muons and neutrinos have been published.
See also
Baksan Neutrino Observatory
References
External links
Baikal GVD Home page
Neutrino astronomy
Science and technology in Siberia
Lake Baikal | Baikal Deep Underwater Neutrino Telescope | [
"Astronomy"
] | 704 | [
"Neutrino astronomy",
"Astronomical sub-disciplines"
] |
37,060,172 | https://en.wikipedia.org/wiki/SeaDataNet | SeaDataNet is an international project of oceanography. Its main goal is to enable the scientific community to access historical datasets owned by national data centers.
Description
This project aims to provide a web service permitting retrieval of validated datasets (temperature, oxygen, salinity, nutrients, etc.) from 45 different National Data Centers of 35 countries having coasts along European seas. SeaDataNet is therefore a standardized system for managing the large and diverse data sets collected by the oceanographic fleets and the automatic observation systems. Additional objectives include creating products from aggregated data, such as climatological descriptions. This European-funded project started in 2004 and is currently in its second phase, with funding for 2012 to 2016. Most of the datasets are freely accessible, but some are restricted to institutes.
In terms of harmonization, SeaDataNet has chosen standards, vocabularies, and tools that are used in the different NODCs (National Oceanographic Data Centers). For example, they use Ocean Data View to validate or visualize datasets, and they also use the DIVA software to perform objective analysis. Datasets cover the years 1800 up to 2012. As of 2012, 400 data originators were registered in the SeaDataNet project.
Usage
Users of SeaDataNet who want to retrieve datasets coming from multiple Data Centers log in to the Common Data Index web service to define their request. They can provide many details, such as the type of platform wanted, the parameter wanted, the rate of sampling, the position, the originator country, etc. When users send their request, the request is analysed and split into as many requests as there are data centers concerned. At the end, the user receives an email giving an FTP address from which to retrieve all the ordered data in the file format wanted (ASCII, NetCDF or Ocean Data View format).
References
External links
Information technology organizations based in Europe
Physical oceanography
Science and technology in Merseyside | SeaDataNet | [
"Physics"
] | 409 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
37,064,339 | https://en.wikipedia.org/wiki/Cyclic%20language | In computer science, more particularly in formal language theory, a cyclic language is a set of strings that is closed with respect to repetition, root, and cyclic shift.
Definition
If A is a set of symbols, and A* is the set of all strings built from symbols in A, then a string set L ⊆ A* is called a formal language over the alphabet A.
The language L is called cyclic if
∀w∈A*. ∀n>0. w ∈ L ⇔ w^n ∈ L, and
∀v,w∈A*. vw ∈ L ⇔ wv ∈ L,
where w^n denotes the n-fold repetition of the string w, and vw denotes the concatenation of the strings v and w.
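As a concrete illustration (not part of the original article), the sketch below spot-checks both closure conditions on all short strings, given a membership predicate; the predicate used here, "equally many a's and b's", is a stand-in example and does define a cyclic language:

```python
# Sketch: spot-checking the cyclic-language conditions on short strings.
# `in_l` is a stand-in membership predicate, not a language from the article.
from itertools import product

def cyclic_shifts(w: str) -> set:
    """All strings obtained from w by a cyclic shift (vw -> wv)."""
    return {w[i:] + w[:i] for i in range(max(len(w), 1))}

def looks_cyclic(in_l, alphabet: str = "ab", max_len: int = 4, max_power: int = 3) -> bool:
    """Check repetition/root and cyclic-shift closure on all strings up to max_len."""
    words = ["".join(p) for n in range(max_len + 1) for p in product(alphabet, repeat=n)]
    for w in words:
        for n in range(1, max_power + 1):
            if in_l(w) != in_l(w * n):                     # w in L  <=>  w^n in L
                return False
        if len({in_l(s) for s in cyclic_shifts(w)}) > 1:   # vw in L <=> wv in L
            return False
    return True

# Example predicate: strings containing equally many a's and b's.
print(looks_cyclic(lambda w: w.count("a") == w.count("b")))   # True on this sample
```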
Examples
For example, using the alphabet A = {a, b }, the language
is cyclic, but not regular.
However, L is context-free, since M = { a^n1 b^n1 a^n2 b^n2 ... a^nk b^nk : ni ≥ 0 } is, and context-free languages are closed under circular shift; L is obtained as a circular shift of M.
References
Formal languages | Cyclic language | [
"Mathematics",
"Technology"
] | 228 | [
"Formal languages",
"Mathematical logic",
"Computer science stubs",
"Computer science",
"Computing stubs"
] |
37,070,120 | https://en.wikipedia.org/wiki/Triakis%20truncated%20tetrahedral%20honeycomb | The triakis truncated tetrahedral honeycomb is a space-filling tessellation (or honeycomb) in Euclidean 3-space made up of triakis truncated tetrahedra. It was discovered in 1914.
Voronoi tessellation
It is the Voronoi tessellation of the carbon atoms in diamond, which lie in the diamond cubic crystal structure.
Being composed entirely of triakis truncated tetrahedra, it is cell-transitive.
Relation to quarter cubic honeycomb
It can be seen as the uniform quarter cubic honeycomb where its tetrahedral cells are subdivided by the center point into 4 shorter tetrahedra, and each adjoined to the adjacent truncated tetrahedral cells.
See also
Disphenoid tetrahedral honeycomb
References
Honeycombs (geometry)
Truncated tilings | Triakis truncated tetrahedral honeycomb | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 171 | [
"Honeycombs (geometry)",
"Truncated tilings",
"Tessellation",
"Crystallography",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
920,218 | https://en.wikipedia.org/wiki/Pharmaceutics | Pharmaceutics is the discipline of pharmacy that deals with the process of turning a new chemical entity (NCE) or an existing drug into a medication to be used safely and effectively by patients. The patients could be either humans or animals. Pharmaceutics helps relate the formulation of drugs to their delivery and disposition in the body. Pharmaceutics deals with the formulation of a pure drug substance into a dosage form.
Description
Pharmaceutics is also called the science of dosage form design. There are many chemicals with pharmacological properties, but they need special measures to help them achieve therapeutically relevant amounts at their sites of action.
Branches
Branches of pharmaceutics include:
Pharmaceutical formulation
Pharmaceutical manufacturing
Dispensing pharmacy
Pharmaceutical technology
Physical pharmacy
Pharmaceutical jurisprudence
History
Pharmaceutics deals with the formulation of a pure drug substance into a dosage form. Pure drug substances are usually white crystalline or amorphous powders. Before the advent of medicine as a science, it was common for pharmacists to dispense drugs as is. Most drugs today are administered as parts of a dosage form. The clinical performance of drugs depends on their form of presentation to the patient.
Education
Pharmaceutics is a specialization in the field of pharmacy. Typically, Pharm-D graduates can choose to continue studies in this field towards a PhD degree.
See also
List of pharmaceutical companies
Pharmacognosy
Pharmaceutical industry
Nicholas Culpeper – 17th-century English physician who translated and used "pharmacological texts"
References
External links
Excipient selection for injectable / parenteral formulations
Life sciences industry
Pharmacy | Pharmaceutics | [
"Chemistry",
"Biology"
] | 342 | [
"Pharmacology",
"Life sciences industry",
"Pharmacy"
] |
921,168 | https://en.wikipedia.org/wiki/Scale%20factor%20%28cosmology%29 | The expansion of the universe is parametrized by a dimensionless scale factor . Also known as the cosmic scale factor or sometimes the Robertson–Walker scale factor, this is a key parameter of the Friedmann equations.
In the early stages of the Big Bang, most of the energy was in the form of radiation, and that radiation was the dominant influence on the expansion of the universe. Later, with cooling from the expansion the roles of matter and radiation changed and the universe entered a matter-dominated era. Recent results suggest that we have already entered an era dominated by dark energy, but examination of the roles of matter and radiation are most important for understanding the early universe.
Using the dimensionless scale factor to characterize the expansion of the universe, the effective energy densities of radiation and matter scale differently. This leads to a radiation-dominated era in the very early universe but a transition to a matter-dominated era at a later time and, since about 4 billion years ago, a subsequent dark-energy-dominated era.
Detail
Some insight into the expansion can be obtained from a Newtonian expansion model which leads to a simplified version of the Friedmann equation. It relates the proper distance (which can change over time, unlike the comoving distance which is constant and set to today's distance) between a pair of objects, e.g. two galaxy clusters, moving with the Hubble flow in an expanding or contracting FLRW universe at any arbitrary time $t$ to their distance at some reference time $t_0$. The formula for this is:
$$d(t) = a(t)\,d_0,$$
where $d(t)$ is the proper distance at epoch $t$, $d_0$ is the distance at the reference time $t_0$, usually also referred to as comoving distance, and $a(t)$ is the scale factor. Thus, by definition, $d_0 = d(t_0)$ and $a(t_0) = 1$.
The scale factor is dimensionless, with $t$ counted from the birth of the universe and $t_0$ set to the present age of the universe, about 13.8 billion years, giving the current value of the scale factor as $a(t_0) = a_0 = 1$.
The evolution of the scale factor is a dynamical question, determined by the equations of general relativity, which are presented in the case of a locally isotropic, locally homogeneous universe by the Friedmann equations.
The Hubble parameter is defined as:
$$H \equiv \frac{\dot a(t)}{a(t)},$$
where the dot represents a time derivative. The Hubble parameter varies with time, not with space, with the Hubble constant $H_0$ being its current value.
From the previous equation $d(t) = a(t)\,d_0$ one can see that $\dot d(t) = \dot a(t)\,d_0$, and also that $d_0 = \frac{d(t)}{a(t)}$, so combining these gives $\dot d(t) = \frac{\dot a(t)}{a(t)}\,d(t)$, and substituting the above definition of the Hubble parameter gives
$$\dot d(t) = H\,d(t),$$
which is just Hubble's law.
Current evidence suggests that the expansion of the universe is accelerating, which means that the second derivative of the scale factor is positive, or equivalently that the first derivative is increasing over time. This also implies that any given galaxy recedes from us with increasing speed over time, i.e. for that galaxy is increasing with time. In contrast, the Hubble parameter seems to be decreasing with time, meaning that if we were to look at some fixed distance d and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones.
According to the Friedmann–Lemaître–Robertson–Walker metric which is used to model the expanding universe, if at present time we receive light from a distant object with a redshift of z, then the scale factor at the time the object originally emitted that light is
$$a(t) = \frac{1}{1+z}.$$
Chronology
Radiation-dominated era
After Inflation, and until about 47,000 years after the Big Bang, the dynamics of the early universe were set by radiation (referring generally to the constituents of the universe which moved relativistically, principally photons and neutrinos).
For a radiation-dominated universe the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is obtained solving the Friedmann equations:
$$a(t) \propto t^{1/2}.$$
Matter-dominated era
Between about 47,000 years and 9.8 billion years after the Big Bang, the energy density of matter exceeded both the energy density of radiation and the vacuum energy density.
When the early universe was about 47,000 years old (redshift 3600), mass–energy density surpassed the radiation energy, although the universe remained optically thick to radiation until the universe was about 378,000 years old (redshift 1100). This second moment in time (close to the time of recombination), at which the photons which compose the cosmic microwave background radiation were last scattered, is often mistaken as marking the end of the radiation era.
For a matter-dominated universe the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is easily obtained solving the Friedmann equations:
$$a(t) \propto t^{2/3}.$$
Dark-energy-dominated era
In physical cosmology, the dark-energy-dominated era is proposed as the last of the three phases of the known universe, the other two being the radiation-dominated era and the matter-dominated era. The dark-energy-dominated era began after the matter-dominated era, i.e. when the Universe was about 9.8 billion years old. In the era of cosmic inflation, the Hubble parameter is also thought to be constant, so the expansion law of the dark-energy-dominated era also holds for the inflationary prequel of the big bang.
The cosmological constant is given the symbol Λ, and, considered as a source term in the Einstein field equation, can be viewed as equivalent to a "mass" of empty space, or dark energy. Since this increases with the volume of the universe, the expansion pressure is effectively constant, independent of the scale of the universe, while the other terms decrease with time. Thus, as the density of other forms of matter – dust and radiation – drops to very low concentrations, the cosmological constant (or "dark energy") term will eventually dominate the energy density of the Universe. Recent measurements of the change in Hubble constant with time, based on observations of distant supernovae, show this acceleration in expansion rate, indicating the presence of such dark energy.
For a dark-energy-dominated universe, the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is easily obtained solving the Friedmann equations:
$$a(t) \propto e^{Ht}.$$
Here, the coefficient in the exponential, the Hubble constant, is
$$H = \sqrt{\frac{\Lambda c^2}{3}}.$$
This exponential dependence on time makes the spacetime geometry identical to the de Sitter universe, and only holds for a positive sign of the cosmological constant, which is the case according to the currently accepted value of the cosmological constant, Λ, that is approximately $1.1\times 10^{-52}\ \mathrm{m}^{-2}$.
The current density of the observable universe is of the order of $10^{-26}\ \mathrm{kg/m^{3}}$ and the age of the universe is of the order of 13.8 billion years, or $4.4\times 10^{17}$ seconds. The Hubble constant, $H_0$, is about $2.3\times 10^{-18}\ \mathrm{s^{-1}}$, or roughly 71 km s−1 Mpc−1 (the Hubble time is 13.79 billion years).
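A rough numerical sketch (not part of the original article) ties these regimes together by integrating dt = da / (a H(a)) for a flat universe containing radiation, matter and a cosmological constant; the density parameters and H0 below are illustrative present-day values:

```python
# Rough sketch: the age of a flat FLRW universe at scale factor a, from
#   t(a) = integral of da' / (a' * H(a')),   H(a)^2 = H0^2 (Or/a^4 + Om/a^3 + OL).
# Density parameters and H0 are illustrative present-day values.
H0 = 70.0 / 3.086e19                       # ~70 km/s/Mpc expressed in 1/s
OMEGA_R, OMEGA_M, OMEGA_L = 9e-5, 0.31, 0.69
SECONDS_PER_YEAR = 3.156e7

def age_at(a: float, steps: int = 100_000) -> float:
    """Cosmic time in seconds at scale factor a (midpoint rule on dt = da / (a H))."""
    total, da = 0.0, a / steps
    for i in range(steps):
        x = (i + 0.5) * da                 # midpoint of the i-th sub-interval
        hubble = H0 * (OMEGA_R / x**4 + OMEGA_M / x**3 + OMEGA_L) ** 0.5
        total += da / (x * hubble)
    return total

for z in (3600, 1100, 1, 0):               # equality, recombination, z = 1, today
    a = 1.0 / (1.0 + z)                    # scale factor from redshift, a = 1/(1+z)
    print(f"z = {z:5d}   a = {a:.2e}   t = {age_at(a) / SECONDS_PER_YEAR:.2e} yr")
# The z = 0 line gives roughly 1.3e10 years, consistent with the age quoted above;
# the z = 3600 and z = 1100 lines land near the epochs mentioned in the text.
```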
See also
Cosmological principle
Lambda-CDM model
Redshift
Notes
References
External links
Relation of the scale factor with the cosmological constant and the Hubble constant
Physical cosmology | Scale factor (cosmology) | [
"Physics",
"Astronomy"
] | 1,412 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
921,245 | https://en.wikipedia.org/wiki/List%20of%20interstellar%20and%20circumstellar%20molecules | This is a list of molecules that have been detected in the interstellar medium and circumstellar envelopes, grouped by the number of component atoms. The chemical formula is listed for each detected compound, along with any ionized form that has also been observed.
Background
The molecules listed below were detected through astronomical spectroscopy. Their spectral features arise because molecules either absorb or emit a photon of light when they transition between two molecular energy levels. The energy (and thus the wavelength) of the photon matches the energy difference between the levels involved. Molecular electronic transitions occur when one of the molecule's electrons moves between molecular orbitals, producing a spectral line in the ultraviolet, optical or near-infrared parts of the electromagnetic spectrum. Alternatively, a vibrational transition transfers quanta of energy to (or from) vibrations of molecular bonds, producing signatures in the mid- or far-infrared. Gas-phase molecules also have quantised rotational levels, leading to transitions at microwave or radio wavelengths.
Sometimes a transition can involve more than one of these types of energy level e.g. ro-vibrational spectroscopy changes both the rotational and vibrational energy level. Occasionally all three occur together, as in the Phillips band of C2 (diatomic carbon), in which an electronic transition produces a line in the near-infrared, which is then split into several vibronic bands by a simultaneous change in vibrational level, which in turn are split again into rotational branches.
The spectrum of a particular molecule is governed by the selection rules of quantum chemistry and by its molecular symmetry. Some molecules have simple spectra which are easy to identify, whilst others (even some small molecules) have extremely complex spectra with flux spread among many different lines, making them far harder to detect. Interactions between the atomic nuclei and the electrons sometimes cause further hyperfine structure of the spectral lines. If the molecule exists in multiple isotopologues (versions containing different atomic isotopes), the spectrum is further complicated by isotope shifts.
Detection of a new interstellar or circumstellar molecule requires identifying a suitable astronomical object where it is likely to be present, then observing it with a telescope equipped with a spectrograph working at the required wavelength, spectral resolution and sensitivity. The first molecule detected in the interstellar medium was the methylidyne radical (CH•) in 1937, through its strong electronic transition at 4300 angstroms (in the optical). Advances in astronomical instrumentation have led to increasing numbers of new detections. From the 1950s onwards, radio astronomy began to dominate new detections, with sub-mm astronomy also becoming important from the 1990s.
The inventory of detected molecules is highly biased towards certain types which are easier to detect. For example, radio astronomy is most sensitive to small linear molecules with a high molecular dipole. The most common molecule in the Universe, H2 (molecular hydrogen), is completely invisible to radio telescopes because it has no dipole; its electronic transitions are too energetic for optical telescopes, so detection of H2 required ultraviolet observations with a sounding rocket. Vibrational lines are often not specific to an individual molecule, allowing only the general class to be identified. For example, the vibrational lines of polycyclic aromatic hydrocarbons (PAHs) were identified in 1984, showing the class of molecules is very common in space, but it took until 2021 to identify any specific PAHs through their rotational lines.
One of the richest sources for detecting interstellar molecules is Sagittarius B2 (Sgr B2), a giant molecular cloud near the centre of the Milky Way. About half of the molecules listed below were first found in Sgr B2, and many of the others have been subsequently detected there. A rich source of circumstellar molecules is CW Leonis (also known as IRC +10216), a nearby carbon star, where about 50 molecules have been identified. There is no clear boundary between interstellar and circumstellar media, so both are included in the tables below.
The discipline of astrochemistry includes understanding how these molecules form and explaining their abundances. The extremely low density of the interstellar medium is not conducive to the formation of molecules, making conventional gas-phase reactions between neutral species (atoms or molecules) inefficient. Many regions also have very low temperatures (typically 10 kelvin inside a molecular cloud), further reducing the reaction rates, or high ultraviolet radiation fields, which destroy molecules through photochemistry. Explaining the observed abundances of interstellar molecules requires calculating the balance between formation and destruction rates using gas-phase ion chemistry (often driven by cosmic rays), surface chemistry on cosmic dust, radiative transfer including interstellar extinction, and sophisticated reaction networks. The use of molecular lines to determine the physical properties of astronomical objects is known as molecular astrophysics.
Molecules
The following tables list molecules that have been detected in the interstellar medium or circumstellar matter, grouped by the number of component atoms. Neutral molecules and their molecular ions are listed in separate columns; if there is no entry in the molecule column, only the ionized form has been detected. Designations (names of molecules) are those used in the scientific literature describing the detection; if none was given that field is left empty. Mass is listed in atomic mass units. Deuterated molecules, which contain at least one deuterium (2H) atom, have slightly different masses and are listed in a separate table. The total number of unique species, including distinct ionization states, is indicated in each section header.
Most of the molecules detected so far are organic. The only detected inorganic molecule with five or more atoms is SiH4. Molecules larger than that all have at least one carbon atom, with no N−N or O−O bonds.
Diatomic (43)
Triatomic (44)
Four atoms (30)
Five atoms (20)
Six atoms (16)
Seven atoms (13)
Eight atoms (14)
Nine atoms (10)
Ten or more atoms (23)
Deuterated molecules (22)
These molecules all contain one or more deuterium atoms, a heavier isotope of hydrogen.
Unconfirmed (15)
Evidence for the existence of the following molecules has been reported in the scientific literature, but the detections either are described as tentative by the authors, or have been challenged by other researchers. They await independent confirmation.
See also
Astrochemistry
Cosmic dust
Diffuse interstellar band
Lists of molecules
Molecular astrophysics
Molecular spectroscopy
Molecules in stars
Polycyclic aromatic hydrocarbon (PAH)
Tholin
Notes
References
External links
Astrochemistry
Molecules in interstellar space
Interstellar media
Molecules | List of interstellar and circumstellar molecules | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,355 | [
"Interstellar media",
"Molecular physics",
"Outer space",
"Astronomical sub-disciplines",
"Molecules",
"Astrochemistry",
"Physical objects",
"nan",
"Atoms",
"Matter"
] |
921,525 | https://en.wikipedia.org/wiki/Maxwell%20relations | Maxwell's relations are a set of equations in thermodynamics which are derivable from the symmetry of second derivatives and from the definitions of the thermodynamic potentials. These relations are named for the nineteenth-century physicist James Clerk Maxwell.
Equations
The structure of Maxwell relations is a statement of equality among the second derivatives for continuous functions. It follows directly from the fact that the order of differentiation of an analytic function of two variables is irrelevant (Schwarz theorem). In the case of Maxwell relations the function considered is a thermodynamic potential $\Phi$, and $x_i$ and $x_j$ are two different natural variables for that potential; we have
$$\frac{\partial}{\partial x_j}\left(\frac{\partial \Phi}{\partial x_i}\right) = \frac{\partial}{\partial x_i}\left(\frac{\partial \Phi}{\partial x_j}\right),$$
where the partial derivatives are taken with all other natural variables held constant. For every thermodynamic potential there are $\tfrac{1}{2}n(n-1)$ possible Maxwell relations, where $n$ is the number of natural variables for that potential.
The four most common Maxwell relations
The four most common Maxwell relations are the equalities of the second derivatives of each of the four thermodynamic potentials, with respect to their thermal natural variable (temperature $T$, or entropy $S$) and their mechanical natural variable (pressure $P$, or volume $V$):
$$\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V \quad\text{(from } U(S,V)\text{)}$$
$$\left(\frac{\partial T}{\partial P}\right)_S = +\left(\frac{\partial V}{\partial S}\right)_P \quad\text{(from } H(S,P)\text{)}$$
$$\left(\frac{\partial S}{\partial V}\right)_T = +\left(\frac{\partial P}{\partial T}\right)_V \quad\text{(from } F(T,V)\text{)}$$
$$\left(\frac{\partial S}{\partial P}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_P \quad\text{(from } G(T,P)\text{)}$$
where the potentials as functions of their natural thermal and mechanical variables are the internal energy $U(S,V)$, enthalpy $H(S,P)$, Helmholtz free energy $F(T,V)$, and Gibbs free energy $G(T,P)$. The thermodynamic square can be used as a mnemonic to recall and derive these relations. The usefulness of these relations lies in their quantifying entropy changes, which are not directly measurable, in terms of measurable quantities like temperature, volume, and pressure.
Each equation can be re-expressed using the relationship
$$\left(\frac{\partial y}{\partial x}\right)_z = 1\Big/\left(\frac{\partial x}{\partial y}\right)_z,$$
which are sometimes also known as Maxwell relations.
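A quick symbolic spot-check of one of these relations can be done with a computer algebra system. The sketch below (not from the article) uses an illustrative monatomic-ideal-gas Helmholtz free energy and verifies that (∂S/∂V)_T = (∂P/∂T)_V follows from the symmetry of second derivatives:

```python
# Small symbolic check that the Maxwell relation (dS/dV)_T = (dP/dT)_V follows
# from a Helmholtz free energy F(T, V). The F used here is an illustrative
# monatomic-ideal-gas form with constants dropped, not data from the article.
import sympy as sp

T, V, R = sp.symbols("T V R", positive=True)
F = -R * T * (sp.log(V) + sp.Rational(3, 2) * sp.log(T))   # assumed example F(T, V)

S = -sp.diff(F, T)           # entropy  S = -(dF/dT)_V
P = -sp.diff(F, V)           # pressure P = -(dF/dV)_T

lhs = sp.diff(S, V)          # (dS/dV)_T
rhs = sp.diff(P, T)          # (dP/dT)_V
print(sp.simplify(lhs - rhs) == 0)   # True: both sides equal R/V for this F
```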
Derivations
Short derivation
This section is based on chapter 5 of.
Suppose we are given four real variables , restricted to move on a 2-dimensional surface in . Then, if we know two of them, we can determine the other two uniquely (generically).
In particular, we may take any two variables as the independent variables, and let the other two be the dependent variables, then we can take all these partial derivatives.
Proposition:
Proof: This is just the chain rule.
Proposition:
Proof. We can ignore . Then locally the surface is just . Then , etc. Now multiply them.
Proof of Maxwell's relations:
There are four real variables , restricted on the 2-dimensional surface of possible thermodynamic states. This allows us to use the previous two propositions.
It suffices to prove the first of the four relations, as the other three can be obtained by transforming the first relation using the previous two propositions.
Pick as the independent variables, and as the dependent variable. We have
.
Now, since the surface is , that is,which yields the result.
Another derivation
Based on.
Since $U$ is a state function, around any cycle we have
$$\oint dU = 0, \qquad\text{so}\qquad \oint T\,dS = \oint P\,dV.$$
Taking the cycle infinitesimal, we find that $\frac{\partial(T,S)}{\partial(P,V)} = 1$. That is, the map from the $(P,V)$ plane to the $(T,S)$ plane is area-preserving. By the chain rule for Jacobians, for any coordinate transform $(x,y)$ we have
$$\frac{\partial(T,S)}{\partial(x,y)} = \frac{\partial(P,V)}{\partial(x,y)}.$$
Now setting $(x,y)$ to various pairs of thermodynamic variables gives us the four Maxwell relations. For example, setting $(x,y) = (S,V)$ gives us
$$\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V.$$
Extended derivations
Maxwell relations are based on simple partial differentiation rules, in particular the total differential of a function and the symmetry of evaluating second order partial derivatives.
Derivation based on Jacobians
If we view the first law of thermodynamics,
$$dU = T\,dS - P\,dV,$$
as a statement about differential forms, and take the exterior derivative of this equation, we get
$$0 = dT\wedge dS - dP\wedge dV$$
since $d(dU) = 0$. This leads to the fundamental identity
$$dP\wedge dV = dT\wedge dS.$$
The physical meaning of this identity can be seen by noting that the two sides are the equivalent ways of writing the work done in an infinitesimal Carnot cycle. An equivalent way of writing the identity is
$$\frac{\partial(T,S)}{\partial(P,V)} = 1.$$
The Maxwell relations now follow directly. For example,
$$\left(\frac{\partial T}{\partial V}\right)_S = \frac{\partial(T,S)}{\partial(V,S)} = \frac{\partial(P,V)}{\partial(V,S)} = -\left(\frac{\partial P}{\partial S}\right)_V.$$
The critical step is the penultimate one. The other Maxwell relations follow in similar fashion. For example,
$$\left(\frac{\partial S}{\partial V}\right)_T = \frac{\partial(S,T)}{\partial(V,T)} = \frac{\partial(V,P)}{\partial(V,T)} = \left(\frac{\partial P}{\partial T}\right)_V.$$
General Maxwell relationships
The above are not the only Maxwell relationships. When other work terms involving other natural variables besides the volume work are considered or when the number of particles is included as a natural variable, other Maxwell relations become apparent. For example, if we have a single-component gas, then the number of particles N is also a natural variable of the above four thermodynamic potentials. The Maxwell relationship for the enthalpy with respect to pressure and particle number would then be:
$$\left(\frac{\partial \mu}{\partial P}\right)_{S,N} = \left(\frac{\partial V}{\partial N}\right)_{S,P},$$
where $\mu$ is the chemical potential. In addition, there are other thermodynamic potentials besides the four that are commonly used, and each of these potentials will yield a set of Maxwell relations. For example, the grand potential $\Omega(T,V,\mu)$ yields:
$$\left(\frac{\partial N}{\partial T}\right)_{V,\mu} = \left(\frac{\partial S}{\partial \mu}\right)_{T,V}, \qquad \left(\frac{\partial N}{\partial V}\right)_{T,\mu} = \left(\frac{\partial P}{\partial \mu}\right)_{T,V}.$$
See also
Table of thermodynamic equations
Thermodynamic equations
References
James Clerk Maxwell
Thermodynamic equations | Maxwell relations | [
"Physics",
"Chemistry"
] | 943 | [
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics"
] |
922,567 | https://en.wikipedia.org/wiki/Fermi%27s%20golden%20rule | In quantum physics, Fermi's golden rule is a formula that describes the transition rate (the probability of a transition per unit time) from one energy eigenstate of a quantum system to a group of energy eigenstates in a continuum, as a result of a weak perturbation. This transition rate is effectively independent of time (so long as the strength of the perturbation is independent of time) and is proportional to the strength of the coupling between the initial and final states of the system (described by the square of the matrix element of the perturbation) as well as the density of states. It is also applicable when the final state is discrete, i.e. it is not part of a continuum, if there is some decoherence in the process, like relaxation or collision of the atoms, or like noise in the perturbation, in which case the density of states is replaced by the reciprocal of the decoherence bandwidth.
Historical background
Although the rule is named after Enrico Fermi, most of the work leading to it is due to Paul Dirac, who twenty years earlier had formulated a virtually identical equation, including the three components of a constant, the matrix element of the perturbation and an energy difference. It was given this name because, on account of its importance, Fermi called it "golden rule No. 2".
Most uses of the term Fermi's golden rule are referring to "golden rule No. 2", but Fermi's "golden rule No. 1" is of a similar form and considers the probability of indirect transitions per unit time.
The rate and its derivation
Fermi's golden rule describes a system that begins in an eigenstate of an unperturbed Hamiltonian and considers the effect of a perturbing Hamiltonian applied to the system. If is time-independent, the system goes only into those states in the continuum that have the same energy as the initial state. If is oscillating sinusoidally as a function of time (i.e. it is a harmonic perturbation) with an angular frequency , the transition is into states with energies that differ by from the energy of the initial state.
In both cases, the transition probability per unit of time from the initial state $|i\rangle$ to a set of final states $|f\rangle$ is essentially constant. It is given, to first-order approximation, by
$$\Gamma_{i\to f} = \frac{2\pi}{\hbar}\,\bigl|\langle f|H'|i\rangle\bigr|^{2}\,\rho(E_f),$$
where $\langle f|H'|i\rangle$ is the matrix element (in bra–ket notation) of the perturbation $H'$ between the final and initial states, and $\rho(E_f)$ is the density of states (number of continuum states divided by $dE$ in the infinitesimally small energy interval $E$ to $E+dE$) at the energy $E_f$ of the final states. This transition probability is also called "decay probability" and is related to the inverse of the mean lifetime. Thus, the probability that the system remains in its initial state decays approximately exponentially in time with rate constant $\Gamma_{i\to f}$.
The standard way to derive the equation is to start with time-dependent perturbation theory and to take the limit for absorption under the assumption that the time of the measurement is much larger than the time needed for the transition.
Only the magnitude of the matrix element enters the Fermi's golden rule. The phase of this matrix element, however, contains separate information about the transition process.
It appears in expressions that complement the golden rule in the semiclassical Boltzmann equation approach to electron transport.
While the Golden rule is commonly stated and derived in the terms above, the final state (continuum) wave function is often rather vaguely described, and not normalized correctly (and the normalisation is used in the derivation). The problem is that in order to produce a continuum there can be no spatial confinement (which would necessarily discretise the spectrum), and therefore the continuum wave functions must have infinite extent, and in turn this means that the normalisation is infinite, not unity. If the interactions depend on the energy of the continuum state, but not any other quantum numbers, it is usual to normalise continuum wave-functions with energy labelled $\epsilon$, by writing $\langle \epsilon | \epsilon' \rangle = \delta(\epsilon - \epsilon')$, where $\delta$ is the Dirac delta function, and effectively a factor of the square-root of the density of states is included into $|\epsilon\rangle$. In this case, the continuum wave function has dimensions of $[\text{energy}]^{-1/2}$, and the Golden Rule is now
$$\Gamma_{i\to f} = \frac{2\pi}{\hbar}\,\bigl|\langle \epsilon_f|H'|i\rangle\bigr|^{2},$$
where $|\epsilon_f\rangle$ refers to the continuum state with the same energy as the discrete state $|i\rangle$. For example, correctly normalized continuum wave functions for the case of a free electron in the vicinity of a hydrogen atom are available in Bethe and Salpeter.
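The rate formula can also be checked numerically on a toy model (a sketch, not from the article): a single discrete state coupled with a constant matrix element V to a dense, uniformly spaced band of final states should decay with the golden-rule rate Γ = 2π|V|²ρ (in units with ħ = 1):

```python
# Toy-model check: one discrete state (energy 0) coupled with constant matrix
# element V to a dense, uniformly spaced band of final states. Its survival
# probability should decay roughly as exp(-Gamma*t) with the golden-rule rate
# Gamma = 2*pi*V^2*rho (units with hbar = 1). Model parameters are illustrative.
import numpy as np

N_BAND, BANDWIDTH, V = 2001, 20.0, 0.01
band_energies = np.linspace(-BANDWIDTH / 2, BANDWIDTH / 2, N_BAND)
rho = 1.0 / (band_energies[1] - band_energies[0])   # density of states = 1 / spacing
gamma = 2.0 * np.pi * V**2 * rho                    # golden-rule prediction

# Hamiltonian: index 0 is the discrete state, the remaining indices are band states.
H = np.diag(np.concatenate(([0.0], band_energies)))
H[0, 1:] = H[1:, 0] = V

# Exact time evolution via eigendecomposition, starting in the discrete state.
eigvals, U = np.linalg.eigh(H)
psi0 = np.zeros(N_BAND + 1)
psi0[0] = 1.0
coeffs = U.T @ psi0

for t in (5.0, 15.0, 30.0):
    psi_t = U @ (np.exp(-1j * eigvals * t) * coeffs)
    print(f"t = {t:4.1f}   survival = {abs(psi_t[0])**2:.3f}"
          f"   exp(-Gamma*t) = {np.exp(-gamma * t):.3f}")
```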
Applications
Semiconductors
The Fermi's golden rule can be used for calculating the transition probability rate for an electron that is excited by a photon from the valence band to the conduction band in a direct band-gap semiconductor, and also for when the electron recombines with the hole and emits a photon. Consider a photon of frequency and wavevector , where the light dispersion relation is and is the index of refraction.
Using the Coulomb gauge where and , the vector potential of light is given by where the resulting electric field is
For an electron in the valence band, the Hamiltonian is
where is the potential of the crystal, and are the charge and mass of an electron, and is the momentum operator. Here we consider process involving one photon and first order in . The resulting Hamiltonian is
where is the perturbation of light.
From here on we consider vertical optical dipole transition, and thus have transition probability based on time-dependent perturbation theory that
with
where is the light polarization vector. and are the Bloch wavefunction of the initial and final states. Here the transition probability needs to satisfy the energy
conservation given by . From perturbation it is evident that the heart of the calculation lies in the matrix elements shown in the bracket.
For the initial and final states in valence and conduction bands, we have and , respectively and if the operator does not act on the spin, the electron stays in the same spin state and hence we can write the Bloch wavefunction of the initial and final states as
where is the number of unit cells with volume . Calculating using these wavefunctions, and focusing on emission (photoluminescence) rather than absorption, we are led to the transition rate
where defined as the optical transition dipole moment is qualitatively the expectation value and in this situation takes the form
Finally, we want to know the total transition rate . Hence we need to sum over all possible initial and final states that can satisfy the energy conservation (i.e. an integral of the Brillouin zone in the k-space), and take into account spin degeneracy, which after calculation results in
where is the joint valence-conduction density of states (i.e. the density of pair of states; one occupied valence state, one empty conduction state). In 3D, this is
but the joint DOS is different for 2D, 1D, and 0D.
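A small sketch of the 3D joint density of states is given below. It assumes simple parabolic bands, so that the textbook form ρj(ħω) ∝ √(ħω − Eg) applies; the band gap and reduced effective mass are illustrative, GaAs-like placeholder values rather than data from this article.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
EV = 1.602176634e-19     # J per eV
M0 = 9.1093837015e-31    # electron mass, kg

def joint_dos_3d(photon_energy_eV, gap_eV, reduced_mass=0.06 * M0):
    """Joint density of states (states per J per m^3) for parabolic bands:
    rho_j = (1/(2*pi^2)) * (2*mu/hbar^2)**1.5 * sqrt(hw - Eg)."""
    hw = np.asarray(photon_energy_eV, dtype=float) * EV
    eg = gap_eV * EV
    rho = np.zeros_like(hw)
    above = hw > eg
    rho[above] = (1.0 / (2.0 * np.pi**2)) * (2.0 * reduced_mass / HBAR**2) ** 1.5 \
                 * np.sqrt(hw[above] - eg)
    return rho

# GaAs-like illustration: Eg ~ 1.42 eV, reduced mass ~ 0.06 m0 (assumed values)
energies = np.linspace(1.40, 1.60, 5)
for e, r in zip(energies, joint_dos_3d(energies, 1.42)):
    print(f"hw = {e:.2f} eV   joint DOS = {r:.3e} 1/(J m^3)")
```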
We note that in a general way we can express the Fermi's golden rule for semiconductors as
In the same manner, the stationary DC photocurrent with amplitude proportional to the square of the field of light is
where is the relaxation time, and are the
difference of the group velocity and Fermi-Dirac distribution between possible the initial and
final states. Here defines the optical transition dipole. Due to the commutation relation between position and the Hamiltonian, we can also rewrite the transition dipole and photocurrent in terms of position operator matrix using . This effect can only exist in systems with broken inversion symmetry and nonzero components of the photocurrent can be obtained by symmetry arguments.
Scanning tunneling microscopy
In a scanning tunneling microscope, the Fermi's golden rule is used in deriving the tunneling current. It takes the form
where is the tunneling matrix element.
Quantum optics
When considering energy level transitions between two discrete states, Fermi's golden rule is written as
where is the density of photon states at a given energy, is the photon energy, and is the angular frequency. This alternative expression relies on the fact that there is a continuum of final (photon) states, i.e. the range of allowed photon energies is continuous.
Drexhage experiment
Fermi's golden rule predicts that the probability that an excited state will decay depends on the density of states. This can be seen experimentally by measuring the decay rate of a dipole near a mirror: as the presence of the mirror creates regions of higher and lower density of states, the measured decay rate depends on the distance between the mirror and the dipole.
See also
Sargent's rule
References
External links
More information on Fermi's golden rule
Derivation of Fermi’s Golden Rule
Time-dependent perturbation theory
Fermi's golden rule: its derivation and breakdown by an ideal model
Equations of physics
Perturbation theory
Mathematical physics | Fermi's golden rule | [
"Physics",
"Mathematics"
] | 1,813 | [
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Quantum mechanics",
"Equations",
"Mathematical physics",
"Perturbation theory"
] |
30,480,361 | https://en.wikipedia.org/wiki/Jackiw%E2%80%93Teitelboim%20gravity | Jackiw–Teitelboim gravity, also known as the R = T model , or simply JT gravity (after physicists Roman Jackiw and Claudio Teitelboim), is a theory of gravity with dilaton coupling in one spatial and one time dimension. It should not be confused with the CGHS model or Liouville gravity. The action is given by
The metric in this case is more amenable to analytical solutions than the general 3+1D case though a canonical reduction for the latter has recently been obtained. For example, in 1+1D, the metric for the case of two mutually interacting bodies can be solved exactly in terms of the Lambert W function, even with an additional electromagnetic field.
By varying with respect to Φ, we get on shell, which means the metric is either Anti-de Sitter space or De Sitter space depending upon the sign of Λ.
See also
References
Theory of relativity | Jackiw–Teitelboim gravity | [
"Physics"
] | 196 | [
"Relativity stubs",
"Theory of relativity"
] |
30,480,366 | https://en.wikipedia.org/wiki/C17H18O5 | The molecular formula C17H18O5, molar mass: 302.32 g/mol, exact mass: 302.115423686 u, may refer to:
Diffutidin, a flavan
Funicin
Isonotholaenic acid, a dihydrostilbenoid
Notholaenic acid
Proxicromil, an antihistamine
Molecular formulas | C17H18O5 | [
"Physics",
"Chemistry"
] | 84 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
30,483,254 | https://en.wikipedia.org/wiki/PolymiRTS | Polymorphism in microRNA Target Site (PolymiRTS) is a database of naturally occurring DNA variations in putative microRNA target sites.
See also
MicroRNA
List of miRNA target prediction tools
References
External links
http://compbio.uthsc.edu/miRSNP/
Biological databases
Mutation
RNA
MicroRNA | PolymiRTS | [
"Biology"
] | 69 | [
"Bioinformatics",
"Biological databases"
] |
30,483,271 | https://en.wikipedia.org/wiki/Straightening%20theorem%20for%20vector%20fields | In differential calculus, the domain-straightening theorem states that, given a vector field on a manifold, there exist local coordinates such that in a neighborhood of a point where is nonzero. The theorem is also known as straightening out of a vector field.
The Frobenius theorem in differential geometry can be considered as a higher-dimensional generalization of this theorem.
Proof
It is clear that we only have to find such coordinates at 0 in . First we write where is some coordinate system at and are the component function of relative to Let . By linear change of coordinates, we can assume Let be the solution of the initial value problem and let
(and thus ) is smooth by smooth dependence on initial conditions in ordinary differential equations. It follows that
,
and, since , the differential is the identity at . Thus, is a coordinate system at . Finally, since , we have: and so
as required.
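The flow construction used in the proof can be mimicked numerically. The sketch below is not part of the original reference: it takes the arbitrary planar field X(x, y) = (1, x), builds a candidate chart ψ(t, b) as the time-t flow started from (0, b), and checks by finite differences that X has components (1, 0) in the new coordinates.

```python
import numpy as np
from scipy.integrate import solve_ivp

def X(p):
    """Sample vector field X(x, y) = (1, x); it is nonzero at the origin."""
    x, y = p
    return np.array([1.0, x])

def psi(t, b):
    """Straightening chart: flow of X for time t starting from (0, b)."""
    sol = solve_ivp(lambda s, p: X(p), (0.0, t), [0.0, b], rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Express X in the new coordinates (t, b) at a sample point by solving
# J_psi(t, b) @ v = X(psi(t, b)); the theorem predicts v = (1, 0).
t0, b0, h = 0.3, -0.2, 1e-5
J = np.column_stack([
    (psi(t0 + h, b0) - psi(t0 - h, b0)) / (2 * h),   # d psi / d t
    (psi(t0, b0 + h) - psi(t0, b0 - h)) / (2 * h),   # d psi / d b
])
v = np.linalg.solve(J, X(psi(t0, b0)))
print("components of X in straightened coordinates:", np.round(v, 6))  # ~ [1, 0]
```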
References
Theorem B.7 in Camille Laurent-Gengoux, Anne Pichereau, Pol Vanhaecke. Poisson Structures, Springer, 2013.
Differential calculus | Straightening theorem for vector fields | [
"Mathematics"
] | 215 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Calculus",
"Differential calculus",
"Mathematical problems",
"Mathematical theorems"
] |
30,483,666 | https://en.wikipedia.org/wiki/Floxing | In genetic engineering, floxing refers to the insertion of a DNA sequence (which is then said to be floxed) between two LoxP sequences, creating an artificial gene cassette which can then be conditionally deleted (knocked out), translocated, or inverted in a process called Cre-Lox recombination. Recombination between LoxP sites is catalysed by Cre recombinase. The term "floxing" is a portmanteau constructed from the phrase "flanking/flanked by LoxP".
The floxing method is essential in the development of scientific model systems as it allows researchers to have spatial and temporal alteration of gene expression. The Cre-Lox system is widely used to manipulate gene expression in model organisms such as mice in order to study human diseases and drug development. For example, using the Cre-Lox system, researchers are able to study oncogenes and tumor suppressor genes and their role in the development and progression of cancer in mouse models.
Uses in research
Floxing a gene allows it to be deleted (knocked out), translocated or inserted (through various mechanisms in Cre-Lox recombination).
The floxing of genes is essential in the development of scientific model systems as it allows spatial and temporal alteration of gene expression. In layman's terms, the gene can be knocked-out (inactivated) in a specific tissue in vivo, at a specific time chosen by the scientist. The scientist can then evaluate the effects of the knocked-out gene and identify the gene's normal function. This is different from having the gene absent starting from conception, whereby inactivation or loss of genes that are essential for the development of the organism may interfere with the normal function of cells and prevent the production of viable offspring.
Mechanism of deletion
Deletion events are useful for performing gene editing experiments through precisely removing segments of or even whole genes. Deletion requires floxing of the segment of interest with loxP sites which face the same direction. The Cre recombinase will detect the unidirectional loxP sites and excise the floxed segment of DNA. The successfully edited clones can be selected using a selection marker which can be removed using the same Cre-LoxP system. The same mechanism can be used to create conditional alleles by introducing an FRT/Flp site which accomplishes the same mechanism but with a different enzyme.
Mechanism of inversion
Inversion events are useful for inactivating a gene or DNA sequence without actually removing it, and thereby maintaining a consistent amount of genetic material. The inverted genes are not often associated with abnormal phenotypes, meaning the inverted genes are generally viable. Cre-LoxP recombination that results in inversion requires loxP sites flanking the gene of interest, with the loxP sites oriented towards each other as inverted repeats. By undergoing Cre recombination, the region flanked by the loxP sites will become inverted, i.e. re-inserted in the same position but in reverse orientation; this process is not permanent and can be reversed.
Mechanism of translocation
Translocation events occur when the loxP sites flank genes on two different DNA molecules in a unidirectional orientation. Cre recombinase is then used to generate a translocation between the two DNA molecules, exchanging the genetic material from one DNA molecule to the other, forming a simultaneous translocation of both floxed genes.
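The first two outcomes can be illustrated with a toy string model. The sketch below is purely schematic (the segment names and the '>'/'<' orientation markers are invented for illustration) and is not a bioinformatics tool: same-orientation loxP sites lead to excision, and opposite-orientation sites lead to inversion.

```python
# Toy model of Cre-Lox outcomes on a linear construct.  loxP "sites" are
# marked ">" (forward) or "<" (inverted); other entries are named segments.
def cre_recombine(segments):
    """segments: list like ['promoter', '>', 'floxed-gene', '>', 'reporter'].
    Returns the list after Cre acts on the first pair of loxP sites found."""
    sites = [i for i, s in enumerate(segments) if s in ('>', '<')]
    if len(sites) < 2:
        return segments                              # nothing to recombine
    i, j = sites[0], sites[1]
    if segments[i] == segments[j]:                   # same orientation -> excision
        return segments[:i + 1] + segments[j + 1:]   # one loxP site remains
    middle = segments[i + 1:j]                       # opposite orientation -> inversion
    inverted = [f"inv({s})" for s in reversed(middle)]
    return segments[:i + 1] + inverted + segments[j:]

print(cre_recombine(['promoter', '>', 'gene', '>', 'reporter']))   # deletion
print(cre_recombine(['promoter', '>', 'gene', '<', 'reporter']))   # inversion
```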
Common applications in research
Cardiomyocytes (heart muscle tissue) have been shown to express a type of Cre recombinase that is highly specific to cardiomyocytes and can be used by researchers to perform highly efficient recombinations. This is achieved by using a type of Cre whose expression is driven by the α-myosin heavy chain promoter (α-MyHC). These recombinations are capable of disrupting genes in a manner that is specific to heart tissue in vivo and allows for the creation of conditional knockouts of the heart, mostly for use as controls. For example, using the Cre recombinase with the α-MyHC promoter causes the floxed gene to be inactivated in the heart alone. Further, these knockouts can be made inducible. In several mouse studies, tamoxifen is used to induce the expression of Cre recombinase. In this case, Cre recombinase is fused to a portion of the mouse estrogen receptor (ER) which contains a mutation within its ligand binding domain (LBD). The mutation renders the receptor inactive, which leads to incorrect localization through its interactions with chaperone proteins such as heat shock protein 70 and 90 (Hsp70 and Hsp90). Tamoxifen binds to Cre-ER and disrupts its interactions with the chaperones, which allows the Cre-ER fusion protein to enter the nucleus and perform recombination on the floxed gene. Additionally, Cre recombinase can be induced by heat when under the control of specific heat shock elements (HSEs).
References
DNA
Genetics techniques | Floxing | [
"Engineering",
"Biology"
] | 1,080 | [
"Genetics techniques",
"Genetic engineering"
] |
30,483,675 | https://en.wikipedia.org/wiki/Recode%20%28database%29 | RECODE is a database of "programmed" frameshifts, bypassing and codon redefinition used for gene expression.
See also
Translational frameshift
References
External links
http://recode.ucc.ie/
Biological databases
Genetics databases
Cis-regulatory RNA elements
Gene expression | Recode (database) | [
"Chemistry",
"Biology"
] | 63 | [
"Gene expression",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Biological databases"
] |
30,483,900 | https://en.wikipedia.org/wiki/RegTransBase | RegTransBase is database of regulatory interactions and transcription factor binding sites in prokaryotes
See also
Transcription factors
References
External links
http://regtransbase.lbl.gov.
Biological databases
Transcription factors
DNA
Biophysics | RegTransBase | [
"Physics",
"Chemistry",
"Biology"
] | 49 | [
"Applied and interdisciplinary physics",
"Gene expression",
"Signal transduction",
"Bioinformatics",
"Biophysics",
"Induced stem cells",
"Biological databases",
"Transcription factors"
] |
30,485,319 | https://en.wikipedia.org/wiki/Biot%E2%80%93Tolstoy%E2%80%93Medwin%20diffraction%20model | In applied mathematics, the Biot–Tolstoy–Medwin (BTM) diffraction model describes edge diffraction. Unlike the uniform theory of diffraction (UTD), BTM does not make the high frequency assumption (in which edge lengths and distances from source and receiver are much larger than the wavelength). BTM sees use in acoustic simulations.
Impulse response
The impulse response according to BTM is given as follows:
The general expression for sound pressure is given by the convolution integral
where represents the source signal, and represents the impulse response at the receiver position. The BTM gives the latter in terms of
the source position in cylindrical coordinates where the -axis is considered to lie on the edge and is measured from one of the faces of the wedge.
the receiver position
the (outer) wedge angle and from this the wedge index
the speed of sound
as an integral over edge positions
where the summation is over the four possible choices of the two signs, and are the distances from the point to the source and receiver respectively, and is the Dirac delta function.
where
See also
Uniform theory of diffraction
Notes
References
Calamia, Paul T. and Svensson, U. Peter, "Fast time-domain edge-diffraction calculations for interactive acoustic simulations," EURASIP Journal on Advances in Signal Processing, Volume 2007, Article ID 63560.
Signal processing | Biot–Tolstoy–Medwin diffraction model | [
"Technology",
"Engineering"
] | 288 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
30,488,340 | https://en.wikipedia.org/wiki/Shockley%E2%80%93Ramo%20theorem | The Shockley–Ramo theorem is a method for calculating the electric current induced by a charge moving in the vicinity of an electrode. Previously named simply the "Ramo Theorem",
the modified name was introduced by D.S. McGregor et al. in 1998
to recognize the contributions of both Shockley and Ramo to understanding the influence of mobile charges in a radiation detector. The theorem appeared in William Shockley's 1938 paper titled "Currents to Conductors Induced by a Moving Point Charge" and in Simon Ramo's 1939 paper titled "Currents Induced by Electron Motion".
It is based on the concept that the current induced in the electrode is due to the instantaneous change of electrostatic flux lines that end on the electrode, rather than the amount of charge received by the electrode per second (net charge flow rate).
The Shockley–Ramo theorem states that the instantaneous current i induced on a given electrode due to the motion of a charge is given by:
i = q v Ev
where
q is the charge of the particle;
v is its instantaneous velocity; and
Ev is the component of the electric field in the direction of v at the charge's instantaneous position, under the following conditions: charge removed, given electrode raised to unit potential, and all other conductors grounded.
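For a concrete illustration, consider the simplest geometry, a parallel-plate detector, where the weighting field of the collecting electrode is uniform and equal to 1/d for a gap d. The numbers below (gap and drift velocity) are arbitrary placeholder values.

```python
# Sketch for a parallel-plate detector (planar geometry assumed): the weighting
# field of the collecting electrode is uniform, E_w = 1/d, so i = q*v/d.
Q_E = 1.602176634e-19    # elementary charge, C

def induced_current(charge, drift_velocity, gap):
    return charge * drift_velocity / gap     # i = q * v * E_w with E_w = 1/d

# One electron drifting at 1e5 m/s across a 1 mm gap (illustrative numbers)
d, v = 1e-3, 1e5
i = induced_current(Q_E, v, d)
transit_time = d / v
print(f"induced current      = {i:.3e} A")
print(f"transit time         = {transit_time:.3e} s")
print(f"total induced charge = {i * transit_time:.3e} C")   # equals q
```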
The theorem has been applied to a wide variety of applications and fields, including semiconductor radiation detection, calculations of charge movement in proteins., or the detection of moving ions in vacuum for mass spectrometry or ion implantation.
References
External links
J. H. Jeans, "Electricity and Magnetism," page 160, Cambridge, London, English (1927) – Green's Theorem as Simon Ramo used it to derive his theorem.
Introduction to Radiation Detectors and Electronics – Lecture Notes by Helmuth Spieler which briefly discuss Ramo's Theorem.
Electromagnetism
Eponymous theorems of physics | Shockley–Ramo theorem | [
"Physics"
] | 376 | [
"Electromagnetism",
"Physical phenomena",
"Equations of physics",
"Eponymous theorems of physics",
"Fundamental interactions",
"Physics theorems"
] |
30,488,626 | https://en.wikipedia.org/wiki/Mean%20speed%20theorem | The mean speed theorem, also known as the Merton rule of uniform acceleration, was discovered in the 14th century by the Oxford Calculators of Merton College, and was proved by Nicole Oresme. It states that a uniformly accelerated body (starting from rest, i.e. zero initial velocity) travels the same distance as a body with uniform speed whose speed is half the final velocity of the accelerated body.
Details
Oresme provided a geometrical verification for the generalized Merton rule, which we would express today as s = ½(v0 + vf)t (i.e., distance traveled is equal to one half of the sum of the initial and final velocities, multiplied by the elapsed time t), by finding the area of a trapezoid. Clay tablets used in Babylonian astronomy (350–50 BC) present trapezoid procedures for computing Jupiter's position and motion.
The medieval scientists demonstrated this theorem—the foundation of "the law of falling bodies"—long before Galileo, who is generally credited with it. Oresme's proof is also the first known example of the modeling of a physical problem as a mathematical function with a graphical representation, as well as of an early form of integration. The mathematical physicist and historian of science Clifford Truesdell wrote:
The theorem is a special case of the more general kinematics equations for uniform acceleration.
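The rule is easy to check numerically; in the sketch below the acceleration, elapsed time and initial velocity are arbitrary illustrative values.

```python
# Numerical check of the Merton rule: a uniformly accelerated body and a body
# moving at half the final speed cover the same distance in the same time.
a, t = 9.8, 3.0                  # arbitrary acceleration (m/s^2) and time (s)
v_final = a * t
dist_accelerated = 0.5 * a * t**2
dist_uniform = (v_final / 2.0) * t
print(dist_accelerated, dist_uniform)          # both 44.1 m

# Oresme's generalized form with a nonzero initial velocity v0:
v0 = 4.0
dist_accelerated = v0 * t + 0.5 * a * t**2
dist_mean_speed = 0.5 * (v0 + (v0 + a * t)) * t
print(dist_accelerated, dist_mean_speed)       # both 56.1 m
```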
See also
Science in the Middle Ages
Scholasticism
Notes
Further reading
Sylla, Edith (1982) "The Oxford Calculators", in Kretzmann, Kenny & Pinborg (edd.), The Cambridge History of Later Medieval Philosophy.
Longeway, John (2003) "William Heytesbury", in The Stanford Encyclopedia of Philosophy.
Natural philosophy
Merton College, Oxford
History of the University of Oxford
14th century in science
Classical mechanics | Mean speed theorem | [
"Physics"
] | 367 | [
"Mechanics",
"Classical mechanics"
] |
30,489,289 | https://en.wikipedia.org/wiki/8-Phenyltheophylline | 8-Phenyltheophylline (8-phenyl-1,3-dimethylxanthine, 8-PT) is a drug derived from the xanthine family which acts as a potent and selective antagonist for the adenosine receptors A1 and A2A, but unlike other xanthine derivatives has virtually no activity as a phosphodiesterase inhibitor. It has stimulant effects in animals with similar potency to caffeine. Coincidentally 8-phenyltheophylline has also been found to be a potent and selective inhibitor of the liver enzyme CYP1A2 which makes it likely to cause interactions with other drugs which are normally metabolised by CYP1A2.
See also
8-Chlorotheophylline
8-Cyclopentyltheophylline
DPCPX
DMPX
Xanthine
References
Adenosine receptor antagonists
Xanthines | 8-Phenyltheophylline | [
"Chemistry"
] | 197 | [
"Alkaloids by chemical classification",
"Xanthines"
] |
24,395,925 | https://en.wikipedia.org/wiki/Bromley%20equation | The Bromley equation was developed in 1973 by Leroy A. Bromley with the objective of calculating activity coefficients for aqueous electrolyte solutions whose concentrations are above the range of validity of the Debye–Hückel equation. This equation, together with Specific ion interaction theory (SIT) and Pitzer equations is important for the understanding of the behaviour of ions dissolved in natural waters such as rivers, lakes and sea-water.
Description
Guggenheim had proposed an extension of the Debye-Hückel equation which is the basis of SIT theory. The equation can be written, in its simplest form for a 1:1 electrolyte, MX, as
is the mean molal activity coefficient. The first term on the right-hand side is the Debye–Hückel term, with a constant, A, and the ionic strength I. β is an interaction coefficient and b the molality of the electrolyte. As the concentration decreases so the second term becomes less important until, at very low concentrations, the Debye-Hückel equation gives a satisfactory account of the activity coefficient.
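A minimal sketch of this type of expression is given below. It uses one common form of the extended Debye–Hückel/Guggenheim equation for a 1:1 electrolyte; the exact denominator convention and the value β = 0.15 are assumptions made for illustration, not values from Bromley's tables.

```python
import math

def log_gamma_guggenheim(molality, beta, A=0.511):
    """One common form of the extended Debye-Huckel / Guggenheim expression for
    a 1:1 electrolyte (an assumption; conventions differ between authors):
        log10(gamma) = -A*sqrt(I)/(1 + sqrt(I)) + beta*b,
    where for a 1:1 salt the ionic strength I equals the molality b."""
    I = molality
    return -A * math.sqrt(I) / (1.0 + math.sqrt(I)) + beta * molality

for b in (0.001, 0.01, 0.1, 1.0):
    print(f"b = {b:6.3f} mol/kg   gamma = {10 ** log_gamma_guggenheim(b, 0.15):.3f}")
```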
Leroy A. Bromley observed that experimental values of were often approximately proportional to ionic strength. Accordingly, he developed the equation, for a salt of general formula
At 25 °C Aγ is equal to 0.511 and ρ is equal to one. Bromley tabulated values of the interaction coefficient B. He noted that the equation gave satisfactory agreement with experimental data up to ionic strength of 6 molal, though with decreasing precision when extrapolating to very high ionic strength. As with other equations, it is not satisfactory when there is ion association as, for example, with divalent metal sulfates. Bromley also found that B could be expressed in terms of single-ion quantities as
where the + subscript refers to a cation and the minus subscript refers to an anion. Bromley's equation can easily be transformed for the calculation of osmotic coefficients, and Bromley also proposed extensions to multicomponent solutions and for the effect of temperature change.
A modified version of the Bromley equation has been used extensively by Madariaga and co-workers. In a comparison of Bromley, SIT and Pitzer models, little difference was found in the quality of fit. The Bromley equation is essentially an empirical equation. The B parameters are relatively easy to determine. However, SIT theory, as extended by Scatchard and Ciavatta, is much more widely used.
By contrast the Pitzer equation is based on rigorous thermodynamics. The determination of Pitzer parameters is more laborious. Whilst the Bromley and SIT approaches are based on pair-wise interactions between oppositely charged ions, the Pitzer approach also allows for interactions between three ions. These equations are important for the understanding of the behaviour of ions in natural waters such as rivers, lakes and sea-water.
For some complex electrolytes, Ge et al. obtained a new set of Bromley parameters using up-to-date measured or critically reviewed osmotic coefficient or activity coefficient data.
See also
Davies equation
Van 't Hoff factor
References
Thermodynamic equations
Equilibrium chemistry
Electrochemical equations | Bromley equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 643 | [
"Thermodynamic equations",
"Equations of physics",
"Mathematical objects",
"Equations",
"Equilibrium chemistry",
"Electrochemistry",
"Thermodynamics",
"Electrochemical equations"
] |
24,396,122 | https://en.wikipedia.org/wiki/C10H10O6 | {{DISPLAYTITLE:C10H10O6}}
The molecular formula C10H10O6 (molar mass: 226.18 g/mol, exact mass: 226.0477 u) may refer to:
Chorismic acid
Prephenic acid
Molecular formulas | C10H10O6 | [
"Physics",
"Chemistry"
] | 62 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,397,093 | https://en.wikipedia.org/wiki/C16H17NO2 | {{DISPLAYTITLE:C16H17NO2}}
The molecular formula C16H17NO2 (molar mass: 255.312 g/mol, exact mass: 255.1259 u) may refer to:
SKF-38,393
UWA-001, or methylenedioxymephenidine
Molecular formulas | C16H17NO2 | [
"Physics",
"Chemistry"
] | 74 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,397,107 | https://en.wikipedia.org/wiki/C18H20ClNO2 | {{DISPLAYTITLE:C18H20ClNO2}}
The molecular formula C18H20ClNO2 may refer to:
α-Chlorocodide, an opioid analog that is a derivative of codeine
β-Chlorocodide, an opioid analog and isomer of α-chlorocodide
SKF-83,959, a synthetic benzazepine derivative
Molecular formulas | C18H20ClNO2 | [
"Physics",
"Chemistry"
] | 92 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,397,233 | https://en.wikipedia.org/wiki/C15H13NO3 | {{DISPLAYTITLE:C15H13NO3}}
The molecular formula C15H13NO3 (molar mass: 255.27 g/mol) may refer to:
Amfenac, also known as 2-amino-3-benzoylbenzeneacetic acid
Dinoxyline
Ketorolac
Polyfothine
Pranoprofen
Molecular formulas | C15H13NO3 | [
"Physics",
"Chemistry"
] | 81 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,398,213 | https://en.wikipedia.org/wiki/Template%20reaction | In chemistry, a template reaction is any of a class of ligand-based reactions that occur between two or more adjacent coordination sites on a metal center. In the absence of the metal ion, the same organic reactants produce different products. The term is mainly used in coordination chemistry. The template effects emphasizes the pre-organization provided by the coordination sphere, although the coordination modifies the electronic properties (acidity, electrophilicity, etc.) of ligands.
An early example is the dialkylation of a nickel dithiolate:
The corresponding alkylation in the absence of a metal ion would yield polymers. Crown ethers arise from dialkylations that are templated by alkali metals. Other template reactions include the Mannich and Schiff base condensations. The condensation of formaldehyde, ammonia, and tris(ethylenediamine)cobalt(III) to give a clathrochelate complex is one example.
The phosphorus analogue of an aza crown can be prepared by a template reaction in cases where it is not possible to isolate the free phosphine itself.
Limitations
Many template reactions are only stoichiometric, and the decomplexation of the "templating ion" can be difficult. The alkali metal-templated syntheses of crown ethers are notable exceptions. Metal phthalocyanines are generated by metal-templated condensations of phthalonitriles, but the liberation of metal-free phthalocyanine is difficult.
Some so-called template reactions proceed similarly in the absence of the templating ion. One example being the condensation of acetone and ethylenediamine, which yields isomeric 14-membered tetraaza rings. Similarly, porphyrins, which feature 16-membered central rings, form in the absence of metal templates.
Concept in catalysis
In a general sense, transition metal-based catalysis can be viewed as template reactions: Reactants coordinate to adjacent sites on the metal ion and, owing to their adjacency, the two reactants interconnect (insert or couple) either directly or via the action of another reagent. In the area of homogeneous catalysis, the cyclo-oligomerization of acetylene to cyclooctatetraene at a nickel(II) centre reflects the templating effect of the nickel, where it is supposed that four acetylene molecules occupy four sites around the metal and react simultaneously to give the product. This simplistic mechanistic hypothesis was influential in the development of these catalytic reactions. For example, if a competing ligand such as triphenylphosphine were added to occupy one coordination site, then only three molecules of acetylene could bind, and these come together to form benzene (see Reppe chemistry).
References
Polymer chemistry | Template reaction | [
"Chemistry",
"Materials_science",
"Engineering"
] | 599 | [
"Materials science",
"Polymer chemistry"
] |
24,399,333 | https://en.wikipedia.org/wiki/Symbiosis%20%28chemical%29 | The biological term symbiosis was first used in chemistry by C. K. Jørgensen in 1964, to refer to the process by which a hard ligand on a metal predisposes the metal to receive another hard ligand rather than a soft one. Two superficially antithetical phenomena occur: symbiosis and antisymbiosis.
Chemical antisymbiosis
This is found principally with soft metals. Two soft ligands in mutual trans position will have a destabilizing effect on each other. The effect is also found with borderline metals in the presence of high trans effect ligands. For example, the selenocyanate ion trans to the soft carbonyl (CO) ligand in trans-Rh(PPh3)2(CO)(NCSe) bonds via the nitrogen, the harder of its two donors. The phenomenon may be explained in terms of a trans influence:
“With two π-acid ligands in mutual trans positions at a class-b metal, there would be a destabilizing competition for the dπ electrons on the metal. A π-acid bonded to a soft metal thus makes a metal a harder Lewis acid. Similarly a soft σ-donor will tend to polarize the electron density on a soft metal, causing it to favour an electrovalently bonded ligand in the trans position.”
Chemical symbiosis
This effect occurs with class-a metals such as iron(II). The cyclopentadienyl complex (C5H5)Fe(CO)2(SCN) is an example of chemical symbiosis. The cyclopentadienyl ligand directs the thiocyanate to bond through its softer sulphur donor. A more definitive example is provided by the halopentaamminecobalt(III) ions, Co(NH3)5X2+, which are more stable when the halogen, X, is fluoride than with iodide, and the halopentacyanocobalt(III) ions, Co(CN)5X3−, which are most stable when the halogen is iodine.
“Hard bases (electronegative donor atoms) retain their valence (outer shell) electrons when attached to a given central metal ion, thus enabling the metal ion to retain more of its positive charge, making it a hard Lewis acid. With soft bases the central metal atom is made a softer Lewis acid, because the metal’s positive charge is reduced by delocalization of electron density from the ligand into the ligand-metal bond. But we have the distinction that with a class-a metal there is little concomitant polarization of the electron density away from the trans position of the metal. In addition, symbiosis, unlike antisymbiosis, is probably not specifically trans directional, and is just as effective in, say, tetrahedral complexes.”
References
Coordination chemistry | Symbiosis (chemical) | [
"Chemistry"
] | 599 | [
"Coordination chemistry"
] |
20,410,028 | https://en.wikipedia.org/wiki/Rudolf%20Hoppe | Rudolf Hoppe (29 October 1922 – 24 November 2014), a German chemist, discovered the first covalent noble gas compounds.
Academic career
Hoppe studied chemistry at the Christian-Albrechts-University of Kiel and was awarded his doctorate at the Westfälische Wilhelms-University of Münster in 1954 under the supervision of Wilhelm Klemm. He also got his habilitation degree in Münster and gained a professorship for inorganic chemistry in 1958. In 1965, Hoppe accepted an offer for the chair of inorganic and analytic chemistry at the Justus Liebig University Giessen, which he kept until his retirement in 1991.
Scientific research
In Münster
Hoppe became famous through his synthesis of the stable noble gas compound XeF2 (xenon difluoride), reported in November 1962. His work followed the previous synthesis of xenon hexafluoroplatinate by Neil Bartlett, in an experiment run on March 23, 1962 and reported in June of that year. Until then, everyone had assumed that compounds of such kind would not exist, the reason being, first, unsuccessful experiments attempting to synthesize such noble gas compounds and, second, the concept of the "closed octet of electrons", according to which noble gases would not participate in chemical reactions.
Through the properties of the interhalogen compounds it had become obvious that noble gas fluorides were the only accessible ones. Since 1949/50, a research group in Münster had carried out in-depth discussions on the possibility of the formation and the properties of xenon fluorides. This research group was convinced, already in 1951, that XeF4 and XeF2 should be thermodynamically stable against the decomposition into the elements.
For a long time it was planned to occasionally perform synthetic experiments targeted at the xenon fluorides. Technical and conceptional difficulties, however, interfered in Münster. On the one hand, xenon was not accessible in sufficient purity; on the other hand, the researchers believed that only pressure syntheses would be successful, for which steel bottles with compressed F2 were needed. Since 1961, those F2-pressure cylinders had been promised by American friends but the transfer could not take place until 1963 because the valves of non-standard U.S. pressure cylinders were not allowed in Germany and vice versa.
Nevertheless, Hoppe’s research group was able to generate XeF2 in the form of transparent crystals in early 1962. To do so, they passed electric sparks through xenon–fluorine mixtures. Neil Bartlett tried a similar experiment for the first time in the USA on August 2, 1962. After a few days, he obtained xenon tetrafluoride, XeF4.
In Gießen
In Gießen, Hoppe continued his extensive research in the field of solid state chemistry with a focus on the synthesis and characterization of oxo- and fluorometalates of the alkali metals. During his research he published over 650 articles in international and national peer-review journals. In addition, he had been the scientific editor for the German Journal of Inorganic and General Chemistry (Zeitschrift für Anorganische und Allgemeine Chemie).
Teachings
As a professor, Prof. Hoppe taught many young students the fundamentals of chemistry and other more specific topics. In addition, 114 doctoral candidates earned their Ph.D. with Hoppe as their supervisor.
Other activities
Hoppe was a great pet lover and was known to be a supporter of zoological gardens. He died at the age of 92 on 24 November 2014.
Honors
Honorary doctorate of the Christian Albrechts University of Kiel (1983) as well as of the University of Ljubljana (1990)
Award of the Göttingen Academy of Sciences and Humanities (1963)
Alfred Stock Award of the German Chemical Society (1974)
Henri Moissan Medal of the Société chimique de France (1986)
Jozef Stefan Medal of the Jozef Stefan Institute in Ljubljana (1988)
Otto Hahn Award for Chemistry and Physics (1989) as the first representative of inorganic chemistry
Lavoisier Medal (along with Derek Barton) of the Société chimique de France (1995)
Furthermore, Hoppe has been a member of several scientific societies and academies as well as of the German National Academy of Sciences Leopoldina in Halle and of the Bavarian Academy of Sciences and Austrian Academy of Sciences.
References
https://web.archive.org/web/20090628130943/http://home.arcor.de/prignitzportal/citizen/seite_hoppe_rudolf.htm
Hoppe, R.; Valence Compounds of the Inert Gases, Angewandte Chemie International Edition Engl., 1964, 3, 538.
1922 births
2014 deaths
People from Wittenberge
20th-century German chemists
Academic staff of the University of Giessen
Solid state chemists
University of Kiel alumni
University of Münster alumni
Academic staff of the University of Münster
Members of the German National Academy of Sciences Leopoldina
Members of the Austrian Academy of Sciences | Rudolf Hoppe | [
"Chemistry"
] | 1,042 | [
"Solid state chemists"
] |
20,415,988 | https://en.wikipedia.org/wiki/Source%20water%20protection | Source Water Protection is a planning process conducted by local water utilities, as well as regional or national government agencies, to protect drinking water sources from overuse and contamination. The process includes identification of water sources, assessment of known and potential threats of contamination, notification of the public, and steps to eliminate the contamination. The process is applicable to lakes, rivers and groundwater (aquifers).
Canada
Source water protection is part of a multi-barrier approach to protecting municipal sources of drinking water that was recommended by the Canadian Justice Dennis O'Connor in his Walkerton reports. This study was released in 2002 in response to the Walkerton Tragedy, in which the drinking water supply of Walkerton, Ontario, became contaminated with E. coli bacteria.
United States
The Safe Drinking Water Act requires each state to delineate the boundaries of areas that public water systems use for their sources of drinking water—both surface and underground sources. The U.S. Environmental Protection Agency (EPA) encourages states and local water utilities to conduct source water assessments and take steps to protect the sources. EPA provides some financial assistance to states and utilities to conduct source water planning, through the Drinking Water State Revolving Fund. Technical and financial assistance is also available through the agency's Water Infrastructure and Resiliency Finance Center.
See also
Watershed management
References
External links
Source water protection - U.S. EPA
Water supply
Water pollution | Source water protection | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 282 | [
"Water pollution",
"Water supply",
"Hydrology",
"Environmental engineering"
] |
20,418,621 | https://en.wikipedia.org/wiki/Vacuum%20airship | A vacuum airship, also known as a vacuum balloon, is a hypothetical airship that is evacuated rather than filled with a lighter-than-air gas such as hydrogen or helium. First proposed by Italian Jesuit priest Francesco Lana de Terzi in 1670, the vacuum balloon would be the ultimate expression of lifting power per volume displaced. (Also called "FLanar", combination of F. Lana and the Portuguese word "flanar," which means wandering.)
History
From 1886 to 1900 Arthur De Bausset attempted in vain to raise funds to construct his "vacuum-tube" airship design, but despite early support in the United States Congress, the general public was skeptical. Illinois historian Howard Scamehorn reported that Octave Chanute and Albert Francis Zahm "publicly denounced and mathematically proved the fallacy of the vacuum principle"; however, the author does not give his source. De Bausset published a book on his design and offered $150,000 stock in the Transcontinental Aerial Navigation Company of Chicago. His patent application was eventually denied on the basis that it was "wholly theoretical, everything being based upon calculation and nothing upon trial or demonstration."
Double wall fallacy
In 1921, Lavanda Armstrong disclosed a composite wall structure with a vacuum chamber "surrounded by a second envelope constructed so as to hold air under pressure, the walls of the envelope being spaced from one another and tied together", including a honeycomb-like cellular structure.
In 1983, David Noel discussed the use of a geodesic sphere covered with plastic film and "a double balloon containing pressurized air between the skins, and a vacuum in the centre".
In 1982–1985 Emmanuel Bliamptis elaborated on energy sources and use of "inflatable strut rings".
However, the double-wall design proposed by Armstrong, Noel, and Bliamptis would not have been buoyant. In order to avoid collapse, the air between the walls must have a minimum pressure (and therefore also a density) proportional to the fraction of the total volume occupied by the vacuum section, preventing the total density of the craft from being less than the surrounding air.
21st century
In 2004–2007, to address strength to weight ratio issues, Akhmeteli and Gavrilin addressed choice of four materials, specifically I220H beryllium (elemental 99%), boron carbide ceramic, diamond-like carbon, and 5056 Aluminum alloy (94.8% Al, 5% Mg, 0.12% Mn, 0.12%Cr) in a honeycomb double layer. In 2021, they extended this research; a "finite element analysis was employed to demonstrate that buckling can be prevented", focusing on a "shell of outer radius R > 2.11 m containing two boron carbide face skins of thickness 4.23 x 10−5 R each that are reliably bonded to an aluminum honeycomb core of thickness 3.52 x 10−3 R". At least two papers (in 2010 and 2016) have discussed the use of graphene as an outer membrane.
Principle
An airship operates on the principle of buoyancy, according to Archimedes' principle. In an airship, air is the fluid in contrast to a traditional ship where water is the fluid.
The density of air at standard temperature and pressure is 1.28 g/L, so 1 liter of displaced air has sufficient buoyant force to lift 1.28 g. Airships use a bag to displace a large volume of air; the bag is usually filled with a lightweight gas such as helium or hydrogen. The total lift generated by an airship is equal to the weight of the air it displaces, minus the weight of the materials used in its construction, including the gas used to fill the bag.
Vacuum airships would replace the lifting gas with a near-vacuum environment. Since a vacuum has essentially no mass, its density would be near 0.00 g/L, which would theoretically provide the full lift potential of displaced air, so every liter of vacuum could lift 1.28 g. Using the molar volume, the mass of 1 liter of helium (at 1 atmosphere of pressure) is found to be 0.178 g. If helium is used instead of vacuum, the lifting power of every litre is reduced by 0.178 g, so the effective lift is reduced by 13.90625%. A 1-litre volume of hydrogen has a mass of 0.090 g, reducing the effective lift by 7.03125%.
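These figures follow directly from the densities quoted above, as the short sketch below shows.

```python
# Lift per litre relative to displaced air (densities in g/L at the conditions
# quoted in the text).
rho_air, rho_he, rho_h2 = 1.28, 0.178, 0.090

for name, rho_gas in [("vacuum", 0.0), ("helium", rho_he), ("hydrogen", rho_h2)]:
    lift = rho_air - rho_gas                       # grams of lift per litre
    loss = 100.0 * rho_gas / rho_air               # % of lift given up vs. vacuum
    print(f"{name:8s}: {lift:.3f} g/L lift   ({loss:.2f}% less than vacuum)")
```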
The main problem with the concept of vacuum airships is that, with a near-vacuum inside the airbag, the exterior atmospheric pressure is not balanced by any internal pressure. This enormous imbalance of forces would cause the airbag to collapse unless it were extremely strong (in an ordinary airship, the force is balanced by the pressure of the lifting gas, making this unnecessary). Thus the difficulty is in constructing an airbag with the additional strength to resist this extreme net force, without weighing the structure down so much that the greater lifting power of the vacuum is negated.
Material constraints
Compressive strength
From the analysis by Akhmeteli and Gavrilin:
The total force on a hemispherical shell of radius R by an external pressure P is πR²P. Since the force on each hemisphere has to balance along the equator, assuming h ≪ R, where h is the shell thickness, the compressive stress (σ) will be:
σ = πR²P / (2πRh) = RP/(2h).
Neutral buoyancy occurs when the shell has the same mass as the displaced air, which occurs when h/R = ρa/(3ρs), where ρa is the air density and ρs is the shell density, assumed to be homogeneous. Combining with the stress equation gives
σ = (3/2)(ρs/ρa)P.
For aluminum and terrestrial conditions Akhmeteli and Gavrilin estimate the stress as about 3.2 × 10⁸ Pa, of the same order of magnitude as the compressive strength of aluminum alloys.
Buckling
Akhmeteli and Gavrilin note, however, that the compressive strength calculation disregards buckling, and using R. Zoelli's formula for the critical buckling pressure of a sphere
Pcr = (2E / √(3(1 − μ²))) (h/R)²,
where E is the modulus of elasticity and μ is the Poisson ratio of the shell. Substituting the earlier expression for h/R gives a necessary condition for a feasible vacuum balloon shell:
E / (ρs² √(1 − μ²)) ≥ (9√3/2) (P / ρa²), where P is the ambient atmospheric pressure.
Under terrestrial conditions the requirement works out to roughly 5 × 10⁵ kg⁻¹ m⁵ s⁻² for E/ρs².
Akhmeteli and Gavrilin assert that this cannot be achieved even using diamond, and
propose that dropping the assumption that the shell is a homogeneous material may allow lighter and stiffer structures (e.g. a honeycomb structure).
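The two estimates can be reproduced with a few lines of arithmetic. The material constants below (density, Young's modulus and Poisson ratio of an aluminium alloy) are assumed round numbers, so the results are order-of-magnitude only.

```python
import math

P = 101325.0        # ambient pressure, Pa
rho_air = 1.28      # kg/m^3
# Aluminium-like values (assumed round numbers)
rho_s, E, nu = 2700.0, 69e9, 0.33

t_over_R = rho_air / (3.0 * rho_s)                   # neutral-buoyancy thickness ratio
sigma = P / (2.0 * t_over_R)                         # compressive stress = R*P/(2h)
P_buckle = 2.0 * E / math.sqrt(3.0 * (1.0 - nu**2)) * t_over_R**2   # Zoelli formula

print(f"wall thickness / radius : {t_over_R:.2e}")
print(f"compressive stress      : {sigma:.2e} Pa")    # ~3e8 Pa, near alloy limits
print(f"critical buckling load  : {P_buckle:.2e} Pa "
      f"({100 * P_buckle / P:.1f}% of 1 atm)")        # collapses far below 1 atm
```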
Atmospheric constraints
A vacuum airship should at least float (Archimedes law) and resist external pressure (strength law, depending on design, like the above R. Zoelli's formula for sphere). These two conditions may be rewritten as an inequality where a complex of several physical constants related to the material of the airship is to be lesser than a complex of atmospheric parameters. Thus, for a sphere (hollow sphere and, to a lesser extent, cylinder are practically the only designs for which a strength law is known) it is , where is pressure within the sphere, while («Lana coefficient») and («Lana atmospheric ratio») are:
(or, when is unknown, with an error of order of 3% or less);
(or, when is unknown, ),
where and are pressure and density of standard Earth atmosphere at sea level, and are molar mass (kg/kmol) and temperature (K) of atmosphere at floating area.
Of all known planets and moons of the Solar System, only the Venusian atmosphere has a Lana atmospheric ratio big enough to surpass the Lana coefficient of such materials as some composites (below an altitude of ca. 15 km) and graphene (below an altitude of ca. 40 km). Both materials may survive in the Venusian atmosphere. The equation for the Lana atmospheric ratio shows that exoplanets with dense, cold and high-molecular-weight atmospheres may be suitable for vacuum airships, but such atmospheres are rare.
In fiction
In Edgar Rice Burroughs's novel Tarzan at the Earth's Core, Tarzan travels to Pellucidar in a vacuum airship constructed of the fictional material Harbenite.
In Passarola Rising, novelist Azhar Abidi imagines what might have happened had Bartolomeu de Gusmão built and flown a vacuum airship.
Spherical vacuum body airships using the Magnus effect and made of carbyne or similar superhard carbon are glimpsed in Neal Stephenson's novel The Diamond Age.
In Maelstrom and Behemoth:B-Max, author Peter Watts describes various flying devices, such as "botflies" (named after the botfly) and "lifters" that use "vacuum bladders" to keep them airborne.
In Feersum Endjinn by Iain M. Banks, a vacuum balloon is used by the narrative character Bascule in his quest to rescue Ergates. Vacuum dirigibles (airships) are also mentioned as a notable engineering feature of the space-faring utopian civilisation The Culture in Banks' novel Look to Windward, and the vast vacuum dirigible Equatorial 353 is a pivotal location in the final Culture novel, The Hydrogen Sonata.
See also
Aerostat
References
Further reading
Le defi de Cyrano; un ballon gonfle avec du vide : Fabrice David
Airship configurations
Airship technology
Hypothetical technology
Vacuum systems
Materials science
Pressure | Vacuum airship | [
"Physics",
"Materials_science",
"Engineering"
] | 1,923 | [
"Scalar physical quantities",
"Mechanical quantities",
"Applied and interdisciplinary physics",
"Physical quantities",
"Pressure",
"Vacuum",
"Materials science",
"nan",
"Vacuum systems",
"Wikipedia categories named after physical quantities",
"Matter"
] |
20,419,621 | https://en.wikipedia.org/wiki/Arthur%E2%80%93Selberg%20trace%20formula | In mathematics, the Arthur–Selberg trace formula is a generalization of the Selberg trace formula from the group SL2 to arbitrary reductive groups over global fields, developed by James Arthur in a long series of papers from 1974 to 2003. It describes the character of the representation of on the discrete part of in terms of geometric data, where is a reductive algebraic group defined over a global field and is the ring of adeles of F.
There are several different versions of the trace formula. The first version was the unrefined trace formula, whose terms depend on truncation operators and have the disadvantage that they are not invariant. Arthur later found the invariant trace formula and the stable trace formula which are more suitable for applications. The simple trace formula is less general but easier to prove. The local trace formula is an analogue over local fields.
Jacquet's relative trace formula is a generalization where one integrates the kernel function over non-diagonal subgroups.
Notation
F is a global field, such as the field of rational numbers.
A is the ring of adeles of F.
G is a reductive algebraic group defined over F.
The compact case
In the case when is compact the representation splits as a direct sum of irreducible representations, and the trace formula is similar to the Frobenius formula for the character of the representation induced from the trivial representation of a subgroup of finite index.
In the compact case, which is essentially due to Selberg, the groups G(F) and G(A) can be replaced by any
discrete subgroup Γ of a locally compact group G with Γ\G compact. The group G acts on the space of functions on Γ\G
by the right regular representation R, and this extends to an action of the group ring of G, considered as the ring of functions on G. The character of this representation is given by a generalization of the Frobenius formula as follows.
The action of a function on a function on is given by
In other words, is an integral operator on (the space of functions on ) with kernel
Therefore, the trace of is given by
The kernel K can be written as
where is the set of conjugacy classes in , and
where is an element of the conjugacy class , and is its centralizer in .
On the other hand, the trace is also given by
where is the multiplicity of the irreducible unitary representation of in and is the operator on the space of given by .
Examples
If Γ and G are both finite, the trace formula is equivalent to the Frobenius formula for the character of an induced representation.
If G is the group of real numbers and Γ the subgroup of integers, then the trace formula becomes the Poisson summation formula.
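The Poisson summation case is easy to check numerically. The sketch below uses the Gaussian f(x) = exp(−πax²), whose Fourier transform (with the convention f̂(k) = ∫ f(x) e^(−2πikx) dx) is a^(−1/2) exp(−πk²/a); the value a = 0.7 is arbitrary.

```python
import math

# Poisson summation check: sum of f over the integers equals the sum of its
# Fourier transform over the integers, here for f(x) = exp(-pi*a*x^2).
a = 0.7
lhs = sum(math.exp(-math.pi * a * n * n) for n in range(-50, 51))
rhs = sum(math.exp(-math.pi * k * k / a) / math.sqrt(a) for k in range(-50, 51))
print(lhs, rhs)          # the two sums agree to machine precision
```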
Difficulties in the non-compact case
In most cases of the Arthur–Selberg trace formula, the quotient is not compact, which causes the following (closely related) problems:
The representation on contains not only discrete components, but also continuous components.
The kernel is no longer integrable over the diagonal, and the operators are no longer of trace class.
Arthur dealt with these problems by truncating the kernel at cusps in such a way that the truncated kernel is integrable over the diagonal. This truncation process causes many problems; for example, the truncated terms are no longer invariant under conjugation. By manipulating the terms further, Arthur was able to produce an invariant trace formula whose terms are invariant.
The original Selberg trace formula studied a discrete subgroup Γ of a real Lie group G(R) (usually SL2(R)).
In higher rank it is more convenient to replace the Lie group with an adelic group G(A). One reason for this is that the discrete group can be taken as the group G(F) of points of G over a (global) field F, which is easier to work with than discrete subgroups of Lie groups. It also makes Hecke operators easier to work with.
The trace formula in the non-compact case
One version of the trace formula asserts the equality of two distributions on :
The left hand side is the geometric side of the trace formula, and is a sum over equivalence classes in the group of rational points of , while the right hand side is the spectral side of the trace formula and is a sum over certain representations of subgroups of .
Distributions
Geometric terms
Spectral terms
The invariant trace formula
The version of the trace formula above is not particularly easy to use in practice, one of the problems being that the terms in it are not invariant under conjugation. Arthur found a modification in which the terms are invariant.
The invariant trace formula states
where
is a test function on
ranges over a finite set of rational Levi subgroups of
is the set of conjugacy classes of
is the set of irreducible unitary representations of
is related to the volume of
is related to the multiplicity of the irreducible representation in
is related to
is related to trace
is the Weyl group of M.
Stable trace formula
Langlands suggested the possibility of a stable refinement of the trace formula that can be used to compare the trace formula for two different groups. Such a stable trace formula was found and proved by Arthur.
Two elements of a group are called stably conjugate if they are conjugate over
the algebraic closure of the field . The point is that when one compares elements in two different groups, related for example by inner twisting, one does not usually get a good correspondence between conjugacy classes, but only between stable conjugacy classes. So to compare the geometric terms in the trace formulas for two different groups, one would like the terms to be not just invariant under conjugacy, but also to be well behaved on stable conjugacy classes; these are called stable distributions.
The stable trace formula writes the terms in the trace formula of a group in terms of stable distributions. However these stable distributions are not distributions on the group , but are distributions on a family of quasisplit groups called the endoscopic groups of . Unstable orbital integrals on the group correspond to stable orbital integrals on its endoscopic groups .
Simple trace formula
There are several simple forms of the trace formula, which restrict the compactly supported test functions f in some way. The advantage of this is that the trace formula and its proof become much easier, and the disadvantage is that the resulting formula is less powerful.
For example, if the functions f are cuspidal, which means that
for any unipotent radical of a proper parabolic subgroup (defined over ) and any x, y in , then the operator has image in the space of cusp forms so is compact.
Applications
Jacquet and Langlands used the Selberg trace formula to prove the Jacquet–Langlands correspondence between automorphic forms on GL2 and its twisted forms. The Arthur–Selberg trace formula can be used to study similar correspondences on higher rank groups. It can also be used to prove several other special cases of Langlands functoriality, such as base change, for some groups.
Kottwitz used the Arthur–Selberg trace formula to prove the Weil conjecture on Tamagawa numbers.
Lafforgue described how the trace formula is used in his proof of the Langlands conjecture for general linear groups over function fields.
See also
Maass wave form
Harmonic Maass form
Arthur's conjectures
References
External links
Works of James Arthur at the Clay institute
Archive of Collected Works of James Arthur at the University of Toronto Department of Mathematics
Automorphic forms
Theorems in harmonic analysis | Arthur–Selberg trace formula | [
"Mathematics"
] | 1,495 | [
"Theorems in mathematical analysis",
"Theorems in harmonic analysis"
] |
20,425,855 | https://en.wikipedia.org/wiki/Elementary%20Calculus%3A%20An%20Infinitesimal%20Approach | Elementary Calculus: An Infinitesimal approach is a textbook by H. Jerome Keisler. The subtitle alludes to the infinitesimal numbers of the hyperreal number system of Abraham Robinson and is sometimes given as An approach using infinitesimals. The book is available freely online and is currently published by Dover.
Textbook
Keisler's textbook is based on Robinson's construction of the hyperreal numbers. Keisler also published a companion book, Foundations of Infinitesimal Calculus, for instructors, which covers the foundational material in more depth.
Keisler defines all basic notions of the calculus, such as continuity, derivative, and integral, using infinitesimals. The usual definitions in terms of ε–δ techniques are provided at the end of Chapter 5 to enable a transition to a standard sequence.
In his textbook, Keisler used the pedagogical technique of an infinite-magnification microscope, so as to represent graphically, distinct hyperreal numbers infinitely close to each other. Similarly, an infinite-resolution telescope is used to represent infinite numbers.
When one examines a curve, say the graph of ƒ, under a magnifying glass, its curvature decreases proportionally to the magnification power of the lens. Similarly, an infinite-magnification microscope will transform an infinitesimal arc of a graph of ƒ, into a straight line, up to an infinitesimal error (only visible by applying a higher-magnification "microscope"). The derivative of ƒ is then the (standard part of the) slope of that line (see figure).
Thus the microscope is used as a device in explaining the derivative.
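The "standard part of the slope" idea can be imitated in code with dual numbers, in which ε² = 0. This is only a caricature: dual numbers are not Robinson's hyperreals and carry no transfer principle, but they make the microscope picture concrete, since the coefficient of ε is exactly the slope of the straightened infinitesimal arc.

```python
# Toy "infinitesimal" arithmetic: numbers of the form a + b*eps with eps^2 = 0
# (dual numbers).  The standard part of (f(x + eps) - f(x))/eps is f'(x).
class Dual:
    def __init__(self, st, eps=0.0):
        self.st, self.eps = st, eps          # standard and infinitesimal parts
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.st + o.st, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.st * o.st, self.st * o.eps + self.eps * o.st)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).eps               # slope seen under the "microscope"

print(derivative(lambda x: x * x * x + 2 * x, 2.0))   # 3*2^2 + 2 = 14.0
```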
Reception
The book was first reviewed by Errett Bishop, noted for his work in constructive mathematics. Bishop's review was harshly critical; see Criticism of nonstandard analysis. Shortly after, Martin Davis and Hausner published a detailed favorable review, as did Andreas Blass and Keith Stroyan. Keisler's student K. Sullivan, as part of her PhD thesis, performed a controlled experiment involving 5 schools, which found Elementary Calculus to have advantages over the standard method of teaching calculus. Despite the benefits described by Sullivan, the vast majority of mathematicians have not adopted infinitesimal methods in their teaching. Recently, Katz & Katz give a positive account of a calculus course based on Keisler's book. O'Donovan also described his experience teaching calculus using infinitesimals. His initial point of view was positive, but later he found pedagogical difficulties with the approach to nonstandard calculus taken by this text and others.
G. R. Blackley remarked in a letter to Prindle, Weber & Schmidt, concerning Elementary Calculus: An Approach Using Infinitesimals, "Such problems as might arise with the book will be political. It is revolutionary. Revolutions are seldom welcomed by the established party, although revolutionaries often are."
Hrbacek writes that the definitions of continuity, derivative, and integral implicitly must be grounded in the ε–δ method in Robinson's theoretical framework, in order to extend definitions to include nonstandard values of the inputs, claiming that the hope that nonstandard calculus could be done without ε–δ methods could not be realized in full. Błaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".
Transfer principle
Between the first and second edition of the Elementary Calculus, much of the theoretical material that was in the first chapter was moved to the epilogue at the end of the book, including the theoretical groundwork of nonstandard analysis.
In the second edition Keisler introduces the extension principle and the transfer principle in the following form:
Every real statement that holds for one or more particular real functions holds for the hyperreal natural extensions of these functions.
Keisler then gives a few examples of real statements to which the principle applies:
Closure law for addition: for any x and y, the sum x + y is defined.
Commutative law for addition: x + y = y + x.
A rule for order: if 0 < x < y then 0 < 1/y < 1/x.
Division by zero is never allowed: x/0 is undefined.
An algebraic identity: .
A trigonometric identity: .
A rule for logarithms: If x > 0 and y > 0, then .
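The specific identities in the list above are not recoverable from this text, but the way transfer acts on statements of this kind can be illustrated with a generic identity (chosen here purely for illustration; not necessarily one of Keisler's examples):

```latex
% A real statement about sin and cos transfers to their natural extensions,
% so the same identity holds for every hyperreal input.
\[
  \bigl(\forall x \in \mathbb{R}:\ \sin^2 x + \cos^2 x = 1\bigr)
  \;\Longrightarrow\;
  \bigl(\forall x \in {}^{*}\mathbb{R}:\ \sin^2 x + \cos^2 x = 1\bigr).
\]
```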
See also
Criticism of nonstandard analysis
Influence of nonstandard analysis
Nonstandard calculus
Increment theorem
Notes
References
Blass writes: "I suspect that many mathematicians harbor, somewhere in the back of their minds, the formula for arc length (and quickly factor out dx before writing it down)" (p. 35).
"Often, as in the examples above, the nonstandard definition of a concept is simpler than the standard definition (both intuitively simpler and simpler in a technical sense, such as quantifiers over lower types or fewer alternations of quantifiers)" (p. 37).
"The relative simplicity of the nonstandard definitions of some concepts of elementary analysis suggests a pedagogical application in freshman calculus. One could make use of the students' intuitive ideas about infinitesimals (which are usually very vague, but so are their ideas about real numbers) to develop calculus on a nonstandard basis" (p. 38).
.
A companion to the textbook Elementary Calculus: An Approach Using Infinitesimals.
External links
Book in PDF format
Calculus
Nonstandard analysis
Mathematics of infinitesimals | Elementary Calculus: An Infinitesimal Approach | [
"Mathematics"
] | 1,152 | [
"Calculus",
"Mathematical objects",
"Infinity",
"Nonstandard analysis",
"Mathematics of infinitesimals",
"Model theory"
] |
20,426,081 | https://en.wikipedia.org/wiki/Clairaut%27s%20relation%20%28differential%20geometry%29 | In classical differential geometry, Clairaut's relation, named after Alexis Claude de Clairaut, is a formula that characterizes the great circle paths on the unit sphere. The formula states that if γ is a parametrization of a great circle then
ρ(γ(t)) sin ψ(γ(t)) = constant (independent of t),
where ρ(P) is the distance from a point P on the great circle to the z-axis, and ψ(P) is the angle between the great circle and the meridian through the point P.
The relation remains valid for a geodesic on an arbitrary surface of revolution.
A statement of the general version of Clairaut's relation is: if γ is a geodesic on a surface of revolution S, ρ(P) denotes the distance of a point P of S from the axis of revolution, and ψ(P) denotes the angle between γ and the meridian of S through P, then ρ sin ψ is constant along γ; conversely, if ρ sin ψ is constant along a curve γ in S and no arc of γ is part of a parallel of S, then γ is a geodesic.
Pressley (p. 185) explains this theorem as an expression of conservation of angular momentum about the axis of revolution when a particle moves along a geodesic under no forces other than those that keep it on the surface.
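A minimal sketch of the computation behind this conservation-law reading, using a generic parametrization of a surface of revolution (standard textbook material, not Pressley's exact notation):

```latex
% Parametrize the surface of revolution as
%   sigma(u,v) = (f(v) cos u, f(v) sin u, g(v)),   with  f(v) = rho > 0,
% so that  ds^2 = f(v)^2 du^2 + (f'(v)^2 + g'(v)^2) dv^2  and the
% coefficient of du^2 does not depend on u.  For a unit-speed geodesic
% gamma(t) = sigma(u(t), v(t)) the geodesic equation in u gives
\[
  \frac{d}{dt}\bigl(f(v)^2\,\dot{u}\bigr) = 0
  \qquad\Longrightarrow\qquad
  f(v)^2\,\dot{u} = \text{const}.
\]
% Since f(v)\,\dot{u} is the component of the unit tangent along the
% parallel, f(v)\,\dot{u} = \sin\psi, and therefore
\[
  \rho \sin\psi \;=\; f(v)\cdot\bigl(f(v)\,\dot{u}\bigr) \;=\; \text{const},
\]
% i.e. the conserved quantity is (up to a constant factor) the angular
% momentum of the moving point about the axis of revolution.
```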
References
M. do Carmo, Differential Geometry of Curves and Surfaces, page 257.
Differential geometry
Differential geometry of surfaces
Geodesy | Clairaut's relation (differential geometry) | [
"Mathematics"
] | 198 | [
"Applied mathematics",
"Geodesy"
] |
6,811,795 | https://en.wikipedia.org/wiki/Convergence%20of%20measures | In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by convergence of measures, consider a sequence of measures μ_n on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits; for any error tolerance ε > 0 we require there be N sufficiently large for n ≥ N to ensure the 'difference' between μ_n and μ is smaller than ε. Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength.
Three of the most common notions of convergence are described below.
Informal descriptions
This section attempts to provide a rough intuitive description of three notions of convergence, using terminology developed in calculus courses; this section is necessarily imprecise as well as inexact, and the reader should refer to the formal clarifications in subsequent sections. In particular, the descriptions here do not address the possibility that the measure of some sets could be infinite, or that the underlying space could exhibit pathological behavior, and additional technical assumptions are needed for some of the statements. The statements in this section are however all correct if (μ_n) is a sequence of probability measures on a Polish space.
The various notions of convergence formalize the assertion that the 'average value' of each 'sufficiently nice' function f should converge:
∫ f dμ_n → ∫ f dμ as n → ∞.
To formalize this requires a careful specification of the set of functions under consideration and how uniform the convergence should be.
The notion of weak convergence requires this convergence to take place for every continuous bounded function f.
This notion treats convergence for different functions f independently of one another, i.e., different functions f may require different values of N to be approximated equally well (thus, convergence is non-uniform in f).
The notion of setwise convergence formalizes the assertion that the measure of each measurable set A should converge:
μ_n(A) → μ(A) as n → ∞.
Again, no uniformity over the set A is required.
Intuitively, considering integrals of 'nice' functions, this notion provides more uniformity than weak convergence. As a matter of fact, when considering sequences of measures with uniformly bounded
variation on a Polish space, setwise convergence implies the convergence ∫ f dμ_n → ∫ f dμ for any bounded measurable function f.
As before, this convergence is non-uniform in f.
The notion of total variation convergence formalizes the assertion that the measure of all measurable sets should converge uniformly, i.e. for every ε > 0 there exists N such that |μ_n(A) − μ(A)| < ε for every n > N and for every measurable set A. As before, this implies convergence of integrals against bounded measurable functions, but this time convergence is uniform over all functions bounded by any fixed constant.
Total variation convergence of measures
This is the strongest notion of convergence shown on this page and is defined as follows. Let (X, F) be a measurable space. The total variation distance between two (positive) measures μ and ν is then given by
‖μ − ν‖_TV = sup_f { ∫_X f dμ − ∫_X f dν }.
Here the supremum is taken over f ranging over the set of all measurable functions from X to [−1, 1]. This is in contrast, for example, to the Wasserstein metric, where the definition is of the same form, but the supremum is taken over f ranging over the set of measurable functions from X to [−1, 1] which have Lipschitz constant at most 1; and also in contrast to the Radon metric, where the supremum is taken over f ranging over the set of continuous functions from X to [−1, 1]. In the case where X is a Polish space, the total variation metric coincides with the Radon metric.
If μ and ν are both probability measures, then the total variation distance is also given by
‖μ − ν‖_TV = 2 · sup_{A ∈ F} |μ(A) − ν(A)|.
The equivalence between these two definitions can be seen as a particular case of the Monge–Kantorovich duality. From the two definitions above, it is clear that the total variation distance between probability measures is always between 0 and 2.
To illustrate the meaning of the total variation distance, consider the following thought experiment. Assume that we are given two probability measures μ and ν, as well as a random variable X. We know that X has law either μ or ν but we do not know which one of the two. Assume that these two measures have prior probabilities 0.5 each of being the true law of X. Assume now that we are given one single sample distributed according to the law of X and that we are then asked to guess which one of the two distributions describes that law. The quantity
1/2 + ‖μ − ν‖_TV/4
then provides a sharp upper bound on the prior probability that our guess will be correct.
Given the above definition of total variation distance, a sequence μ_n of measures defined on the same measure space is said to converge to a measure μ in total variation distance if for every ε > 0, there exists an N such that for all n > N, one has that
‖μ_n − μ‖_TV < ε.
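As an illustrative aside (not part of the article), for measures on a finite set the supremum in the definition above is attained by the function equal to +1 where μ has more mass than ν and −1 elsewhere, so the distance reduces to a sum of absolute differences. A small Python sketch (the function name is my own) under the convention used here, in which the distance between probability measures lies between 0 and 2:

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two probability vectors on a finite
    set, under the convention of this article (values between 0 and 2):
    ||p - q||_TV = sum_i |p_i - q_i|."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.abs(p - q).sum())

# Example: mu_n puts mass (1 - 1/n, 1/n) on the two-point set {0, 1},
# while mu puts all of its mass on 0; the distance is 2/n, which tends to 0,
# so mu_n converges to mu in total variation.
mu = np.array([1.0, 0.0])
for n in (1, 10, 100, 1000):
    mu_n = np.array([1.0 - 1.0 / n, 1.0 / n])
    print(n, tv_distance(mu_n, mu))   # prints 2.0, 0.2, 0.02, 0.002
```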
Setwise convergence of measures
For a measurable space (X, F), a sequence μ_n is said to converge setwise to a limit μ if
lim_{n→∞} μ_n(A) = μ(A)
for every set A ∈ F.
Typical arrow notations are and .
For example, as a consequence of the Riemann–Lebesgue lemma, the sequence of measures on the interval given by converges setwise to Lebesgue measure, but it does not converge in total variation.
In a measure theoretical or probabilistic context setwise convergence is often referred to as strong convergence (as opposed to weak convergence). This can lead to some ambiguity because in functional analysis, strong convergence usually refers to convergence with respect to a norm.
Weak convergence of measures
In mathematics and statistics, weak convergence is one of many types of convergence relating to the convergence of measures. It depends on a topology on the underlying space and thus is not a purely measure-theoretic notion.
There are several equivalent definitions of weak convergence of a sequence of measures, some of which are (apparently) more general than others. The equivalence of these conditions is sometimes known as the Portmanteau theorem.
Definition. Let S be a metric space with its Borel σ-algebra Σ. A bounded sequence of positive probability measures P_n (n = 1, 2, ...) on (S, Σ) is said to converge weakly to a probability measure P (denoted P_n ⇒ P) if any of the following equivalent conditions is true (here E_n denotes expectation or the norm with respect to P_n, while E denotes expectation or the norm with respect to P):
E_n[f] → E[f] for all bounded, continuous functions f;
E_n[f] → E[f] for all bounded and Lipschitz functions f;
lim sup E_n[f] ≤ E[f] for every upper semi-continuous function f bounded from above;
lim inf E_n[f] ≥ E[f] for every lower semi-continuous function f bounded from below;
lim sup P_n(C) ≤ P(C) for all closed sets C of space S;
lim inf P_n(U) ≥ P(U) for all open sets U of space S;
lim P_n(A) = P(A) for all continuity sets A of measure P.
In the case where S and ℝ (with its usual topology) are homeomorphic, if F_n and F denote the cumulative distribution functions of the measures P_n and P, respectively, then P_n converges weakly to P if and only if lim_{n→∞} F_n(x) = F(x) for all points x at which F is continuous.
For example, the sequence P_n = δ_{1/n}, where δ_{1/n} is the Dirac measure located at 1/n, converges weakly to the Dirac measure located at 0 (if we view these as measures on ℝ with the usual topology), but it does not converge setwise. This is intuitively clear: we only know that 1/n is "close" to 0 because of the topology of ℝ.
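Spelling out the computation behind this example (a routine verification, written here with δ_a denoting the Dirac measure at the point a):

```latex
% Weak convergence: for every bounded continuous f,
\[
  \int_{\mathbb{R}} f \, d\delta_{1/n} \;=\; f(1/n)
  \;\xrightarrow[n \to \infty]{}\; f(0) \;=\; \int_{\mathbb{R}} f \, d\delta_{0},
\]
% by continuity of f at 0.  Setwise convergence fails: for the Borel set {0},
\[
  \delta_{1/n}(\{0\}) = 0 \ \text{ for every } n,
  \qquad\text{while}\qquad
  \delta_{0}(\{0\}) = 1 .
\]
```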
This definition of weak convergence can be extended for any metrizable topological space. It also defines a weak topology on P(S), the set of all probability measures defined on (S, Σ). The weak topology is generated by the following basis of open sets:
{ U_{φ,x,δ} : φ is a bounded continuous real-valued function on S, x ∈ ℝ, δ > 0 },
where
U_{φ,x,δ} := { μ ∈ P(S) : | ∫_S φ dμ − x | < δ }.
If S is also separable, then P(S) is metrizable and separable, for example by the Lévy–Prokhorov metric. If S is also compact or Polish, so is P(S).
If S is separable, it naturally embeds into P(S) as the (closed) set of Dirac measures, and its convex hull is dense.
There are many "arrow notations" for this kind of convergence: the most frequently used are , , and .
Weak convergence of random variables
Let (Ω, F, P) be a probability space and X be a metric space. If X_n: Ω → X is a sequence of random variables then X_n is said to converge weakly (or in distribution or in law) to the random variable X: Ω → X as n → ∞ if the sequence of pushforward measures (X_n)∗(P) converges weakly to X∗(P) in the sense of weak convergence of measures on X, as defined above.
Comparison with vague convergence
Let X be a metric space (for example ℝ or ℝ^n). The following spaces of test functions are commonly used in the convergence of probability measures.
C_c(X): the class of continuous functions each vanishing outside a compact set.
C_0(X): the class of continuous functions f such that f vanishes at infinity, i.e. for every ε > 0 the set {x : |f(x)| ≥ ε} is compact.
C_B(X): the class of continuous bounded functions.
We have C_c(X) ⊂ C_0(X) ⊂ C_B(X). Moreover, C_0(X) is the closure of C_c(X) with respect to uniform convergence.
Vague Convergence
A sequence of measures (μ_n) converges vaguely to a measure μ if ∫_X f dμ_n → ∫_X f dμ for all f ∈ C_c(X).
Weak Convergence
A sequence of measures (μ_n) converges weakly to a measure μ if ∫_X f dμ_n → ∫_X f dμ for all f ∈ C_B(X).
In general, these two convergence notions are not equivalent.
In a probability setting, vague convergence and weak convergence of probability measures are equivalent assuming tightness. That is, a tight sequence of probability measures (μ_n) converges vaguely to a probability measure μ if and only if (μ_n) converges weakly to μ.
The weak limit of a sequence of probability measures, provided it exists, is a probability measure. In general, if tightness is not assumed, a sequence of probability (or sub-probability) measures may not necessarily converge vaguely to a true probability measure, but rather to a sub-probability measure (a measure μ such that μ(X) ≤ 1). Thus, a sequence of probability measures (μ_n) converging vaguely to a measure μ, where μ is not specified to be a probability measure, is not guaranteed to imply weak convergence.
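A standard illustration of this escape of mass (added here as an aside, not taken from the article) is a sequence of unit point masses drifting off to infinity:

```latex
% Let mu_n = delta_n, the unit point mass at the integer n, on the real line.
% For every continuous f with compact support,
\[
  \int_{\mathbb{R}} f \, d\delta_{n} \;=\; f(n) \;=\; 0
  \quad\text{for all sufficiently large } n,
\]
% so the sequence converges vaguely to the zero measure, a sub-probability
% measure of total mass 0.  The sequence is not tight and has no weak limit
% among probability measures: the mass "escapes to infinity".
```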
Weak convergence of measures as an example of weak-* convergence
Despite having the same name as weak convergence in the context of functional analysis, weak convergence of measures is actually an example of weak-* convergence. The definitions of weak and weak-* convergences used in functional analysis are as follows:
Let V be a topological vector space or Banach space, with continuous dual space V*.
A sequence x_n in V converges weakly to x ∈ V if φ(x_n) → φ(x) as n → ∞ for all φ ∈ V*. One writes x_n ⇀ x as n → ∞.
A sequence of functionals φ_n ∈ V* converges in the weak-* topology to φ ∈ V* provided that φ_n(x) → φ(x) for all x ∈ V. That is, convergence occurs in the point-wise sense. In this case, one writes φ_n ⇀* φ as n → ∞.
To illustrate how weak convergence of measures is an example of weak-* convergence, we give an example in terms of vague convergence (see above). Let X be a locally compact Hausdorff space. By the Riesz representation theorem, the space of Radon measures on X is isomorphic to a subspace of the space of continuous linear functionals on C_0(X). Therefore, for each Radon measure μ, there is a linear functional φ_μ such that φ_μ(f) = ∫_X f dμ for all f ∈ C_0(X). Applying the definition of weak-* convergence in terms of linear functionals, the characterization of vague convergence of measures is obtained. For compact X, C_0(X) = C_B(X), so in this case weak convergence of measures is a special case of weak-* convergence.
See also
Convergence of random variables
Lévy–Prokhorov metric
Prokhorov's theorem
Tightness of measures
Notes and references
Further reading
Measure theory
Measure, Convergence of | Convergence of measures | [
"Mathematics"
] | 2,186 | [
"Sequences and series",
"Functions and mappings",
"Convergence (mathematics)",
"Mathematical structures",
"Mathematical objects",
"Mathematical relations"
] |
6,812,911 | https://en.wikipedia.org/wiki/Chemotactic%20selection | Chemotaxis receptors are expressed in the surface membrane with diverse dynamics: some have long-term characteristics, as they are determined genetically, while others have a short-term character, as their assembly is induced ad hoc in the presence of the ligand. The diverse features of chemotaxis receptors and ligands make it possible to select chemotactic responder cells with a simple chemotaxis assay. By chemotactic selection we can determine whether an as-yet-uncharacterized molecule acts via the long-term or the short-term receptor pathway. Recent results have shown that chemokines (e.g. IL-8, RANTES) act on long-term chemotaxis receptors, while vasoactive peptides (e.g. endothelin) act more on the short-term ones. The term chemotactic selection is also used to designate a technique which separates eukaryotic or prokaryotic cells according to their chemotactic responsiveness to selector ligands.
References
External links
Chemotaxis
Cell biology
Perception
Signal transduction | Chemotactic selection | [
"Chemistry",
"Biology"
] | 218 | [
"Biochemistry",
"Neurochemistry",
"Cell biology",
"Signal transduction"
] |
6,813,660 | https://en.wikipedia.org/wiki/Apparent%20horizon | In general relativity, an apparent horizon is a surface that is the boundary between light rays that are directed outwards and moving outwards and those directed outward but moving inward.
Apparent horizons are not invariant properties of spacetime, and in particular, they are distinct from event horizons. Within an apparent horizon, light does not move outward; this is in contrast with the event horizon. In a dynamical spacetime, there can be outgoing light rays exterior to an apparent horizon (but still interior to the event horizon). An apparent horizon is a local notion of the boundary of a black hole, whereas an event horizon is a global notion.
The notion of a horizon in general relativity is subtle and depends on fine distinctions.
Definition
The notion of an "apparent horizon" begins with the notion of a trapped null surface. A (compact, orientable, spacelike) surface always has two independent forward-in-time pointing, lightlike, normal directions. For example, a (spacelike) sphere in Minkowski space has lightlike vectors pointing inward and outward along the radial direction. In Euclidean space (i.e. flat and unaffected by gravitational effects), the inward-pointing, lightlike normal vectors converge, while the outward-pointing, lightlike normal vectors diverge. It can, however, happen that both inward-pointing and outward-pointing lightlike normal vectors converge. In such a case, the surface is called trapped. The apparent horizon is the outermost of all trapped surfaces, also called the "marginally outer trapped surface" (MOTS).
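These convergence conditions are commonly phrased in terms of the expansions of the two null congruences; the following is a standard formulation in generic notation (not tied to a particular reference):

```latex
% Let l^a (outgoing) and n^a (ingoing) be the two future-pointing null
% normals to the closed spacelike 2-surface S, and let q_{ab} be the
% induced metric on S.  The expansions are
\[
  \theta_{(\ell)} = q^{ab}\,\nabla_a \ell_b ,
  \qquad
  \theta_{(n)} = q^{ab}\,\nabla_a n_b .
\]
% S is trapped when both expansions are negative, and marginally outer
% trapped (a MOTS) when the outgoing expansion vanishes:
\[
  \text{trapped:}\ \ \theta_{(\ell)} < 0 \ \text{and}\ \theta_{(n)} < 0 ,
  \qquad
  \text{MOTS:}\ \ \theta_{(\ell)} = 0 .
\]
```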
Differences from the (absolute) event horizon
In the context of black holes, the term event horizon refers almost exclusively to the notion of the "absolute horizon". Much confusion seems to arise concerning the differences between an apparent horizon (AH) and an event horizon (EH). In general, the two need not be the same. For example, in the case of a perturbed black hole, the EH and the AH generally do not coincide as long as either horizon fluctuates.
Event horizons can, in principle, arise and evolve in exactly flat regions of spacetime, having no black hole inside, if a hollow spherically symmetric thin shell of matter is collapsing in a vacuum spacetime. The exterior of the shell is a portion of Schwarzschild space and the interior of the hollow shell is exactly flat Minkowski space. Bob Geroch has pointed out that if all the stars in the Milky Way gradually aggregate towards the Galactic Center while keeping their proportionate distances from each other, they will all fall within their joint Schwarzschild radius long before they are forced to collide.
In the simple picture of stellar collapse leading to formation of a black hole, an event horizon forms before an apparent horizon. As the black hole settles down, the two horizons approach each other, and asymptotically become the same surface. If the null curvature condition R_{ab} ℓ^a ℓ^b ≥ 0 (where R_{ab} denotes the Ricci tensor and ℓ^a a null vector) is satisfied, then the apparent horizon is located inside the event horizon. The emission of Hawking radiation violates the weak and the null energy condition. In this case, a section of the apparent horizon is located outside of the event horizon.
Apparent horizons depend on the "slicing" of a spacetime. That is, the location and even existence of an apparent horizon depends on the way spacetime is divided into space and time. For example, it is possible to slice the Schwarzschild geometry in such a way that there is no apparent horizon, ever, despite the fact that there is certainly an event horizon.
See also
Absolute horizon
Cauchy horizon
Cosmological horizon
Ergosphere
Killing horizon
Naked singularity
Particle horizon
Photon sphere
Reissner–Nordström solution
References
External links
Mathematical methods in general relativity
Black holes | Apparent horizon | [
"Physics",
"Astronomy"
] | 771 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
35,718,464 | https://en.wikipedia.org/wiki/Active%20disturbance%20rejection%20control | Active disturbance rejection control (or ADRC, also known as automatic disturbance rejection control) is a model-free control technique used for designing controllers for systems with unknown dynamics and external disturbances. This approach only necessitates an estimated representation of the system's behavior to design controllers that effectively counteract disturbances without causing any overshooting.
ADRC has been successfully used as an alternative to PID control in many applications, such as the control of permanent magnet synchronous motors, thermal power plants and robotics. In particular, the precise control of brushless motors for joint motion is vital in high-speed industrial robot applications. However, flexible robot structures can introduce unwanted vibrations, challenging PID controllers. ADRC offers a solution by real-time disturbance estimation and compensation, without needing a detailed model.
Disturbance rejection
To achieve robustness, ADRC is based on extension of the system model with an additional and fictitious state variable representing everything that the user does not include in the mathematical description of the base system to be controlled. This virtual state (sum of unknown part of model dynamics and external disturbances, usually denoted as a "total disturbance" or "generalized disturbance") is estimated online with an extended state observer and used in the control signal in order to decouple the system from the actual perturbation acting on the plant. This disturbance rejection feature allows users to treat the considered system with a simpler model insofar as the negative effects of modeling uncertainty are compensated in real time. As a result, the operator does not need a precise analytical description of the base system; one can model the unknown parts of the dynamics as internal disturbances in the base system.
Control architecture
The ADRC consists of three main components: a tracking differentiator, a non-linear state error feedback and an extended state observer. The global convergence of ADRC has been proved for a class of general multiple-input multiple-output systems.
The following architecture is known as the output-form structure of ADRC. There also exists a special form of ADRC, known as the error-form structure, which is used for comparing ADRC with classical controllers such as PID.
Tracking differentiator
The primary objective of the tracking differentiator is to follow the transient profile of the reference signal, addressing the issue of sudden changes in the set point that occur in the conventional PID controller. Moreover, the tracking differentiator also mitigates the possible noise amplification that affects the derivative term of the PID controller by using numerical integration instead of numerical differentiation.
Extended state observer
An extended state observer (ESO) keeps track of the system's states as well as external disturbances and unknown model's perturbations. As a result, ADRC does not rely on any particular mathematical model of disturbance. Nonlinear ESO (NESO) is a subtype of general ESO that uses a nonlinear discontinuous function of the output estimate error. NESO are comparable to sliding mode observers in that both use a nonlinear function of output estimation error (rather than a linear function as in linear, high gain, and extended observers). A sliding mode observer's discontinuity is at the origin, but the NESO's discontinuity is at a preset error threshold.
Nonlinear state error feedback
The intuitiveness of PID control can be attributed to the simplicity of its error feedback. ADRC extends the PID by employing a nonlinear state error feedback, and because of this, seminal works referred to ADRC as nonlinear PID. Weighted state errors can also be used as feedback in a linearization system.
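A minimal sketch of how these pieces fit together, written as illustrative Python (my own toy code, not a reference implementation): a linear extended state observer for a second-order plant estimates the output, its derivative, and the total disturbance z3, and the control law cancels z3 in real time. The plant model, gains, and bandwidth values below are assumptions chosen only for this example.

```python
import numpy as np

def linear_adrc_step(y, u_prev, z, r, dt, b0=1.0, wo=40.0, wc=8.0):
    """One sampling step of a linear ADRC loop (illustrative sketch).
    y: measured output; u_prev: control applied over the previous step;
    z: observer state [z1, z2, z3] ~ [y, dy/dt, total disturbance];
    r: set point; b0: rough input-gain estimate (the only model knowledge);
    wo, wc: observer and controller bandwidths."""
    z1, z2, z3 = z
    # Linear extended state observer with all observer poles placed at -wo.
    l1, l2, l3 = 3.0 * wo, 3.0 * wo**2, wo**3
    e = y - z1
    z1 += dt * (z2 + l1 * e)
    z2 += dt * (z3 + b0 * u_prev + l2 * e)
    z3 += dt * (l3 * e)
    # Simple (here linear, PD-like) state-error feedback on the estimated
    # states, plus cancellation of the estimated total disturbance z3.
    u0 = wc**2 * (r - z1) - 2.0 * wc * z2
    u = (u0 - z3) / b0
    return u, np.array([z1, z2, z3])

# Toy usage: the true plant y'' = -2*y' - y + u + d(t) is unknown to the
# controller, which only assumes y'' ~ b0*u + (total disturbance).
dt, z, u = 1e-3, np.zeros(3), 0.0
x = np.zeros(2)                                   # true plant state [y, y']
for k in range(5000):
    d = 0.5 * np.sin(np.pi * k * dt)              # external disturbance
    x = x + dt * np.array([x[1], -2.0 * x[1] - x[0] + u + d])
    u, z = linear_adrc_step(x[0], u, z, r=1.0, dt=dt)
print(round(float(x[0]), 3))                      # output ends up close to 1.0
```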
References
External links
Active disturbance rejection control implementation in MATLAB.
Control theory | Active disturbance rejection control | [
"Mathematics"
] | 747 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
35,721,801 | https://en.wikipedia.org/wiki/Olmec%20colossal%20heads | The Olmec colossal heads are stone representations of human heads sculpted from large basalt boulders. They range in height from . The heads date from at least 900 BC and are a distinctive feature of the Olmec civilization of ancient Mesoamerica. All portray mature individuals with fleshy cheeks, flat noses, and slightly-crossed eyes; their physical characteristics correspond to a type that is still common among the inhabitants of Tabasco and Veracruz. The backs of the monuments are often flat.
The boulders were brought from the Sierra de los Tuxtlas mountains of Veracruz. Given that the extremely large slabs of stone used in their production were transported more than , requiring a great deal of human effort and resources, it is thought that the monuments represent portraits of powerful individual Olmec rulers. Each of the known examples has a distinctive headdress. The heads were variously arranged in lines or groups at major Olmec centres, but the method and logistics used to transport the stone to these sites remain unclear. One theory holds that the distinctive headgear served as protective helmets, perhaps worn for war or for taking part in the ceremonial Mesoamerican ballgame.
The discovery of the first colossal head at Tres Zapotes in 1862 by José María Melgar y Serrano was not well documented nor reported outside Mexico.
The excavation of the same colossal head by Matthew Stirling in 1938 spurred the first archaeological investigations of Olmec culture. Seventeen confirmed examples are known from four sites within the Olmec heartland on the Gulf Coast of Mexico. Most colossal heads were sculpted from spherical boulders but two from San Lorenzo Tenochtitlán were re-carved from massive stone thrones. An additional monument, at Takalik Abaj in Guatemala, is a throne that may have been carved from a colossal head. This is the only known example from outside the Olmec heartland.
Dating the monuments remains difficult because many were removed from their original contexts prior to archaeological investigation. Most have been dated to the Early Preclassic period (1500–1000 BC) with some to the Middle Preclassic (1000–400 BC) period. The smallest weigh , while the largest is estimated to weigh ; it was abandoned and left uncompleted close to the source of its stone.
Olmec civilization
The Olmec civilization developed in the lowlands of southeastern Mexico between 1500 and 400 BC. The Olmec heartland lies on the Gulf Coast of Mexico within the states of Veracruz and Tabasco, an area measuring approximately east to west and extending about inland from the coast. The Olmecs are regarded as the first civilization to develop in Mesoamerica and the Olmec heartland is one of six cradles of civilization worldwide, the others being the Norte Chico culture of South America, the Erlitou culture of China's Yellow River, the Indus Valley civilization of the Indian subcontinent, the civilization of ancient Egypt in Africa, and the Sumerian civilization of ancient Iraq. Of these, only the Olmec civilization developed in a lowland tropical forest setting.
The Olmecs were one of the first inhabitants of the Americas to construct monumental architecture and to settle in towns and cities, predated only by the Caral civilization. They were also the first people in the Americas to develop a sophisticated style of stone sculpture. In the first decade of the 21st century, evidence emerged of Olmec writing, with the earliest examples of Olmec hieroglyphs dating to around 650 BC. Examples of script have been found on roller stamps and stone artefacts; the texts are short and have been partially deciphered based on their similarity to other Mesoamerican scripts. The evidence of complex society developing in the Olmec heartland has led to the Olmecs being regarded as the "Mother Culture" of Mesoamerica, although this concept remains controversial.
Some of the Olmecs' rulers seem to have served religious functions. The city of San Lorenzo was succeeded as the main centre of the civilization by La Venta in about 900 BC, with Tres Zapotes and Laguna de los Cerros possibly sharing the role; other urban centres were much less significant. The nature and degree of the control exercised by the centres over a widespread rural population remains unclear. Very fine Olmec art, much clearly made for an elite, survives in several forms, notably Olmec figurines, and larger sculptures such as The Wrestler. The figurines have been recovered in large numbers and are mostly in pottery; these were presumably widely available to the population. Together with these, of particular relevance to the colossal heads are the "Olmec-style masks" in stone, so called because none has yet been excavated in circumstances that allow the proper archaeological identification of an Olmec context. These evocative stone face masks present both similarities and differences to the colossal heads. Two thirds of Olmec monumental sculptures represent the human form, and the colossal heads fall within this major theme of Olmec art.
Dating
The colossal heads cannot be precisely dated. However, the San Lorenzo heads were buried by 900 BC, indicating that their period of manufacture and use was earlier still. The heads from Tres Zapotes had been moved from their original context before they were investigated by archaeologists and the heads from La Venta were found partially exposed on the modern ground surface. The period of production of the colossal heads is therefore unknown, as is whether it spanned a century or a millennium. Estimates of the time span during which colossal heads were produced vary from 50 to 200 years. The San Lorenzo heads are believed to be the oldest, and are the most skilfully executed. All of the stone heads have been assigned to the Preclassic period of Mesoamerican chronology, generally to the Early Preclassic (1500–1000 BC), although the two Tres Zapotes heads and the La Cobata Head are attributed to the Middle Preclassic (1000–400 BC).
Characteristics
Olmec colossal heads vary in height from 1.47 to 3.4 metres, or from 4.8 to 11.2 feet, and weigh between 6 and 50 tons. All of the Olmec colossal heads depict mature men with flat noses and fleshy cheeks; the eyes tend to be slightly crossed. The general physical characteristics of the heads are of a type that is still common among people in the Olmec region in modern times. The backs of the heads are often flat, as if the monuments were originally placed against a wall. All examples of Olmec colossal heads wear distinctive headdresses that probably represent cloth or animal hide originals. Some examples have a tied knot at the back of the head, and some are decorated with feathers. A head from La Venta is decorated with the head of a bird. There are similarities between the headdresses on some of the heads that has led to speculation that specific headdresses may represent different dynasties, or perhaps identify specific rulers. Most of the heads wear large earspools inserted into the ear lobes.
All of the heads are realistic, unidealised and frank depictions of the men. It is likely that they were portraits of living (or recently deceased) rulers well known to the sculptors. Each head is distinct and naturalistic, displaying individualised features. They were once thought to represent ballplayers although this theory is no longer widely held; it is possible, however, that they represent rulers equipped for the Mesoamerican ballgame. Facial expressions depicted on the heads vary from stern through placid to smiling. The most naturalistic Olmec art is the earliest, appearing suddenly without surviving antecedents, with a tendency towards more stylised sculpture as time progressed. Some surviving examples of wooden sculpture recovered from El Manatí demonstrate that the Olmecs are likely to have created many more perishable sculptures than works sculpted from stone.
In the late 19th century, José Melgar y Serrano described a colossal head as having "Ethiopian" features, and speculations that the Olmec had African origins resurfaced in 1960 in the work of Alfonso Medellín Zenil and in the 1970s in the writings of Ivan van Sertima. Such speculation is not taken seriously by Mesoamerican scholars such as Richard Diehl and Ann Cyphers. Genetic studies have shown that, rather than Africa, the earliest Americans had ancestry closer to Ancient Paleo-Siberian.
Although all the colossal heads are broadly similar, there are distinct stylistic differences in their execution. One of the heads from San Lorenzo bears traces of plaster and red paint, suggesting that the heads were originally brightly decorated. Heads did not just represent individual Olmec rulers; they also incorporated the very concept of rulership itself.
Manufacture
The production of each colossal head must have been carefully planned, given the effort required to ensure the necessary resources were available; it seems likely that only the more powerful Olmec rulers were able to mobilise such resources. The workforce would have included sculptors, labourers, overseers, boatmen, woodworkers and other artisans producing the tools to make and move the monument, in addition to the support needed to feed and otherwise attend to these workers. The seasonal and agricultural cycles and river levels needed to have been taken into account to plan the production of the monument and the whole project may well have taken years from beginning to end.
Archaeological investigation of Olmec basalt workshops suggest that the colossal heads were first roughly shaped using direct percussion to chip away both large and small flakes of stone. The sculpture was then refined by retouching the surface using hammerstones, which were generally rounded cobbles that could be of the same basalt as the monument itself, although this was not always the case. Abrasives were found in association with workshops at San Lorenzo, indicating their use in the finishing of fine detail. Olmec colossal heads were fashioned as in-the-round monuments with varying levels of relief on the same work; they tended to feature higher relief on the face and lower relief on the earspools and headdresses. Monument 20 at San Lorenzo is an extensively damaged throne with a figure emerging from a niche. Its sides were broken away and it was dragged to another location before being abandoned. It is possible that this damage was caused by the initial stages of re-carving the monument into a colossal head, left uncompleted.
All seventeen of the confirmed heads in the Olmec heartland were sculpted from basalt mined in the Sierra de los Tuxtlas mountains of Veracruz. Most were formed from coarse-grained, dark-grey basalt known as Cerro Cintepec basalt after a volcano in the range. Investigators have proposed that large Cerro Cintepec basalt boulders found on the southeastern slopes of the mountains are the source of the stone for the monuments. These boulders are found in an area affected by large lahars (volcanic mudslides) that carried substantial blocks of stone down the mountain slopes, which suggests that the Olmecs did not need to quarry the raw material for sculpting the heads. Roughly spherical boulders were carefully selected to mimic the shape of a human head. The stone for the San Lorenzo and La Venta heads was transported a considerable distance from the source. The La Cobata head was found on El Vigia hill in the Sierra de los Tuxtlas and the stone from Tres Zapotes Colossal Head 1 and Nestepe Colossal Head 1 (also known as Tres Zapotes Monuments A and Q) came from the same hill.
The boulders were transported over from the source of the stone. The exact method of transport of such large masses of rock are unknown, especially since the Olmecs lacked beasts of burden and functional wheels, and they were likely to have used water transport whenever possible. Coastal currents of the Gulf of Mexico and in river estuaries might have made the waterborne transport of monuments weighing 20 tons or more impractical. Two badly damaged Olmec sculptures depict rectangular stone blocks bound with ropes. A largely destroyed human figure rides upon each block, with their legs hanging over the side. These sculptures may well depict Olmec rulers overseeing the transport of the stone that would be fashioned into their monuments. When transport over land was necessary, the Olmecs are likely to have used causeways, ramps and roads to facilitate moving the heads. The regional terrain offers significant obstacles such as swamps and floodplains; avoiding these would have necessitated crossing undulating hill country. The construction of temporary causeways using the suitable and plentiful floodplain soils would have allowed a direct route across the floodplains to the San Lorenzo Plateau. Earth structures such as mounds, platforms and causeways upon the plateau demonstrate that the Olmec possessed the necessary knowledge and could commit the resources to build large-scale earthworks.
The flat backs of many of the colossal heads represented the flat bases of the monumental thrones from which they were reworked. Only four of the seventeen heartland heads do not have flattened backs, indicating the possibility that the majority were reworked monuments. Alternatively, the backs of many of these massive monuments may have been flattened to ease their transport, providing a stable form for hauling the monuments with ropes. Two heads from San Lorenzo have traces of niches that are characteristic of monumental Olmec thrones and so were definitely reworked from earlier monuments.
Known monuments
Seventeen confirmed examples are known. An additional monument, at Takalik Abaj in Guatemala, is a throne that may have been carved from a colossal head. This is the only known example outside the Olmec heartland on the Gulf Coast of Mexico. Possible fragments of additional colossal heads have been recovered at San Lorenzo and at San Fernando in Tabasco. Crude colossal stone heads are also known in the Southern Maya area where they are associated with the potbelly style of sculpture. Although some arguments have been made that they are pre-Olmec, these latter monuments are generally believed to be influenced by the Olmec style of sculpture.
San Lorenzo
The ten colossal heads from San Lorenzo originally formed two roughly parallel lines running north-south across the site. Although some were recovered from ravines, they were found close to their original placements and had been buried by local erosion. These heads, together with monumental stone thrones, probably formed a processional route across the site, powerfully displaying its dynastic history. Two of the San Lorenzo heads had been re-carved from older thrones.
San Lorenzo Colossal Head 1 (also known as San Lorenzo Monument 1) was lying facing upwards when excavated. The erosion of a path passing on top of the monument uncovered its eye and led to the discovery of the Olmec site. Colossal Head 1 is high; it measures wide and it weighs 25.3 tons. The monument was discovered partially buried at the edge of a gully by Matthew Stirling in 1945. When discovered, it was lying on its back, looking upwards. It was associated with a large number of broken ceramic vessels and figurines. The majority of these ceramic remains have been dated to between 800 and 400 BC; some pieces have been dated to the Villa Alta phase (Late Classic period, 800–1000 AD). The headdress possesses a plain band that is tied at the back of the head. The upper portion of the headdress is decorated with a U-shaped motif. This element descends across the front of the headdress, terminating on the forehead. On the front portion it is decorated with five semicircular motifs. The scalp piece does not meet the horizontal band, leaving a space between the two pieces. On each side of the face a strap descends from the headdress and passes in front of the ear. The forehead is wrinkled in a frown. The lips are slightly parted without revealing the teeth. The cheeks are pronounced and the ears are particularly well executed. The face is slightly asymmetric, which may be due to error on the part of the sculptors or may accurately reflect the physical features of the portrait's subject. The head has been moved to the Xalapa Museum of Anthropology.
San Lorenzo Colossal Head 2 (also known as San Lorenzo Monument 2) was reworked from a monumental throne. The head stands high and measures wide by deep; it weighs 20 tons. Colossal Head 2 was discovered in 1945 when Matthew Stirling's guide cleared away some of the vegetation and mud that covered it. The monument was found lying on its back, facing the sky, and was excavated in 1946 by Stirling and Philip Drucker. In 1962 the monument was removed from the San Lorenzo plateau in order to put it on display as part of "The Olmec tradition" exhibition at the Museum of Fine Arts in Houston in 1963. San Lorenzo Colossal Head 2 is currently in the Museo Nacional de Antropología in Mexico City. The head was associated with ceramic finds which have been dated to the Early Preclassic and Late Classic periods. Colossal Head 2 wears a complex headdress that sports a horizontal band tied at the back of the head; this is decorated with three bird's heads that are located above the forehead and temples. The scalp piece is formed from six strips running towards the back of the head. The front of the headdress above the horizontal band is plain. Two short straps hang down from the headdress in front of the ears. The ear jewellery is formed by large squared hoops or framed discs. The left and right ornaments are different, with radial lines on the left earflare, a feature absent on the right earflare. The head is badly damaged due to an unfinished reworking process. This process has pitmarked the entire face with at least 60 smaller hollows and 2 larger holes. The surviving features appear to depict an ageing man with the forehead creased into a frown. The lips are thick and slightly parted to reveal the teeth; the head has a pronounced chin.
San Lorenzo Colossal Head 3 is also known as San Lorenzo Monument 3. The head measures high by wide by deep and weighs 9.4 tons. The head was discovered in a deep gully by Matthew Stirling in 1946; it was found lying face down and its excavation was difficult due to the wet conditions in the gully. The monument was found southwest of the main mound at San Lorenzo, however, its original location is unknown; erosion of the gully may have resulted in significant movement of the sculpture. Head 3 has been moved to the Xalapa Museum of Anthropology. The headdress is complex, with the horizontal basal band being formed by four horizontal cords, with diagonal folds above each eye. A small skullcap tops the headdress. A large flap formed of four cords drops down both sides of the head, completely covering the ears. The face has a typically frowning brow and, unusually, has clearly defined eyelids. The lips are thick and slightly parted; the front of the lower lip has broken away completely, and the lower front of the headdress is pitted with 27 irregularly spaced artificial depressions.
San Lorenzo Colossal Head 4 (also known as San Lorenzo Monument 4) weighs 6 tons and has been moved to the Xalapa Museum of Anthropology. Colossal Head 4 is high, wide and deep. The head was discovered by Matthew Stirling in 1946, northwest of the principal mound, at the edge of a gully. When excavated, it was found to be lying on its right-hand side and in a very good state of preservation. Ceramic materials excavated with the head became mixed with ceramics associated with Head 5, making ceramic dating of the monument difficult. The headdress is decorated with a horizontal band formed of four sculpted cords, similar to those of Head 3. On the right-hand side, three tassels descend from the upper portion of the headdress; they terminate in a total of eight strips that hang down across the horizontal band. These tassels are judged to represent hair rather than cords. Also on the right hand side, two cords descend across the ear and continue to the base of the monument. On the left-hand side, three vertical cords descend across the ear. The earflare is only visible on the right hand side; it is formed of a plain disc and peg. The face is that of an ageing man with a creased forehead, low cheekbones and a prominent chin. The lips are thick and slightly parted.
San Lorenzo Colossal Head 5 is also known as San Lorenzo Monument 5. The monument stands high and measures wide by deep. It weighs 11.6 tons. The head was discovered by Matthew Stirling in 1946, face down in a gully to the south of the principal mound. The head is particularly well executed and is likely to have been found close to its original location. Ceramics recovered during its excavation became mixed with those from the excavation of Head 4. The mixed ceramics have been dated to the San Lorenzo and Villa Alta phases (approximately 1400–1000 BC and 800–1000 AD respectively). Colossal Head 5 is particularly well preserved, although the back of the headdress band was damaged when the head was moved from the archaeological site. The band of the headdress is set at an angle and has a notch above the bridge of the nose. The headdress is decorated with jaguar paws; this general identification of the decoration is contested by Beatriz de la Fuente since the "paws" have three claws each; she identifies them as the claws of a bird of prey. At the back of the head, ten interlaced strips form a net decorated with disc motifs. Two short straps descend from the headdress in front of the ears. The ears are adorned with disc-shaped earspools with pegs. The face is that of an ageing man with wrinkles under the eyes and across the bridge of the nose, and a forehead that is creased in a frown. The lips are slightly parted. Colossal Head 5 has been moved to the Xalapa Museum of Anthropology.
San Lorenzo Colossal Head 6 (also known as San Lorenzo Monument 17) is one of the smaller examples of colossal heads, standing . It measures wide by deep and is estimated to weigh between 8 and 10 tons. The head was discovered by a local farmworker and was excavated in 1965 by Luis Aveleyra and Román Piña Chan. The head had collapsed into a ravine under its own weight and was found face down on its left hand side. In 1970 it was transported to the Metropolitan Museum of Art in New York for the museum's centenary exhibition. After its return to Mexico, it was placed in the Museo Nacional de Antropología in Mexico City. It is sculpted with a net-like head covering joined together with sculpted beads. A covering descends from under the headdress to cover the back half of the neck. The headband is divided into four strips and begins above the right ear, extending around the entire head. A short strap descends from either side of the head to the ear. The ear ornaments are complex and are larger at the front of the ear than at the back. The face is that of an ageing male with the forehead creased in a frown, wrinkles under the eyes, sagging cheeks and deep creases on either side of the nose. The face is somewhat asymmetric, possibly due to errors in the execution of the monument.
San Lorenzo Colossal Head 7 (also known as San Lorenzo Monument 53) measures high by wide by deep and weighs 18 tons. San Lorenzo Colossal Head 7 was reworked from a monumental throne; it was discovered by a joint archaeological project by the Instituto Nacional de Antropología e Historia and Yale University, as a result of a magnetometer survey. It was buried at a depth of less than and was lying facing upwards, leaning slightly northwards on its right hand side. The head is poorly preserved and has suffered both from erosion and deliberate damage. The headdress is decorated with a pair of human hands; a feathered ornament is carved at the back of the headband and two discs adorn the front. A short strap descends from the headband and hangs in front of the right ear. The head sports large earflares that completely cover the earlobes, although severe erosion makes their exact form difficult to distinguish. The face has wrinkles between the nose and cheeks, sagging cheeks and deep-set eyes; the lips are badly damaged and the mouth is open, displaying the teeth. In 1986 the head was transported to the Xalapa Museum of Anthropology.
San Lorenzo Colossal Head 8 (also known as San Lorenzo Monument 61) stands high; it measures wide by deep and weighs 13 tons. It is one of the finest examples of an Olmec colossal head. It was found lying on its side to the south of a monumental throne. The monument was discovered at a depth of during a magnetometer survey of the site in 1968; it has been dated to the Early Preclassic. After discovery it was initially reburied; it was moved to the Xalapa Museum of Anthropology in 1986. The headdress is decorated with the talons or claws of either a jaguar or an eagle. It has a headband and a cover that descends from under the headdress proper behind the ears. Two short straps descend in front of the ears. The head sports large ear ornaments in the form of pegs. The face is that of a mature male with sagging cheeks and wrinkles between these and the nose. The forehead is gathered in a frown. The mouth is slightly parted to reveal the teeth. Most of the head is carved in a realistic manner, the exception being the ears. These are stylised and represented by one question mark shape contained within another. The head is very well preserved and displays a fine finish.
San Lorenzo Colossal Head 9 is also known as San Lorenzo Monument 66. It measures high by wide by deep. The head was exposed in 1982 by erosion of the gullies at San Lorenzo; it was found leaning slightly on its right hand side and facing upwards, half covered by the collapsed side of a gully and washed by a stream. Although it was documented by archaeologists, it remained for some time in its place of discovery before being moved to the Xalapa Museum of Anthropology. The headdress is of a single piece without a distinct headband. The sides display features that are possibly intended to represent long hair trailing to the bottom of the monument. The earflares are rectangular plates with an additional trapezoid element at the front. The head is also depicted wearing a nose-ring. The face is smiling and has wrinkles under the eyes and at the edge of the mouth. It has sagging cheeks and wide eyes. The mouth is closed and the upper lip is badly damaged. The sculpture suffered some mutilation in antiquity, with nine pits hollowed into the face and headdress.
San Lorenzo Colossal Head 10 (also known as San Lorenzo Monument 89) has been moved to the Museo Comunitario de San Lorenzo Tenochtitlán near Texistepec. It stands tall and measures wide by deep; it weighs 8 tons. The head was discovered by a magnetometer survey in 1994; it was found buried, lying face upwards in the bottom of a ravine and was excavated by Ann Cyphers. The headdress is formed of 92 circular beads that completely cover the upper part of the head and descend across the sides and back. Above the forehead is a large element forming a three-toed foot with long nails, possibly the foot of a bird. The head wears large earspools that protrude beyond the beads of the headdress. The spools have the form of a rounded square with a circular sunken central portion. The face is that of a mature man with the mouth closed, sagging cheeks and lines under the eyes. The mouth is sensitively carved and the head possesses a pronounced chin.
La Venta
Three of the La Venta heads were found in a line running east-west in the northern Complex I; all three faced northwards, away from the city centre. The other head was found in Complex B to the south of the Great Pyramid, in a plaza that included other sculptures. The latter, the first of the La Venta heads to be discovered, was found during archaeological exploration of La Venta in 1925; the other three remained unknown to archaeologists until a local boy guided Matthew Stirling to them while he was excavating the first head in 1940. They were located approximately to the north of Monument 1.
La Venta Monument 1 is speculated to have been the portrait of La Venta's final ruler. Monument 1 measures high by wide by deep; it weighs 24 tons. The front of the headdress is decorated with three motifs that apparently represent the claws or fangs of an animal. Above these symbols is an angular U-shaped decoration descending from the scalp. On each side of the monument a strap descends from the headdress, passing in front of the ear. Each ear has a prominent ear ornament that descends from the earlobe to the base of the monument. The features are those of a mature man, with wrinkles around the mouth, eyes and nose. Monument 1 is the best preserved head at La Venta but has suffered from erosion, particularly at the back. The head was first described by Franz Blom and Oliver La Farge who investigated the La Venta remains on behalf of Tulane University in 1925. When discovered, it was half-buried; its massive size meant that the discoverers were unable to excavate it completely. Matthew Stirling fully excavated the monument in 1940, after clearing the thick vegetation that had covered it in the intervening years. Monument 1 has been moved to the Parque-Museo La Venta in Villahermosa. The head was found in its original context; associated finds have been radiocarbon dated to between 1000 and 600 BC.
La Venta Monument 2 measures high by wide by deep; the head weighs 11.8 tons. The face has a broadly smiling expression that reveals four of the upper teeth. The cheeks are given prominence by the action of smiling; the brow that is normally visible in other heads is covered by the rim of the headdress. The face is badly eroded, distorting the features. In addition to the severe erosion damage, the upper lip and a part of the nose have been deliberately mutilated. The head was found in its original context a few metres north of the northwest corner of pyramid-platform A-2. Radiocarbon dating of the monument's context dates it to between 1000 and 600 BC. Monument 2 has suffered erosion damage from its exposure to the elements prior to discovery. The head has a prominent headdress but this is badly eroded and any individual detail has been erased. A strap descends in front of the ear on each side of the head, descending as far as the earlobe. The head is adorned with ear ornaments in the form of a disc that covers the earlobe, with an associated clip or peg. The surviving details of the headdress and earflares are stylistically similar to those of Tres Zapotes Monument A. The head has been moved to the Museo del Estado de Tabasco in Villahermosa.
La Venta Monument 3 stands high and measures wide by deep; it weighs 12.8 tons. Monument 3 was located a few metres to the east of Monument 2, but was moved to the Parque-Museo La Venta in Villahermosa. Like the other La Venta heads, its context has been radiocarbon dated to between 1000 and 600 BC. It appears unfinished and has suffered severe damage through weathering, making analysis difficult. It had a large headdress that reaches to the eyebrows but any details have been lost through erosion. Straps descend in front of each ear and continue to the base of the monument. The ears are wearing large flattened rings that overlap the straps; they probably represent jade ornaments of a type that have been recovered in the Olmec region. Although most of the facial detail is lost, the crinkling of the bridge of the nose is still evident, a feature that is common to the frowning expressions of the other Olmec colossal heads.
La Venta Monument 4 measures high by wide and deep. It weighs 19.8 tons. It was found a few metres to the west of Monument 2 and has been moved to the Parque-Museo La Venta. As with the other heads in the group, its archaeological context has been radiocarbon dated to between 1000 and 600 BC. The headdress is elaborate and, although damaged, various details are still discernible. The base of the headdress is formed by three horizontal strips running over the forehead. One side is decorated with a double-disc motif that may have been repeated on the other; if so, damage to the right side has obliterated any trace of it. The top of the headdress is decorated with the clawed foot of a bird of prey. Either straps or plaits of hair descend on either side of the face, from the headdress to the base of the monument. Only one earspool survives; it is flat, in the form of a rounded square, and is decorated with a cross motif. The ears have been completely eroded away and the lips are damaged. The surviving features display a frown and creasing around the nose and cheeks. The head displays prominent teeth.
Tres Zapotes
The two heads at Tres Zapotes, with the La Cobata head, are stylistically distinct from the other known examples. Beatriz de la Fuente views them as a late regional survival of an older tradition while other scholars argue that they are merely the kind of regional variant to be expected in a frontier settlement. These heads are sculpted with relatively simple headdresses; they have squat, wide proportions and distinctive facial features. The two Tres Zapotes heads are the earliest known stone monuments from the site. The discovery of one of the Tres Zapotes heads in the 19th century led to the first archaeological investigations of Olmec culture, carried out by Matthew Stirling in 1938.
Tres Zapotes Monument A (also known as Tres Zapotes Colossal Head 1) was the first colossal head to be found, discovered by accident in the middle of the 19th century, to the north of the modern village of Tres Zapotes. After its discovery it remained half-buried until it was excavated by Matthew Stirling in 1939. At some point it was moved to the plaza of the modern village, probably in the early 1960s. It has since been moved to the Museo Comunitario de Tres Zapotes. Monument A stands tall; it measures wide by deep, and is estimated to weigh 7.8 tons. The head is sculpted with a simple headdress with a wide band that is otherwise unadorned, and wears rectangular ear ornaments that project forwards onto the cheeks. The face is carved with deep creases between the cheeks and the nose and around the mouth; the forehead is creased into a frown. The upper lip has suffered recent damage, with the left portion flaking away.
Tres Zapotes Monument Q (also known as the Nestape Head and Tres Zapotes Colossal Head 2) measures high by wide by deep and weighs 8.5 tons. Its exact date of discovery is unknown but is estimated to have been some time in the 1940s, when it was struck by machinery being used to clear vegetation from Nestape hill. Monument Q was the eleventh colossal head to be discovered. It was moved to the plaza of Santiago Tuxtla in 1951 and remains there to this day. Monument Q was first described by Williams and Heizer in an article published in 1965. The headdress is decorated with a frontal tongue-shaped ornament, and the back of the head is sculpted with seven plaits of hair bound with tassels. A strap descends from each side of the headdress, passing over the ears and to the base of the monument. The face has pronounced creases around the nose, mouth and eyes.
La Cobata
The La Cobata region was the source of the basalt used for carving all of the colossal heads in the Olmec heartland. The La Cobata colossal head was discovered in 1970 and was the fifteenth to be recorded. It was discovered in a mountain pass in the Sierra de los Tuxtlas, on the north side of El Vigia volcano near to Santiago Tuxtla. The head was largely buried when found; excavations uncovered a Late Classic (600–900 AD) offering associated with the head consisting of a ceramic vessel and a long obsidian knife placed pointing northwards towards the head. The offering is believed to have been deposited long after the head was sculpted. The La Cobata head has been moved from its original location to the main plaza at Santiago.
The La Cobata head is more or less rounded and measures by high, making it the largest known head. This massive sculpture is estimated to weigh 40 tons. It is stylistically distinct from the other examples, and Beatriz de la Fuente placed it late in the Olmec time frame. The characteristics of the sculpture have led to some investigators suggesting that it represents a deceased person. Norman Hammond argues that the apparent stylistic differences of the monument stem from its unfinished state rather than its late production. The eyes of the monument are closed, the nose is flattened and lacks nostrils and the mouth was not sculpted in a realistic manner. The headdress is in the form of a plain horizontal band.
The original location of the La Cobata head was not a major archaeological site and it is likely that the head was either abandoned at its source or during transport to its intended destination. Various features of the head suggest that it was unfinished, such as a lack of symmetry below the mouth and an area of rough stone above the base. Rock was not removed from around the earspools as on other heads, and does not narrow towards the base. Large parts of the monument seem to be roughed out without finished detail. The right hand earspool also appears incomplete; the forward portion is marked with a sculpted line while the rear portion has been sculpted in relief, probably indicating that the right cheek and eye area were also unfinished. The La Cobata head was almost certainly carved from a raw boulder rather than being sculpted from a throne.
Takalik Abaj
Takalik Abaj Monument 23 dates to the Middle Preclassic period, and is found in Takalik Abaj, an important city in the foothills of the Guatemalan Pacific coast, in the modern department of Retalhuleu. It appears to be an Olmec-style colossal head re-carved into a niche figure sculpture. If originally a colossal head then it would be the only known example from outside the Olmec heartland.
Monument 23 is sculpted from andesite and falls in the middle of the size range for confirmed colossal heads. It stands high and measures wide by deep. Like the examples from the Olmec heartland, the monument features a flat back. Lee Parsons contests John Graham's identification of Monument 23 as a re-carved colossal head; he views the side ornaments, identified by Graham as ears, as rather the scrolled eyes of an open-jawed monster gazing upwards. Countering this, James Porter has claimed that the re-carving of the face of a colossal head into a niche figure is clearly evident.
Monument 23 was damaged in the mid-20th century by a local mason who attempted to break its exposed upper portion using a steel chisel. As a result, the top is fragmented, although the broken pieces were recovered by archaeologists and have been put back into place.
Collections
All of the 17 confirmed colossal heads remain in Mexico. Two heads from San Lorenzo are on permanent display at the Museo Nacional de Antropología in Mexico City. Seven of the San Lorenzo heads are on display in the Xalapa Museum of Anthropology. Five of them are in Sala 1, one is in Sala 2, and one is in Patio 1. The remaining San Lorenzo head is in the Museo Comunitario de San Lorenzo Tenochtitlán near Texistepec. All four heads from La Venta are now in Villahermosa, the state capital of Tabasco. Three are in the Parque-Museo La Venta and one is in the Museo del Estado de Tabasco. Two heads are on display in the plaza of Santiago Tuxtla; one from Tres Zapotes and the La Cobata Head. The other Tres Zapotes head is in the Museo Comunitario de Tres Zapotes.
Several colossal heads have been loaned to temporary exhibitions abroad; San Lorenzo Colossal Head 6 was loaned to the Metropolitan Museum of Art in New York in 1970. San Lorenzo colossal heads 4 and 8 were lent to the Olmec Art of Ancient Mexico exhibition in the National Gallery of Art, Washington, D.C., which ran from 30 June to 20 October 1996. San Lorenzo Head 4 was again loaned in 2005, this time to the de Young Museum in San Francisco. The de Young Museum was loaned San Lorenzo colossal heads 5 and 9 for its Olmec: Colossal Masterworks of Ancient Mexico exhibition, which ran from 19 February to 8 May 2011.
Vandalism
On 12 January 2009, at least three people, including two Mexicans and one American, entered the Parque-Museo La Venta in Villahermosa and damaged just under 30 archaeological pieces, including the four La Venta colossal heads. The vandals were all members of an evangelical church and appeared to have been carrying out a supposed pre-Columbian ritual, during which salts, grape juice, and oil were thrown on the heads. It was estimated that 300,000 pesos (US$21,900) would be needed to repair the damage, and the restoration process would last four months. The three vandals were released soon after their arrest after paying 330,000 pesos each.
Replicas
The majority of replicas around the world, though not all, were placed under the leadership of Miguel Alemán Velasco, former governor of the state of Veracruz. The following is a list of replicas and their locations:
Austin, Texas. A replica of San Lorenzo Head 1 was placed in the Teresa Lozano Long Institute of Latin American Studies at the University of Texas in November 2008.
Chicago, Illinois. A replica of San Lorenzo Head 8 made by Ignacio Perez Solano was placed in the Field Museum of Natural History in 2000.
Covina, California. A replica of San Lorenzo Head 5 was donated to Covina in 1989, originally intended to be placed in Jalapa Park. Due to concerns over potential vandalism it was instead installed outside the police station. It was removed in 2011 and relocated to Jobe's Glen, Jalapa Park in June 2012.
McAllen, Texas. A replica of San Lorenzo Head 8 is located in the International Museum of Art & Science. The placement was dedicated by Fidel Herrera Beltrán, then governor of Veracruz. This was done in 2010. The head is one of 12 sculpted by Ignacio Perez Solano and sent to various cities around the world.
New York. A replica of San Lorenzo Head 1 was placed next to the main plaza in the grounds of Lehman College in the Bronx, New York. It was installed in 2013 to celebrate the first anniversary of the CUNY Institute of Mexican Studies, housed at the college. The replica was a gift by the government of Veracruz state, Cumbre Tajín and Mexico Trade; it was first placed in Dag Hammerskjold Park, outside the United Nations, in 2012.
Paris. Since 2013, the Musée du Quai Branly – Jacques Chirac displays a replica of San Lorenzo Head 8 in its public gardens.
San Francisco, California. A replica of San Lorenzo Head 1 created by Ignacio Perez Solano was placed in San Francisco City College, Ocean Campus in October 2004.
Washington, D.C. A replica of San Lorenzo Head 4 sculpted by Ignacio Perez Solano was placed near the Constitution Avenue entrance of the Smithsonian National Museum of Natural History in October 2001.
West Valley City, Utah. A replica of San Lorenzo Head 8 was placed in the Utah Cultural Celebration Center in May 2004.
Todos Santos, Baja California Sur. A replica of a San Lorenzo Head 8 was sculpted in July 2018 by Mexican sculptor Benito Ortega Vargas. It is on the mound on the Camino a Las Playitas just north of Todos Santos.
Mexican Government of Veracruz donated a resin replica of an Olmec colossal head to Belgium; it is on display in the Tournay Solvay Park in Brussels.
In February 2010, the Secretaría de Relaciones Exteriores (Secretariat of Foreign Affairs) announced that the Instituto Nacional de Antropología e Historia would be donating a replica Olmec colossal head to Ethiopia. It was placed in Plaza Mexico in Addis Ababa in May 2010 and is locally known as the "Mexican Warrior". Online conspiracy theory memes have surfaced claiming this is 'proof' of Africans arriving in the Americas before Columbus.
In November 2017, President Enrique Peña Nieto donated a full-size replica of San Lorenzo Head 8 to the people of Belize. It was installed in Belmopan at the roundabout facing the Embassy of Mexico.
See also
Maya stelae
Moai
Monte Alto culture
Stone spheres of Costa Rica
Footnotes
References
Further reading
Colossal statues
Indigenous sculpture of the Americas
Olmec art
Stone sculptures in Mexico
Mesoamerican stone sculptures
Human head and neck
Heads in the arts
Sculptures in Veracruz | Olmec colossal heads | [
"Physics",
"Mathematics"
] | 9,302 | [
"Quantity",
"Colossal statues",
"Physical quantities",
"Size"
] |
35,722,406 | https://en.wikipedia.org/wiki/Examples%20of%20in%20vivo%20transdifferentiation%20by%20lineage-instructive%20approach | A list of examples of in vivo transdifferentiation through transfection:
mouse hepatocytes → β-cells (Pdx1)
exocrine cells → β-cells (Pdx1, Ngn3, and v-maf musculoaponeurotic fibrosarcoma oncogene family protein A)
nonsensory cells → inner hair cells (Atoh1 and Math1)
non cardiogenic mesoderm → cardiomyocytes (Gata4, Tbx5 and Smarcd3 or Baf60c)
Through excision:
B-cell precursors → hematopoietic progenitors (-Pax5)
In adult ovarian follicles, granulosa and thecal cells → functional Sertoli-like and Leydig-like cells (-Foxl2)
See also
Transdifferentiation
Induced stem cells
References
Histology
Induced stem cells | Examples of in vivo transdifferentiation by lineage-instructive approach | [
"Chemistry",
"Biology"
] | 198 | [
"Histology",
"Induced stem cells",
"Stem cell research",
"Microscopy"
] |
35,722,424 | https://en.wikipedia.org/wiki/Examples%20of%20in%20vitro%20transdifferentiation%20by%20initial%20epigenetic%20activation%20phase%20approach | List:
Human dermal fibroblasts → multilineage blood progenitors (Oct4 and cytokine treatment)
Mouse dermal fibroblasts → polygonal hyaline chondrogenic cells (Klf4, c-Myc, Sox9)
Mouse dermal fibroblasts → cardiomyocytes (Oct4, Sox2, Klf4, JI1 and Bmp4)
Fibroblasts → neural stem/progenitor cells (Oct4, Sox2, c-Myc, Klf4)
See also
Transdifferentiation
Induced stem cells
References
Histology
Induced stem cells | Examples of in vitro transdifferentiation by initial epigenetic activation phase approach | [
"Chemistry",
"Biology"
] | 133 | [
"Histology",
"Induced stem cells",
"Stem cell research",
"Microscopy"
] |
33,040,530 | https://en.wikipedia.org/wiki/Platform%20engineering | Platform engineering is a software engineering discipline focused on the development of self-service toolchains, services, and processes to create an internal developer platform (IDP). The shared IDP can be utilized by software development teams, enabling them to innovate.
Platform engineering uses components like configuration management, infrastructure orchestration, and role-based access control to improve reliability. The discipline is associated with DevOps and platform as a service practices.
Purpose & Impact
Platform engineering aims to improve software engineering productivity by creating streamlined toolchains that can be used by developers. It can be used for digital transformation, or to expand CI/CD setups.
According to a panel of experts at PlatformCon 2024, building an internal developer platform can improve more than just developer productivity. Platform engineering, which centralizes best practices and components for development teams, is gaining prominence as DevSecOps practices and frameworks become increasingly embedded across organizations. Platform engineering aims to normalize and standardize developer workflows by providing developers with optimized “golden paths” for most of their workloads and flexibility to define exceptions for the rest. Organizations can follow one of two paths when developing a new platform engineering initiative. One option is to build an authentication and visualization layer that sits across multiple point tools, but this does not solve the underlying problems of legacy technology stacks and tooling silos, so it would likely not be a long-term solution. Alternatively, the organization could implement an internal developer platform (IDP) that reduces the cognitive load on developers by bringing multiple technologies and tools into a single self-service experience.
Platform engineering’s benefits include faster time to market, reduced security and compliance risk, and improved developer experience. Establishing a product-oriented culture and setting clear business goals are critical for success in platform engineering. Platform engineering is therefore especially valuable wherever businesses strive to do more with less.
Criticism of Platform Engineering
Despite its benefits, platform engineering faces several criticisms. One major concern is the complexity and overhead associated with building and maintaining such platforms. Additionally, creating a one-size-fits-all platform might not address the unique needs of all development teams, leading to inefficiencies and frustration. Siloed teams and a lack of focus on resolving operational issues can also hinder the effectiveness of the platforms created.
References
Engineering disciplines | Platform engineering | [
"Technology",
"Engineering"
] | 483 | [
"Systems engineering",
"Computer engineering",
"Software engineering",
"Information technology",
"nan"
] |
33,042,280 | https://en.wikipedia.org/wiki/Sodium%20laurate | Sodium laurate is a chemical compound with formula CH3(CH2)10CO2Na. As the sodium salt of a fatty acid (lauric acid), it is classified as a soap. It is a white solid.
Use
Sodium laurate is frequently used in bars of soap as an ingredient. Sodium laurate is also a permitted bleaching, washing and peeling agent.
Sodium laurate has also been used to induce peripheral arterial disease in rats.
References
Laurates
Organic sodium salts
Anionic surfactants | Sodium laurate | [
"Chemistry"
] | 107 | [
"Salts",
"Organic compounds",
"Organic sodium salts",
"Organic compound stubs",
"Organic chemistry stubs"
] |
33,044,498 | https://en.wikipedia.org/wiki/Shikimate%20pathway | The shikimate pathway (shikimic acid pathway) is a seven-step metabolic pathway used by bacteria, archaea, fungi, algae, some protozoans, and plants for the biosynthesis of folates and aromatic amino acids (tryptophan, phenylalanine, and tyrosine). This pathway is not found in mammals.
The seven enzymes involved in the shikimate pathway are DAHP synthase, 3-dehydroquinate synthase, 3-dehydroquinate dehydratase, shikimate dehydrogenase, shikimate kinase, EPSP synthase, and chorismate synthase. In bacteria and eukaryotes, the pathway starts from two substrates, phosphoenolpyruvate and erythrose-4-phosphate, which are processed by DAHP synthase and 3-dehydroquinate synthase to form 3-dehydroquinate. In archaea, 2-amino-3,7-dideoxy-D-threo-hept-6-ulosonate synthase condenses L-aspartate-4-semialdehyde with a sugar to form 2-amino-3,7-dideoxy-D-threo-hept-6-ulosonate, which is then converted by 3-dehydroquinate synthase II into 3-dehydroquinate. Both routes end with chorismate (chorismic acid), the common precursor of the three aromatic amino acids. The fifth enzyme involved is shikimate kinase, an enzyme that catalyzes the ATP-dependent phosphorylation of shikimate to form shikimate 3-phosphate. Shikimate 3-phosphate is then coupled with phosphoenolpyruvate to give 5-enolpyruvylshikimate-3-phosphate via the enzyme 5-enolpyruvylshikimate-3-phosphate (EPSP) synthase.
Glyphosate, the herbicidal ingredient in Roundup, is a competitive inhibitor of EPSP synthase, acting as a transition state analog that binds more tightly to the EPSPS-S3P complex than PEP and inhibits the shikimate pathway.
Then 5-enolpyruvylshikimate-3-phosphate is transformed into chorismate by a chorismate synthase.
Prephenic acid is then synthesized by a Claisen rearrangement of chorismate by chorismate mutase.
Prephenate is oxidatively decarboxylated with retention of the hydroxyl group to give p-hydroxyphenylpyruvate, which is transaminated using glutamate as the nitrogen source to give tyrosine and α-ketoglutarate.
References
Bibliography
Metabolic pathways | Shikimate pathway | [
"Chemistry"
] | 601 | [
"Metabolic pathways",
"Metabolism"
] |
33,046,918 | https://en.wikipedia.org/wiki/GR-113808 | GR-113808 is a drug which acts as a potent and selective 5-HT4 serotonin receptor antagonist. It is used in researching the roles of 5-HT4 receptors in various processes, and has been used to test some of the proposed therapeutic effects of selective 5-HT4 agonists, such as for instance blocking the nootropic effects of 5-HT4 agonists, and worsening the respiratory depression produced by opioid analgesic drugs, which appears to be partly 5-HT4 mediated and can be counteracted by certain 5-HT4 agonists.
References
5-HT4 antagonists
Tertiary amines
Piperidines
Indoles
Carboxylic acids
Sulfonamides | GR-113808 | [
"Chemistry"
] | 159 | [
"Carboxylic acids",
"Functional groups"
] |
33,049,337 | https://en.wikipedia.org/wiki/LY-310762 | LY-310762 is a drug which acts as a potent and selective antagonist for the 5-HT1D serotonin receptor, with reasonable selectivity over the closely related 5-HT1B subtype.
References
5-HT1 antagonists
Fluoroarenes
Piperidines
Indoles
Ketones | LY-310762 | [
"Chemistry"
] | 69 | [
"Ketones",
"Functional groups"
] |
33,049,506 | https://en.wikipedia.org/wiki/SB-204070 | SB-204070 is a drug which acts as a potent and selective 5-HT4 serotonin receptor antagonist (or weak partial agonist), and is used for research into the function of this receptor subtype.
References
5-HT4 antagonists
Piperidines
Amines
Chloroarenes
Carboxylic acids | SB-204070 | [
"Chemistry"
] | 72 | [
"Amines",
"Carboxylic acids",
"Bases (chemistry)",
"Functional groups"
] |
2,056,787 | https://en.wikipedia.org/wiki/Widlar%20current%20source | A Widlar current source is a modification of the basic two-transistor current mirror that incorporates an emitter degeneration resistor for only the output transistor, enabling the current source to generate low currents using only moderate resistor values.
The Widlar circuit may be used with bipolar transistors, MOS transistors, and even vacuum tubes. An example application is the 741 operational amplifier, and Widlar used the circuit as a part in many designs.
This circuit is named after its inventor, Bob Widlar, and was patented in 1967.
DC analysis
Figure 1 is an example Widlar current source using bipolar transistors, where the emitter resistance R2 is connected to the output transistor Q2, and has the effect of reducing the current in Q2 relative to Q1. The key to this circuit is that the voltage drop across the resistance R2 subtracts from the base-emitter voltage of transistor Q2, thereby turning this transistor off compared to transistor Q1. This observation is expressed by equating the base voltage expressions found on either side of the circuit in Figure 1 as:

VB = VBE1 = VBE2 + (IC2 + IB2) R2 = VBE2 + (β2 + 1) IB2 R2

where β2 is the beta-value of the output transistor, which is not the same as that of the input transistor, in part because the currents in the two transistors are very different. The variable IB2 is the base current of the output transistor, VBE refers to base-emitter voltage. This equation implies (using the Shockley diode equation):

Eq. 1:   (β2 + 1)/β2 · IC2 R2 = VBE1 − VBE2 = VT ln[(IC1/IC2)(IS2/IS1)]

where VT is the thermal voltage.
This equation makes the approximation that the currents are both much larger than the scale currents, IS1 and IS2; an approximation valid except for current levels near cut off. In the following, the scale currents are assumed to be identical; in practice, this needs to be specifically arranged.
Design procedure with specified currents
To design the mirror, the output current must be related to the two resistor values R1 and R2. A basic observation is that the output transistor is in active mode only so long as its collector-base voltage is non-negative. Thus, the simplest bias condition for design of the mirror sets the applied voltage VA to equal the base voltage VB. This minimum useful value of VA is called the compliance voltage of the current source. With that bias condition, the Early effect plays no role in the design.
These considerations suggest the following design procedure (a short numeric sketch of these steps follows the list):
Select the desired output current, IO = IC2.
Select the reference current, IR1, assumed to be larger than the output current, probably considerably larger (that is the purpose of the circuit).
Determine the input collector current of Q1, IC1, noting that the current through R1 supplies both base currents as well:

IC1 = IR1 − IB1 − IB2 = (IR1 − IC2/β2) / (1 + 1/β1)

where β1 is the beta-value of the input transistor.
Determine the base voltage VBE1 using the Shockley diode law

VBE1 = VT ln(IC1 / IS)

where IS is a device parameter sometimes called the scale current.
The value of base voltage also sets the compliance voltage VA = VBE1. This voltage is the lowest voltage for which the mirror works properly.
Determine R1 (where VCC is the positive supply to which R1 connects):

R1 = (VCC − VBE1) / IR1
Determine the emitter leg resistance R2 using Eq. 1 (to reduce clutter, the scale currents are chosen equal):

R2 = (β2/(β2 + 1)) · (VT/IC2) · ln(IC1/IC2)
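The sketch below works through these steps numerically in Python. The supply voltage, scale current and beta value are assumed, illustrative values, and the base current of Q1 is neglected for simplicity, so the numbers are indicative only.

```python
import math

# Assumed device and supply parameters (illustrative values only)
V_T = 0.025     # thermal voltage at room temperature, V
I_S = 1e-14     # scale current of Q1, A (device dependent)
beta2 = 100.0   # current gain of the output transistor Q2
V_CC = 5.0      # supply voltage, V

# Chosen output and reference currents
I_O = 10e-6     # desired output current I_C2, A
I_R1 = 1e-3     # reference current through R1, A (much larger than I_O)

# Input collector current of Q1 (Q1's own base current neglected here)
I_C1 = I_R1 - I_O / beta2

# Base voltage from the Shockley diode law; also the compliance voltage
V_BE1 = V_T * math.log(I_C1 / I_S)

# R1 sets the reference current
R1 = (V_CC - V_BE1) / I_R1

# Emitter leg resistance from Eq. 1 (equal scale currents assumed)
R2 = (beta2 / (beta2 + 1.0)) * (V_T / I_O) * math.log(I_C1 / I_O)

print(f"V_BE1 = {V_BE1:.3f} V, R1 = {R1:.0f} ohm, R2 = {R2:.0f} ohm")
```

With these illustrative numbers a 10 µA output is obtained from kilohm-range resistors, which is the point of the Widlar topology: the same current from a basic two-transistor mirror would need a resistor of several hundred kilohms.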
Finding the current with given resistor values
The inverse of the design problem is finding the current when the resistor values are known. An iterative method is described next. Assume the current source is biased so the collector-base voltage of the output transistor Q2 is zero. The current through R1 is the input or reference current given as,

IR1 = (VCC − VBE1) / R1 = IC1 (1 + 1/β1) + IC2/β2

Rearranging, IC1 is found as:

Eq. 2:   IC1 = [(VCC − VBE1)/R1 − IC2/β2] / (1 + 1/β1)

The diode equation provides:

Eq. 3:   VBE1 = VT ln(IC1 / IS)

Eq. 1 provides:

IC2 = (β2/(β2 + 1)) · (VT/R2) · ln(IC1/IC2)
These three relations are a nonlinear, implicit determination for the currents that can be solved by iteration.
We guess starting values for IC1 and IC2.
We find a value for VBE1 from Eq. 3:

VBE1 = VT ln(IC1 / IS)

We find a new value for IC1 from Eq. 2:

IC1 = [(VCC − VBE1)/R1 − IC2/β2] / (1 + 1/β1)

We find a new value for IC2 from Eq. 1:

IC2 = (β2/(β2 + 1)) · (VT/R2) · ln(IC1/IC2)
This procedure is repeated to convergence, and is set up conveniently in a spreadsheet. One simply uses a macro to copy the new values into the spreadsheet cells holding the initial values to obtain the solution in short order.
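The same iteration can be written directly in a few lines of code. The following Python sketch uses the same assumed device values as the design sketch above; the starting guesses and convergence tolerance are arbitrary choices.

```python
import math

V_T, I_S, beta2, beta1 = 0.025, 1e-14, 100.0, 100.0  # assumed device values
V_CC, R1, R2 = 5.0, 4.4e3, 11.4e3                    # assumed circuit values

# Starting guesses for the two collector currents
I_C1, I_C2 = 1e-3, 10e-6

for _ in range(100):
    V_BE1 = V_T * math.log(I_C1 / I_S)                                   # Eq. 3
    I_C1_new = ((V_CC - V_BE1) / R1 - I_C2 / beta2) / (1 + 1 / beta1)    # Eq. 2
    I_C2_new = (beta2 / (beta2 + 1)) * (V_T / R2) * math.log(I_C1_new / I_C2)  # from Eq. 1
    converged = abs(I_C1_new - I_C1) < 1e-12 and abs(I_C2_new - I_C2) < 1e-12
    I_C1, I_C2 = I_C1_new, I_C2_new
    if converged:
        break

print(f"I_C1 = {I_C1 * 1e3:.3f} mA, I_C2 = {I_C2 * 1e6:.2f} uA")
```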
Note that with the circuit as shown, if VCC changes, the output current will change. Hence, to keep the output current constant despite fluctuations in VCC, the circuit should be driven by a constant current source rather than using the resistor R1.
Exact solution
The transcendental equations above can be solved exactly in terms of the Lambert W function.
Output impedance
An important property of a current source is its small signal incremental output impedance, which should ideally be infinite. The Widlar circuit introduces local current feedback for transistor Q2. Any increase in the current in Q2 increases the voltage drop across R2, reducing the VBE for Q2, thereby countering the increase in current. This feedback means the output impedance of the circuit is increased, because the feedback involving R2 forces use of a larger voltage to drive a given current.
Output resistance is found using a small-signal model for the circuit, shown in Figure 2. Transistor Q1 is replaced by its small-signal emitter resistance rE because it is diode connected. Transistor Q2 is replaced with its hybrid-pi model. A test current Ix is attached at the output.
Using the figure, the output resistance is determined using Kirchhoff's laws. Using Kirchhoff's voltage law from the ground on the left to the ground connection of R2:

0 = Ib (R1∥rE) + Ib rπ + (Ix + Ib) R2

Rearranging:

Ib = −Ix · R2 / (R2 + rπ + R1∥rE)

Using Kirchhoff's voltage law from the ground connection of R2 to the ground of the test current:

Vx = (Ix + Ib) R2 + (Ix − β Ib) rO

or, substituting for Ib:

Eq. 4:   RO = Vx / Ix = [1 + β R2 / (R2 + rπ + R1∥rE)] rO + R2∥(rπ + R1∥rE)
According to Eq. 4, the output resistance of the Widlar current source is increased over that of the output transistor itself (which is rO) so long as R2 is large enough compared to the rπ of the output transistor (large resistances R2 make the factor multiplying rO approach the value (β + 1)). The output transistor carries a low current, making rπ large, and increase in R2 tends to reduce this current further, causing a correlated increase in rπ. Therefore, a goal of R2 ≫ rπ can be unrealistic, and further discussion is provided below. The resistance R1∥rE usually is small because the emitter resistance rE usually is only a few ohms.
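As a rough numerical illustration of Eq. 4 (all small-signal values below are assumed, chosen to be consistent with the earlier example bias point, and are not measurements):

```python
# Assumed small-signal parameters for the example design (illustrative only)
V_T, V_A, beta = 0.025, 100.0, 100.0   # thermal voltage (V), Early voltage (V), current gain
I_C2 = 10e-6                           # output current, A
R2 = 11.4e3                            # emitter leg resistance, ohm
R_B = 25.0                             # R1 || r_E, ohm (r_E of Q1 is about V_T / I_C1)

r_pi = beta * V_T / I_C2               # small-signal base resistance of Q2
r_O = V_A / I_C2                       # Early-effect output resistance of Q2

# Eq. 4: output resistance with emitter degeneration
R_O = r_O * (1 + beta * R2 / (R2 + r_pi + R_B)) + 1 / (1 / R2 + 1 / (r_pi + R_B))
print(f"r_O = {r_O / 1e6:.1f} Mohm, R_O = {R_O / 1e6:.1f} Mohm")
```

With these values the degeneration raises the output resistance by roughly a factor of five over rO alone, illustrating that only a modest improvement is obtained when IC1/IC2 is around 100.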
Current dependence of output resistance
The current dependence of the resistances rπ and rO is discussed in the article hybrid-pi model. The current dependence of the resistor values is:

rπ = β2 VT / IC2

and

rO = VA / IC2

is the output resistance due to the Early effect when VCB = 0 V (device parameter VA is the Early voltage).
From earlier in this article (setting the scale currents equal for convenience):

Eq. 5:   R2 = (β2/(β2 + 1)) · (VT/IC2) · ln(IC1/IC2)
Consequently, for the usual case of small rE, and neglecting the second term in RO with the expectation that the leading term involving rO is much larger:

Eq. 6:   RO ≈ [1 + β2 R2 / (R2 + rπ)] rO ≈ [1 + β2 ln(IC1/IC2) / (β2 + 1 + ln(IC1/IC2))] rO
where the last form is found by substituting Eq. 5 for R2. Eq. 6 shows that a value of output resistance much larger than rO of the output transistor results only for designs with IC1 >> IC2. Figure 3 shows that the circuit output resistance RO is not determined so much by feedback as by the current dependence of the resistance rO of the output transistor (the output resistance in Figure 3 varies four orders of magnitude, while the feedback factor varies only by one order of magnitude).
Increase of IC1 to increase the feedback factor also results in increased compliance voltage, not a good thing as that means the current source operates over a more restricted voltage range. So, for example, with a goal for compliance voltage set, placing an upper limit upon IC1, and with a goal for output resistance to be met, the maximum value of output current IC2 is limited.
The center panel in Figure 3 shows the design trade-off between emitter leg resistance and the output current: a lower output current requires a larger leg resistor, and hence a larger area for the design. An upper bound on area therefore sets a lower bound on the output current and an upper bound on the circuit output resistance.
Eq. 6 for RO depends upon selecting a value of R2 according to Eq. 5. That means Eq. 6 is not a circuit behavior formula, but a design value equation. Once R2 is selected for a particular design objective using Eq. 5, thereafter its value is fixed. If circuit operation causes currents, voltages or temperatures to deviate from the designed-for values; then to predict changes in RO caused by such deviations, Eq. 4 should be used, not Eq. 6.
See also
Current source
Current mirror
Wilson current source
References
Further reading
Current mirrors and active loads: Mu-Huo Cheng
Analog circuits
Electronic design
de:Stromspiegel#Beispiele | Widlar current source | [
"Engineering"
] | 1,843 | [
"Electronic design",
"Analog circuits",
"Electronic engineering",
"Design"
] |
2,057,204 | https://en.wikipedia.org/wiki/Thorin%20%28chemistry%29 | Thorin (also called thoron or thoronol) is an indicator used in the determination of barium, beryllium, lithium, uranium and thorium compounds. Being a compound of arsenic, it is highly toxic.
References
External links
MSDS at Oxford University
Azo compounds
Naphthalenesulfonates
Organic sodium salts
2-Naphthols
Titration
Arsonic acids | Thorin (chemistry) | [
"Chemistry"
] | 81 | [
"Instrumental analysis",
"Organic sodium salts",
"Titration",
"Salts"
] |
2,057,361 | https://en.wikipedia.org/wiki/Nitrosation%20and%20nitrosylation | Nitrosation and nitrosylation are two names for the process of converting organic compounds or metal complexes into nitroso derivatives, i.e., compounds containing the functionality. The synonymy arises because the R-NO functionality can be interpreted two different ways, depending on the physico-chemical environment:
Nitrosylation interprets the process as adding a nitrosyl radical NO•. Nitrosylation commonly occurs in the context of a metal (e.g. iron) or a thiol, leading to nitrosyl iron (e.g., in nitrosylated heme = nitrosylheme) or S-nitrosothiols (RSNOs).
Nitrosation interprets the process as adding a nitrosonium ion, NO+. Nitrosation commonly occurs with amines (R2N–H), leading to a nitrosamine.
There are multiple chemical mechanisms by which this can be achieved, including enzymes and chemical synthesis.
In biochemistry
The biological functions of nitric oxide include S-nitrosylation, the conjugation of NO to cysteine thiols in proteins, which is an important part of cell signalling.
Organic synthesis
Nitrosation is typically performed with nitrous acid, formed from acidification of a sodium nitrite solution. Nitrous acid is unstable, and high yields require a rapid reaction rate. NO+ synthon transfer is catalyzed by a strong nucleophile, such as (in order of increasing efficacy) chloride, bromide, thiocyanate, or thiourea. Indeed, (meta)stable nitrosation products (alkyl nitrites or nitrosamines) can also nitrosate under such conditions; and the equilibria can be driven in any desired direction. Absent a driving force, thionitrosos form out of nitrosamines, which form out of nitrite esters, which form out of nitrous acid.
Some form of Lewis acid also enhances the electrophilicity of NO+ carriers, but the acid need not be Brønsted: nitroprusside, for example, nitrosates best in neutral-to-basic conditions. Roussin's salts may react similarly, but it is unclear if they release NO+ or NO•.
In general, nitric oxide is a poor nitrosant, Traube-type reactions notwithstanding. But atmospheric oxygen can oxidize nitric oxide to nitrogen dioxide, which does nitrosate. Alternatively cupric ions catalyze disproportionation into NO+ and NO−.
On the carbon skeleton
Nitroso compounds, such as nitrosobenzene, are typically prepared by oxidation of hydroxylamines:
RNHOH + [O] → RNO + H2O
In principle, NO+ can substitute directly onto an aromatic ring, but the ring must be substantially activated, because NO+ is about 14 bel less electrophilic than the nitronium ion, NO2+. Unusually for electrophilic aromatic substitution, proton release to the solvent is typically rate-limiting, and the reaction can be suppressed in superacidic conditions.
Excess NO+ typically oxidizes the initially-nitroso product to a nitro compound or diazonium salt.
Of chalcogen heteroatoms
S-nitrosothiols are typically prepared by condensation of a thiol and nitrous acid:
RSH + HONO → RSNO + H2O
They are liable to disproportionate to the disulfide and nitrogen oxides.
Although such cations have not been isolated, nitrosating reagents likely coordinate to sulfides with no hydrogen substituent.
Sulfinates and sulfinic acids add twice to nitrous acid, so that the initial nitroso product (from the first addition) is reduced to a disulfonyl hydroxylamine. A variant on this process with bisulfite is Raschig's hydroxylamine production technique.
O-Nitroso compounds are similar to S-nitroso compounds, but are less reactive because the oxygen atom is less nucleophilic than the sulfur atom. The formation of an alkyl nitrite from an alcohol and nitrous acid is a common example:
ROH + HONO → RONO + H2O
Of amines
N-Nitrosamines arise from the reaction of nitrite sources with amino compounds. Typically, this reaction occurs when the nucleophilic nitrogen of a secondary amine attacks the nitrogen of the electrophilic nitrosonium ion:
NO2− + 2 H+ → NO+ + H2O
R2NH + NO+ → R2N-NO + H+
If the amine is secondary, then the product is stable, but primary amines decompose in acid to the corresponding diazonium cation, and then attack any nearby nucleophile. Nitrosation of a primary amine is thus sometimes referred to as deamination.
The stable secondary nitrosamines are carcinogens in rodents. The compounds are believed to nitrosate primary amines during the acid environment of the stomach, and the resulting diazonium ions alkylate DNA, leading to cancer.
References
External links
Nitrosation of Amines
Chemical reactions
Nitrogen cycle
Organic reactions | Nitrosation and nitrosylation | [
"Chemistry"
] | 1,126 | [
"Nitrogen cycle",
"Metabolism",
"nan",
"Organic reactions"
] |
2,057,973 | https://en.wikipedia.org/wiki/Free-energy%20relationship | In physical organic chemistry, a free-energy relationship or Gibbs energy relation relates the logarithm of a reaction rate constant or equilibrium constant for one series of chemical reactions with the logarithm of the rate or equilibrium constant for a related series of reactions. Free energy relationships establish the extent at which bond formation and breakage happen in the transition state of a reaction, and in combination with kinetic isotope experiments a reaction mechanism can be determined. Free energy relationships are often used to calculate equilibrium constants since they are experimentally difficult to determine.
The most common form of free-energy relationships are linear free-energy relationships (LFER). The Brønsted catalysis equation describes the relationship between the ionization constant of a series of catalysts and the reaction rate constant for a reaction on which the catalyst operates. The Hammett equation predicts the equilibrium constant or reaction rate of a reaction from a substituent constant and a reaction type constant. The Edwards equation relates the nucleophilic power to polarisability and basicity. The Marcus equation is an example of a quadratic free-energy relationship (QFER).
IUPAC has suggested that this name should be replaced by linear Gibbs energy relation, but at present there is little sign of acceptance of this change.
The area of physical organic chemistry which deals with such relations is commonly referred to as 'linear free-energy relationships'.
Chemical and physical properties
A typical LFER relation for predicting the equilibrium concentration of a compound or solute in the vapor phase to a condensed (or solvent) phase can be defined as follows (following M.H. Abraham and co-workers):

log SP = c + l·L + e·E + s·S + a·A + b·B

where SP is some free-energy related property, such as an adsorption or absorption constant, anesthetic potency, etc. The lowercase letters (l, e, s, a, b) are system constants describing the contribution of the aerosol phase to the sorption process. The capital letters (L, E, S, A, B) are solute descriptors representing the complementary properties of the compounds. Specifically,
L is the gas–liquid partition constant on n-hexadecane at 298 K;
E = the excess molar refraction (E = 0 for n-alkanes);
S = the ability of a solute to stabilize a neighbouring dipole by virtue of its capacity for orientation and induction interactions;
A = the solute's effective hydrogen bond acidity; and
B = the solute's effective hydrogen-bond basicity.
The complementary system constants are identified as
l = the contribution from cavity formation and dispersion interactions;
e = the contribution from interactions with solute n-electrons and pi electrons;
s = the contribution from dipole-type interactions;
a = the contribution from hydrogen-bond basicity (because a basic sorbent will interact with an acidic solute); and
b = the contribution from hydrogen-bond acidity to the transfer of the solute from air to the aerosol phase.
Similarly, the correlation of solvent–solvent partition coefficients, log P, is given by

log P = c + v·V + e·E + s·S + a·A + b·B

where V is McGowan's characteristic molecular volume in cubic centimeters per mole divided by 100.
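As a purely illustrative sketch of how such a correlation is evaluated (all coefficients and descriptor values below are invented placeholders, not fitted Abraham-model constants for any real system), the predicted property is just a linear combination of the descriptors:

```python
# Hypothetical system constants (c, l, e, s, a, b) and solute descriptors
# (L, E, S, A, B); all numbers here are placeholders for illustration only.
system = {"c": -0.20, "l": 0.95, "e": 0.10, "s": 0.50, "a": 1.50, "b": 0.80}
solute = {"L": 3.94, "E": 0.61, "S": 0.51, "A": 0.00, "B": 0.14}

# log SP = c + l*L + e*E + s*S + a*A + b*B
log_sp = system["c"] + sum(system[k.lower()] * v for k, v in solute.items())
print(f"Predicted log SP = {log_sp:.2f}")
```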
See also
Brønsted catalysis equation
Hammett equation
Taft equation
Swain–Lupton equation
Grunwald–Winstein equation
Yukawa–Tsuno equation
Edwards equation
Marcus equation
Bell–Evans–Polanyi principle
Quantitative structure–activity relationship
References
External links
Solutions
Physical organic chemistry | Free-energy relationship | [
"Chemistry"
] | 694 | [
"Homogeneous chemical mixtures",
"Solutions",
"Physical organic chemistry"
] |
2,058,807 | https://en.wikipedia.org/wiki/1%2C2-Dichloroethane | The chemical compound 1,2-dichloroethane, commonly known as ethylene dichloride (EDC), is a chlorinated hydrocarbon. It is a colourless liquid with a chloroform-like odour. The most common use of 1,2-dichloroethane is in the production of vinyl chloride, which is used to make polyvinyl chloride (PVC) pipes, furniture and automobile upholstery, wall coverings, housewares, and automobile parts. 1,2-Dichloroethane is also used generally as an intermediate for other organic chemical compounds, and as a solvent. It forms azeotropes with many other solvents, including water (at a boiling point of ) and other chlorocarbons.
History
In 1794, physician Jan Rudolph Deiman, merchant Adriaan Paets van Troostwijk, chemist Anthoni Lauwerenburg, and botanist Nicolaas Bondt, under the name of Society of Dutch Chemists (), were the first to produce 1,2-dichloroethane from olefiant gas (oil-making gas, ethylene) and chlorine gas. Although the Gezelschap in practice did not do much in-depth scientific research, they and their publications were highly regarded. Part of that acknowledgement is that 1,2-dichloroethane was called "Dutch oil" in old chemistry. This is also the origin of the archaic term "olefiant gas" (oil-making gas) for ethylene, for in this reaction it is ethylene that makes the Dutch oil. And "olefiant gas" is the etymological origin of the modern term "olefins", the family of hydrocarbons of which ethylene is the first member.
Production
Nearly 20 million tons of 1,2-dichloroethane are produced annually in the United States, Western Europe, and Japan. Production is primarily achieved through the iron(III) chloride-catalysed reaction of ethylene and chlorine:
CH2=CH2 + Cl2 → ClCH2CH2Cl   (ΔH⊖r = −218 kJ/mol)
1,2-dichloroethane is also generated by the copper(II) chloride-catalysed oxychlorination of ethylene:
2 CH2=CH2 + 4 HCl + O2 → 2 ClCH2CH2Cl + 2 H2O
Uses
Vinyl chloride production
Approximately 95% of the world's production of 1,2-dichloroethane is used in the production of vinyl chloride monomer (VCM) with hydrogen chloride as a byproduct. VCM is the precursor to polyvinyl chloride.
The hydrogen chloride can be re-used in the production of more 1,2-dichloroethane via the oxychlorination route described above.
Other uses
1,2-Dichloroethane has been used as degreaser and paint remover but this use has phased out due to its toxicity. As a useful 'building block' reagent, it is used as an intermediate in the production of diverse organic compounds such as ethylenediamine and higher ethyleneamines. In the laboratory it is occasionally used as a source of chlorine, with elimination of ethene and chloride.
Via several steps, 1,2-dichloroethane is a precursor to 1,1,1-trichloroethane. Historically, before leaded petrol was phased out, chloroethanes were used as an additive in petrol to prevent lead buildup in engines.
Safety
1,2-Dichloroethane is highly flammable and releases hydrochloric acid when combusted:
2 ClCH2CH2Cl + 5 O2 → 4 CO2 + 2 H2O + 4 HCl
It is also toxic (especially by inhalation due to its high vapour pressure) and possibly carcinogenic. Its high solubility and 50-year half-life in anoxic aquifers make it a perennial pollutant and health risk that is very expensive to treat conventionally, requiring a method of bioremediation. While the chemical is not used in consumer products manufactured in the U.S., a case was reported in 2009 of molded plastic consumer products (toys and holiday decorations) from China that released 1,2-dichloroethane into homes at levels high enough to produce cancer risk.
Substitutes are recommended and will vary according to application. Dioxolane and toluene are possible substitutes as solvents. Dichloroethane is unstable in the presence of aluminium and, when moist, with zinc and iron.
References
External links
Gezelschap der Hollandsche Scheikundigen
ChemicalLand compound database
Environmental Chemistry compound database
Merck Chemicals database
National Pollutant Inventory – 1,2 Dichlorethane Fact Sheet
Locating and estimating air emissions from sources of ethylene dichloride, EPA report EPA-450/4-84-007d, March 1984
Hazardous air pollutants
IARC Group 2B carcinogens
Organochloride insecticides
Chloroalkanes
Plastics
Halogenated solvents
Fuel additives | 1,2-Dichloroethane | [
"Physics"
] | 1,060 | [
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
2,059,470 | https://en.wikipedia.org/wiki/Chlorine%20pentafluoride | Chlorine pentafluoride is an interhalogen compound with formula ClF5. This colourless gas is a strong oxidant that was once a candidate oxidizer for rockets. The molecule adopts a square pyramidal structure with C4v symmetry, as confirmed by its high-resolution 19F NMR spectrum. It was first synthesized in 1963.
Preparation
Some of the earliest research on the preparation was classified. It was first prepared by fluorination of chlorine trifluoride at high temperatures and high pressures:
ClF3 + F2 → ClF5
ClF + 2F2 → ClF5
Cl2 + 5F2 → 2ClF5
CsClF4 + F2 → CsF + ClF5
NiF2 catalyzes this reaction.
Certain metal fluorides, MClF4 (i.e. KClF4, RbClF4, CsClF4), react with F2 to produce ClF5 and the corresponding alkali metal fluoride.
Reactions
In a highly exothermic reaction, ClF5 reacts with water to produce chloryl fluoride and hydrogen fluoride:
ClF5 + 2 H2O → FClO2 + 4 HF
It is also a strong fluorinating agent. At room temperature it reacts readily with all elements (including otherwise "inert" elements like platinum and gold) except noble gases, nitrogen, oxygen and fluorine.
Uses
Rocket propellant
Chlorine pentafluoride was once considered for use as an oxidizer for rockets. As a propellant, it has a higher maximum specific impulse than ClF3, but with the same difficulties in handling. Due to the hazardous nature of chlorine pentafluoride, it has yet to be used in a large scale rocket propulsion system.
See also
Chlorine trifluoride
Hypervalent molecule
References
External links
National Pollutant Inventory - Fluoride and compounds fact sheet
New Jersey Hazardous Substance Fact Sheet
WebBook page for ClF5
Fluorides
Inorganic chlorine compounds
Interhalogen compounds
Rocket oxidizers
Fluorinating agents
Oxidizing agents
Chlorine(V) compounds
Substances discovered in the 1960s | Chlorine pentafluoride | [
"Chemistry"
] | 450 | [
"Highly-toxic chemical substances",
"Inorganic compounds",
"Redox",
"Harmful chemical substances",
"Interhalogen compounds",
"Oxidizing agents",
"Salts",
"Fluorinating agents",
"Rocket oxidizers",
"Inorganic chlorine compounds",
"Reagents for organic chemistry",
"Fluorides"
] |
2,061,045 | https://en.wikipedia.org/wiki/Counter-electromotive%20force | Counter-electromotive force (counter EMF, CEMF, back EMF), is the electromotive force (EMF) manifesting as a voltage that opposes the change in current which induced it. CEMF is the EMF caused by electromagnetic induction.
Details
For example, the voltage appearing across an inductor or coil is due to a change in current, which causes a change in the magnetic field within the coil and therefore induces a voltage. The polarity of this self-induced voltage at every moment opposes that of the change in applied voltage, to keep the current constant.
The term back electromotive force is also commonly used to refer to the voltage that occurs in electric motors where there is relative motion between the armature and the magnetic field produced by the motor's field coils or permanent magnet field, thus also acting as a generator while running as a motor. This effect is not the same as the voltage the motor's inductance generates in opposition to a changing current; it is a separate phenomenon. This back-EMF still follows Faraday's law, but it is present even when the motor current is not changing, because it arises from the motion of the armature windings through the magnetic field.
This voltage is in series with and opposes the original applied voltage and is called "back-electromotive force" (by Lenz's law). With a lower overall voltage across the motor's internal resistance as the motor turns faster, the current flowing into the motor decreases. One practical application of this phenomenon is to indirectly measure motor speed and position, as the back-EMF is proportional to the rotational speed of the armature.
In motor control and robotics, back-EMF often refers most specifically to actually using the voltage generated by a spinning motor to infer the speed of the motor's rotation, for use in better controlling the motor in specific ways.
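One common way this is done, sketched below in Python, uses the steady-state DC motor relation V = I·R + ke·ω, so that the speed can be estimated from the measured terminal voltage and current; the armature resistance and back-EMF constant here are assumed, illustrative values, not data for any particular motor.

```python
def estimate_speed_rad_s(v_terminal, i_motor, r_armature, k_e):
    """Estimate rotor speed from the back-EMF of a DC motor.

    Steady-state model: V = I*R + k_e * omega, so the back-EMF is
    E = V - I*R and omega = E / k_e.
    """
    back_emf = v_terminal - i_motor * r_armature
    return back_emf / k_e

# Example with assumed (illustrative) motor parameters:
# 0.5 ohm armature resistance, k_e = 0.02 V per rad/s.
omega = estimate_speed_rad_s(v_terminal=12.0, i_motor=2.0, r_armature=0.5, k_e=0.02)
print(f"Estimated speed: {omega:.0f} rad/s")  # (12 - 1) / 0.02 = 550 rad/s
```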
To observe the effect of back-EMF of a motor, one can perform this simple exercise: with an incandescent light on, cause a large motor such as a drill press, saw, air conditioner compressor, or vacuum cleaner to start. The light may dim briefly as the motor starts. When the armature is not turning (called locked rotor) there is no back-EMF and the motor's current draw is quite high. If the motor's starting current is high enough, it will pull the line voltage down enough to cause noticeable dimming of the light.
References
External links
Counter-electromotive-force in access control applications
Electromagnetism | Counter-electromotive force | [
"Physics"
] | 547 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
2,061,388 | https://en.wikipedia.org/wiki/Few-body%20systems | In physics, a few-body system consists of a small number of well-defined structures or point particles. It usually in-between the two-body and the many-body systems with large N.
Quantum mechanics
In quantum mechanics, examples of few-body systems include light nuclear systems (that is, few-nucleon bound and scattering states), small molecules, light atoms (such as helium in an external electric field), atomic collisions, and quantum dots. A fundamental difficulty in describing few-body systems is that the Schrödinger equation and the classical equations of motion are not analytically solvable for more than two mutually interacting particles even when the underlying forces are precisely known. This is known as the few-body problem. For some three-body systems an exact solution can be obtained iteratively through the Faddeev equations. It can be shown that under certain conditions Faddeev equations should lead to the Efimov effect. Most three-body systems are amenable to extremely accurate numerical solutions that use large sets of basis functions and then variationally optimize the amplitudes of the basis functions. Particular cases are the hydrogen molecular ion or the helium atom. The latter has been solved very precisely using basis sets of Hylleraas or Frankowski-Pekeris functions (see references of the work of G.W.F. Drake and J.D. Morgan III in Helium atom section).
In many cases theory has to resort to approximations to treat few-body systems. These approximations have to be tested by detailed experimental data. Atomic collisions or precision laser spectroscopy are particularly suitable for such tests. The fundamental force underlying atomic systems, the electromagnetic force, is essentially understood. Therefore, any discrepancy found between experiment and theory can be directly related to the theoretical description of few-body effects, or to the existence of new fundamental forces (physics beyond the Standard Model). In nuclear systems, in contrast, the underlying force is much less understood. Furthermore, in atomic collisions the number of particles can be kept small enough so that complete kinematic information about every single particle in the system can be obtained experimentally (see article on kinematically complete experiment). In systems with large particle numbers, in contrast, usually only statistically averaged or collective quantities about the system can be measured.
Classical mechanics
In classical mechanics, the few-body problem is a subset of the N-body problem.
References
L.D. Faddeev, S.P. Merkuriev, Quantum Scattering Theory for Several Particle Systems, Springer, August 31, 1993, .
M. Schulz et al., Three-Dimensional Imaging of Atomic Four-Body Processes, Nature 422, 48 (2003)
Erich Schmid, Horst Ziegelmann, The quantum mechanical three-body problem, University of California, 1974
В.Б. Беляев (V.B. Belyaev), "Лекции по теории малочастичных систем" (Lectures on the theory of few-body systems), М., Энергоатомиздат (Energoatomizdat, Moscow), 1986
External links
Bogolyubov Theoretical Physics Laboratory (Joint Institute of Nuclear Research), Sector Few-Body Systems
Joint Institute of Nuclear Research (Russia)
American Physical Society Few Body Topical Group
Classical mechanics
Quantum mechanics | Few-body systems | [
"Physics"
] | 725 | [
"Classical mechanics",
"Theoretical physics",
"Mechanics",
"Quantum mechanics"
] |
2,061,865 | https://en.wikipedia.org/wiki/Kelvin%E2%80%93Voigt%20material | A Kelvin–Voigt material, also called a Voigt material, is the most simple model viscoelastic material showing typical rubbery properties. It is purely elastic on long timescales (slow deformation), but shows additional resistance to fast deformation. The model was developed independently by the British physicist Lord Kelvin in 1865 and by the German physicist Woldemar Voigt in 1890.
Definition
The Kelvin–Voigt model, also called the Voigt model, is represented by a purely viscous damper and purely elastic spring connected in parallel as shown in the picture.
If, instead, we connect these two elements in series we get a model of a Maxwell material.
Since the two components of the model are arranged in parallel, the strains in each component are identical:

εtotal = εD = εS

where the subscript D indicates the stress-strain in the damper and the subscript S indicates the stress-strain in the spring. Similarly, the total stress will be the sum of the stress in each component:

σtotal = σD + σS

From these equations we get that in a Kelvin–Voigt material, stress σ, strain ε and their rates of change with respect to time t are governed by equations of the form:

σ(t) = E·ε(t) + η·dε(t)/dt

or, in dot notation:

σ = E·ε + η·ε̇

where E is a modulus of elasticity and η is the viscosity. The equation can be applied either to the shear stress or normal stress of a material.
Effect of a sudden stress
If we suddenly apply some constant stress σ0 to a Kelvin–Voigt material, then the deformation would approach the deformation of the pure elastic material, σ0/E, with the difference decaying exponentially:

ε(t) = (σ0/E)·(1 − e^(−t/λ))

where t is time and λ = η/E is the retardation time.

If we free the material at time t1, then the elastic element would retard the material back until the deformation becomes zero. The retardation obeys the following equation:

ε(t > t1) = ε(t1)·e^(−(t − t1)/λ)
The picture shows the dependence of the dimensionless deformation E·ε/σ0
on dimensionless time t/λ. In the picture the stress on the material is loaded at time t = 0, and released at the later dimensionless time t1/λ.
Since all the deformation is reversible (though not suddenly) the Kelvin–Voigt material is a solid.
The Voigt model predicts creep more realistically than the Maxwell model, because in the infinite time limit the strain approaches a constant:

ε(t → ∞) = σ0/E,
while a Maxwell model predicts a linear relationship between strain and time, which is most often not the case. Although the Kelvin–Voigt model is effective for predicting creep, it is not good at describing the relaxation behavior after the stress load is removed.
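A minimal numerical sketch of this creep-and-recovery behaviour is given below (Python; the modulus, viscosity and applied stress are assumed, illustrative values rather than data for any real material):

```python
import math

E, eta, sigma0 = 1.0e6, 2.0e6, 1.0e3   # modulus (Pa), viscosity (Pa*s), applied stress (Pa)
lam = eta / E                          # retardation time, s
t1 = 10.0                              # time at which the stress is removed, s

def strain(t):
    """Creep under constant stress for t <= t1, then recovery after release."""
    if t <= t1:
        return (sigma0 / E) * (1.0 - math.exp(-t / lam))
    eps_t1 = (sigma0 / E) * (1.0 - math.exp(-t1 / lam))
    return eps_t1 * math.exp(-(t - t1) / lam)

for t in (0.0, 2.0, 10.0, 12.0, 20.0):
    print(f"t = {t:5.1f} s   strain = {strain(t):.6f}")
```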
Dynamic modulus
The complex dynamic modulus of the Kelvin–Voigt material is given by:

E*(ω) = E + i·η·ω

Thus, the real and imaginary components of the dynamic modulus are referred to as the storage modulus E′ and the loss modulus E″ respectively:

E′ = E
E″ = η·ω

Note that E′ is constant, while E″ is directly proportional to frequency, with the viscosity η (equal to the retardation time λ multiplied by E) as the constant of proportionality.
References
See also
Burgers material
Generalized Maxwell model
Maxwell material
Standard linear solid model
Non-Newtonian fluids
Materials science
William Thomson, 1st Baron Kelvin | Kelvin–Voigt material | [
"Physics",
"Materials_science",
"Engineering"
] | 621 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
2,061,993 | https://en.wikipedia.org/wiki/Chemical%20impurity | In chemistry and materials science, impurities are chemical substances inside a confined amount of liquid, gas, or solid. They differ from the chemical composition of the material or compound. Firstly, a pure chemical should appear in at least one chemical phase and can also be characterized by its phase diagram. Secondly, a pure chemical should prove to be homogeneous (i.e., a uniform substance that has the same composition throughout the material). The perfect pure chemical will pass all attempts to separate and purify it further. Thirdly, and here we focus on the common chemical definition, it should not contain any trace of any other kind of chemical species. In reality, there are no absolutely 100% pure chemical compounds, as there is always some small amount of contamination.
The levels of impurities in a material are generally defined in relative terms. Standards have been established by various organizations that attempt to define the permitted levels of various impurities in a manufactured product. Strictly speaking, a material's level of purity can only be stated as being more or less pure than some other material.
Impurities are either naturally occurring or added during synthesis of a chemical or commercial product. During production, impurities may be purposely or accidentally added to the substance. The removal of unwanted impurities may require the use of separation or purification techniques such as distillation or zone refining. In other cases, impurities might be added to acquire certain properties of a material such as the color in gemstones or conductivity in semiconductors. Impurities may also affect crystallization as they can act as nucleation sites that start crystal growth. Impurities can also play a role in nucleation of other phase transitions in the form of defects.
Unwanted impurities
Impurities can become unwanted when they prevent the working nature of the material. Examples include ash and debris in metals and leaf pieces in blank white papers. The removal of impurities is usually done chemically. For example, in the manufacturing of iron, calcium carbonate is added to the blast furnaces to remove silicon dioxide from the iron ore. Zone refining, another purification method, is an economically important method for the purification of semiconductors.
However, some kinds of impurities can be removed by physical means. A mixture of water and salt can be separated by distillation, with water as the distillate and salt as the solid residue. This is done by heating the water so it boils and leaves behind the salt. The water is cooled and the gas turns back to a pure liquid. Impurities are usually physically removed from liquids and gases. Removal of sand particles from metal ore is one example with solids.
No matter what method is used, it is usually impossible to separate an impurity completely from a material. The reason that it is impossible to remove impurities completely is of thermodynamic nature and is predicted by the second law of thermodynamics. Removing impurities completely means reducing their concentration to zero. This would require an infinite amount of work and energy as predicted by the second law of thermodynamics. What technicians can do is to increase the purity of a material to as near 100% as possible or economically feasible.
Impurities in pharmaceuticals and therapeutics are of special concern and the last couple of decades have witnessed a fair number of scandals, from insecure ingredients and incorrect dosage forms to intentionally fortified medications and accidental contaminations.
Wanted impurities
Occasionally, impurities are wanted in a material in order to change its properties. Such impurities can be naturally occurring and left unaltered in the material, or they can be intentionally added during synthesis. They appear in everyday applications, such as the different colors of gemstones or the doping used to tune the conductivity of semiconductors.
An example of wanted impurities is found in gems, where slight impurities act as chromophores and give the stone its color. An example is the gem family beryl, which has the base chemical formula Be3Al2(SiO3)6. Pure beryl would appear colorless, but this rarely occurs; the presence of trace elements changes its color. The green of emeralds comes from impurities such as chromium, vanadium, or iron. A manganese impurity gives the pink gem morganite, and iron creates the blue gem aquamarine.
Doping is a process in which impurities are purposefully added to semiconductors to increase electrical conductivity and improve a semiconductor's function. The dopants, the elements added to the original crystal structure, contain a different number of valence electrons than the base elements. Semiconductors that are p-doped contain a small amount of elements that have fewer valence electrons than the other elements in the crystal. N-doping is the opposite: the dopant contains more valence electrons.
Impurities and nucleation
When an impure liquid is cooled to its melting point, the liquid, undergoing a phase transition, crystallizes around the impurities and becomes a crystalline solid. If there are no impurities, the liquid is said to be pure and can be supercooled below its melting point without becoming a solid. This occurs because the liquid has nothing to crystallize around, so a natural crystalline solid cannot form. A solid is eventually formed when dynamic arrest or the glass transition occurs, but it forms an amorphous solid – a glass – instead, as there is no long-range order in its structure.
Impurities play an important role in the nucleation of other phase transitions. For example, the presence of foreign elements may have important effects on the mechanical and magnetic properties of metal alloys. Iron atoms in copper cause the renowned Kondo effect where the conduction electron spins form a magnetic bound state with the impurity atom. Magnetic impurities in superconductors can serve as generation sites for vortex defects. Point defects can nucleate reversed domains in ferromagnets and dramatically affect their coercivity. In general impurities are able to serve as initiation points for phase transitions because the energetic cost of creating a finite-size domain of a new phase is lower at a point defect. In order for the nucleus of a new phase to be stable, it must reach a critical size. This threshold size is often lower at an impurity site.
See also
Dross
Fineness
Pollution
Semiconductor
Slag
Spin wave
References
Cheng, E. et al., Chemistry – A Modern View, Aristo-Wilson, Hong Kong, 2004
Materials science
Environmental chemistry
Adulteration | Chemical impurity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Environmental_science"
] | 1,350 | [
"Applied and interdisciplinary physics",
"Adulteration",
"Environmental chemistry",
"Drug safety",
"Materials science",
"nan"
] |
5,185,470 | https://en.wikipedia.org/wiki/Malament%E2%80%93Hogarth%20spacetime | A Malament–Hogarth (M-H) spacetime, named after David B. Malament and Mark Hogarth, is a relativistic spacetime that possesses the following property: there exists a worldline and an event p such that all events along are a finite interval in the past of p, but the proper time along is infinite. The event p is known as an M-H event.
The boundary between events with the M-H property and events without it is a Cauchy horizon. M-H spacetimes correspond to black holes which live forever and have an inner horizon. The inner horizon is the Cauchy surface.
Significance
The significance of M-H spacetimes is that they allow for the implementation of certain non-Turing computable tasks (hypercomputation). The idea is for an observer at some event in p's past to set a computer (Turing machine) to work on some task and then have the Turing machine travel along the worldline, computing for all eternity. Since the worldline lies in p's past, the Turing machine can signal (a solution) to p at any stage of this never-ending task. Meanwhile, the observer takes a quick trip (finite proper time) through spacetime to p, to pick up the solution. The set-up can be used to decide the halting problem, which is known to be undecidable by an ordinary Turing machine. All the observer needs to do is to prime the Turing machine to signal to p if and only if the Turing machine halts.
As matter and radiation fall into a black hole, they are focused and blueshifted (their wavelengths become shorter) due to the intense gravitational field. This effect is even more pronounced near the inner horizon due to the extreme curvature of spacetime in this region.
The energy of the infalling radiation increases as it approaches the inner horizon because of this blueshifting. The energy appears to become infinite from the perspective of an observer falling into the black hole.
General relativity predicts that energy and momentum affect the curvature of spacetime. This is known as the backreaction. The blueshifted energy of the infalling radiation should, in principle, have a significant impact on the spacetime geometry near the inner horizon.
The backreaction of the blueshifted radiation leads to a runaway effect where the effective mass parameter (or energy density) of the black hole as measured near the inner horizon grows without bound. This is what is referred to as mass inflation. It results in a singularity that is not a point but rather a null, weak, or "whimper" singularity along the inner horizon.
The mass inflation singularity suggests that the inner horizon is unstable. Any small perturbation, such as an infalling particle, can lead to drastic changes in the structure of the inner horizon. This instability is a challenge for the predictability of general relativity because it could potentially lead to a breakdown of the deterministic nature of the theory.
The mass inflation scenario is a product of classical general relativity and does not take into account quantum effects, which are expected to become significant in regions of such high curvature and energy density. Quantum gravity is anticipated to provide a more complete and consistent description of what happens near and inside black holes, potentially resolving the issue of inner horizon instability and mass inflation.
Examples
The Kerr metric, which describes empty spacetime around a rotating black hole, possesses these features: a computer can orbit the black hole indefinitely, while an observer falling into the black hole experiences an M-H event as they cross the inner event horizon. (This, however, neglects the effects of black hole evaporation and the infinite blueshift that is encountered at the inner horizon.)
Notes
Bibliography
General relativity
Hypercomputation | Malament–Hogarth spacetime | [
"Physics"
] | 771 | [
"General relativity",
"Theory of relativity"
] |
5,192,690 | https://en.wikipedia.org/wiki/Glycosyltransferase | Glycosyltransferases (GTFs, Gtfs) are enzymes (EC 2.4) that establish natural glycosidic linkages. They catalyze the transfer of saccharide moieties from an activated nucleotide sugar (also known as the "glycosyl donor") to a nucleophilic glycosyl acceptor molecule, the nucleophile of which can be oxygen- carbon-, nitrogen-, or sulfur-based.
The result of glycosyl transfer can be a carbohydrate, glycoside, oligosaccharide, or a polysaccharide. Some glycosyltransferases catalyse transfer to inorganic phosphate or water. Glycosyl transfer can also occur to protein residues, usually to tyrosine, serine, or threonine to give O-linked glycoproteins, or to asparagine to give N-linked glycoproteins. Mannosyl groups may be transferred to tryptophan to generate C-mannosyl tryptophan, which is relatively abundant in eukaryotes. Transferases may also use lipids as an acceptor, forming glycolipids, and even use lipid-linked sugar phosphate donors, such as dolichol phosphates in eukaryotic organisms, or undecaprenyl phosphate in bacteria.
Glycosyltransferases that use sugar nucleotide donors are Leloir enzymes, after Luis F. Leloir, the scientist who discovered the first sugar nucleotide and who received the 1970 Nobel Prize in Chemistry for his work on carbohydrate metabolism. Glycosyltransferases that use non-nucleotide donors such as dolichol or polyprenol pyrophosphate are non-Leloir glycosyltransferases.
Mammals use only 9 sugar nucleotide donors for glycosyltransferases: UDP-glucose, UDP-galactose, UDP-GlcNAc, UDP-GalNAc, UDP-xylose, UDP-glucuronic acid, GDP-mannose, GDP-fucose, and CMP-sialic acid. The phosphate(s) of these donor molecules are usually coordinated by divalent cations such as manganese, however metal independent enzymes exist.
Many glycosyltransferases are single-pass transmembrane proteins, and they are usually anchored to membranes of the Golgi apparatus.
Mechanism
Glycosyltransferases can be segregated into "retaining" or "inverting" enzymes according to whether the stereochemistry of the donor's anomeric bond is retained (α→α) or inverted (α→β) during the transfer. The inverting mechanism is straightforward, requiring a single nucleophilic attack from the accepting atom to invert stereochemistry.
The retaining mechanism has been a matter of debate, but there exists strong evidence against a double displacement mechanism (which would cause two inversions about the anomeric carbon for a net retention of stereochemistry) or a dissociative mechanism (a prevalent variant of which was known as SNi). An "orthogonal associative" mechanism has been proposed which, akin to the inverting enzymes, requires only a single nucleophilic attack from an acceptor from a non-linear angle (as observed in many crystal structures) to achieve anomer retention.
Reaction reversibility
The recent discovery of the reversibility of many reactions catalyzed by inverting glycosyltransferases served as a paradigm shift in the field and raises questions regarding the designation of sugar nucleotides as 'activated' donors.
Classification by sequence
Sequence-based classification methods have proven to be a powerful way of generating hypotheses for protein function based on sequence alignment to related proteins. The carbohydrate-active enzyme database presents a sequence-based classification of glycosyltransferases into over 90 families. The same three-dimensional fold is expected to occur within each of the families.
Structure
In contrast to the diversity of 3D structures observed for glycoside hydrolases, glycosyltransferases have a much smaller range of structures. In fact, according to the Structural Classification of Proteins database, only three different folds have been observed for glycosyltransferases. Very recently, a new glycosyltransferase fold was identified for the glycosyltransferases involved in the biosynthesis of the NAG-NAM polymer backbone of peptidoglycan.
Inhibitors
Many inhibitors of glycosyltransferases are known. Some of these are natural products, such as moenomycin, an inhibitor of peptidoglycan glycosyltransferases, the nikkomycins, inhibitors of chitin synthase, and the echinocandins, inhibitors of fungal β-1,3-glucan synthases. Some glycosyltransferase inhibitors are of use as drugs or antibiotics. Moenomycin is used in animal feed as a growth promoter. Caspofungin has been developed from the echinocandins and is in use as an antifungal agent. Ethambutol is an inhibitor of mycobacterial arabinotransferases and is used for the treatment of tuberculosis. Lufenuron is an inhibitor of insect chitin synthesis and is used to control fleas in animals. Imidazolium-based synthetic inhibitors of glycosyltransferases have been designed for use as antimicrobial and antiseptic agents.
Determinant of blood type
The ABO blood group system is determined by what type of glycosyltransferases are expressed in the body.
The ABO gene locus expressing the glycosyltransferases has three main allelic forms: A, B, and O. The A allele encodes 1-3-N-acetylgalactosaminyltransferase that bonds α-N-acetylgalactosamine to D-galactose end of H antigen, producing the A antigen. The B allele encodes 1-3-galactosyltransferase that joins α-D-galactose bonded to D-galactose end of H antigen, creating the B antigen. In case of O allele the exon 6 contains a deletion that results in a loss of enzymatic activity. The O allele differs slightly from the A allele by deletion of a single nucleotide - Guanine at position 261. The deletion causes a frameshift and results in translation of an almost entirely different protein that lacks enzymatic activity. This results in H antigen remaining unchanged in case of O groups.
The combination of glycosyltransferases by both alleles present in each person determines whether there is an AB, A, B or O blood type.
Uses
Glycosyltransferases have been widely used in both the targeted synthesis of specific glycoconjugates as well as the synthesis of differentially glycosylated libraries of drugs, biological probes or natural products in the context of drug discovery and drug development (a process known as glycorandomization). Suitable enzymes can be isolated from natural sources or produced recombinantly. As an alternative, whole cell-based systems using either endogenous glycosyl donors or cell-based systems containing cloned and expressed systems for synthesis of glycosyl donors have been developed. In cell-free approaches, the large-scale application of glycosyltransferases for glycoconjugate synthesis has required access to large quantities of the glycosyl donors. On the flip-side, nucleotide recycling systems that allow the resynthesis of glycosyl donors from the released nucleotide have been developed. The nucleotide recycling approach has a further benefit of reducing the amount of nucleotide formed as a by-product, thereby reducing the amount of inhibition caused to the glycosyltransferase of interest – a commonly observed feature of the nucleotide byproduct.
See also
Carbohydrate chemistry
Chemical glycosylation
Glucuronosyltransferase
Glycogen synthase
Glycosyl acceptor
Glycosyl donor
Glycosylation
Oligosaccharyltransferase
References
Carbohydrates
Carbohydrate chemistry
Transferases
EC 2.4
EC 2.4.1
EC 2.4.2
Peripheral membrane proteins
Glycobiology | Glycosyltransferase | [
"Chemistry",
"Biology"
] | 1,872 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Organic compounds",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Biochemistry",
"Glycobiology"
] |
5,192,839 | https://en.wikipedia.org/wiki/Magneto-optic%20Kerr%20effect | In physics the magneto-optic Kerr effect (MOKE) or the surface magneto-optic Kerr effect (SMOKE) is one of the magneto-optic effects. It describes the changes to light reflected from a magnetized surface. It is used in materials science research in devices such as the Kerr microscope, to investigate the magnetization structure of materials.
Definition
The magneto-optic Kerr effect manifests when light is reflected from a magnetized surface and may change both polarization and reflected intensity. The magneto-optic Kerr effect is similar to the Faraday effect, which describes changes to light transmission through a magnetic material. In contrast, the magneto-optic Kerr effect describes changes to light reflected from a magnetic surface. Both effects result from the off-diagonal components of the dielectric tensor. These off-diagonal components give the magneto-optic material an anisotropic permittivity, meaning that its permittivity is different in different directions. The permittivity affects the speed of light in a material:

$v = \frac{1}{\sqrt{\varepsilon \mu}},$

where $v$ is the velocity of light through the material, $\varepsilon$ is the material permittivity, and $\mu$ is the magnetic permeability; thus the speed of light varies depending on its orientation. This causes fluctuations in the phase of polarized incident light.
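A minimal numerical check of the phase-velocity relation above: using the vacuum permittivity and permeability recovers the speed of light in vacuum, and scaling the relative permittivity (as a stand-in for one diagonal component of an anisotropic permittivity tensor) shows how the velocity changes with the medium. The relative-permittivity value used is purely illustrative.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
MU0 = 1.25663706212e-6    # vacuum permeability, H/m

def phase_velocity(eps_rel=1.0, mu_rel=1.0):
    """Phase velocity v = 1 / sqrt(eps * mu) for a given direction in the medium."""
    return 1.0 / math.sqrt(eps_rel * EPS0 * mu_rel * MU0)

print(f"vacuum:       v = {phase_velocity():.3e} m/s")       # about 2.998e8 m/s
print(f"eps_rel=2.25: v = {phase_velocity(2.25):.3e} m/s")   # illustrative dielectric value
```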
This effect is often quantified in terms of its Kerr angle and its Kerr ellipticity.
The Kerr angle is the angle by which linearly polarized light is rotated after reflecting from the sample.
The Kerr ellipticity (not to be confused with ellipticity in mathematics) is the ratio of the semimajor and semiminor axes of the elliptically polarized light generated by reflection of the linearly polarized light.
Geometries
MOKE can be further categorized by the direction of the magnetization vector with respect to the reflecting surface and the plane of incidence.
Polar MOKE
When the magnetization vector is perpendicular to the reflection surface and parallel to the plane of incidence, the effect is called the polar Kerr effect. To simplify the analysis, and because the other two configurations have vanishing Kerr rotation at normal incidence, near normal incidence is usually employed when doing experiments in the polar geometry.
Longitudinal MOKE
In the longitudinal effect, the magnetization vector is parallel to both the reflection surface and the plane of incidence. The longitudinal setup involves light reflected at an angle from the reflection surface and not normal to it, as is used for polar MOKE. In the same manner, linearly polarized light incident on the surface becomes elliptically polarized, with the change in polarization directly proportional to the component of magnetization that is parallel to the reflection surface and parallel to the plane of incidence. This elliptically polarized light to first order has two perpendicular vectors, namely the standard Fresnel amplitude coefficient of reflection $r$ and the Kerr coefficient $k$. The Kerr coefficient is typically much smaller than the coefficient of reflection.
Transversal MOKE
When the magnetization is perpendicular to the plane of incidence and parallel to the surface it is said to be in the transverse configuration. In this case, the incident light is also not normal to the reflection surface, but instead of measuring the polarity of the light after reflection, the reflectivity is measured. This change in reflectivity is proportional to the component of magnetization that is perpendicular to the plane of incidence and parallel to the surface, as above. If the magnetization component points to the right of the incident plane, as viewed from the source, then the Kerr vector adds to the Fresnel amplitude vector and the intensity of the reflected light is $|r + k|^2$. On the other hand, if the component of magnetization points to the left of the incident plane as viewed from the source, the Kerr vector subtracts from the Fresnel amplitude and the reflected intensity is given by $|r - k|^2$.
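The intensity asymmetry described above can be sketched numerically by treating the Fresnel amplitude r and the much smaller Kerr coefficient k as complex numbers. The values below are illustrative assumptions, not data for any particular material, wavelength or angle of incidence.

```python
# Illustrative amplitudes only; real values of r and k depend on the material,
# the wavelength and the angle of incidence.
r = 0.30 + 0.02j      # Fresnel reflection amplitude
k = 0.003 - 0.001j    # transverse Kerr amplitude, |k| << |r|

I_plus = abs(r + k) ** 2    # magnetization pointing to the right of the incidence plane
I_minus = abs(r - k) ** 2   # magnetization reversed

asymmetry = (I_plus - I_minus) / (I_plus + I_minus)
print(f"I+ = {I_plus:.6f}, I- = {I_minus:.6f}, relative asymmetry = {asymmetry:.4%}")
```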
Quadratic MOKE
In addition to the polar, longitudinal and transverse Kerr effects, which depend linearly on the respective magnetization components, there are also higher-order quadratic effects, for which the Kerr angle depends on product terms involving the polar, longitudinal and transverse magnetization components. Those effects are referred to as the Voigt effect or quadratic Kerr effect. The quadratic magneto-optic Kerr effect (QMOKE) is found to be strong in Heusler alloys such as Co2FeSi and Co2MnGe.
Applications
Microscopy
A Kerr microscope relies on the MOKE in order to image differences in the magnetization on a surface of magnetic material. In a Kerr microscope, the illuminating light is first passed through a polarizer filter, then reflects from the sample and passes through an analyzer polarizing filter, before going through a regular optical microscope. Because the different MOKE geometries require different polarized light, the polarizer should have the option to change the polarization of the incident light (circular, linear, and elliptical). When the polarized light is reflected off the sample material, a change in any combination of the following may occur: Kerr rotation, Kerr ellipticity, or polarized amplitude. The changes in polarization are converted by the analyzer into changes in light intensity, which are visible. A computer system is often used to create an image of the magnetic field on the surface from these changes in polarization.
Magnetic media
Magneto-optical (MO) drives were introduced in 1985. MO discs are written using a laser and an electromagnet. The laser would heat the platter above its Curie temperature at which point the electromagnet would orient that bit as a 1 or 0. To read, the laser is operated at a lower intensity, and emits polarized light. Reflected light is analyzed showing a noticeable difference between a 0 or 1.
Discovery
The magneto-optic Kerr effect was discovered in 1877 by John Kerr.
See also
Faraday effect
Fresnel equations
John Kerr
Thin-film optics
Voigt Effect
Zeeman Effect
References
Further reading
External links
Kerr Calculation Applet – Java applet, computes the Kerr angle of multilayered thin films
yeh-moke – Free software computes the Magneto-optic Kerr effect of multilayered thin films
MOKE Microscope – Magneto-Optical Kerr Effect Microscope [PDF: 3.2MB]
MOKE tutorial – A step by step tutorial on the longitudinal, polar and transverse Magneto-Optical Kerr Effect.
Broadband magneto-optical Kerr spectroscopy
Magneto-optic effects | Magneto-optic Kerr effect | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,289 | [
"Optical phenomena",
"Physical phenomena",
"Electric and magnetic fields in matter",
"Magneto-optic effects"
] |
25,824,456 | https://en.wikipedia.org/wiki/Double%20suspension%20theorem | In geometric topology, the double suspension theorem of James W. Cannon () and Robert D. Edwards states that the double suspension S2X of a homology sphere X is a topological sphere.
If X is a piecewise-linear homology sphere but not a sphere, then its double suspension S2X (with a triangulation derived by applying the double suspension operation to a triangulation of X) is an example of a triangulation of a topological sphere that is not piecewise-linear. The reason is that, unlike in piecewise-linear manifolds, the link of one of the suspension points is not a sphere.
See also
References
Steve Ferry, Geometric Topology Notes (See Chapter 26, page 166)
Geometric topology
Theorems in topology | Double suspension theorem | [
"Mathematics"
] | 155 | [
"Geometric topology",
"Theorems in topology",
"Topology",
"Mathematical problems",
"Mathematical theorems"
] |
25,824,564 | https://en.wikipedia.org/wiki/Ottoman%20battleship%20Abd%C3%BCl%20Kadir | Abdül Kadir was a pre-dreadnought battleship laid down in 1892 at the Imperial Arsenal in Constantinople for the Ottoman Navy, the first vessel of this type to be ordered by the Ottoman Empire. The ship was the first capital ship to be laid down by the Ottomans in more than a decade. She was to have a main armament of four guns, with an armored belt that was thick. Work proceeded on the ship very slowly, primarily the result of a lack of funds; after two years, only the frames for the hull had been erected, and by the time work stopped in 1906, the hull had been only partially plated. During the long construction period, the supports for the keel shifted, which distorted the structure and prevented completion. The unfinished ship was ultimately broken up for scrap in 1909.
Design
Abdül Kadir was to have been the Ottoman Navy's first pre-dreadnought battleship. She followed a series of ironclad warships built in the 1860s and 1870s. In 1876, Sultan Murad V was deposed; the Ottoman Navy had played a role in the coup, which installed Abdul Hamid II on the throne. The new sultan was as a result suspicious of the navy, and attempted to reduce its power by withholding funding and ordering no new capital ships over the course of the following decade. By the late 1880s, however, the ships built by his predecessors were rapidly becoming obsolescent, especially compared to foreign designs like the British s.
More importantly, the Greek Navy—a major rival of the Ottoman fleet—had ordered three ironclad battleships in 1885. These ships, though smaller than the older Ottoman ironclads, were kept in a much better state of readiness than the Ottoman vessels, which were left idle in the Sea of Marmara, with little maintenance done. In 1890, the Ottoman government authorized a large construction program that included two battleships based on the French , along with several cruisers and smaller vessels. The two Hoche-class battleships were not built; instead, a smaller design, to be named Abdül Kadir, was ordered that year. Along with the elderly central battery ironclad , she would have been one of the largest ships in the Ottoman Navy.
General characteristics and armor
Abdül Kadir was long, and had a beam of and a draft of . As designed, she would have displaced . She would have been powered by a pair of vertical triple-expansion steam engines each driving a screw propeller, with steam provided by six coal-fired boilers that were ducted into a pair of funnels. Both the engines and the boilers would have been manufactured by the Imperial Arsenal. The engines were estimated to have been rated at , which should have provided a top speed of . The ship would have had a capacity of of coal.
Abdül Kadir was to have had an armored belt that was thick, and was to have been wide. The upper decks above the main belt would not have had any armor protection. The transverse bulkheads connecting the ends of the belt were to have been thick. Her main battery guns were mounted on barbettes that were thick.
Armament
The German firm Krupp had secured the contract to supply the ship's armament. Abdül Kadir was designed to carry a main battery of four guns in two twin turrets on the centerline, one forward and one aft. The secondary battery was to have comprised six guns in casemates. Close-range defense against torpedo boats was to have been provided by a battery of eight /30 quick-firing (QF) guns and eight QF guns, all in single mounts. Her armament suite was rounded out with six torpedo tubes in above water mounts. By 1904, her planned armament had been revised, with the 283 mm guns replaced with four guns in single turrets, and the number of 150 mm guns increased to ten. The 88 mm guns were replaced with guns, and the number of 37 mm guns was increased to ten. Two of the torpedo tubes were removed.
Construction
Abdül Kadir was laid down at the Imperial Arsenal in Constantinople in October 1892. Rather than use an actual slipway, the builders simply began laying the keel pieces on empty ground near the shipyard, using only a small number of wooden beams to support the structure. The slipway that had been used to build the ironclad was, for some reason, left unused. By 1895, the steel frames for her hull had been erected, but work proceeded very slowly and frequently stopped, primarily due to the chronically tight Ottoman budget. In 1897, for instance, work had been halted for some time, and the contemporary journal The Navy and Army Illustrated predicted that the ship would not be finished. Similar large-scale building projects during this period also fell apart due to lack of funds; a major construction program launched in the aftermath of the Ottoman Navy's poor performance in the Greco-Turkish War of 1897 stalled after funds could not be appropriated for the new ships. By 1906, when work on Abdül Kadir stopped for the last time, the hull had been only partially plated. By this time, the blocks that supported the hull during construction had shifted, which destroyed the keel. As a result, the unfinished ship was broken up on the slipway in 1909.
See also
List of battleships of the Ottoman Empire
List of naval steamships of the Ottoman Empire
Notes
References
1892 ships
Abandoned military projects
Battleships of the Ottoman Navy
Ships built in the Ottoman Empire | Ottoman battleship Abdül Kadir | [
"Engineering"
] | 1,100 | [
"Military projects",
"Abandoned military projects"
] |
25,825,429 | https://en.wikipedia.org/wiki/C11H14O2 | {{DISPLAYTITLE:C11H14O2}}
The molecular formula C11H14O2 (molar mass: 178.23 g/mol) may refer to:
Actinidiolide
4-tert-Butylbenzaldehyde
para-tert-Butylbenzoic acid
Methyl eugenol
Methyl isoeugenol
2-Phenethyl propionate
Wieland–Miescher ketone
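The molar mass quoted above can be checked by summing standard atomic weights, as in the short sketch below.

```python
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}   # g/mol, standard atomic weights

formula = {"C": 11, "H": 14, "O": 2}
molar_mass = sum(ATOMIC_WEIGHT[element] * count for element, count in formula.items())
print(f"C11H14O2: {molar_mass:.2f} g/mol")   # approximately 178.23 g/mol
```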
Molecular formulas | C11H14O2 | [
"Physics",
"Chemistry"
] | 95 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
25,828,273 | https://en.wikipedia.org/wiki/Explosive-driven%20ferroelectric%20generator | An explosive-driven ferroelectric generator (EDFEG, explosively pumped ferroelectric generator, EPFEG, or FEG) is a compact pulsed power generator, a device used for generation of short high-voltage high-current pulse. The energies available are fairly low, in the range of single joules, the voltages range in tens of kilovolts to over 100 kV, and the powers range in hundreds of kilowatts to megawatts. They are suitable for delivering high voltage pulses to high-impedance loads and can directly drive radiating circuits.
EDFEGs operate by releasing the electrical charge stored in the poled crystal structure of a suitable ferroelectric material, e.g. PZT, by means of an intense mechanical shock. They are a kind of phase-transition generator.
The structure of an EDFEG is generally a block of a suitable high explosive, accelerating a metal plate into a target made of ferroelectric material.
FEGs find multiple uses due to their compact character: charging banks of capacitors, initiating slapper detonator arrays in nuclear weapons and other devices, driving nuclear fusion reactions, powering pulsed neutron generators, and serving as seed power sources for stronger pulse generators (e.g. EPFCGs), electromagnetic pulse generators, electromagnetic weapons, vector inversion generators, etc.
A 2.4 megawatt HERF generator (an EDFEG with a pulse forming network directly driving a dipole antenna) with peak output frequency at 21.4 MHz was demonstrated.
See also
Explosively pumped flux compression generator
Explosive-driven ferromagnetic generator
References
Explosive pulsed power: an enabling technology
Pulsed power
Ferroelectric materials | Explosive-driven ferroelectric generator | [
"Physics",
"Materials_science"
] | 348 | [
"Physical phenomena",
"Physical quantities",
"Ferroelectric materials",
"Power (physics)",
"Materials",
"Electrical phenomena",
"Pulsed power",
"Hysteresis",
"Matter"
] |
25,828,373 | https://en.wikipedia.org/wiki/Alpha%20case | In metallurgy, alpha case is the oxygen-enriched surface phase that occurs when titanium and its alloys are exposed to heated air or oxygen. Alpha case is hard and brittle, and tends to create a series of microcracks that will reduce the metal's performance and its fatigue properties. Alpha case can be minimized or avoided by processing titanium at very deep vacuum levels. However once present on the surface, the currently applied method to remove the alpha case is by the subtractive methods of machining and/or chemical milling.
An emerging technique is to subject the metal to an electrochemical treatment in molten salts, such as calcium chloride or lithium chloride at elevated temperatures. This method removes the dissolved oxygen from the alpha case, hence restoring the oxygen-free metal. However, an unwanted consequence of the high temperature treatment is the growth of the grains in the metal. Grain growth may be limited by lowering the molten salt temperature. Alternatively, the metal may be rolling-pressed again to break the large grains into smaller ones.
References
Titanium
Metallurgy | Alpha case | [
"Chemistry",
"Materials_science",
"Engineering"
] | 214 | [
"Metallurgy",
"Materials science",
"nan"
] |
21,436,303 | https://en.wikipedia.org/wiki/ATryn | ATryn is the brand name of the anticoagulant antithrombin manufactured by the Massachusetts-based U.S. company rEVO Biologics (formerly known as GTC Biotherapeutics). It is made from the milk of goats that have been genetically modified to produce human antithrombin, a plasma protein with anticoagulant properties. Microinjection was used to insert human antithrombin genes into the cell nucleus of their embryos. ATryn is the first medicine produced using genetically engineered animals. GTC states that one genetically modified goat can produce the same amount of antithrombin in a year as 90,000 blood donations. GTC chose goats for the process because they reproduce more rapidly than cattle and produce more protein than rabbits or mice.
On February 6, 2009, ATryn was approved by the U.S. Food and Drug Administration (FDA) for treatment of patients with hereditary antithrombin deficiency who are undergoing surgical or childbirth procedures. Along with the approval from the FDA's pharmaceutical regulatory board, the Center for Veterinary Medicine of the FDA also approved the genetic makeup of the goats that are used to manufacture ATryn. rEVO has the sole rights to sell ATryn in the United States, and the drug is available in the U.S. market. Earlier in 2006, the European Medicines Agency (EMA) initially rejected and, after an appeal from GTC, approved the drug for use in the European Union countries.
According to Tom Newberry, the spokesperson for GTC, the company plans to acquire additional approval for treatment of those with non-hereditary antithrombin deficiency.
The Humane Society of the United States has said of the process used to manufacture ATryn, "It is a mechanistic use of animals that seems to perpetuate the notion of their being merely tools for human use rather than sentient creatures." However, the genetic changes have no known ill-effects on the host animal.
References
Further reading
External links
FDA Product Approval Information for ATryn
Anticoagulants
Genetic engineering
Goats | ATryn | [
"Chemistry",
"Engineering",
"Biology"
] | 434 | [
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
21,437,222 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20plot | A Poincaré plot, named after Henri Poincaré, is a graphical representation used to visualize the relationship between consecutive data points in time series to detect patterns and irregularities in the time series, revealing information about the stability of dynamical systems, providing insights into periodic orbits, chaotic motions, and bifurcations. It plays a role in controlling and predicting the system's long-term behavior, making it an indispensable tool for various scientific and engineering disciplines. It is also known as a return map. Poincaré plots can be used to distinguish chaos from randomness by embedding a data set in a higher-dimensional state space.
Given a time series of the form

$x_1, x_2, x_3, \ldots,$

a Poincaré map in its simplest form first plots dots in a scatter plot at the positions $(x_1, x_2)$, then plots $(x_2, x_3)$, then $(x_3, x_4)$, and so on.
Example Logistic map
For iterative (discrete-time) maps, the Poincaré map represents the function that maps the values of the system from one time step to the next. In the logistic map $x_{n+1} = r x_n (1 - x_n)$, the Poincaré plot would represent a shape corresponding to the function $f(x) = r x (1 - x)$.
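A short sketch of this construction, using the logistic map as the example time series; the parameter value and initial condition are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def logistic_series(r=3.9, x0=0.2, n=2000):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1.0 - x[i])
    return x

x = logistic_series()
plt.scatter(x[:-1], x[1:], s=2)     # plot the pairs (x_n, x_{n+1})
plt.xlabel("x_n")
plt.ylabel("x_{n+1}")
plt.title("Poincaré (return) plot of the logistic map")
plt.show()                           # the points trace the parabola r*x*(1 - x)
```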
Applications in electrocardiography
An electrocardiogram (ECG) is a tracing of the voltage changes in the chest generated by the heart, whose contraction in a normal person is triggered by an electrical impulse that originates in the sinoatrial node. The ECG normally consists of a series of waves, labeled the P, Q, R, S and T waves. The P wave represents depolarization of the atria, the Q-R-S series of waves depolarization of the ventricles and the T wave repolarization of the ventricles. The interval between two successive R waves (the RR interval) is a measure of the heart rate.
The heart rate normally varies slightly: during a deep breath, it speeds up and during a deep exhalation, it slows down. (The RR interval will shorten when the heart speeds up, and lengthen when it slows.) An RR tachograph is a graph of the numerical value of the RR-interval versus time.
In the context of RR tachography, a Poincaré plot is a graph of RR(n) on the x-axis versus RR(n + 1) (the succeeding RR interval) on the y-axis, i.e. one takes a sequence of intervals and plots each interval against the following interval.
The recurrence plot is used as a standard visualizing technique to detect the presence of oscillations in non-linear dynamic systems. In the context of electrocardiography, the rate of the healthy heart is normally tightly controlled by the body's regulatory mechanisms (specifically, by the autonomic nervous system). Several research papers demonstrate the potential of ECG signal-based Poincaré plots in detecting heart-related diseases or abnormalities.
See also
Recurrence plot
Poincaré map
Heart rate variability (HRV), a use of Poincaré plots to assess heart functionality.
PhysioNet tool for constructing multi-scale Poincaré plots from a heartbeat time series.
References
Scaling symmetries
Dynamical systems
Chaos theory
Statistical charts and diagrams
Plot | Poincaré plot | [
"Physics",
"Mathematics"
] | 658 | [
"Scaling symmetries",
"Mechanics",
"Symmetry",
"Dynamical systems"
] |
21,442,530 | https://en.wikipedia.org/wiki/Barlow%27s%20formula | Barlow's formula (called "Kesselformel" in German) relates the internal pressure that a pipe can withstand to its dimensions and the strength of its material.
This approximate formula is named after Peter Barlow, an English mathematician.
$P = \frac{2\, S\, t}{D},$
where
$P$: internal pressure,
$S$: allowable stress,
$t$: wall thickness,
$D$: outside diameter.
This formula (DIN 2413) figures prominently in the design of autoclaves and other pressure vessels.
Other formulations
The design of a complex pressure containment system involves much more than the application of Barlow's formula. For example, in 100 countries the ASME BPVC code stipulates the requirements for design and testing of pressure vessels.
The formula is also common in the pipeline industry to verify that pipe used for gathering, transmission, and distribution lines can safely withstand operating pressures. The design factor is multiplied by the resulting pressure which gives the maximum operating pressure (MAOP) for the pipeline. In the United States, this design factor is dependent on Class locations which are defined in DOT Part 192. There are four class locations corresponding to four design factors:
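A minimal sketch of this use of the formula is given below. The pipe dimensions, allowable stress and design factor are placeholder values, not figures from any code or standard; the 0.72 design factor is quoted only as a commonly cited Class 1 value and must be verified against the applicable regulations.

```python
def barlow_pressure(allowable_stress, wall_thickness, outside_diameter):
    """Barlow's formula: internal pressure P = 2*S*t / D (use consistent units)."""
    return 2.0 * allowable_stress * wall_thickness / outside_diameter

# Illustrative values only: stress in psi, dimensions in inches.
S = 35_000      # allowable stress of the pipe steel, psi
t = 0.25        # wall thickness, in
D = 12.75       # outside diameter, in

P = barlow_pressure(S, t, D)
design_factor = 0.72     # assumed Class 1-type design factor; verify against DOT Part 192
maop = P * design_factor
print(f"Barlow pressure: {P:.0f} psi; MAOP with design factor {design_factor}: {maop:.0f} psi")
```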
External links
Barlow's Formula Calculator
Barlow's Equation and Calculator
Barlow's Formula Solver
Barlow's Formula Calculator for Copper Tubes
References
Mathematical analysis
Piping
Pressure vessels | Barlow's formula | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 260 | [
"Structural engineering",
"Mathematical analysis",
"Mathematical analysis stubs",
"Building engineering",
"Chemical engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Mechanical engineering",
"Piping",
"Pressure vessels"
] |
21,442,762 | https://en.wikipedia.org/wiki/Berkeley%20Geochronology%20Center | The Berkeley Geochronology Center (BGC) is a non-profit geochronology research institute in Berkeley, California. It was originally a research group in the laboratory of geochronologist Garniss Curtis at the University of California, Berkeley. The center is now an independent scientific research institute with close Berkeley affiliations and directed by geologist and geochronologist Paul Renne, a professor in residence in the department of earth and planetary science at Berkeley.
History
In 1985, Curtis, set to retire in 1989, moved the group from his lab at the university to the basement of the independent Institute for Human Origins (IHO), at the suggestion of American anthropologist F. Clark Howell. The geochronologists worked separately from the IHO, although IHO contained their bureaucratic infrastructure, until 1989 when they became officially known as the Institute for Human Origins Geochronology Center. In 1994 the group officially split from the IHO based on different viewpoints of their respective missions.
Both Curtis and IHO founder, Donald Johanson, were known to have egos that might "clash", but Howell thought that bringing the two research groups together could benefit both. The IHO's mission included publicizing the anthropology of ancient human ancestors to the general public, and the geochronology scientists felt the anthropologists emphasized this at the expense of more basic science, while the paleoanthropologist felt the geochronologists were devoting too much research time and funding to general geology questions not related to the institute's primary mission. The anthropologists had more public recognition in the press, while the geochronologists were obtaining more scientific grant moneys and publishing more scientific papers. The split was acrimonious and garnered negative publicity for some of those involved from their peers in professional organizations, particularly as Gordon Getty, the single largest donor and a board member of IHO, withdrew funding to the parent institute (IHO) while providing start-up funding to the geochronology group.
Functions
The Institute specializes in fundamental questions of the age of the Earth, using state-of-the-art instrumentation to find the age of rocks that will answer questions about geology and geobiology in Earth's history. The institute is capable of performing gas extraction, and thermal ionization mass spectrometry analysis on rocks up to billions of years old using the techniques of argon–argon dating and uranium–lead dating. BGC also performs paleomagnetic analysis to establish correlating or independent ages from the fossilized magnetic fields. The staff includes research scientists specializing in various geological periods and areas, in addition to postdoctoral scholars and graduate students. Scientists at BGC have also been active in dating extraterrestrial materials such as meteorites.
References
External links
Berkeley Geochronology Center website
Geochronological institutions and organizations
Mass spectrometry
Education in Berkeley, California | Berkeley Geochronology Center | [
"Physics",
"Chemistry"
] | 594 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
37,072,870 | https://en.wikipedia.org/wiki/Whitham%20equation | In mathematical physics, the Whitham equation is a non-local model for non-linear dispersive waves.
The equation is notated as follows:

$\frac{\partial \eta}{\partial t} + \frac{3}{2}\, \frac{c_0}{h}\, \eta\, \frac{\partial \eta}{\partial x} + \int_{-\infty}^{+\infty} K(x - \xi)\, \frac{\partial \eta(\xi, t)}{\partial \xi}\, \mathrm{d}\xi = 0,$

where $c_0 = \sqrt{gh}$ is the long-wave phase speed for mean water depth $h$. This integro-differential equation for the oscillatory variable η(x,t) is named after Gerald Whitham, who introduced it as a model to study breaking of non-linear dispersive water waves in 1967. Wave breaking – bounded solutions with unbounded derivatives – for the Whitham equation has recently been proven.
For a certain choice of the kernel K(x − ξ) it becomes the Fornberg–Whitham equation.
Water waves
Using the Fourier transform (and its inverse), with respect to the space coordinate x and in terms of the wavenumber k:

$c(k) = \int_{-\infty}^{+\infty} K(s)\, e^{-iks}\, \mathrm{d}s \qquad \text{and} \qquad K(s) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} c(k)\, e^{iks}\, \mathrm{d}k.$

For surface gravity waves, the phase speed c(k) as a function of wavenumber k is taken as:

$c_\text{ww}(k) = \sqrt{\frac{g}{k}\, \tanh(kh)},$

while $c_\text{ww}(k) \to c_0 = \sqrt{gh}$ in the long-wave limit $kh \to 0$, with g the gravitational acceleration and h the mean water depth. The associated kernel Kww(s) is, using the inverse Fourier transform:

$K_\text{ww}(s) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} c_\text{ww}(k)\, e^{iks}\, \mathrm{d}k = \frac{1}{\pi} \int_{0}^{\infty} c_\text{ww}(k)\, \cos(ks)\, \mathrm{d}k,$
since cww is an even function of the wavenumber k.
The Korteweg–de Vries equation (KdV equation) emerges when retaining the first two terms of a series expansion of cww(k) for long waves with $kh \ll 1$:

$c_\text{kdv}(k) = \sqrt{gh}\left(1 - \tfrac{1}{6} k^2 h^2\right) \qquad \text{and} \qquad K_\text{kdv}(s) = \sqrt{gh}\left(\delta(s) + \tfrac{1}{6} h^2\, \delta''(s)\right),$

with δ(s) the Dirac delta function.
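The long-wave limit quoted above can be checked numerically: for kh ≪ 1 the full gravity-wave phase speed approaches the two-term expansion underlying the KdV approximation. The depth and wavenumber values below are arbitrary illustrative choices.

```python
import numpy as np

g, h = 9.81, 1.0    # gravitational acceleration (m/s^2) and mean water depth (m)

def c_ww(k):
    """Full phase speed of linear surface gravity waves."""
    return np.sqrt(g / k * np.tanh(k * h))

def c_kdv(k):
    """Two-term long-wave expansion used in the KdV approximation."""
    return np.sqrt(g * h) * (1.0 - (k * h) ** 2 / 6.0)

for kh in (0.05, 0.1, 0.3, 1.0):
    k = kh / h
    print(f"kh = {kh:4.2f}   c_ww = {c_ww(k):.4f} m/s   c_kdv = {c_kdv(k):.4f} m/s")
```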
Bengt Fornberg and Gerald Whitham studied the kernel Kfw(s) – non-dimensionalised using g and h:
and with
The resulting integro-differential equation can be reduced to the partial differential equation known as the Fornberg–Whitham equation:
This equation is shown to allow for peakon solutions – as a model for waves of limiting height – as well as the occurrence of wave breaking (shock waves, absent in e.g. solutions of the Korteweg–de Vries equation).
Notes and references
Notes
References
Water waves
Partial differential equations
Equations of fluid dynamics | Whitham equation | [
"Physics",
"Chemistry"
] | 396 | [
"Equations of fluid dynamics",
"Physical phenomena",
"Equations of physics",
"Water waves",
"Waves",
"Fluid dynamics"
] |
37,074,140 | https://en.wikipedia.org/wiki/Mechanics%20of%20Oscar%20Pistorius%27s%20running%20blades | The mechanics of the running blades used by South African former Paralympic runner Oscar Pistorius depend on special carbon-fiber-reinforced polymer prosthetics. Pistorius has double below-the-knee amputations and competed in both non-disabled and T44 amputee athletics events. Pistorius's eligibility to run in international non-disabled events is sanctioned by the International Association of Athletics Federations (IAAF).
Pistorius began running in 2004 after a rugby knee injury which led to rehabilitation at the University of Pretoria's High Performance Centre with coach Ampie Louw. His first racing blades were fitted by South African prosthetist Francois Vanderwatt. Because he was unable to find suitable running blades in Pretoria, Vanderwatt ordered some to be made by a local engineer at Hanger Orthopedic Group. These quickly broke, and Vanderwatt referred Pistorius to American prosthetist and Paralympic sprinter Brian Frasure to be fitted for carbon-fibre blades by Icelandic company Össur.
Pistorius's participation in non-disabled international sprinting competitions in 2007 raised questions about his use of running blades, and the IAAF amended their rules to ban the use of "any technical device that incorporates springs, wheels or any other element that provides a user with an advantage over another athlete not using such a device." After initial studies, Pistorius was ruled ineligible for competitions under these IAAF rules. After further research was presented, the Court of Arbitration for Sport (CAS) ruled that his running prostheses were not shown to provide a net competitive advantage over biological legs. In 2012, Pistorius qualified for and competed in both the 2012 Olympic Games and the 2012 Paralympic Games using his running blades, becoming the first amputee sprinter to run in the Olympic Games.
Pistorius's athletics prostheses
The blades are transtibial prostheses, meaning they replace legs and feet that are amputated below the knee (BK). They were developed by medical engineer Van Phillips who incorporated Flex-Foot, Inc. in 1984. In 2000, Van Phillips sold the company to Össur which, as of 2012, still manufactures the blades. They are designed to store kinetic energy like a spring, allowing the wearer to jump and run effectively.
The "carbon fibre" of the blades is actually a carbon-fiber-reinforced polymer, a strong, lightweight material used in a number of applications, including sporting goods like baseball bats, car parts, helmets, sailboats, bicycles and other equipment where rigidity and a high strength-to-weight ratio are important. The polymer used for this equipment is normally epoxy, but other polymers are also used, depending on the application, and other reinforcing fibres may also be included. In the blade manufacturing process, sheets of impregnated material are cut into square sheets and pressed onto a form to produce the final shape. From 30 to 90 sheets may be layered, depending on the expected weight of the athlete, and the mold is then autoclaved to fuse the sheets into a solid plate. This method reduces air bubbles that can cause breaks. Once the result is cooled, it is cut into the shape of the blades. The finished blade is bolted to a carbon fibre socket that is an intimate fit to each of Pistorius' legs. These are custom made and make up the bulk of the total cost, along with the assessment and setting up of the finished prostheses. Each limb costs between $15,000 and $18,000 USD.
Pistorius has been using the same Össur blades since 2004. He was born without fibulae and with malformed feet, and his legs were amputated about halfway between knee and ankle so he could wear prosthetic legs. He wears socks and pads which are visible above the sockets to reduce chafing and to prevent blisters, and the sockets have straps in the front that can be tightened to make the prosthesis fit more snugly.
Pistorius uses custom-made spike pads on the blades. Before development of the pads, his spikes were changed by roughing up the surface and applying over-the-counter spikes by hand, but the results using this method were inconsistent. Research was conducted in Össur's Iceland lab using a pressure-sensitive treadmill and film at 500 fps to measure the blade strike, and produced a spike pad which includes a midsole of two machine-molded pieces of foam of different densities to cushion impact, with a carbon fibre plate on the bottom. The developers attached the pad with contact cement, which can be quickly removed with the application of heat when the spike pad needs to be changed.
Because of the curved design, the blades have to be slightly longer than a runner's biological leg and foot would be. The blades replace the hinge of an ankle with elastic compression that bends and releases the blade with every stride, so the uncompressed blade leaves the user standing on tiptoe. They are designed to move forward, so have no heel support in the back. According to Josh McHugh of Wired Magazine, "The Cheetahs seem to bounce of their own accord. It’s impossible to stand still on them, and difficult to move slowly. Once they get going, Cheetahs are extremely hard to control."
How the blades work
In 2007 Pistorius applied to run in non-disabled track meets. He was at first accepted, but questions quickly arose about whether the blades give him an unfair advantage. After initial research showed the blades did provide an advantage, the International Association of Athletics Federations (IAAF) changed their rules to ban the use of technical devices that provide an advantage and ruled him ineligible to compete. Pistorius challenged the ruling with additional research and was reinstated by the Court of Arbitration for Sport (CAS) in 2008, meaning that he can continue to run in non-disabled meets as long as he uses the equipment that was studied in the research.
Pistorius's performance in the early non-disabled races raised questions because of two major concerns: his pattern of running the races and his leg-swing times. Most sprinters spring out of the blocks with their fastest time and slow down as the race progresses, but Pistorius ran a "negative split", starting slowly and building up speed in the last half of the race (though he no longer uses this pattern). Compared with other runners, his performance was also relatively stronger in the 400m race than in the 200m. Controversy about the use of the blades persists, but the research provided considerable information on how they work in application, and other research is expected to follow.
Non-disabled sprinters have calves and ankles that return and amplify the energy supplied by their hips and knees, while Pistorius compensates with additional work because he does not have calves and ankles with their associated tendons and muscles. An analysis published by Engineering & Technology magazine estimates that in using the blades, Pistorius must generate twice the power from his gluteal and quadriceps muscles that a normal sprinter would. Other sources also credit core abdominal muscles and a faster arm swing. His trainer estimates that about 85% of his power comes from his hips and the rest from his knees. This results in a gait that waddles slightly, as Pistorius swings his upper body to balance the springing action of the blades. The blades compress under his weight, then release as he moves forward, providing forward thrust from the tips as they return to their molded shape. As they spring off, he swings them slightly out to the side and throws them forward for the next stride.
Pistorius is always slow in starting a race because the flexible blades do not provide thrust out of the blocks. Pistorius must begin from an awkward position, swing his leg to the outside and pop straight up from the blocks to begin running, when the preferred method is to push off with horizontal force. For the first 30 meters of a race, he keeps his head down and takes short, quick strides. As he establishes a rhythm, he can raise his head and increase his speed. While some runners jog up and down, losing energy, Pistorius directs energy forward, looking somewhat like he is rolling on wheels. He also compensates for the adjustments ankles make on the turns, breaking the curves into short, straight lines. According to his coach Ampie Louw, Pistorius may be able to use the inward lean to generate force and come out of a turn going faster.
Research
Brüggemann study
To resolve questions about the blades, Pistorius was asked to take part in a series of scientific tests in November 2007 at the German Sport University Cologne with Professor of Biomechanics Peter Brüggemann and IAAF technical expert Elio Locatelli. After two days of tests, Brüggemann reported that Pistorius used about 25% less energy expenditure than non-disabled athletes once he achieved a given speed. The study also found that he showed major differences in sprint mechanics, with significantly different maximum vertical ground return forces, and that the positive work or returned energy was close to three times higher than that of a human ankle. The energy loss in the blade during stance phase when the foot was on the ground was measured as 9.3%, while that of normal ankle joint was measured at 42.4%, showing a difference of more than 30%. Brüggemann's analysis stated that the blades allowed lower energy consumption at the same speed, and that the energy loss in the blade is significantly less than in a human ankle at maximum speed. In December of that year, Brüggemann stated to Die Welt newspaper that Pistorius "has considerable advantages over athletes without prosthetic limbs who were tested by us. It was more than just a few percentage points. I did not expect it to be so clear." The study was published in 2008 in Sports Technology, but later researchers stated that the analysis "did not take enough variables into consideration". Commentators have also argued that the IAAF study did not accurately determine whether Cheetahs confer a net advantage because measuring the net advantage or disadvantage conferred on an athlete using Cheetahs is not possible given current scientific knowledge. Second, the IAAF study may not have measured Pistorius's performance against appropriate controls. IAAF used five non-disabled athletes, who run 400-meter races in similar times to Pistorius, as controls. However, because Pistorius was relatively new to the sport of running, he may not have trained enough to maximize his physical potential and reach his peak performance when the IAAF study was conducted. In March 2007, approximately 9 months before the IAAF study was conducted, Pistorius's coach commented that Pistorius had not trained enough to achieve an upper body commensurate with the upper bodies of most elite sprinters. To obtain the most accurate understanding of how the prostheses affect Pistorius's performance, he should be compared to athletes with similar physical potential. Consequently, the IAAF study may have been flawed because it compared Pistorius, who might have the physical potential to run faster than his current times, against athletes at their peak.
Weyand, et al. study
In 2008 a team of seven researchers conducted tests at Rice University, including Peter Weyand, Hugh Herr, Rodger Kram, Matthew Bundle and Alena Grabowski. The team collected metabolic and mechanical data by indirect calorimetry and ground reaction force measurements on Pistorius's performance during constant-speed, level treadmill running, and found that the energy usage was 3.8% lower than average values for elite non-disabled distance runners, 6.7% lower than for average distance runners and 17% lower than for non-disabled 400m sprint runners. At sprinting speeds of 8.0, 9.0 and 10.0 m/s, Pistorius produced longer foot to ground contact times, shorter leg swing times, and lower average vertical forces than able bodied sprinters. The team concluded that running on the blades appears to be physiologically similar but mechanically different from running with biological legs. The study was published several months later in the Journal of Applied Physiology. Kram also stated that Pistorius's "rate of energy consumption was lower than an average person but comparable to other high-caliber athletes".
The lightness and rigidity of the blade compared to muscle and bone may allow blade runners to swing their legs faster than non-disabled runners. In comments on the article, Peter Weyand and biomechanist Matthew Bundle noted that the study found that Pistorius re-positioned his legs 15.7% faster than most world record sprinters, allowing for a 15–30% increase in sprint speed.
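A simplified single-leg model shows why faster repositioning matters: one full stride cycle for a given leg is its ground-contact time plus its swing time, so at a fixed stride length a shorter swing raises speed. The sketch below applies this model with hypothetical timing values, chosen only to show the shape of the effect rather than to reproduce the measured data.

```python
# Simplified single-leg model: one full stride cycle = contact time + swing time,
# so speed = stride_length / (t_contact + t_swing). All values are hypothetical.
stride_length = 4.6   # metres per full stride
t_contact = 0.10      # s, foot on the ground
t_swing = 0.37        # s, leg repositioning, roughly typical of elite sprinters

base_speed = stride_length / (t_contact + t_swing)

t_swing_fast = t_swing * (1 - 0.157)   # 15.7% faster repositioning, as noted above
fast_speed = stride_length / (t_contact + t_swing_fast)

print(f"Baseline speed: {base_speed:.2f} m/s")
print(f"With faster swing: {fast_speed:.2f} m/s ({fast_speed/base_speed - 1:.1%} faster)")
# The resulting gain of roughly 14% is broadly consistent with the 15-30% range cited above.
```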
Grabowski et al. study
In 2008 a research team including Alena Grabowski, Rodger Kram and Hugh Herr conducted a follow-up study of single amputees using running blades, which was published in Biology Letters. For each of six amputees, the performance of the affected leg was compared against that of the biological leg. The team measured leg swing times and the force applied to the running surface on a high-speed treadmill at the Biomechanics Laboratory of the Orthopedic Specialty Hospital, and also studied video of sprint runners from the Olympics and Paralympics. They found no difference in leg swing times at different speeds, and recorded leg swing times similar to those of non-disabled sprinters. They also found that single running blades reduced the foot-to-ground force production of the tested runners by an average of 9%. Because force production is generally considered the most significant factor in running speed, the researchers concluded that this reduction in force limited the sprinters' top speed. Grabowski also found that amputees typically increased their leg swing times to compensate for the lack of force.
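A common simplification, associated with Weyand and colleagues, helps explain why force production caps top speed: averaged over one step, the vertical impulse generated during stance must support body weight. The sketch below applies this relation with hypothetical values (not taken from the Grabowski et al. data) to show how a 9% drop in stance-averaged force obliges a runner to keep the foot on the ground longer.

```python
# Simplified vertical impulse balance over one step (footstrike to contralateral footstrike):
#     F_stance * t_contact = m * g * t_step
# If the stance-averaged vertical force F_stance falls, the runner must lengthen ground
# contact (or shorten the step), which limits top speed. All values are hypothetical.
g = 9.81
mass = 75.0        # kg
t_step = 0.23      # s, one step of the stride cycle
t_contact = 0.10   # s, foot on the ground

required_force = mass * g * t_step / t_contact        # stance-averaged vertical force
reduced_force = required_force * (1 - 0.09)           # 9% lower force, as reported above
t_contact_needed = mass * g * t_step / reduced_force  # contact time needed at the lower force

print(f"Required stance force: {required_force:.0f} N "
      f"({required_force / (mass * g):.2f} x body weight)")
print(f"With 9% less force, contact must lengthen to {t_contact_needed * 1000:.0f} ms")
```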
Other discussion
Discussion continues about the relative advantage or disadvantage of using the blades. Researchers and analysts point out that the studies are conducted on level treadmills at constant speed, and do not measure performance from starting blocks or on actual curved tracks. The studies also do not take into account physiological differences between amputees and non-amputees, such as musculature, blade height and weight, and differences in blood circulation patterns arising from the history of the athlete's limb loss.
2012 Paralympics
A controversy over the effects of running blade length arose at the 2012 Paralympic Games, after Brazilian runner Alan Oliveira and American runner Blake Leeper changed to longer running blades in the months before the Games, leading to marked improvements in their running times. Pistorius complained after the 200m race that the longer blades provided artificially lengthened running strides, which he argued would infringe IPC rules even though the blades were within the allowable height limits for the athletes concerned. His complaint was supported by single-amputee runners including Jerome Singleton and Jack Swift, who called for the T43 double-blade and T44 single-blade classes to be separated in future events, since single-blade runners cannot adjust the height of their prosthesis and must always match the length of their biological leg with the running blade.
The improvement in running times and the wide broadcast of the race results provided a public demonstration of how blade length affects performance. Pistorius's stride length was actually about 9% longer than Oliveira's (2.2 m vs 2.0 m), but Oliveira took more strides (99 vs 92), and the combination of stride length and stride rate produced a markedly unusual performance with the longer blades. Pistorius's management issued a statement saying that Pistorius is always 1.84 meters tall, regardless of what prostheses he wears, and that the decision to maintain this height for his running blades was a matter of fairness.
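The quoted stride figures can be checked with simple arithmetic: each runner's stride length times stride count lands near the 200 m race distance, so over a fixed distance the decisive quantity is stride rate (strides per second). The sketch below uses only the figures quoted above.

```python
# Arithmetic check of the stride figures quoted above for the 200 m final.
pistorius_stride, pistorius_count = 2.2, 92   # metres per stride, number of strides
oliveira_stride, oliveira_count = 2.0, 99

print(f"Pistorius covered roughly {pistorius_stride * pistorius_count:.0f} m")   # ~202 m
print(f"Oliveira covered roughly {oliveira_stride * oliveira_count:.0f} m")      # ~198 m
# Both products approximate the 200 m race distance, so average speed reduces to
# stride length * stride rate: a sufficiently higher stride rate can outweigh a
# shorter stride, which is why the change in blade length drew attention.
```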
See also
References
External links
Video of the Rice University motion study.
Alena Grabowski lecture on blade mechanics
Video includes the manufacturing process
Prosthetics
Sport of athletics equipment
Biomechanics
Motor control
Sports science
Olympic Games controversies
Oscar Pistorius | Mechanics of Oscar Pistorius's running blades | [
"Physics",
"Biology"
] | 3,290 | [
"Biomechanics",
"Behavior",
"Mechanics",
"Motor control"
] |