Dataset column schema (observed minimum and maximum per column):
  id             int64            39 to 79M
  url            string (length)  32 to 168
  text           string (length)  7 to 145k
  source         string (length)  2 to 105
  categories     list (length)    1 to 6
  token_count    int64            3 to 32.2k
  subcategories  list (length)    0 to 27
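A minimal sketch of what one decoded record looks like in Python. The field values are copied from the first record below (subcategories shown as a subset); the filtering helper is illustrative only and not part of the dataset.

```python
# One record of the dump as a plain Python dict, matching the schema above.
record = {
    "id": 37_079_339,                                              # int64
    "url": "https://en.wikipedia.org/wiki/Rydberg%E2%80%93Klein%E2%80%93Rees%20method",
    "text": "The Rydberg–Klein–Rees method is a procedure ...",    # 7-145k chars
    "source": "Rydberg–Klein–Rees method",                         # article title
    "categories": ["Physics", "Chemistry"],                        # 1-6 items
    "token_count": 46,                                             # 3-32.2k
    "subcategories": ["Quantum mechanics", "Atomic physics"],      # subset shown
}

def keep(rec, wanted={"Physics", "Mathematics"}):
    """Example filter: keep records tagged with any of the wanted categories."""
    return bool(wanted.intersection(rec["categories"]))

print(keep(record))  # True
```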
37,079,339
https://en.wikipedia.org/wiki/Rydberg%E2%80%93Klein%E2%80%93Rees%20method
The Rydberg–Klein–Rees method is a procedure used in the analysis of rotational-vibrational spectra of diatomic molecules to obtain a potential energy curve from the experimentally known line positions. References Atomic physics
Rydberg–Klein–Rees method
[ "Physics", "Chemistry" ]
46
[ " and optical physics stubs", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", "Physical chemistry stubs", " and optical physics" ]
34,529,648
https://en.wikipedia.org/wiki/Bloch%27s%20principle
Bloch's principle is a philosophical principle in mathematics stated by André Bloch. Bloch states the principle in Latin as: Nihil est in infinito quod non prius fuerit in finito, and explains this as follows: Every proposition in whose statement the actual infinity occurs can always be considered a consequence, almost immediate, of a proposition where it does not occur, a proposition in finite terms. Bloch mainly applied this principle to the theory of functions of a complex variable. Thus, for example, according to this principle, Picard's theorem corresponds to Schottky's theorem, and Valiron's theorem corresponds to Bloch's theorem. Based on his principle, Bloch was able to predict or conjecture several important results, such as Ahlfors's five islands theorem, Cartan's theorem on holomorphic curves omitting hyperplanes, and Hayman's result that an exceptional set of radii is unavoidable in Nevanlinna theory. In more recent times, several general theorems have been proved which can be regarded as rigorous statements in the spirit of the Bloch principle: Zalcman's lemma A family of functions meromorphic on the unit disc is not normal if and only if there exist: a number points functions numbers such that spherically uniformly on compact subsets of where is a nonconstant meromorphic function on Zalcman's lemma may be generalized to several complex variables. First, define the following: A family of holomorphic functions on a domain is normal in if every sequence of functions contains either a subsequence which converges to a limit function uniformly on each compact subset of or a subsequence which converges uniformly to on each compact subset. For every function of class define at each point a Hermitian form and call it the Levi form of the function at If function is holomorphic on set This quantity is well defined since the Levi form is nonnegative for all In particular, for the above formula takes the form and coincides with the spherical metric on The following characterization of normality can be made based on Marty's theorem, which states that a family is normal if and only if the spherical derivatives are locally bounded: Suppose that the family of functions holomorphic on is not normal at some point Then there exist sequences such that the sequence converges locally uniformly in to a non-constant entire function satisfying Brody's lemma Let X be a compact complex analytic manifold, such that every holomorphic map from the complex plane to X is constant. Then there exists a metric on X such that every holomorphic map from the unit disc with the Poincaré metric to X does not increase distances. References Mathematical principles Philosophy of mathematics
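The inline formulas of Zalcman's lemma were lost in extraction above. For reference, the standard formulation of the lemma (supplied here; not recovered verbatim from the article) reads:

```latex
% Standard statement of Zalcman's lemma (supplied for reference; the article's
% own formulas were lost in extraction).
A family $\mathcal{F}$ of functions meromorphic on the unit disc $\Delta$ is
not normal if and only if there exist
\[
  r \in (0,1), \qquad
  z_n \in \Delta,\ |z_n| < r, \qquad
  f_n \in \mathcal{F}, \qquad
  \rho_n \to 0^{+},
\]
such that
\[
  f_n(z_n + \rho_n \zeta) \;\longrightarrow\; g(\zeta)
\]
spherically uniformly on compact subsets of $\mathbb{C}$, where $g$ is a
nonconstant meromorphic function on $\mathbb{C}$.
```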
Bloch's principle
[ "Mathematics" ]
575
[ "Mathematical principles", "nan" ]
34,530,756
https://en.wikipedia.org/wiki/Entropy%20influence%20conjecture
In mathematics, the entropy influence conjecture is a statement about Boolean functions originally conjectured by Ehud Friedgut and Gil Kalai in 1996. Statement For a function note its Fourier expansion The entropy–influence conjecture states that there exists an absolute constant C such that where the total influence is defined by and the entropy (of the spectrum) is defined by (where x log x is taken to be 0 when x = 0). See also Analysis of Boolean functions References Unsolved Problems in Number Theory, Logic and Cryptography The Open Problems Project, discrete and computational geometry problems Entropy Conjectures
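The Fourier-side formulas were stripped from the text above. The standard form of the statement (supplied for reference; not recovered verbatim from the article) is:

```latex
% Standard statement of the Friedgut--Kalai entropy/influence conjecture
% (supplied for reference; the article's own formulas were lost in extraction).
For $f\colon \{-1,1\}^n \to \{-1,1\}$ with Fourier expansion
$f = \sum_{S \subseteq [n]} \hat{f}(S)\,\chi_S$, there exists an absolute
constant $C$ such that
\[
  H\!\bigl(\hat{f}^{\,2}\bigr) \;\le\; C \cdot I(f),
\]
where the total influence and the entropy of the spectrum are
\[
  I(f) = \sum_{S} |S|\,\hat{f}(S)^2,
  \qquad
  H\!\bigl(\hat{f}^{\,2}\bigr) = \sum_{S} \hat{f}(S)^2 \log_2 \frac{1}{\hat{f}(S)^2}.
\]
```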
Entropy influence conjecture
[ "Physics", "Chemistry", "Mathematics" ]
121
[ "Thermodynamic properties", "Unsolved problems in mathematics", "Physical quantities", "Quantity", "Conjectures", "Entropy", "Asymmetry", "Wikipedia categories named after physical quantities", "Mathematical problems", "Symmetry", "Dynamical systems" ]
34,531,194
https://en.wikipedia.org/wiki/Thiophosphoryl%20fluoride
Thiophosphoryl fluoride is an inorganic molecular gas with formula containing phosphorus, sulfur and fluorine. It spontaneously ignites in air and burns with a cool flame. The discoverers were able to have flames around their hands without discomfort, and called it "probably one of the coldest flames known". The gas was discovered in 1888. It is useless for chemical warfare as it burns immediately and is not toxic enough. Preparation Thiophosphoryl fluoride was discovered and named by J. W. Rodger and T. E. Thorpe in 1888. They prepared it by heating arsenic trifluoride and thiophosphoryl chloride together in a sealed glass tube to 150 °C. Also produced in this reaction was silicon tetrafluoride and phosphorus fluorides. By increasing the the proportion of was increased. They observed the spontaneous inflammability. They also used this method: at 170 °C, and also substituting a mixture of red phosphorus and sulfur, and substituting bismuth trifluoride. Another way to prepare is to add fluoride to using sodium fluoride in acetonitrile. A high yield reaction can be used to produce the gas: Under high pressure phosphorus trifluoride can react with hydrogen sulfide to yield: (1350 bar at 200 °C) Another high pressure production uses phosphorus trifluoride with sulfur. Reactions is unstable against moisture or heat. The pure gas is completely absorbed by alkali solutions, producing the fluoride and a thiophosphate (), but stable against CaO. The latter can be used to remove or impurities. Hydrolysis and decomposition Reaction with neutral water is slow: Nevertheless, dissociation constants for related acids suggest that the phosphorus atom is at least as electrophilic as in phosphoryl fluoride. Autodecomposition from heat gives phosphorus fluorides, sulfur, and phosphorus: Hot PSF3 reacts with glass, producing , sulfur and elemental phosphorus. If water is present and the glass is leaded, then the hydrofluoric acid and hydrogen sulfide combination produces a black plumbous sulfide deposit on the inner surface. Oxidation In air, PSF3 burns spontaneously with a greyish green flame, producing solid white fumes containing and . The flame is one of the coldest known. With dry oxygen, combustion may not be spontaneous and the flame is yellow. Thiophosphoryl fluoride reduces oxygenated compounds to give phosphoryl fluoride and sulfur: The latter reaction also indicates why is not formed from and . Various oxidants can convert thiophosphoryl fluoride to phosphorus dichloride trifluoride, e.g.: . Nucleophilic substitution Thiophosphoryl difluoride isocyanate can be formed by reacting with silicon tetraisocyanate at 200 °C in an autoclave. In general, nucleophilic substitution onto thiophosphoryl fluoride is complex, because free fluoride ions tend to induce disproportionation to hexafluorophosphate and dithiodifluorophosphate (). For example, with cesium fluoride: Thus combines with dimethylamine in solution to produce dimethylaminothiophosphoryl difluoride and difluorophosphate and hexafluorophosphate ions: 4 SPF3 + 4 HNMe2 → 2 SPF2NMe2 + [H2NMe2]PF6 + [H2NMe2]S2PF2. PSF3 reacts with four times its volume of ammonia gas producing ammonium fluoride and a mystery product, possibly . Miscellaneous does not react with ether, benzene, carbon disulfide, or pure sulfuric acid. It initiates tetrahydrofuran polymerization. reacts with in a mass spectrometer to form . Related compounds One fluorine can be substituted by iodine to give thiophosphoryl difluoride iodide, . 
can be converted to hydrothiophosphoryldifluoride, , by reducing it with hydrogen iodide. In , one sulfur forms a bridge between two phosphorus atoms. Dimethylaminothiophosphoryl difluoride () is a foul-smelling liquid with a boiling point of 117 °C. It has a Trouton constant (entropy of vaporization at the boiling point of the liquid) of 24.4, and a heat of evaporation of 9530 cal/mole. Alternatively, it can be produced by fluorination of dimethylaminothiophosphoryl dichloride (). Physical properties The shape of the thiophosphoryl trifluoride molecule has been determined using electron diffraction. The interatomic distances are P=S 0.187±0.003 nm and P–F 0.153±0.002 nm, with a bond angle of 100.3±2°. The microwave rotational spectrum has been measured for several different isotopologues. The critical point is at 346 K and 3.82 MPa. The liquid refractive index is 1.353. The enthalpy of vaporisation is 19.6 kJ/mol at the boiling point. The enthalpy of vaporisation at other temperatures is a function of temperature T: ΔHvap(T) = 28.85011(1 − T/346)^0.38 kJ/mol. The molecule is polar. It has a non-uniform distribution of positive and negative charge which gives it a dipole moment. When an electric field is applied more energy is stored than if the molecules did not respond by rotating. This increases the dielectric constant. The dipole moment of one molecule of thiophosphoryl trifluoride is 0.640 Debye. The infrared spectrum includes vibrations at 275, 404, 442, 698, 951 and 983 cm−1. These can be used to identify the molecule. References Other references Phosphorus halides Thiophosphoryl compounds
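A quick numerical sanity check of the vaporisation-enthalpy correlation quoted above (a sketch only; the ~221 K boiling point used here is an assumption inferred from the 19.6 kJ/mol figure, not stated in the text):

```python
# Consistency check of dH_vap(T) = 28.85011 * (1 - T/346)**0.38 kJ/mol:
# it should vanish at the 346 K critical point and give ~19.6 kJ/mol at the
# boiling point (assumed here to be ~221 K; the text does not state it).
def dh_vap(T_kelvin):
    return 28.85011 * (1 - T_kelvin / 346.0) ** 0.38   # kJ/mol

print(round(dh_vap(221.0), 1))   # ~19.6 kJ/mol, matching the quoted value
print(dh_vap(346.0))             # 0.0 at the critical temperature
```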
Thiophosphoryl fluoride
[ "Chemistry" ]
1,299
[ "Functional groups", "Thiophosphoryl compounds" ]
34,536,095
https://en.wikipedia.org/wiki/Thermotoga%20maritima
Thermotoga maritima is a hyperthermophilic, anaerobic organism that is a member of the order Thermotogales. T. maritima is well known for its ability to produce hydrogen (clean energy) and it is the only fermentative bacterium that has been shown to produce hydrogen in excess of the Thauer limit (>4 mol H2/mol glucose). It employs [FeFe]-hydrogenases to produce hydrogen gas (H2) by fermenting many different types of carbohydrates. History First discovered in the sediment of a marine geothermal area near Vulcano, Italy, Thermotoga maritima resides in hot springs as well as hydrothermal vents. The ideal environment for the organism is a water temperature of , though it is capable of growing in waters of . Thermotoga maritima is the only bacterium known to grow at this high a temperature; the only other organisms known to live in environments this extreme are members of the domain Archaea. The hyperthermophilic abilities of T. maritima, along with its deep lineage, suggest that it is potentially a very ancient organism. Physical attributes Thermotoga maritima is a non-sporulating, rod-shaped, gram-negative bacterium. When viewed under a microscope, it can be seen to be encased in a sheath-like envelope which resembles a toga, hence the "toga" in its name. Metabolism As an anaerobic fermentative chemoorganotrophic organism, T. maritima catabolizes sugars and polymers and produces carbon dioxide (CO2) and hydrogen (H2) gas as by-products of fermentation. T. maritima is also capable of metabolizing cellulose as well as xylan, yielding H2 that could potentially be utilized as an alternative energy source to fossil fuels. Additionally, this species of bacteria is able to reduce Fe(III) to produce energy using anaerobic respiration. Various flavoproteins and iron-sulphur proteins have been identified as potential electron carriers for use during cellular respiration. However, when growing with sulfur as the final electron acceptor, no ATP is produced. Instead, this process eliminates inhibitory H2 produced from fermentative growth. Collectively, these attributes indicate that T. maritima has become resourceful and capable of metabolizing a host of substances in order to carry out its life processes. Clean energy (biohydrogen) from T. maritima Global energy demand is large and expected to keep growing over the next 20 years. Among various energy sources, hydrogen is an attractive energy carrier due to its high energy content per unit weight. T. maritima is one of the fermentative bacteria that produce hydrogen at levels approaching the thermodynamic limit (4 mol H2/mol glucose). However, as in other fermentative bacteria, the biohydrogen yield in this bacterium does not normally exceed 4 mol H2/mol glucose (the Thauer limit), because the organism inherently directs more energy toward rapid cell division than toward producing H2. For these reasons, fermentative bacteria have not been thought capable of producing hydrogen at a commercial scale. Overcoming this limit by improving the conversion of sugar to H2 could lead to a superior H2-producing biological system that may supersede fossil fuel-based H2 production. Metabolic engineering in this bacterium led to the development of strains of T. maritima that surpassed the Thauer limit of hydrogen production. One of the strains, known as Tma200, produced 5.77 mol H2/mol glucose, the highest yield so far reported in a fermentative bacterium.
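For context on the 4 mol H2 per mol glucose figure (the Thauer limit), the underlying dark-fermentation stoichiometry with acetate as the sole organic product is the standard one below (added for reference; not quoted from the article):

```latex
% Acetate-type dark fermentation of glucose, the basis of the ~4 mol H2 per
% mol glucose "Thauer limit" (standard stoichiometry, not from the article).
\[
  \mathrm{C_6H_{12}O_6} + 2\,\mathrm{H_2O}
  \;\longrightarrow\;
  2\,\mathrm{CH_3COOH} + 2\,\mathrm{CO_2} + 4\,\mathrm{H_2}
\]
```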
In this strain, energy redistribution and metabolic rerouting through the pentose phosphate pathway (PPP) generated excess reductants while uncoupling growth from hydrogen synthesis. Uncoupling growth from product formation has been viewed as a viable strategy to maximize product yield, and this has been achieved in this high-hydrogen-producing bacterium. Similar strategies can be adopted for other hydrogen-producing bacteria to maximize product yields. Hydrogenase activity Hydrogenases are metalloenzymes that catalyze the reversible hydrogen conversion reaction: H2 ⇄ 2 H+ + 2 e−. A Group C [FeFe]-hydrogenase from Thermotoga maritima (TmHydS) has shown modest hydrogen conversion activity and reduced sensitivity to the enzyme's inhibitor, CO, in comparison to Group A prototypical and bifurcating [FeFe]-hydrogenases. TmHydS has a hydrogenase domain with distinct amino acid modifications in the active site pocket, including the presence of a Per-Arnt-Sim (PAS) domain. Genomic composition The genome of T. maritima consists of a single circular 1.8 megabase chromosome encoding 1877 proteins. Within its genome it has several heat and cold shock proteins that are most likely involved in metabolic regulation and response to environmental temperature changes. It shares 24% of its genome with members of the Archaea, the highest percentage overlap of any bacterium. This similarity suggests horizontal gene transfer between Archaea and ancestors of T. maritima and could help to explain why T. maritima is capable of surviving in such extreme temperatures and conditions. The genome of T. maritima has been sequenced multiple times. Genome resequencing of T. maritima MSB8 genomovar DSM3109 determined that the earlier sequenced genome was an evolved laboratory variant of T. maritima with an approximately 8-kb deletion. Moreover, a variety of duplicated genes and direct repeats in its genome suggest a role in intramolecular homologous recombination leading to gene deletion. A strain with a 10-kb gene deletion has been developed using experimental microbial evolution in T. maritima. Genetic system of Thermotoga maritima Thermotoga maritima has great potential for hydrogen synthesis because it can ferment a wide variety of sugars and has been reported to produce the highest amount of H2 (4 mol H2/mol glucose). Due to the lack of a genetic system, for the past 30 years most studies focused either on heterologous gene expression in E. coli or on predictive models, since a gene knockout mutant of T. maritima remained unavailable. Developing a genetic system for T. maritima has been a challenging task, primarily because of the lack of a suitable heat-stable selectable marker. Recently, a reliable genetic system based on pyrimidine biosynthesis has been established in T. maritima. This genetic system relies upon a pyrE− mutant that was isolated after cultivating T. maritima on a pyrimidine biosynthesis-inhibiting drug, 5-fluoroorotic acid (5-FOA). The pyrE− mutant is auxotrophic for uracil. The pyrE gene from a genus distantly related to T. maritima rescued the uracil auxotrophy of the pyrE− mutant and has proven to be a suitable marker. For the first time, the use of this marker allowed the development of an arabinose (araA) mutant of T. maritima. This mutant was used to explore the role of the pentose phosphate pathway of T. maritima in hydrogen synthesis. The genome of T.
maritima possesses direct repeats that have developed into paralogs. Without a genetic system, the true function of these paralogs remained unknown. The recently developed genetic system in T. maritima has been used to determine the function of the ATPase protein (MalK) of the maltose transporter, which is present in multiple copies (three). Disruption of each of the three putative ATPase-encoding subunit genes (malK) and analysis of the resulting phenotypes showed that only one of the three copies provides the ATPase function of the maltose transporter. T. maritima has paralogs of many genes, and determining their true function now depends on the use of this recently developed system. The new genetic system also gives T. maritima great potential as a host for hyperthermophilic bacterial gene expression studies. Protein expression in this model organism is promising for synthesizing fully functional proteins without additional treatment. Evolution Thermotoga maritima contains homologues of several competence genes, suggesting that it has an inherent system of internalizing exogenous genetic material, possibly facilitating genetic exchange between this bacterium and free DNA. Based on phylogenetic analysis of the small subunit of its ribosomal RNA, it has been recognized as having one of the deepest lineages of Bacteria. Furthermore, its lipids have a unique structure that differs from those of all other bacteria. References External links Thermotoga maritima genome Sequenced genome of Thermotoga maritima Type strain of Thermotoga maritima at BacDive - the Bacterial Diversity Metadatabase Thermotogota Organisms living on hydrothermal vents Bacteria described in 1986
Thermotoga maritima
[ "Biology" ]
1,961
[ "Organisms by adaptation", "Organisms by habitat", "Organisms living on hydrothermal vents" ]
2,859,356
https://en.wikipedia.org/wiki/Pipeline%20video%20inspection
Pipeline video inspection is a form of telepresence used to visually inspect the interiors of pipelines, plumbing systems, and storm drains. A common application is for a plumber to determine the condition of small diameter sewer lines and household connection drain pipes. Older sewer lines of small diameter, typically , are made by the union of a number of short sections. The pipe segments may be made of cast iron, with to sections, but are more often made of vitrified clay pipe (VCP), a ceramic material, in , & sections. Each iron or clay segment will have an enlargement (a "bell") on one end to receive the end of the adjacent segment. Roots from trees and vegetation may work into the joins between segments and can be forceful enough to break open a larger opening in terra cotta or corroded cast iron. Eventually a root ball will form that will impede the flow and this may cleaned out by a cutter mechanism or plumber's snake and subsequently inhibited by use of a chemical foam - a rooticide. With modern video equipment, the interior of the pipe may be inspected - this is a form of non-destructive testing. A small diameter collector pipe will typically have a cleanout access at the far end and will be several hundred feet long, terminating at a manhole. Additional collector pipes may discharge at this manhole and a pipe (perhaps of larger diameter) will carry the effluent to the next manhole, and so forth to a pump station or treatment plant. Without regular inspection of public sewers, a significant amount of waste may accumulate unnoticed until the system fails. In order to prevent resulting catastrophic events such as pipe bursts and raw sewage flooding onto city streets, municipalities usually conduct pipeline video inspections as a precautionary measure. Inspection equipment Service truck The service truck contains a power supply in the form of a small generator, a small air-conditioned compartment containing video monitoring and recording equipment, and related computer and display for feature recording. Cable and winch At the back end of the truck is a powered reel with video cable reinforced with kevlar or steel wire braid. Some trucks also contain a powered winch that booms out from the truck allowing for lowering and retrieval of the inspection equipment from the pipeline. Inspection camera Sometimes referred to as a PIG (pipeline inspection gauge), the camera and lights are mounted in a swivelling head attached to a cylindrical body. The camera head can pan and tilt remotely. Integrated into the camera head are lighting devices, typically LEDs, for illuminating the pipeline. The camera is connected to display equipment via a long cable wound upon a winch. Some companies, such as Rausch Electronics USA, incorporate a series of lasers in the camera to accurately measure the pipe diameter and other data. Inspection process Using a camera tractor A run to be inspected will either start from an access pipe leading at an angle down to the sewer and then run downstream to a manhole, or will run between manholes. The service truck is parked above the access point of the pipe. The camera tractor, with a flexible cable attached to the rear, is then lowered into the pipeline. The tractor is moved forward so that it is barely inside of the pipeline. A "down-hole roller" is set up between the camera tractor and the cable reel in the service truck, preventing cable damage from rubbing the top of the pipeline. 
The operator then retires to the inside of the truck and begins the inspection, remotely operating the camera tractor from the truck. When the inspection is complete or the camera cable is fully extended, the camera tractor is put in reverse gear and the cable is wound up simultaneously. When the camera tractor is near the original access point, the downhole roller is pulled up and the camera tractor is moved into the access point and pulled up to the service truck. A tractor may be used to inspect a complete blockage or collapse that would prevent using a fish and rope as described below. Pulling the camera backwards For small diameter pipes there may not be enough room for the tractor mechanism. Instead, a somewhat rigid "fish" is pushed through the pipe and attached to a rope at the access point near the truck. The fish is then pulled to place the rope along the pipe. The rope is then used to pull the inspection pig and cable through the pipe. Detaching the rope, the cable is then used to pull the pig backwards as the pipe is inspected on the monitor (this is the method shown in the illustrations below). Analysis of video footage Much of the analysis of what was viewed in the pipeline is conducted at the time of the inspection by the camera operator, but the entire inspection is always recorded and saved for review. Commercial software and hardware for video pipe inspection are available from a variety of vendors, including Cues, ITpipes, and WinCan. Conduit rehabilitation Depending mostly upon the change in conditions from a previous inspection various improvements may be made to the pipe. It may be cleaned with a rotating root cutting blade on the end of a segmented rotating chain, or a chemical foam may be applied to discourage root growth. If damage is found limited to only a few locations these may be excavated and repaired. Extensive moderate defects may be repaired by lining with a fabric liner that is pulled through the pipe, inflated, and then made rigid through chemical means. Severe damage may require excavation and replacement of the conduit. See also CCTV drain camera (plumbing) References Piping Plumbing Telepresence Robotics engineering
Pipeline video inspection
[ "Chemistry", "Technology", "Engineering" ]
1,120
[ "Computer engineering", "Robotics engineering", "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Mechanical engineering", "Piping" ]
2,860,045
https://en.wikipedia.org/wiki/Square%20antiprism
In geometry, the square antiprism is the second in an infinite family of antiprisms formed by an even-numbered sequence of triangle sides closed by two polygon caps. It is also known as an anticube. If all its faces are regular, it is a semiregular polyhedron or uniform polyhedron. A nonuniform D4-symmetric variant is the cell of the noble square antiprismatic 72-cell. Points on a sphere When eight points are distributed on the surface of a sphere with the aim of maximising the distance between them in some sense, the resulting shape corresponds to a square antiprism rather than a cube. Specific methods of distributing the points include, for example, the Thomson problem (minimising the sum of all the reciprocals of distances between points), maximising the distance of each point to the nearest point, or minimising the sum of all reciprocals of squares of distances between points. Molecules with square antiprismatic geometry According to the VSEPR theory of molecular geometry in chemistry, which is based on the general principle of maximising the distances between points, a square antiprism is the favoured geometry when eight pairs of electrons surround a central atom. One molecule with this geometry is the octafluoroxenate(VI) ion () in the salt nitrosonium octafluoroxenate(VI); however, the molecule is distorted away from the idealized square antiprism. Very few ions are cubical because such a shape would cause large repulsion between ligands; is one of the few examples. In addition, the element sulfur forms octatomic S8 molecules as its most stable allotrope. The S8 molecule has a structure based on the square antiprism, in which the eight atoms occupy the eight vertices of the antiprism, and the eight triangle-triangle edges of the antiprism correspond to single covalent bonds between sulfur atoms. In architecture The main building block of the One World Trade Center (at the site of the old World Trade Center destroyed on September 11, 2001) has the shape of an extremely tall tapering square antiprism. It is not a true antiprism because of its taper: the top square has half the area of the bottom one. Topologically identical polyhedra Twisted prism A twisted prism can be made (clockwise or counterclockwise) with the same vertex arrangement. It can be seen as the convex form with 4 tetrahedra excavated around the sides. However, after this it can no longer be triangulated into tetrahedra without adding new vertices. It has half of the symmetry of the uniform solution: D4 order 4. Crossed antiprism A crossed square antiprism is a star polyhedron, topologically identical to the square antiprism with the same vertex arrangement, but it can't be made uniform; the sides are isosceles triangles. Its vertex configuration is 3.3/2.3.4, with one triangle retrograde. It has d4d symmetry, order 8. Related polyhedra Derived polyhedra The gyroelongated square pyramid is a Johnson solid (specifically, J10) constructed by augmenting one square of a square antiprism with a square pyramid. Similarly, the gyroelongated square bipyramid (J17) is a deltahedron (a polyhedron whose faces are all equilateral triangles) constructed by replacing both squares of a square antiprism with square pyramids. The snub disphenoid (J84) is another deltahedron, constructed by replacing the two squares of a square antiprism by pairs of equilateral triangles. The snub square antiprism (J85) can be seen as a square antiprism with a chain of equilateral triangles inserted around the middle.
The sphenocorona (J86) and the sphenomegacorona (J88) are other Johnson solids that, like the square antiprism, consist of two squares and an even number of equilateral triangles. The square antiprism can be truncated and alternated to form a snub antiprism: Symmetry mutation As an antiprism, the square antiprism belongs to a family of polyhedra that includes the octahedron (which can be seen as a triangle-capped antiprism), the pentagonal antiprism, the hexagonal antiprism, and the octagonal antiprism. The square antiprism is first in a series of snub polyhedra and tilings with vertex figure 3.3.4.3.n. Examples See also Biscornu Notes External links Square Antiprism interactive model Virtual Reality Polyhedra www.georgehart.com: The Encyclopedia of Polyhedra VRML model polyhedronisme A4 Prismatoid polyhedra Snub tilings
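The "Points on a sphere" discussion above can be checked numerically. The sketch below (illustrative, not from the article) minimises the Thomson energy for eight points on the unit sphere and compares the result with a cube, which has a higher energy than the optimal square antiprism:

```python
import numpy as np
from scipy.optimize import minimize

N = 8

def to_xyz(angles):
    """Convert 2N spherical angles (theta, phi) to unit vectors on the sphere."""
    theta, phi = angles[:N], angles[N:]
    return np.column_stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)])

def thomson_energy(angles):
    """Sum of reciprocal pairwise distances (Coulomb energy of unit charges)."""
    p = to_xyz(angles)
    iu = np.triu_indices(N, k=1)
    d = np.linalg.norm(p[iu[0]] - p[iu[1]], axis=1)
    return np.sum(1.0 / d)

rng = np.random.default_rng(0)
best = min((minimize(thomson_energy,
                     np.concatenate([rng.uniform(0, np.pi, N),
                                     rng.uniform(0, 2 * np.pi, N)]),
                     method="L-BFGS-B")
            for _ in range(10)),
           key=lambda r: r.fun)
print("optimised energy:", round(best.fun, 4))   # ~19.675 (square antiprism)

# Cube inscribed in the unit sphere for comparison: its energy is higher, ~19.74.
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]) / np.sqrt(3)
iu = np.triu_indices(N, k=1)
print("cube energy:", round(np.sum(1.0 / np.linalg.norm(cube[iu[0]] - cube[iu[1]], axis=1)), 4))
```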
Square antiprism
[ "Physics" ]
1,021
[ "Tessellation", "Snub tilings", "Symmetry" ]
2,860,340
https://en.wikipedia.org/wiki/Electrodynamic%20tether
Electrodynamic tethers (EDTs) are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electrical energy, or as motors, converting electrical energy to kinetic energy. Electric potential is generated across a conductive tether by its motion through a planet's magnetic field. A number of missions have demonstrated electrodynamic tethers in space, most notably the TSS-1, TSS-1R, and Plasma Motor Generator (PMG) experiments. Tether propulsion As part of a tether propulsion system, craft can use long, strong conductors (though not all tethers are conductive) to change the orbits of spacecraft. It has the potential to make space travel significantly cheaper. When direct current is applied to the tether, it exerts a Lorentz force against the magnetic field, and the tether exerts a force on the vehicle. It can be used either to accelerate or brake an orbiting spacecraft. In 2012 Star Technology and Research was awarded a $1.9 million contract to qualify a tether propulsion system for orbital debris removal. Uses for ED tethers Over the years, numerous applications for electrodynamic tethers have been identified for potential use in industry, government, and scientific exploration. The table below is a summary of some of the potential applications proposed thus far. Some of these applications are general concepts, while others are well-defined systems. Many of these concepts overlap into other areas; however, they are simply placed under the most appropriate heading for the purposes of this table. All of the applications mentioned in the table are elaborated upon in the Tethers Handbook. Three fundamental concepts that tethers possess, are gravity gradients, momentum exchange, and electrodynamics. Potential tether applications can be seen below: ISS reboost EDT has been proposed to maintain the ISS orbit and save the expense of chemical propellant reboosts. It could improve the quality and duration of microgravity conditions. Electrodynamic tether fundamentals The choice of the metal conductor to be used in an electrodynamic tether is determined by a variety of factors. Primary factors usually include high electrical conductivity, and low density. Secondary factors, depending on the application, include cost, strength, and melting point. An electromotive force (EMF) is generated across a tether element as it moves relative to a magnetic field. The force is given by Faraday's Law of Induction: Without loss of generality, it is assumed the tether system is in Earth orbit and it moves relative to Earth's magnetic field. Similarly, if current flows in the tether element, a force can be generated in accordance with the Lorentz force equation In self-powered mode (deorbit mode), this EMF can be used by the tether system to drive the current through the tether and other electrical loads (e.g. resistors, batteries), emit electrons at the emitting end, or collect electrons at the opposite. In boost mode, on-board power supplies must overcome this motional EMF to drive current in the opposite direction, thus creating a force in the opposite direction, as seen in below figure, and boosting the system. Take, for example, the NASA Propulsive Small Expendable Deployer System (ProSEDS) mission as seen in above figure. 
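A quick order-of-magnitude check of the ProSEDS-like numbers quoted above (a sketch only; it assumes the velocity is perpendicular to the magnetic field, which gives the geometric maximum):

```python
# Motional EMF per unit length, |E| = v * B for v perpendicular to B, using the
# B range and orbital velocity quoted above; the projection onto the tether
# and the orbit geometry reduce the actual value.
v = 7500.0            # m/s, orbital velocity relative to the local plasma
L = 5000.0            # m, tether length
for B_gauss in (0.18, 0.32):
    B = B_gauss * 1e-4            # tesla (1 gauss = 1e-4 T)
    E = v * B                     # V/m
    print(f"B = {B_gauss} G: {E * 1e3:.0f} V/km, about {E * L:.0f} V over 5 km")
# -> roughly 135-240 V/km, i.e. on the order of a kilovolt across a 5 km tether
```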
At 300 km altitude, the Earth's magnetic field, in the north-south direction, is approximately 0.18–0.32 gauss up to ~40° inclination, and the orbital velocity with respect to the local plasma is about 7500 m/s. This results in a Vemf range of 35–250 V/km along the 5 km length of tether. This EMF dictates the potential difference across the bare tether which controls where electrons are collected and / or repelled. Here, the ProSEDS de-boost tether system is configured to enable electron collection to the positively biased higher altitude section of the bare tether, and returned to the ionosphere at the lower altitude end. This flow of electrons through the length of the tether in the presence of the Earth's magnetic field creates a force that produces a drag thrust that helps de-orbit the system, as given by the above equation. The boost mode is similar to the de-orbit mode, except for the fact that a High Voltage Power Supply (HVPS) is also inserted in series with the tether system between the tether and the higher positive potential end. The power supply voltage must be greater than the EMF and the polar opposite. This drives the current in the opposite direction, which in turn causes the higher altitude end to be negatively charged, while the lower altitude end is positively charged(Assuming a standard east to west orbit around Earth). To further emphasize the de-boosting phenomenon, a schematic sketch of a bare tether system with no insulation (all bare) can be seen in below figure. The top of the diagram, point A, represents the electron collection end. The bottom of the tether, point C, is the electron emission end. Similarly, and represent the potential difference from their respective tether ends to the plasma, and is the potential anywhere along the tether with respect to the plasma. Finally, point B is the point at which the potential of the tether is equal to the plasma. The location of point B will vary depending on the equilibrium state of the tether, which is determined by the solution of Kirchhoff's voltage law (KVL) and Kirchhoff's current law (KCL) along the tether. Here , , and describe the current gain from point A to B, the current lost from point B to C, and the current lost at point C, respectively. Since the current is continuously changing along the bare length of the tether, the potential loss due to the resistive nature of the wire is represented as . Along an infinitesimal section of tether, the resistance multiplied by the current traveling across that section is the resistive potential loss. After evaluating KVL & KCL for the system, the results will yield a current and potential profile along the tether, as seen in above figure. This diagram shows that, from point A of the tether down to point B, there is a positive potential bias, which increases the collected current. Below that point, the becomes negative and the collection of ion current begins. Since it takes a much greater potential difference to collect an equivalent amount of ion current (for a given area), the total current in the tether is reduced by a smaller amount. Then, at point C, the remaining current in the system is drawn through the resistive load (), and emitted from an electron emissive device (), and finally across the plasma sheath (). The KVL voltage loop is then closed in the ionosphere where the potential difference is effectively zero. Due to the nature of the bare EDTs, it is often not optional to have the entire tether bare. 
In order to maximize the thrusting capability of the system a significant portion of the bare tether should be insulated. This insulation amount depends on a number of effects, some of which are plasma density, the tether length and width, the orbiting velocity, and the Earth's magnetic flux density. Tethers as generators An electrodynamic tether is attached to an object, the tether being oriented at an angle to the local vertical between the object and a planet with a magnetic field. The tether's far end can be left bare, making electrical contact with the ionosphere. When the tether intersects the planet's magnetic field, it generates a current, and thereby converts some of the orbiting body's kinetic energy to electrical energy. Functionally, electrons flow from the space plasma into the conductive tether, are passed through a resistive load in a control unit and are emitted into the space plasma by an electron emitter as free electrons. As a result of this process, an electrodynamic force acts on the tether and attached object, slowing their orbital motion. In a loose sense, the process can be likened to a conventional windmill- the drag force of a resistive medium (air or, in this case, the magnetosphere) is used to convert the kinetic energy of relative motion (wind, or the satellite's momentum) into electricity. In principle, compact high-current tether power generators are possible and, with basic hardware, tens, hundreds, and thousands of kilowatts appears to be attainable. Voltage and current NASA has conducted several experiments with Plasma Motor Generator (PMG) tethers in space. An early experiment used a 500-meter conducting tether. In 1996, NASA conducted an experiment with a 20,000-meter conducting tether. When the tether was fully deployed during this test, the orbiting tether generated a potential of 3,500 volts. This conducting single-line tether was severed after five hours of deployment. It is believed that the failure was caused by an electric arc generated by the conductive tether's movement through the Earth's magnetic field. When a tether is moved at a velocity (v) at right angles to the Earth's magnetic field (B), an electric field is observed in the tether's frame of reference. This can be stated as: E = v * B = vB The direction of the electric field (E) is at right angles to both the tether's velocity (v) and magnetic field (B). If the tether is a conductor, then the electric field leads to the displacement of charges along the tether. Note that the velocity used in this equation is the orbital velocity of the tether. The rate of rotation of the Earth, or of its core, is not relevant. In this regard, see also homopolar generator. Voltage across conductor With a long conducting wire of length L, an electric field E is generated in the wire. It produces a voltage V between the opposite ends of the wire. This can be expressed as: where the angle τ is between the length vector (L) of the tether and the electric field vector (E), assumed to be in the vertical direction at right angles to the velocity vector (v) in plane and the magnetic field vector (B) is out of the plane. Current in conductor An electrodynamic tether can be described as a type of thermodynamically "open system". Electrodynamic tether circuits cannot be completed by simply using another wire, since another tether will develop a similar voltage. 
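The expression stripped from the "Voltage across conductor" paragraph above is presumably the standard one; written out for reference (supplied, not recovered from the article):

```latex
% Motional field and end-to-end voltage of a straight conducting tether
% (standard expressions, supplied because the originals were lost in extraction).
\[
  \mathbf{E} = \mathbf{v} \times \mathbf{B},
  \qquad
  V = \mathbf{E} \cdot \mathbf{L}
    = \int_0^L (\mathbf{v} \times \mathbf{B}) \cdot d\boldsymbol{\ell}
    = v\,B\,L\cos\tau ,
\]
where $\tau$ is the angle between the tether length vector $\mathbf{L}$ and the
electric field vector $\mathbf{E}$, with $\mathbf{v} \perp \mathbf{B}$ as assumed in the text.
```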
Fortunately, the Earth's magnetosphere is not "empty", and, in near-Earth regions (especially near the Earth's atmosphere) there exist highly electrically conductive plasmas which are kept partially ionized by solar radiation or other radiant energy. The electron and ion density varies according to various factors, such as the location, altitude, season, sunspot cycle, and contamination levels. It is known that a positively charged bare conductor can readily remove free electrons out of the plasma. Thus, to complete the electrical circuit, a sufficiently large area of uninsulated conductor is needed at the upper, positively charged end of the tether, thereby permitting current to flow through the tether. However, it is more difficult for the opposite (negative) end of the tether to eject free electrons or to collect positive ions from the plasma. It is plausible that, by using a very large collection area at one end of the tether, enough ions can be collected to permit significant current through the plasma. This was demonstrated during the Shuttle orbiter's TSS-1R mission, when the shuttle itself was used as a large plasma contactor to provide over an ampere of current. Improved methods include creating an electron emitter, such as a thermionic cathode, plasma cathode, plasma contactor, or field electron emission device. Since both ends of the tether are "open" to the surrounding plasma, electrons can flow out of one end of the tether while a corresponding flow of electrons enters the other end. In this fashion, the voltage that is electromagnetically induced within the tether can cause current to flow through the surrounding space environment, completing an electrical circuit through what appears to be, at first glance, an open circuit. Tether current The amount of current (I) flowing through a tether depends on various factors. One of these is the circuit's total resistance (R). The circuit's resistance consist of three components: the effective resistance of the plasma, the resistance of the tether, and a control variable resistor. In addition, a parasitic load is needed. The load on the current may take the form of a charging device which, in turn, charges reserve power sources such as batteries. The batteries in return will be used to control power and communication circuits, as well as drive the electron emitting devices at the negative end of the tether. As such the tether can be completely self-powered, besides the initial charge in the batteries to provide electrical power for the deployment and startup procedure. The charging battery load can be viewed as a resistor which absorbs power, but stores this for later use (instead of immediately dissipating heat). It is included as part of the "control resistor". The charging battery load is not treated as a "base resistance" though, as the charging circuit can be turned off at any time. When off, the operations can be continued without interruption using the power stored in the batteries. Current collection / emission for an EDT system: theory and technology Understanding electron and ion current collection to and from the surrounding ambient plasma is critical for most EDT systems. Any exposed conducting section of the EDT system can passively ('passive' and 'active' emission refers to the use of pre-stored energy in order to achieve the desired effect) collect electron or ion current, depending on the electric potential of the spacecraft body with respect to the ambient plasma. 
In addition, the geometry of the conducting body plays an important role in the size of the sheath and thus the total collection capability. As a result, there are a number of theories for the varying collection techniques. The primary passive processes that control the electron and ion collection on an EDT system are thermal current collection, ion ram collection effects, electron photoemission, and possibly secondary electron and ion emission. In addition, the collection along a thin bare tether is described using orbital motion limited (OML) theory as well as theoretical derivations from this model depending on the physical size with respect to the plasma Debye length. These processes take place all along the exposed conducting material of the entire system. Environmental and orbital parameters can significantly influence the amount collected current. Some important parameters include plasma density, electron and ion temperature, ion molecular weight, magnetic field strength and orbital velocity relative to the surrounding plasma. Then there are active collection and emission techniques involved in an EDT system. This occurs through devices such as hollow cathode plasma contactors, thermionic cathodes, and field emitter arrays. The physical design of each of these structures as well as the current emission capabilities are thoroughly discussed. Bare conductive tethers The concept of current collection to a bare conducting tether was first formalized by Sanmartin and Martinez-Sanchez. They note that the most area efficient current collecting cylindrical surface is one that has an effective radius less than ~1 Debye length where current collection physics is known as orbital motion limited (OML) in a collisionless plasma. As the effective radius of the bare conductive tether increases past this point then there are predictable reductions in collection efficiency compared to OML theory. In addition to this theory (which has been derived for a non-flowing plasma), current collection in space occurs in a flowing plasma, which introduces another collection effect. These issues are explored in greater detail below. Orbit motion limited (OML) theory The electron Debye length is defined as the characteristic shielding distance in a plasma, and is described by the equation This distance, where all electric fields in the plasma resulting from the conductive body have fallen off by 1/e, can be calculated. OML theory is defined with the assumption that the electron Debye length is equal to or larger than the size of the object and the plasma is not flowing. The OML regime occurs when the sheath becomes sufficiently thick such that orbital effects become important in particle collection. This theory accounts for and conserves particle energy and angular momentum. As a result, not all particles that are incident onto the surface of the thick sheath are collected. The voltage of the collecting structure with respect to the ambient plasma, as well as the ambient plasma density and temperature, determines the size of the sheath. This accelerating (or decelerating) voltage combined with the energy and momentum of the incoming particles determines the amount of current collected across the plasma sheath. 
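The Debye-length expression referenced above was lost in extraction. The snippet below uses the standard definition (an assumption, not recovered from the article) and evaluates it for representative LEO ionospheric values mentioned later in the text:

```python
# Electron Debye length, lambda_De = sqrt(eps0 * k_B * T_e / (n_e * e^2)),
# evaluated for representative LEO ionospheric conditions (T_e in eV, n_e in m^-3).
import math

eps0 = 8.854e-12      # F/m
e = 1.602e-19         # C

def debye_length(Te_eV, ne_m3):
    # k_B * T_e in joules is Te_eV * e
    return math.sqrt(eps0 * Te_eV * e / (ne_m3 * e**2))

for Te, ne in [(0.1, 1e12), (0.35, 1e10)]:
    lam = debye_length(Te, ne)
    print(f"T_e = {Te} eV, n_e = {ne:.0e} m^-3 -> lambda_De ~ {lam * 1e3:.1f} mm")
# -> a few mm to a few cm, which is why OML collection favours thin
#    (sub-mm to mm radius) bare tethers in LEO.
```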
The orbital-motion-limit regime is attained when the cylinder radius is small enough such that all incoming particle trajectories that are collected terminate on the cylinder's surface and are connected to the background plasma, regardless of their initial angular momentum (i.e., none are connected to another location on the probe's surface). Since, in a quasi-neutral collisionless plasma, the distribution function is conserved along particle orbits, having all “directions of arrival” populated corresponds to an upper limit on the collected current per unit area (not total current). In an EDT system, the best performance for a given tether mass is obtained when the tether diameter is chosen to be smaller than an electron Debye length for typical ionospheric ambient conditions (typical ionospheric conditions in the 200 to 2000 km altitude range have a T_e ranging from 0.1 eV to 0.35 eV and an n_e ranging from 10^10 m^-3 to 10^12 m^-3), so that it is within the OML regime. Tether geometries outside this dimension have been addressed. OML collection will be used as a baseline when comparing the current collection results for various sample tether geometries and sizes. In 1962, Gerald H. Rosen derived the equation that is now known as the OML theory of dust charging. According to Robert Merlino of the University of Iowa, Rosen seems to have arrived at the equation 30 years before anyone else. Deviations from OML theory in a non-flowing plasma For a variety of practical reasons, current collection to a bare EDT does not always satisfy the assumptions of OML collection theory. Understanding how the predicted performance deviates from theory is important for these conditions. Two commonly proposed geometries for an EDT involve the use of a cylindrical wire and a flat tape. As long as the cylindrical tether is less than one Debye length in radius, it will collect according to the OML theory. However, once the width exceeds this distance, the collection increasingly deviates from this theory. If the tether geometry is a flat tape, an approximation can be used to convert the normalized tape width to an equivalent cylinder radius. This was first done by Sanmartin and Estes and more recently using the 2-Dimensional Kinetic Plasma Solver (KiPS 2-D) by Choiniere et al. Flowing plasma effect There is, at present, no closed-form solution to account for the effects of plasma flow relative to the bare tether. However, numerical simulation has recently been developed by Choiniere et al. using KiPS-2D, which can simulate flowing cases for simple geometries at high bias potentials. This flowing plasma analysis as it applies to EDTs has been discussed. This phenomenon is presently being investigated through recent work, and is not fully understood. Endbody collection This section discusses the plasma physics theory that explains passive current collection to a large conductive body which will be applied at the end of an ED tether. When the size of the sheath is much smaller than the radius of the collecting body, then depending on the polarity of the difference between the potential of the tether and that of the ambient plasma, (V – Vp), it is assumed that all of the incoming electrons or ions that enter the plasma sheath are collected by the conductive body. This 'thin sheath' theory involving non-flowing plasmas is discussed first, and then the modifications to this theory for flowing plasma are presented. Other current collection mechanisms will then be discussed.
All of the theory presented is used towards developing a current collection model to account for all conditions encountered during an EDT mission. Passive collection theory In a non-flowing quasi-neutral plasma with no magnetic field, it can be assumed that a spherical conducting object will collect equally in all directions. The electron and ion collection at the end-body is governed by the thermal collection process, which is given by Ithe and Ithi. Flowing plasma electron collection mode The next step in developing a more realistic model for current collection is to include the magnetic field effects and plasma flow effects. Assuming a collisionless plasma, electrons and ions gyrate around magnetic field lines as they travel between the poles around the Earth due to magnetic mirroring forces and gradient-curvature drift. They gyrate at a particular radius and frequency dependence upon their mass, the magnetic field strength, and energy. These factors must be considered in current collection models. Flowing plasma ion collection model When the conducting body is negatively biased with respect to the plasma and traveling above the ion thermal velocity, there are additional collection mechanisms at work. For typical Low Earth Orbits (LEOs), between 200 km and 2000 km, the velocities in an inertial reference frame range from 7.8 km/s to 6.9 km/s for a circular orbit and the atmospheric molecular weights range from 25.0 amu (O+, O2+, & NO+) to 1.2 amu (mostly H+), respectively. Assuming that the electron and ion temperatures range from ~0.1 eV to 0.35 eV, the resulting ion velocity ranges from 875 m/s to 4.0 km/s from 200 km to 2000 km altitude, respectively. The electrons are traveling at approximately 188 km/s throughout LEO. This means that the orbiting body is traveling faster than the ions and slower than the electrons, or at a mesosonic speed. This results in a unique phenomenon whereby the orbiting body 'rams' through the surrounding ions in the plasma creating a beam like effect in the reference frame of the orbiting body. Porous endbodies Porous endbodies have been proposed as a way to reduce the drag of a collecting endbody while ideally maintaining a similar current collection. They are often modeled as solid endbodies, except they are a small percentage of the solid spheres surface area. This is, however, an extreme oversimplification of the concept. Much has to be learned about the interactions between the sheath structure, the geometry of the mesh, the size of the endbody, and its relation to current collection. This technology also has the potential to resolve a number of issues concerning EDTs. Diminishing returns with collection current and drag area have set a limit that porous tethers might be able to overcome. Work has been accomplished on current collection using porous spheres, by Stone et al. and Khazanov et al. It has been shown that the maximum current collected by a grid sphere compared to the mass and drag reduction can be estimated. The drag per unit of collected current for a grid sphere with a transparency of 80 to 90% is approximately 1.2 – 1.4 times smaller than that of a solid sphere of the same radius. The reduction in mass per unit volume, for this same comparison, is 2.4 – 2.8 times. Other current collection methods In addition to the electron thermal collection, other processes that could influence the current collection in an EDT system are photoemission, secondary electron emission, and secondary ion emission. 
These effects pertain to all conducting surfaces on an EDT system, not just the end-body. Space charge limits across plasma sheaths In any application where electrons are emitted across a vacuum gap, there is a maximum allowable current for a given bias due to the self repulsion of the electron beam. This classical 1-D space charge limit (SCL) is derived for charged particles of zero initial energy, and is termed the Child-Langmuir Law. This limit depends on the emission surface area, the potential difference across the plasma gap and the distance of that gap. Further discussion of this topic can be found. Electron emitters There are three active electron emission technologies usually considered for EDT applications: hollow cathode plasma contactors (HCPCs), thermionic cathodes (TCs), and field emission cathodes (FEC), often in the form of field emitter arrays (FEAs). System level configurations will be presented for each device, as well as the relative costs, benefits, and validation. Thermionic cathode (TC) Thermionic emission is the flow of electrons from a heated charged metal or metal oxide surface, caused by thermal vibrational energy overcoming the work function (electrostatic forces holding electrons to the surface). The thermionic emission current density, J, rises rapidly with increasing temperature, releasing a significant number of electrons into the vacuum near the surface. The quantitative relation is given in the equation This equation is called the Richardson-Dushman or Richardson equation. (ф is approximately 4.54 eV and AR ~120 A/cm2 for tungsten). Once the electrons are thermionically emitted from the TC surface they require an acceleration potential to cross a gap, or in this case, the plasma sheath. Electrons can attain this necessary energy to escape the SCL of the plasma sheath if an accelerated grid, or electron gun, is used. The equation shows what potential is needed across the grid in order to emit a certain current entering the device. Here, η is the electron gun assembly (EGA) efficiency (~0.97 in TSS-1), ρ is the perveance of the EGA (7.2 micropervs in TSS-1), ΔVtc is the voltage across the accelerating grid of the EGA, and It is the emitted current. The perveance defines the space charge limited current that can be emitted from a device. The figure below displays commercial examples of thermionic emitters and electron guns produced at Heatwave Labs Inc. TC electron emission will occur in one of two different regimes: temperature or space charge limited current flow. For temperature limited flow every electron that obtains enough energy to escape from the cathode surface is emitted, assuming the acceleration potential of the electron gun is large enough. In this case, the emission current is regulated by the thermionic emission process, given by the Richardson Dushman equation. In SCL electron current flow there are so many electrons emitted from the cathode that not all of them are accelerated enough by the electron gun to escape the space charge. In this case, the electron gun acceleration potential limits the emission current. The below chart displays the temperature limiting currents and SCL effects. As the beam energy of the electrons is increased, the total escaping electrons can be seen to increase. The curves that become horizontal are temperature limited cases. Field emission cathode (FEC) In field electron emission, electrons tunnel through a potential barrier, rather than escaping over it as in thermionic emission or photoemission. 
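The thermionic emission relation referenced in the thermionic cathode discussion above (the Richardson–Dushman equation) was stripped in extraction; its standard form, with the tungsten values quoted there, is:

```latex
% Richardson--Dushman thermionic emission law (standard form, supplied because
% the equation was lost in extraction; the phi and A_R values are those quoted
% above for tungsten).
\[
  J = A_R\, T^{2} \exp\!\left(-\frac{\phi}{k_B T}\right),
  \qquad
  \phi \approx 4.54~\mathrm{eV}, \quad
  A_R \approx 120~\mathrm{A\,cm^{-2}\,K^{-2}} \ \text{(tungsten)} .
\]
```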
For a metal at low temperature, the process can be understood in terms of the figure below. The metal can be considered a potential box, filled with electrons to the Fermi level (which lies below the vacuum level by several electron volts). The vacuum level represents the potential energy of an electron at rest outside the metal in the absence of an external field. In the presence of a strong electric field, the potential outside the metal will be deformed along the line AB, so that a triangular barrier is formed, through which electrons can tunnel. Electrons are extracted from the conduction band with a current density given by the Fowler−Nordheim equation AFN and BFN are the constants determined by measurements of the FEA with units of A/V2 and V/m, respectively. EFN is the electric field that exists between the electron emissive tip and the positively biased structure drawing the electrons out. Typical constants for Spindt type cathodes include: AFN = 3.14 x 10-8 A/V2 and BFN = 771 V/m. (Stanford Research Institute data sheet). An accelerating structure is typically placed in close proximity with the emitting material as in the below figure. Close (micrometer scale) proximity between the emitter and gate, combined with natural or artificial focusing structures, efficiently provide the high field strengths required for emission with relatively low applied voltage and power. A carbon nanotube field-emission cathode was successfully tested on the KITE Electrodynamic tether experiment on the Japanese H-II Transfer Vehicle. Field emission cathodes are often in the form of Field Emitter Arrays (FEAs), such as the cathode design by Spindt et al. The figure below displays close up visual images of a Spindt emitter. A variety of materials have been developed for field emitter arrays, ranging from silicon to semiconductor fabricated molybdenum tips with integrated gates to a plate of randomly distributed carbon nanotubes with a separate gate structure suspended above. The advantages of field emission technologies over alternative electron emission methods are: No requirement for a consumable (gas) and no resulting safety considerations for handling a pressurized vessel A low-power capability Having moderate power impacts due to space-charge limits in the emission of the electrons into the surrounding plasma. One major issue to consider for field emitters is the effect of contamination. In order to achieve electron emission at low voltages, field emitter array tips are built on a micrometer-level scale sizes. Their performance depends on the precise construction of these small structures. They are also dependent on being constructed with a material possessing a low work-function. These factors can render the device extremely sensitive to contamination, especially from hydrocarbons and other large, easily polymerized molecules. Techniques for avoiding, eliminating, or operating in the presence of contaminations in ground testing and ionospheric (e.g. spacecraft outgassing) environments are critical. Research at the University of Michigan and elsewhere has focused on this outgassing issue. Protective enclosures, electron cleaning, robust coatings, and other design features are being developed as potential solutions. FEAs used for space applications still require the demonstration of long term stability, repeatability, and reliability of operation at gate potentials appropriate to the space applications. Hollow cathode Hollow cathodes emit a dense cloud of plasma by first ionizing a gas. 
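The Fowler–Nordheim expression referenced in the field-emission discussion above was also lost in extraction. A schematic empirical form consistent with the constants named there (A_FN, B_FN, E_FN) is usually written as:

```latex
% Schematic empirical Fowler--Nordheim form matching the constants named in the
% text (supplied because the equation was lost in extraction).
\[
  J = A_{FN}\, E_{FN}^{2}\, \exp\!\left(-\frac{B_{FN}}{E_{FN}}\right),
\]
where $E_{FN}$ is the field between the emissive tip and the gate, and
$A_{FN}$, $B_{FN}$ are fit constants measured for a given emitter array.
```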
This creates a high density plasma plume which makes contact with the surrounding plasma. The region between the high density plume and the surrounding plasma is termed a double sheath or double layer. This double layer is essentially two adjacent layers of charge. The first layer is a positive layer at the edge of the high potential plasma (the contactor plasma cloud). The second layer is a negative layer at the edge of the low potential plasma (the ambient plasma). Further investigation of the double layer phenomenon has been conducted by several people. One type of hollow cathode consists of a metal tube lined with a sintered barium oxide impregnated tungsten insert, capped at one end by a plate with a small orifice, as shown in the below figure. Electrons are emitted from the barium oxide impregnated insert by thermionic emission. A noble gas flows into the insert region of the HC and is partially ionized by the emitted electrons that are accelerated by an electric field near the orifice (Xenon is a common gas used for HCs as it has a low specific ionization energy (ionization potential per unit mass). For EDT purposes, a lower mass would be more beneficial because the total system mass would be less. This gas is just used for charge exchange and not propulsion.). Many of the ionized xenon atoms are accelerated into the walls where their energy maintains the thermionic emission temperature. The ionized xenon also exits out of the orifice. Electrons are accelerated from the insert region, through the orifice to the keeper, which is always at a more positive bias. In electron emission mode, the ambient plasma is positively biased with respect to the keeper. In the contactor plasma, the electron density is approximately equal to the ion density. The higher energy electrons stream through the slowly expanding ion cloud, while the lower energy electrons are trapped within the cloud by the keeper potential. The high electron velocities lead to electron currents much greater than xenon ion currents. Below the electron emission saturation limit the contactor acts as a bipolar emissive probe. Each outgoing ion generated by an electron allows a number of electrons to be emitted. This number is approximately equal to the square root of the ratio of the ion mass to the electron mass. It can be seen in the below chart what a typical I-V curve looks like for a hollow cathode in electron emission mode. Given a certain keeper geometry (the ring in the figure above that the electrons exit through), ion flow rate, and Vp, the I-V profile can be determined. [111-113]. The operation of the HC in the electron collection mode is called the plasma contacting (or ignited) operating mode. The “ignited mode” is so termed because it indicates that multi-ampere current levels can be achieved by using the voltage drop at the plasma contactor. This accelerates space plasma electrons which ionize neutral expellant flow from the contactor. If electron collection currents are high and/or ambient electron densities are low, the sheath at which electron current collection is sustained simply expands or shrinks until the required current is collected. In addition, the geometry affects the emission of the plasma from the HC as seen in the below figure. Here it can be seen that, depending on the diameter and thickness of the keeper and the distance of it with respect to the orifice, the total emission percentage can be affected. 
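A rough sense of this electron-to-ion current ratio can be obtained from the mass ratio alone. The sketch below evaluates the square root of the ion-to-electron mass ratio for xenon, the expellant mentioned above; the numerical masses are standard reference values and are not taken from the text.

```python
import math

ELECTRON_MASS_KG = 9.109e-31
AMU_KG = 1.661e-27
XENON_MASS_KG = 131.29 * AMU_KG   # average atomic mass of xenon

# Below the saturation limit, each ion leaving the contactor cloud allows
# roughly sqrt(m_ion / m_e) electrons to be emitted.
electrons_per_ion = math.sqrt(XENON_MASS_KG / ELECTRON_MASS_KG)
print(f"~{electrons_per_ion:.0f} electrons emitted per xenon ion")
```

For xenon this ratio is on the order of five hundred, which is why hollow cathodes can emit multi-ampere electron currents with comparatively small expellant flows.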
Plasma collection and emission summary All of the electron emission and collection techniques can be summarized in the table following. For each method there is a description as to whether the electrons or ions in the system increased or decreased based on the potential of the spacecraft with respect to the plasma. Electrons (e−) and ions (ions+) indicate that the number of electrons or ions is being increased (↑) or reduced (↓). Also, for each method some special conditions apply (see the respective sections in this article for further clarification of when and where it applies).

{| class="wikitable"
|-
! Passive e− and ion emission/collection
! V − Vp < 0
! V − Vp > 0
|-
| Bare tether: OML
| ions+ ↑
| e− ↑
|-
| Ram collection
| ions+ ↑
| 0
|-
| Thermal collection
| ions+ ↑
| e− ↑
|-
| Photoemission
| e− ↓
| e− ↓,~0
|-
| Secondary electron emission
| e− ↓
| e− ↓
|-
| Secondary ion emission
| ions+ ↓,~0
| 0
|-
| Retardation regime
| e− ↑
| ions+ ↑, ~0
|-
! Active e− and ion emission
|colspan="2"| Potential does not matter
|-
| Thermionic emission
|colspan="2"| e− ↓
|-
| Field emitter arrays
|colspan="2"| e− ↓
|-
| Hollow cathodes
| e− ↓
| e− ↑
|}

For use in EDT system modeling, each of the passive electron collection and emission theory models has been verified by reproducing previously published equations and results. These plots include: orbital-motion-limited theory, ram collection, thermal collection, photoemission, secondary electron emission, and secondary ion emission. Electrodynamic tether system fundamentals In order to integrate all the most recent electron emitters, collectors, and theory into a single model, the EDT system must first be defined and derived. Once this is accomplished it will be possible to apply this theory toward determining optimizations of system attributes. There are a number of derivations that solve for the potentials and currents involved in an EDT system numerically. The derivation and numerical methodology of a full EDT system that includes a bare tether section, an insulating conducting tether, electron (and ion) endbody emitters, and passive electron collection is described. This is followed by the simplified, all-insulated tether model. Special EDT phenomena and verification of the EDT system model using experimental mission data will then be discussed. Bare tether system derivation An important note concerning an EDT derivation pertains to the celestial body which the tether system orbits. For practicality, Earth will be used as the body that is orbited; however, this theory applies to any celestial body with an ionosphere and a magnetic field. The coordinates are the first thing that must be identified. For the purposes of this derivation, the x- and y-axes are defined as the east-west and north-south directions with respect to the Earth's surface, respectively. The z-axis is defined as up-down from the Earth's center, as seen in the figure below. The parameters – magnetic field B, tether length L, and the orbital velocity v_orb – are vectors that can be expressed in terms of this coordinate system, as in the following equations: B = B_x x̂ + B_y ŷ + B_z ẑ (the magnetic field vector), L = L_x x̂ + L_y ŷ + L_z ẑ (the tether position vector), and v_orb = v_x x̂ + v_y ŷ + v_z ẑ (the orbital velocity vector). The components of the magnetic field can be obtained directly from the International Geomagnetic Reference Field (IGRF) model.
This model is compiled from a collaborative effort between magnetic field modelers and the institutes involved in collecting and disseminating magnetic field data from satellites and from observatories and surveys around the world. For this derivation, it is assumed that the magnetic field lines are all the same angle throughout the length of the tether, and that the tether is rigid. Realistically, the transverse electrodynamic forces cause the tether to bow and to swing away from the local vertical. Gravity gradient forces then produce a restoring force that pulls the tether back towards the local vertical; however, this results in a pendulum-like motion (Gravity gradient forces also result in pendulous motions without ED forces). The B direction changes as the tether orbits the Earth, and thus the direction and magnitude of the ED forces also change. This pendulum motion can develop into complex librations in both the in-plane and out-of-plane directions. Then, due to coupling between the in-plane motion and longitudinal elastic oscillations, as well as coupling between in-plane and out-of-plane motions, an electrodynamic tether operated at a constant current can continually add energy to the libration motions. This effect then has a chance to cause the libration amplitudes to grow and eventually cause wild oscillations, including one such as the 'skip-rope effect', but that is beyond the scope of this derivation. In a non-rotating EDT system (A rotating system, called Momentum Exchange Electrodynamic Reboost [MXER]), the tether is predominantly in the z-direction due to the natural gravity gradient alignment with the Earth. Derivations The following derivation will describe the exact solution to the system accounting for all vector quantities involved, and then a second solution with the nominal condition where the magnetic field, the orbital velocity, and the tether orientation are all perpendicular to one another. The final solution of the nominal case is solved for in terms of just the electron density, n_e, the tether resistance per unit length, R_t, and the power of the high voltage power supply, P_hvps. The below figure describes a typical EDT system in a series bias grounded gate configuration (further description of the various types of configurations analyzed have been presented) with a blow-up of an infinitesimal section of bare tether. This figure is symmetrically set up so either end can be used as the anode. This tether system is symmetrical because rotating tether systems will need to use both ends as anodes and cathodes at some point in its rotation. The V_hvps will only be used in the cathode end of the EDT system, and is turned off otherwise. In-plane and out-of-plane direction is determined by the orbital velocity vector of the system. An in-plane force is in the direction of travel. It will add or remove energy to the orbit, thereby increasing the altitude by changing the orbit into an elliptical one. An out-of-plane force is in the direction perpendicular to the plane of travel, which causes a change in inclination. This will be explained in the following section. To calculate the in-plane and out-of-plane directions, the components of the velocity and magnetic field vectors must be obtained and the force values calculated. The component of the force in the direction of travel will serve to enhance the orbit raising capabilities, while the out-of-plane component of thrust will alter the inclination. 
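To make the geometry concrete, the sketch below evaluates the motional EMF (v_orb × B)·L along a tether and splits the resulting electrodynamic force I L × B into in-plane and out-of-plane components. The coordinate convention (x east, y north, z radially up) follows the text, but the numerical values of field strength, orbital speed, tether length, and current are representative assumptions rather than data from the article.

```python
import numpy as np

# Coordinate convention from the text: x = east, y = north, z = radially up.
B     = np.array([0.5e-5, 2.0e-5, -1.5e-5])  # magnetic field, tesla (assumed values)
v_orb = np.array([7500.0, 0.0, 0.0])         # orbital velocity, m/s (eastward, assumed)
L_vec = np.array([0.0, 0.0, 5000.0])         # 5 km tether along the local vertical (assumed)
I     = 1.0                                  # tether current, amperes (assumed)

# Motional EMF induced along the tether by the (v x B) field.
emf = np.dot(np.cross(v_orb, B), L_vec)      # volts

# Electrodynamic force on the current-carrying tether.
force = I * np.cross(L_vec, B)               # newtons

v_hat = v_orb / np.linalg.norm(v_orb)
in_plane = np.dot(force, v_hat)              # along the velocity: raises or lowers the orbit
out_of_plane = force - in_plane * v_hat      # remainder: changes the inclination

print(f"EMF ~ {emf:.0f} V, in-plane force {in_plane:.3f} N, "
      f"out-of-plane force {np.linalg.norm(out_of_plane):.3f} N")
```

With these assumed numbers a 5 km vertical tether sees several hundred volts of induced EMF, and the sign of the in-plane component shows directly whether the system is boosting or deboosting the orbit.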
In the below figure, the magnetic field vector is solely in the north (or y-axis) direction, and the resulting forces on an orbit, with some inclination, can be seen. An orbit with no inclination would have all the thrust in the in-plane direction. There has been work conducted to stabilize the librations of the tether system to prevent misalignment of the tether with the gravity gradient. The below figure displays the drag effects an EDT system will encounter for a typical orbit. The in-plane angle, α_ip, and out-of-plane angle, α_op, can be reduced by increasing the endmass of the system, or by employing feedback technology. Any deviations in the gravity alignment must be understood, and accounted for in the system design. Interstellar travel An application of the EDT system has been considered and researched for interstellar travel by using the local interstellar medium of the Local Bubble. It has been found to be feasible to use the EDT system to supply on-board power given a crew of 50 with a requirement of 12 kilowatts per person. Energy generation is achieved at the expense of kinetic energy of the spacecraft. In reverse the EDT system could be used for acceleration. However, this has been found to be ineffective. Thrustless turning using the EDT system is possible to allow for course correction and rendezvous in interstellar space. It will not, however, allow rapid thrustless circling to allow a starship to re-enter a power beam or make numerous solar passes due to an extremely large turning radius of 3.7*1013 km (~3.7 lightyears). See also STARS-II HTV-6 Tether propulsion Earth's magnetic field Tether satellite Atmospheric electricity STS-75 Magnetic sail Electric sail Spacecraft propulsion References General information Cosmo, M.L., and Lorenzini, E.C., "Tethers in Space Handbook," NASA Marchall Space Flight Center, 1997, pp. 274–1-274. Mariani, F., Candidi, M., Orsini, S., "Current Flow Through High-Voltage Sheaths Observer by the TEMAG Experiment During TSS-1R," Geophysical Research Letters, Vol. 25, No. 4, 1998, pp. 425–428. Citations Further reading Dobrowolny, M. (1979). Wave and particle phenomena induced by an electrodynamic tether. SAO special report, 388. Cambridge, Mass: Smithsonian Institution Astrophysical Observatory. Williamson, P. R. (1986). High voltage characteristics of the electrodynamic tether and the generation of power and propulsion final report. [NASA contractor report], NASA CR-178949. Washington, DC: National Aeronautics and Space Administration. External links Related patents , "Space station and system for operating same". , "Ionospheric battery". , "Satellite connected by means of a long tether to a powered spacecraft ". , "Electrodynamic Tether And Method of Use". Publications Cosmo, M. L., and E. C. Lorenzini, "Tethers in Space Handbook" (3rd ed). Prepared for NASA/MSFC by Smithsonian Astrophysical Observatory, Cambridge, MA, December 1997. (PDF) Other articles "Electrodynamic Tethers ". Tethers.com. "Shuttle Electrodynamic Tether System (SETS)". Enrico Lorenzini and Juan Sanmartín, "Electrodynamic Tethers in Space; By exploiting fundamental physical laws, tethers may provide low-cost electrical power, drag, thrust, and artificial gravity for spaceflight". Scientific American, August 2004. "Tethers". Astronomy Study Guide, BookRags. David P. Stern, "The Space Tether Experiment". 25 November 2001. Spacecraft propulsion Spacecraft components Electrodynamics Magnetic propulsion devices Electrical generators
Electrodynamic tether
[ "Physics", "Mathematics", "Technology" ]
9,444
[ "Electrical generators", "Machines", "Physical systems", "Electrodynamics", "Dynamical systems" ]
2,860,674
https://en.wikipedia.org/wiki/Q%20value%20%28nuclear%20science%29
In nuclear physics and chemistry, the Q value for a nuclear reaction is the amount of energy absorbed or released during the reaction. The value relates to the enthalpy of a chemical reaction or the energy of radioactive decay products. It can be determined from the masses of reactants and products. Q values affect reaction rates. In general, the larger the positive Q value for the reaction, the faster the reaction proceeds, and the more likely the reaction is to "favor" the products:

Q = (m_reactants − m_products) c² = (m_reactants − m_products) × 931.5 MeV,

where the masses are in atomic mass units. Also, both m_reactants and m_products are the sums of the reactant and product masses respectively. Definition The conservation of energy between the initial and final energy of a nuclear process enables the general definition of Q based on the mass–energy equivalence. For any radioactive particle decay, the kinetic energy difference will be given by:

Q = K_final − K_initial = (m_initial − m_final) c²,

where K denotes the kinetic energy of the mass m. A reaction with a positive Q value is exothermic, i.e. has a net release of energy, since the kinetic energy of the final state is greater than the kinetic energy of the initial state. A reaction with a negative Q value is endothermic, i.e. requires a net energy input, since the kinetic energy of the final state is less than the kinetic energy of the initial state. Observe that a chemical reaction is exothermic when it has a negative enthalpy of reaction, in contrast to a positive Q value in a nuclear reaction. The Q value can also be expressed in terms of the mass excess Δ of the nuclear species as:

Q = ΣΔ_reactants − ΣΔ_products.

Proof The mass of a nucleus can be written as m = A m_u + Δ, where A is the mass number (sum of number of protons and neutrons), Δ is the mass excess, and m_u ≈ 931.494 MeV/c² is the atomic mass unit. Note that the count of nucleons is conserved in a nuclear reaction. Hence, ΣA_reactants = ΣA_products and Q = ΣΔ_reactants − ΣΔ_products. Applications Chemical Q values are measured in calorimetry. Exothermic chemical reactions tend to be more spontaneous and can emit light or heat, resulting in runaway feedback (i.e. explosions). Q values are also featured in particle physics. For example, Sargent's rule states that weak reaction rates are proportional to Q⁵. The Q value is the kinetic energy released in the decay at rest. For neutron decay, some mass disappears as neutrons convert to a proton, electron and antineutrino:

Q = (m_n − m_p − m_ν − m_e) c² = T_p + T_e + T_ν,

where m_n is the mass of the neutron, m_p is the mass of the proton, m_ν is the mass of the electron antineutrino, and m_e is the mass of the electron; the T are the corresponding kinetic energies. The neutron has no initial kinetic energy since it is at rest. In beta decay, a typical Q is around 1 MeV. The decay energy is divided among the products in a continuous distribution for more than two products. Measuring this spectrum allows one to find the mass of a product. Experiments are studying emission spectra to search for neutrinoless decay and neutrino mass; this is the principle of the ongoing KATRIN experiment. See also Binding energy Calorimeter (particle physics) Decay energy Fusion energy gain factor Pandemonium effect Notes and references External links – interactive query form for Q-value of requested decay. – demonstrates simply the mass-energy equivalence. Nuclear physics Energy
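As a worked example of the neutron-decay Q value discussed above, the sketch below evaluates Q from rest masses expressed in MeV/c². The mass values are standard reference numbers (not taken from the article), and the antineutrino mass is treated as negligible on this scale.

```python
# Rest masses in MeV/c^2 (standard reference values)
M_NEUTRON  = 939.565
M_PROTON   = 938.272
M_ELECTRON = 0.511
M_NEUTRINO = 0.0          # negligible compared with the other masses

# Q = (m_n - m_p - m_nu - m_e) c^2, shared as kinetic energy among the products
q_value = M_NEUTRON - M_PROTON - M_ELECTRON - M_NEUTRINO
print(f"Q(free neutron decay) ~ {q_value:.3f} MeV")   # ~0.782 MeV
```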
Q value (nuclear science)
[ "Physics" ]
639
[ "Energy (physics)", "Energy", "Physical quantities", "Nuclear physics" ]
2,861,044
https://en.wikipedia.org/wiki/Land%20description
In surveying and property law, a land description or legal description is a written statement that delineates the boundaries of a piece of real property. In the written transfer of real property, it is universally required that the instrument of conveyance (deed) include a written description of the property. Legal land description Canada In many parts of Canada the original subdivision of crown land was done by township surveys. Different sizes of townships have been used (e.g. Québec's irregularly shaped cantons and Ontario's concession townships), but all were designed to provide rectangular farm lots within a defined rural community. The survey of a township was essentially a subdivision survey, because the plan of the township was registered and the lots (sometimes called sections) were numbered. The description of a whole lot for legal purposes is complete in the identification of the township and the lot within the township. A legal land description in Manitoba, Saskatchewan, and Alberta would be defined by the Dominion Land Survey. For example, the village of Yarbo, Saskatchewan is located at the legal land description of SE-12-20-33-W1, which would be the South East quarter of Section 12, Township 20, Range 33, West of the first meridian. A legal land description in British Columbia Fraser Valley Lower Mainland (Metro Vancouver) is defined by land surveys based out of New Westminster. Land in New Westminster Townsite corresponding to present day New Westminster is labelled as such while land outside the townsite is labelled as being in New Westminster District. The Main subdivisions are District Lots that represent parcel sales to settlers mostly in the time from 1860-1890. District lots are numbered from DL1 to over DL3,000. These District Lots are still represented on the cadastral maps of British Columbia. Later these lots would be subdivided to form blocks and residential lots. A typical address would thus indicate a lot number, a block range, and the original District Lot from which it was subdivided. References External links Cadastral Map of British Columbia showing District Lots Mouland D.J. (1987) Land Descriptions. In: Brinker R.C., Minnick R. (eds) The Surveying Handbook. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-1188-2_30 Surveying Real estate in Canada Real property law
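For illustration, the short sketch below splits a Dominion Land Survey style description such as the SE-12-20-33-W1 example above into its quarter, section, township, range, and meridian fields. The parsing rule is a simplified assumption for the hyphenated quarter-section form only; real legal descriptions include many variations this sketch does not handle.

```python
QUARTERS = {"NE": "North East", "NW": "North West",
            "SE": "South East", "SW": "South West"}

def parse_dls(description: str) -> dict:
    """Parse a simplified DLS legal land description, e.g. 'SE-12-20-33-W1'."""
    quarter, section, township, range_, meridian = description.upper().split("-")
    return {
        "quarter":  QUARTERS[quarter],
        "section":  int(section),
        "township": int(township),
        "range":    int(range_),
        "meridian": meridian,          # e.g. 'W1' = west of the first meridian
    }

print(parse_dls("SE-12-20-33-W1"))
# {'quarter': 'South East', 'section': 12, 'township': 20, 'range': 33, 'meridian': 'W1'}
```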
Land description
[ "Engineering" ]
483
[ "Surveying", "Civil engineering" ]
2,861,047
https://en.wikipedia.org/wiki/Variational%20methods%20in%20general%20relativity
Variational methods in general relativity refers to various mathematical techniques that employ the use of variational calculus in Einstein's theory of general relativity. The most commonly used tools are Lagrangians and Hamiltonians, which are used to derive the Einstein field equations. Lagrangian methods The equations of motion in physical theories can often be derived from an object called the Lagrangian. In classical mechanics, this object is usually of the form, 'kinetic energy − potential energy'. In general, the Lagrangian is that function which, when integrated over time (or over spacetime, for a field theory), produces the action functional. David Hilbert gave an early and classic formulation of the equations in Einstein's general relativity. This used the functional now called the Einstein-Hilbert action. See also Palatini action Plebanski action MacDowell–Mansouri action Freidel–Starodubtsev action Mathematics of general relativity Fermat's and energy variation principles in field theory References
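For concreteness, the Einstein-Hilbert action mentioned above can be written (in one common convention, with κ = 8πG/c⁴ and metric signature fixed by that convention) as the integral of the Ricci scalar over spacetime; varying it with respect to the inverse metric yields the vacuum Einstein field equations.

```latex
S_{\mathrm{EH}} \;=\; \frac{1}{2\kappa}\int R\,\sqrt{-g}\,\mathrm{d}^4x ,
\qquad
\frac{\delta S_{\mathrm{EH}}}{\delta g^{\mu\nu}} = 0
\;\;\Longrightarrow\;\;
R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} = 0 .
```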
Variational methods in general relativity
[ "Physics" ]
195
[ "Relativity stubs", "Theory of relativity" ]
2,861,460
https://en.wikipedia.org/wiki/Cooling%20curve
A cooling curve is a line graph that represents the change of phase of matter, typically from a gas to a solid or a liquid to a solid. The independent variable (X-axis) is time and the dependent variable (Y-axis) is temperature. Below is an example of a cooling curve used in castings. The initial point of the graph is the starting temperature of the matter, here noted as the "pouring temperature". When the phase change occurs, there is a "thermal arrest"; that is, the temperature stays constant. This is because the matter has more internal energy as a liquid or gas than in the state to which it is cooling, and this excess must be released before the temperature can fall further. The amount of energy required for a phase change is known as latent heat. The "cooling rate" is the slope of the cooling curve at any point. Alloys solidify over a range of temperatures rather than at a single melting point, as shown in the figure above. First, the molten alloy reaches the liquidus temperature, where the freezing range begins; at the solidus temperature, the alloy is completely solid. References Phase transitions Thermodynamics
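A minimal numerical sketch of such a curve is given below: the melt cools following Newton's law of cooling, but the temperature is held at an assumed freezing point until an assumed amount of latent heat has been removed, which reproduces the thermal arrest. All numbers are illustrative and do not represent any particular material.

```python
def cooling_curve(t_pour=700.0, t_freeze=660.0, t_ambient=25.0,
                  latent_heat=400.0, heat_capacity=1.0, k=0.01,
                  dt=1.0, steps=1500):
    """Return (time, temperature) points for a simple single-arrest cooling curve."""
    temps, t = [], t_pour
    latent_left = latent_heat                  # heat still to remove during the arrest
    for step in range(steps):
        heat_out = k * (t - t_ambient) * dt    # Newton's law of cooling
        if t <= t_freeze and latent_left > 0:
            latent_left -= heat_out            # thermal arrest: temperature stays constant
        else:
            t -= heat_out / heat_capacity      # ordinary cooling; slope = cooling rate
        temps.append((step * dt, t))
    return temps

curve = cooling_curve()
print(curve[0], curve[500], curve[-1])
```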
Cooling curve
[ "Physics", "Chemistry", "Mathematics" ]
220
[ "Thermodynamics stubs", "Statistical mechanics stubs", "Physical phenomena", "Phase transitions", "Critical phenomena", "Phases of matter", "Thermodynamics", "Statistical mechanics", "Physical chemistry stubs", "Matter", "Dynamical systems" ]
2,862,625
https://en.wikipedia.org/wiki/Fouling
Fouling is the accumulation of unwanted material on solid surfaces. The fouling materials can consist of either living organisms (biofouling, organic) or a non-living substance (inorganic). Fouling is usually distinguished from other surface-growth phenomena in that it occurs on a surface of a component, system, or plant performing a defined and useful function and that the fouling process impedes or interferes with this function. Other terms used in the literature to describe fouling include deposit formation, encrustation, crudding, deposition, scaling, scale formation, slagging, and sludge formation. The last six terms have a more narrow meaning than fouling within the scope of the fouling science and technology, and they also have meanings outside of this scope; therefore, they should be used with caution. Fouling phenomena are common and diverse, ranging from fouling of ship hulls, natural surfaces in the marine environment (marine fouling), fouling of heat-transfer components through ingredients contained in cooling water or gases, and even the development of plaque or calculus on teeth or deposits on solar panels on Mars, among other examples. This article is primarily devoted to the fouling of industrial heat exchangers, although the same theory is generally applicable to other varieties of fouling. In cooling technology and other technical fields, a distinction is made between macro fouling and micro fouling. Of the two, micro fouling is the one that is usually more difficult to prevent and therefore more important. Components subject to fouling Examples of components that may be subject to fouling and the corresponding effects of fouling: Heat exchanger surfaces – reduces thermal efficiency, decreases heat flux, increases temperature on the hot side, decreases temperature on the cold side, induces under-deposit corrosion, increases use of cooling water; Piping, flow channels – reduces flow, increases pressure drop, increases upstream pressure, increases energy expenditure, may cause flow oscillations, slugging in two-phase flow, cavitation; may increase flow velocity elsewhere, may induce vibrations, may cause flow blockage; Ship hulls – creates additional drag, increases fuel usage, reduces maximum speed; Turbines – reduces efficiency, increases probability of failure; Solar panels – decreases the electrical power generated; Reverse osmosis membranes – increases pressure drop, increases energy expenditure, reduces flux, membrane failure (in severe cases); Electrical heating elements – increases temperature of the element, increases corrosion, reduces lifespan; Firearm barrels - increases chamber pressure; hampers loading for muzzleloaders Nuclear fuel in pressurized water reactors – axial offset anomaly, may need to de-rate the power plant; Injection/spray nozzles (e.g., a nozzle spraying a fuel into a furnace) – incorrect amount injected, malformed jet, component inefficiency, component failure; Venturi tubes, orifice plates – inaccurate or incorrect measurement of flow rate; Pitot tubes in airplanes – inaccurate or incorrect indication of airplane speed; Spark plug electrodes in cars – engine misfiring; Production zone of petroleum reservoirs and oil wells – decreased petroleum production with time; plugging; in some cases complete stoppage of flow in a matter of days; Teeth – promotes tooth or gum disease, decreases aesthetics; Living organisms – deposition of excess minerals (e.g., calcium, iron, copper) in tissues is (sometimes controversially) linked to aging/senescence. 
Macro fouling Macro fouling is caused by coarse matter of either biological or inorganic origin, for example industrially produced refuse. Such matter enters into the cooling water circuit through the cooling water pumps from sources like the open sea, rivers or lakes. In closed circuits, like cooling towers, the ingress of macro fouling into the cooling tower basin is possible through open canals or by the wind. Sometimes, parts of the cooling tower internals detach themselves and are carried into the cooling water circuit. Such substances can foul the surfaces of heat exchangers and may cause deterioration of the relevant heat transfer coefficient. They may also create flow blockages, redistribute the flow inside the components, or cause fretting damage. Examples Manmade refuse; Detached internal parts of components; Tools and other "foreign objects" accidentally left after maintenance; Algae; Mussels; Leaves, parts of plants up to entire trunks. Micro fouling As to micro fouling, distinctions are made between: Scaling or precipitation fouling, as crystallization of solid salts, oxides, and hydroxides from water solutions (e.g., calcium carbonate or calcium sulfate) Particulate fouling, i.e., accumulation of particles, typically colloidal particles, on a surface Corrosion fouling, i.e., in-situ growth of corrosion deposits, for example, magnetite on carbon steel surfaces Chemical reaction fouling, for example, decomposition or polymerization of organic matter on heating surfaces Solidification fouling - when components of the flowing fluid with a high-melting point freeze onto a subcooled surface Biofouling, like settlements of bacteria and algae Composite fouling, whereby fouling involves more than one foulant or fouling mechanism Precipitation fouling Scaling or precipitation fouling involves crystallization of solid salts, oxides, and hydroxides from solutions. These are most often water solutions, but non-aqueous precipitation fouling is also known. Precipitation fouling is a very common problem in boilers and heat exchangers operating with hard water and often results in limescale. Through changes in temperature, or solvent evaporation or degasification, the concentration of salts may exceed the saturation, leading to a precipitation of solids (usually crystals). As an example, for the equilibrium between the readily soluble calcium bicarbonate - always prevailing in natural water - and the poorly soluble calcium carbonate, the following chemical equation may be written:

Ca(HCO₃)₂ (aqueous) → CaCO₃↓ + CO₂↑ + H₂O

The calcium carbonate that forms through this reaction precipitates. Due to the temperature dependence of the reaction, and increasing volatility of CO2 with increasing temperature, the scaling is higher at the hotter outlet of the heat exchanger than at the cooler inlet. In general, the dependence of the salt solubility on temperature or presence of evaporation will often be the driving force for precipitation fouling. The important distinction is between salts with "normal" or "retrograde" dependence of solubility on temperature. Salts with the "normal" solubility increase their solubility with increasing temperature and thus will foul the cooling surfaces. Salts with "inverse" or "retrograde" solubility will foul the heating surfaces. An example of the temperature dependence of solubility is shown in the figure. Calcium sulfate is a common precipitation foulant of heating surfaces due to its retrograde solubility.
Precipitation fouling can also occur in the absence of heating or vaporization. For example, calcium sulfate decreases its solubility with decreasing pressure. This can lead to precipitation fouling of reservoirs and wells in oil fields, decreasing their productivity with time. Fouling of membranes in reverse osmosis systems can occur due to differential solubility of barium sulfate in solutions of different ionic strength. Similarly, precipitation fouling can occur because of solubility changes induced by other factors, e.g., liquid flashing, liquid degassing, redox potential changes, or mixing of incompatible fluid streams. The following lists some of the industrially common phases of precipitation fouling deposits observed in practice to form from aqueous solutions: Calcium carbonate (calcite, aragonite usually at t > ~50 °C, or rarely vaterite); Calcium sulfate (anhydrite, hemihydrate, gypsum); Calcium oxalate (e.g., beerstone); Barium sulfate (barite); Magnesium hydroxide (brucite); magnesium oxide (periclase); Silicates (serpentine, acmite, gyrolite, gehlenite, amorphous silica, quartz, cristobalite, pectolite, xonotlite); Aluminium oxide hydroxides (boehmite, gibbsite, diaspore, corundum); Aluminosilicates (analcite, cancrinite, noselite); Copper (metallic copper, cuprite, tenorite); Phosphates (hydroxyapatite); Magnetite or nickel ferrite (NiFe₂O₄) from extremely pure, low-iron water. The deposition rate by precipitation is often described by the following equations:

Transport: dm/dt = k_t (c_b − c_i)

Surface crystallisation: dm/dt = k_r (c_i − c_e)^n₁

Overall: dm/dt = k_d (c_b − c_e)^n₂

where:
m - mass of the material (per unit surface area), kg/m²
t - time, s
c_b - concentration of the substance in the bulk of the fluid, kg/m³
c_i - concentration of the substance at the interface, kg/m³
c_e - equilibrium concentration of the substance at the conditions of the interface, kg/m³
n₁, n₂ - order of reaction for the crystallization reaction and the overall deposition process, respectively, dimensionless
k_t, k_r, k_d - kinetic rate constants for the transport, the surface reaction, and the overall deposition reaction, respectively; with the dimension of m/s (when n₁ = n₂ = 1)

Particulate fouling Fouling by particles suspended in water ("crud") or in gas progresses by a mechanism different than precipitation fouling. This process is usually most important for colloidal particles, i.e., particles smaller than about 1 μm in at least one dimension (but which are much larger than atomic dimensions). Particles are transported to the surface by a number of mechanisms and there they can attach themselves, e.g., by flocculation or coagulation. Note that the attachment of colloidal particles typically involves electrical forces and thus the particle behaviour defies the experience from the macroscopic world. The probability of attachment is sometimes referred to as the "sticking probability", P:

P = k_d / k_t

where k_d and k_t are the kinetic rate constants for deposition and transport, respectively. The value of P for colloidal particles is a function of both the surface chemistry, geometry, and the local thermohydraulic conditions. An alternative to using the sticking probability is to use a kinetic attachment rate constant k_a, assuming the first-order reaction

dm/dt = k_a c_i

and then the transport and attachment kinetic coefficients are combined as two processes occurring in series:

dm/dt = k_d c_b, with 1/k_d = 1/k_t + 1/k_a

where:
dm/dt - the rate of the deposition by particles, kg m⁻² s⁻¹
k_a, k_t and k_d - the kinetic rate constants for deposition, m/s
c_i and c_b - the concentration of the particle foulant at the interface and in the bulk fluid, respectively, kg m⁻³
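The "two processes in series" idea can be made concrete with the small sketch below, which combines assumed transport and attachment rate constants into an overall deposition rate constant and evaluates the resulting particle deposition flux for an assumed bulk concentration. All numbers are illustrative only.

```python
def overall_rate_constant(k_transport, k_attachment):
    """Transport and attachment acting in series: 1/k_d = 1/k_t + 1/k_a."""
    return 1.0 / (1.0 / k_transport + 1.0 / k_attachment)

# Illustrative rate constants (m/s) and bulk particle concentration (kg/m^3)
k_t, k_a, c_bulk = 2.0e-5, 5.0e-6, 0.010

k_d = overall_rate_constant(k_t, k_a)
flux = k_d * c_bulk                      # deposition rate, kg m^-2 s^-1
print(f"k_d = {k_d:.2e} m/s, deposition flux = {flux:.2e} kg/(m^2 s)")
```

Whichever of the two steps is slower dominates the combined constant, so improving mixing (raising k_t) only helps while transport, not attachment, is the limiting step.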
Being essentially a surface chemistry phenomenon, this fouling mechanism can be very sensitive to factors that affect colloidal stability, e.g., zeta potential. A maximum fouling rate is usually observed when the fouling particles and the substrate exhibit opposite electrical charge, or near the point of zero charge of either of them. Particles larger than those of colloidal dimensions may also foul e.g., by sedimentation ("sedimentation fouling") or straining in small-size openings. With time, the resulting surface deposit may harden through processes collectively known as "deposit consolidation" or, colloquially, "aging". The common particulate fouling deposits formed from aqueous suspensions include: iron oxides and iron oxyhydroxides (magnetite, hematite, lepidocrocite, maghemite, goethite); Sedimentation fouling by silt and other relatively coarse suspended matter. Fouling by particles from gas aerosols is also of industrial significance. The particles can be either solid or liquid. The common examples can be fouling by flue gases, or fouling of air-cooled components by dust in air. The mechanisms are discussed in article on aerosol deposition. Corrosion fouling Corrosion deposits are created in-situ by the corrosion of the substrate. They are distinguished from fouling deposits, which form from material originating ex-situ. Corrosion deposits should not be confused with fouling deposits formed by ex-situ generated corrosion products. Corrosion deposits will normally have composition related to the composition of the substrate. Also, the geometry of the metal-oxide and oxide-fluid interfaces may allow practical distinction between the corrosion and fouling deposits. An example of corrosion fouling can be formation of an iron oxide or oxyhydroxide deposit from corrosion of the carbon steel underneath. Corrosion fouling should not be confused with fouling corrosion, i.e., any of the types of corrosion that may be induced by fouling. Chemical reaction fouling Chemical reactions may occur on contact of the chemical species in the process fluid with heat transfer surfaces. In such cases, the metallic surface sometimes acts as a catalyst. For example, corrosion and polymerization occurs in cooling water for the chemical industry which has a minor content of hydrocarbons. Systems in petroleum processing are prone to polymerization of olefins or deposition of heavy fractions (asphaltenes, waxes, etc.). High tube wall temperatures may lead to carbonizing of organic matter. The food industry, for example milk processing, also experiences fouling problems by chemical reactions. Fouling through an ionic reaction with an evolution of an inorganic solid is commonly classified as precipitation fouling (not chemical reaction fouling). Solidification fouling Solidification fouling occurs when a component of the flowing fluid "freezes" onto a surface forming a solid fouling deposit. Examples may include solidification of wax (with a high melting point) from a hydrocarbon solution, or of molten ash (carried in a furnace exhaust gas) onto a heat exchanger surface. The surface needs to have a temperature below a certain threshold; therefore, it is said to be subcooled in respect to the solidification point of the foulant. Biofouling Biofouling or biological fouling is the undesirable accumulation of micro-organisms, algae and diatoms, plants, and animals on surfaces, such as ships and submarine hulls, or piping and reservoirs with untreated water. 
This can be accompanied by microbiologically influenced corrosion (MIC). Bacteria can form biofilms or slimes. Thus the organisms can aggregate on surfaces using colloidal hydrogels of water and extracellular polymeric substances (EPS) (polysaccharides, lipids, nucleic acids, etc.). The biofilm structure is usually complex. Bacterial fouling can occur under either aerobic (with oxygen dissolved in water) or anaerobic (no oxygen) conditions. In practice, aerobic bacteria prefer open systems, when both oxygen and nutrients are constantly delivered, often in warm and sunlit environments. Anaerobic fouling more often occurs in closed systems when sufficient nutrients are present. Examples may include sulfate-reducing bacteria (or sulfur-reducing bacteria), which produce sulfide and often cause corrosion of ferrous metals (and other alloys). Sulfide-oxidizing bacteria (e.g., Acidithiobacillus), on the other hand, can produce sulfuric acid, and can be involved in corrosion of concrete. Zebra mussels serve as an example of larger animals that have caused widespread fouling in North America. Composite fouling Composite fouling is common. This type of fouling involves more than one foulant or more than one fouling mechanism working simultaneously. The multiple foulants or mechanisms may interact with each other resulting in a synergistic fouling which is not a simple arithmetic sum of the individual components. Fouling on Mars NASA Mars Exploration Rovers (Spirit and Opportunity) experienced (presumably) abiotic fouling of solar panels by dust particles from the Martian atmosphere. Some of the deposits subsequently spontaneously cleaned off. This illustrates the universal nature of the fouling phenomena. Quantification of fouling The most straightforward way to quantify fairly uniform fouling is by stating the average deposit surface loading, i.e., kg of deposit per m2 of surface area. The fouling rate will then be expressed in kg/m2s, and it is obtained by dividing the deposit surface loading by the effective operating time. The normalized fouling rate (also in kg/m2s) will additionally account for the concentration of the foulant in the process fluid (kg/kg) during preceding operations, and is useful for comparison of fouling rates between different systems. It is obtained by dividing the fouling rate by the foulant concentration. The fouling rate constant (m/s) can be obtained by dividing the normalized fouling rate by the mass density of the process fluid (kg/m3). Deposit thickness (μm) and porosity (%) are also often used for description of fouling amount. The relative reduction of diameter of piping or increase of the surface roughness can be of particular interest when the impact of fouling on pressure drop is of interest. In heat transfer equipment, where the primary concern is often the effect of fouling on heat transfer, fouling can be quantified by the increase of the resistance to the flow of heat (m2K/W) due to fouling (termed "fouling resistance"), or by development of heat transfer coefficient (W/m2K) with time. If under-deposit or crevice corrosion is of primary concern, it is important to note non-uniformity of deposit thickness (e.g., deposit waviness), localized fouling, packing of confined regions with deposits, creation of occlusions, "crevices", "deposit tubercles", or sludge piles. Such deposit structures can create environment for underdeposit corrosion of the substrate material, e.g., intergranular attack, pitting, stress corrosion cracking, or localized wastage. 
Porosity and permeability of the deposits will likely influence the probability of underdeposit corrosion. Deposit composition can also be important - even minor components of the deposits can sometimes cause severe corrosion of the underlying metal (e.g., vanadium in deposits of fired boilers causing hot corrosion). There is no general rule on how much deposit can be tolerated, it depends on the system. In many cases, a deposit even a few micrometers thick can be troublesome. A deposit in a millimeter-range thickness will be of concern in almost any application. Progress of fouling with time Deposit on a surface does not always develop steadily with time. The following fouling scenarios can be distinguished, depending on the nature of the system and the local thermohydraulic conditions at the surface: Induction period - Sometimes, a near-nil fouling rate is observed when the surface is new or very clean. This is often observed in biofouling and precipitation fouling. After the "induction period", the fouling rate increases. "Negative" fouling - This can occur when fouling rate is quantified by monitoring heat transfer. Relatively small amounts of deposit can improve heat transfer, relative to clean surface, and give an appearance of "negative" fouling rate and negative total fouling amount. Negative fouling is often observed under nucleate-boiling heat-transfer conditions (deposit improves bubble nucleation) or forced-convection (if the deposit increases the surface roughness and the surface is no longer "hydraulically smooth"). After the initial period of "surface roughness control", the fouling rate usually becomes strongly positive. Linear fouling - The fouling rate can be steady with time. This is a common case. Falling fouling - In this scenario, the fouling rate decreases with time, but never drops to zero. The deposit thickness does not achieve a constant value. The progress of fouling can be often described by two numbers: the initial fouling rate (a tangent to the fouling curve at zero deposit loading or zero time) and the fouling rate after a long period of time (an oblique asymptote to the fouling curve). Asymptotic fouling - Here, the fouling rate decreases with time, until it finally reaches zero. At this point, the deposit thickness remains constant with time (a horizontal asymptote). This is often the case for relatively soft or poorly adherent deposits in areas of fast flow. The asymptote is usually interpreted as the deposit loading at which the deposition rate equals the deposit removal rate. Accelerating fouling - In this scenario, the fouling rate increases with time; the rate of deposit buildup accelerates with time (perhaps until it becomes transport limited). Mechanistically, this scenario can develop when fouling increases the surface roughness, or when the deposit surface exhibits higher chemical propensity to fouling than the pure underlying metal. Seesaw fouling - Here, fouling loading generally increases with time (often assuming a generally linear or falling rate), but, when looked at in more detail, the fouling progress is periodically interrupted and takes the form of sawtooth curve. The periodic sharp variations in the apparent fouling amount often correspond to the moments of system shutdowns, startups or other transients in operation. The periodic variations are often interpreted as periodic removal of some of the deposit (perhaps deposit re-suspension due to pressure pulses, spalling due thermal stresses, or exfoliation due to redox transients). 
Steam blanketing has been postulated to occur between the partially spalled deposits and the heat transfer surface. However, other reasons are possible, e.g., trapping of air inside the surface deposits during shutdowns, or inaccuracy of temperature measurements during transients ("temperature streaming"). Fouling modelling Fouling of a system can be modelled as consisting of several steps: Generation or ingress of the species that causes fouling ("foulant sourcing"); Foulant transport with the stream of the process fluid (most often by advection); Foulant transport from the bulk of the process fluid to the fouling surface. This transport is often by molecular or turbulent-eddy diffusion, but may also occur by inertial coasting/impaction, particle interception by the surface (for particles with finite sizes), electrophoresis, thermophoresis, diffusiophoresis, Stefan flow (in condensation and evaporation), sedimentation, Magnus force (acting on rotating particles), thermoelectric effect, and other mechanisms. Induction period, i.e., a near-nil fouling rate at the initial period of fouling (observed only for some fouling mechanisms); Foulant crystallisation on the surface (or attachment of the colloidal particle, or chemical reaction, or bacterial growth); Sometimes fouling autoretardation, i.e., reduction (or potentially enhancement) of crystallisation/attachment rate due to changes in the surface conditions caused by the fouling deposit; Deposit dissolution (or re-entrainment of loosely attached particles); Deposit consolidation on the surface (e.g., through Ostwald ripening or differential solubility in temperature gradient) or cementation, which account for deposit losing its porosity and becoming more tenacious with time; Deposit spalling, erosion wear, or exfoliation. Deposition consists of transport to the surface and subsequent attachment. Deposit removal is either through deposit dissolution, particle re-entrainment, or deposit spalling, erosive wear, or exfoliation. Fouling results from foulant generation, foulant deposition, deposit removal, and deposit consolidation. For the modern model of fouling involving deposition with simultaneous deposit re-entrainment and consolidation, the fouling process can be represented by the following scheme: [ rate of deposit accumulation ] = [ rate of deposition ] - [ rate of re-entrainment of unconsolidated deposit ] [ rate of accumulation of unconsolidated deposit ] = [ rate of deposition ] - [ rate of re-entrainment of unconsolidated deposit ] - [ rate of consolidation of unconsolidated deposit ] Following the above scheme, the basic fouling equations can be written as follows (for steady-state conditions with flow, when concentration remains constant with time):

dm/dt = k_d ρ C_m − λ_r m_r

dm_r/dt = k_d ρ C_m − λ_r m_r − λ_c m_r

where: m is the mass loading of the deposit (consolidated and unconsolidated) on the surface (kg/m²); t is time (s); k_d is the deposition rate constant (m/s); ρ is the fluid density (kg/m³); C_m is the mass fraction of foulant in the fluid (kg/kg); λ_r is the re-entrainment rate constant (1/s); m_r is the mass loading of the removable (i.e., unconsolidated) fraction of the surface deposit (kg/m²); and λ_c is the consolidation rate constant (1/s). This system of equations can be integrated (taking that m = 0 and m_r = 0 at t = 0) to the form:

m_r = (k_d ρ C_m / λ) (1 − e^(−λt))

m = k_d ρ C_m [ (λ_c / λ) t + (λ_r / λ²) (1 − e^(−λt)) ]

where λ = λ_r + λ_c. This model reproduces either linear, falling, or asymptotic fouling, depending on the relative values of k_d, λ_r, and λ_c. The underlying physical picture for this model is that of a two-layer deposit consisting of a consolidated inner layer and a loose unconsolidated outer layer. Such a bi-layer deposit is often observed in practice. The above model simplifies readily to the older model of simultaneous deposition and re-entrainment (which neglects consolidation) when λ_c = 0. In the absence of consolidation, asymptotic fouling is always anticipated by this older model and the fouling progress can be described as:

m = m* (1 − e^(−λ_r t))

where m* is the maximum (asymptotic) mass loading of the deposit on the surface (kg/m²), here equal to k_d ρ C_m / λ_r.
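The closed-form solution above is easy to explore numerically. The sketch below evaluates the deposit loading m(t) for assumed values of the deposition, re-entrainment, and consolidation constants, and shows how the asymptotic value is recovered when consolidation is switched off; all parameter values are illustrative.

```python
import math

def deposit_loading(t, k_d, rho, c_m, lam_r, lam_c):
    """m(t) from the deposition / re-entrainment / consolidation model."""
    lam = lam_r + lam_c
    phi = k_d * rho * c_m                        # deposition flux, kg m^-2 s^-1
    return phi * ((lam_c / lam) * t + (lam_r / lam**2) * (1.0 - math.exp(-lam * t)))

# Illustrative parameters
k_d, rho, c_m = 1.0e-6, 1000.0, 1.0e-6           # m/s, kg/m^3, kg/kg
lam_r, lam_c = 1.0e-5, 2.0e-6                    # 1/s, 1/s

for hours in (1, 10, 100, 1000):
    t = hours * 3600.0
    print(f"{hours:>5} h: m = {deposit_loading(t, k_d, rho, c_m, lam_r, lam_c):.3e} kg/m^2")

# With lam_c = 0 the loading tends to the asymptotic value m* = k_d * rho * c_m / lam_r
print("m* (no consolidation):", k_d * rho * c_m / lam_r, "kg/m^2")
```

Setting λ_c to zero reproduces the purely asymptotic curve, while setting λ_r to zero gives linear growth, matching the limiting behaviours described above.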
Economic and environmental importance of fouling Fouling is ubiquitous and generates tremendous operational losses, not unlike corrosion. For example, one estimate puts the losses due to fouling of heat exchangers in industrialized nations at about 0.25% of their GDP. Another analysis estimated (for 2006) the economic loss due to boiler and turbine fouling in Chinese utilities at 4.68 billion dollars, which is about 0.169% of the country's GDP. The losses initially result from impaired heat transfer, corrosion damage (in particular under-deposit and crevice corrosion), increased pressure drop, flow blockages, flow redistribution inside components, flow instabilities, induced vibrations (possibly leading to other problems, e.g., fatigue), fretting, premature failure of electrical heating elements, and a large number of other often unanticipated problems. In addition, the ecological costs should be (but typically are not) considered. The ecological costs arise from the use of biocides for the avoidance of biofouling, from the increased fuel input to compensate for the reduced output caused by fouling, and from an increased use of cooling water in once-through cooling systems. For example, "normal" fouling at a conventionally fired 500 MW (net electrical power) power station unit accounts for output losses of the steam turbine of 5 MW and more. In a 1,300 MW nuclear power station, typical losses could be 20 MW and up (up to 100% if the station shuts down due to fouling-induced component degradation). In seawater desalination plants, fouling may reduce the gained output ratio by double-digit percentages (the gained output ratio is an equivalent that puts the mass of generated distillate in relation to the steam used in the process). The extra electrical consumption in compressor-operated coolers is also easily in the double-digit percentage range. In addition to the operational costs, the capital cost also increases because the heat exchangers have to be designed in larger sizes to compensate for the heat-transfer loss due to fouling. To the output losses listed above, one needs to add the cost of down-time required to inspect, clean, and repair the components (millions of dollars per day of shutdown in lost revenue in a typical power plant), and the cost of actually doing this maintenance. Finally, fouling is often a root cause of serious degradation problems that may limit the life of components or entire plants. Fouling control The most fundamental and usually preferred method of controlling fouling is to prevent the ingress of the fouling species into the cooling water circuit. In steam power stations and other major industrial installations of water technology, macro fouling is avoided by way of pre-filtration and cooling water debris filters. Some plants employ a foreign-object exclusion program (to eliminate the possibility of inadvertent introduction of unwanted materials, e.g., forgetting tools during maintenance).
Acoustic monitoring is sometimes employed to monitor for fretting by detached parts. In the case of micro fouling, water purification is achieved with extensive methods of water treatment, microfiltration, membrane technology (reverse osmosis, electrodeionization) or ion-exchange resins. The generation of the corrosion products in the water piping systems is often minimized by controlling the pH of the process fluid (typically alkalinization with ammonia, morpholine, ethanolamine or sodium phosphate), control of oxygen dissolved in water (for example, by addition of hydrazine), or addition of corrosion inhibitors. For water systems at relatively low temperatures, the applied biocides may be classified as follows: inorganic chlorine and bromide compounds, chlorine and bromide cleavers, ozone and oxygen cleavers, unoxidizable biocides. One of the most important unoxidizable biocides is a mixture of chloromethyl-isothiazolinone and methyl-isothiazolinone. Also applied are dibrom nitrilopropionamide and quaternary ammonium compounds. For underwater ship hulls bottom paints are applied. Chemical fouling inhibitors can reduce fouling in many systems, mainly by interfering with the crystallization, attachment, or consolidation steps of the fouling process. Examples for water systems are: chelating agents (for example, EDTA), long-chain aliphatic amines or polyamines (for example, octadecylamine, helamin, and other "film-forming" amines), organic phosphonic acids (for example, etidronic acid), or polyelectrolytes (for example, polyacrylic acid, polymethacrylic acid, usually with a molecular weight lower than 10000). For fired boilers, aluminum or magnesium additives can lower the melting point of ash and promote creation of deposits which are easier to remove. See also process chemicals. Magnetic water treatment has been a subject of controversy as to its effectiveness for fouling control since the 1950s. The prevailing opinion is that it simply "does not work". Nevertheless, some studies suggest that it may be effective under some conditions to reduce buildup of calcium carbonate deposits. On the component design level, fouling can often (but not always) be minimized by maintaining a relatively high (for example, 2 m/s) and uniform fluid velocity throughout the component. Stagnant regions need to be eliminated. Components are normally overdesigned to accommodate the fouling anticipated between cleanings. However, a significant overdesign can be a design error because it may lead to increased fouling due to reduced velocities. Periodic on-line pressure pulses or backflow can be effective if the capability is carefully incorporated at the design time. Blowdown capability is always incorporated into steam generators or evaporators to control the accumulation of non-volatile impurities that cause or aggravate fouling. Low-fouling surfaces (for example, very smooth, implanted with ions, or of low surface energy like Teflon) are an option for some applications. Modern components are typically required to be designed for ease of inspection of internals and periodic cleaning. On-line fouling monitoring systems are designed for some application so that blowing or cleaning can be applied before unpredictable shutdown is necessary or damage occurs. Chemical or mechanical cleaning processes for the removal of deposits and scales are recommended when fouling reaches the point of impacting the system performance or an onset of significant fouling-induced degradation (e.g., by corrosion). 
These processes comprise pickling with acids and complexing agents, cleaning with high-velocity water jets ("water lancing"), recirculating ("blasting") with metal, sponge or other balls, or propelling offline mechanical "bullet-type" tube cleaners. Whereas chemical cleaning causes environmental problems through the handling, application, storage and disposal of chemicals, the mechanical cleaning by means of circulating cleaning balls or offline "bullet-type" cleaning can be an environmentally friendlier alternative. In some heat-transfer applications, mechanical mitigation with dynamic scraped surface heat exchangers is an option. Also ultrasonic or abrasive cleaning methods are available for many specific applications. See also International Convention on the Control of Harmful Anti-fouling Systems on Ships Oilfield scale inhibition Particle deposition Steam generator (nuclear power) Tube cleaning References External links Crude Oil Fouling research Filters Hydraulic engineering Transport phenomena Water technology Water treatment
Fouling
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
6,905
[ "Transport phenomena", "Physical phenomena", "Hydrology", "Fouling", "Water treatment", "Chemical engineering", "Chemical equipment", "Filters", "Water pollution", "Physical systems", "Hydraulics", "Civil engineering", "Filtration", "Environmental engineering", "Water technology", "Mat...
2,863,149
https://en.wikipedia.org/wiki/Envelope%20%28motion%29
In mechanical engineering, an envelope is a solid representing all positions which may be occupied by an object during its normal range of motion. Another (jargon) word for this is a "flop". Wheel envelope In automobile design, a wheel envelope may be used to model all positions a wheel and tire combo may be expected to occupy during driving. This will take into account the maximum jounce and rebound allowed by the suspension system and the maximum turn and tilt allowed by the steering mechanism. Minimum and maximum tire inflation pressures and wear conditions may also be considered when generating the envelope. This envelope is then compared with the wheel housing and other components in the area to perform an interference/collision analysis. The results of this analysis tell the engineers whether that wheel/tire combo will strike the housing and components under normal driving conditions. If so, either a redesign is in order, or that wheel/tire combo will not be recommended. A different wheel envelope must be generated for each wheel/tire combo for which the vehicle is rated. Much of this analysis is done using CAD/CAE systems running on computers. Of course, high speed collisions, during an accident, are not considered "normal driving conditions", so the wheel and tire may very well contact other parts of the vehicle at that time. Robot's working envelope In robotics, the working envelope or work area is the volume of working or reaching space. Some factors of a robot's design (configurations, axes or degrees of freedom) influence its working envelope. References Mechanical engineering Robot control
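A toy version of such an interference check is sketched below: the wheel is swept over assumed jounce and steering ranges and its worst-case extent is compared against assumed housing clearances. Real CAD/CAE analyses work on full 3-D geometry over the complete motion envelope, so this is only a schematic illustration with invented dimensions.

```python
import math

# All dimensions are assumed, illustrative values (metres / radians).
TIRE_RADIUS, TIRE_WIDTH = 0.33, 0.22
JOUNCE = 0.08                          # upward suspension travel
MAX_STEER = math.radians(35)           # steering lock
WELL_TOP = 0.45                        # clearance above the static wheel centre
WELL_HALF_WIDTH = 0.30                 # lateral clearance from the wheel centre plane

def lateral_half_extent(steer):
    """Half-width of the tire footprint when the wheel is steered by `steer`."""
    return 0.5 * TIRE_WIDTH * math.cos(steer) + TIRE_RADIUS * math.sin(abs(steer))

# Worst cases over the (discretised) motion envelope
top_of_tire = TIRE_RADIUS + JOUNCE
widest = max(lateral_half_extent(s * MAX_STEER) for s in (-1, -0.5, 0, 0.5, 1))

print("vertical :", "OK" if top_of_tire < WELL_TOP else "interference")
print("lateral  :", "OK" if widest < WELL_HALF_WIDTH else "interference")
```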
Envelope (motion)
[ "Physics", "Engineering" ]
310
[ "Robot control", "Applied and interdisciplinary physics", "Mechanical engineering", "Robotics engineering" ]
2,863,719
https://en.wikipedia.org/wiki/Hexagonal%20tiling
In geometry, the hexagonal tiling or hexagonal tessellation is a regular tiling of the Euclidean plane, in which exactly three hexagons meet at each vertex. It has Schläfli symbol {6,3}, or t{3,6} as a truncated triangular tiling. English mathematician John Conway called it a hextille. The internal angle of the hexagon is 120 degrees, so three hexagons at a point make a full 360 degrees. It is one of three regular tilings of the plane. The other two are the triangular tiling and the square tiling. Applications Hexagonal tiling is the densest way to arrange circles in two dimensions. The honeycomb conjecture states that hexagonal tiling is the best way to divide a surface into regions of equal area with the least total perimeter. The optimal three-dimensional structure for making honeycomb (or rather, soap bubbles) was investigated by Lord Kelvin, who believed that the Kelvin structure (or body-centered cubic lattice) is optimal. However, the less regular Weaire–Phelan structure is slightly better. This structure exists naturally in the form of graphite, where each sheet of graphene resembles chicken wire, with strong covalent carbon bonds. Tubular graphene sheets have been synthesised, known as carbon nanotubes. They have many potential applications, due to their high tensile strength and electrical properties. Silicene is similar. Chicken wire consists of a hexagonal lattice (often not regular) of wires. The hexagonal tiling appears in many crystals. In three dimensions, the face-centered cubic and hexagonal close packing are common crystal structures. They are the densest sphere packings in three dimensions. Structurally, they comprise parallel layers of hexagonal tilings, similar to the structure of graphite. They differ in the way that the layers are staggered from each other, with the face-centered cubic being the more regular of the two. Pure copper, amongst other materials, forms a face-centered cubic lattice. Uniform colorings There are three distinct uniform colorings of a hexagonal tiling, all generated from reflective symmetry of Wythoff constructions. The (h,k) represent the periodic repeat of one colored tile, counting hexagonal distances as h first, and k second. The same counting is used in the Goldberg polyhedra, with a notation {p+,3}h,k, and can be applied to hyperbolic tilings for p > 6. The 3-color tiling is a tessellation generated by the order-3 permutohedrons. Chamfered hexagonal tiling A chamfered hexagonal tiling replaces edges with new hexagons and transforms into another hexagonal tiling. In the limit, the original faces disappear, and the new hexagons degenerate into rhombi, and it becomes a rhombic tiling. Related tilings The hexagons can be dissected into sets of 6 triangles. This process leads to two 2-uniform tilings, and the triangular tiling. The hexagonal tiling can be considered an elongated rhombic tiling, where each vertex of the rhombic tiling is stretched into a new edge. This is similar to the relation of the rhombic dodecahedron and the rhombo-hexagonal dodecahedron tessellations in 3 dimensions. It is also possible to subdivide the prototiles of certain hexagonal tilings by two, three, four or nine equal pentagons. Symmetry mutations This tiling is topologically related as a part of a sequence of regular tilings with hexagonal faces, starting with the hexagonal tiling, with Schläfli symbol {6,n} and the corresponding Coxeter diagrams, progressing to infinity. 
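A quick way to see why the sequence {6,n} starts in the Euclidean plane at n = 3 and then moves into the hyperbolic plane is to compare the total angle gathered at a vertex with 360°. The short check below is an illustration added here, not part of the original article.

```python
# Classify {6,n} by the angle collected at a vertex: exactly 360 degrees is
# Euclidean, less is spherical, more is hyperbolic. The hexagon's interior
# angle is 120 degrees, so only n = 3 closes up flat.
def classify(p, q):
    interior = 180.0 * (p - 2) / p      # interior angle of a regular p-gon
    total = q * interior                # q faces meet at each vertex
    if abs(total - 360.0) < 1e-9:
        return "Euclidean"
    return "spherical" if total < 360.0 else "hyperbolic"

for n in range(3, 8):
    print(f"{{6,{n}}}: {n} x 120 = {n * 120} degrees -> {classify(6, n)}")
```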
This tiling is topologically related to regular polyhedra with vertex figure n3, as a part of a sequence that continues into the hyperbolic plane. It is similarly related to the uniform truncated polyhedra with vertex figure n.6.6. This tiling is also part of a sequence of truncated rhombic polyhedra and tilings with [n,3] Coxeter group symmetry. The cube can be seen as a rhombic hexahedron where the rhombi are squares. The truncated forms have regular n-gons at the truncated vertices, and nonregular hexagonal faces. Wythoff constructions from hexagonal and triangular tilings Like the uniform polyhedra there are eight uniform tilings that can be based on the regular hexagonal tiling (or the dual triangular tiling). Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms, 7 of which are topologically distinct. (The truncated triangular tiling is topologically identical to the hexagonal tiling.) Monohedral convex hexagonal tilings There are 3 types of monohedral convex hexagonal tilings. They are all isohedral. Each has parametric variations within a fixed symmetry. Type 2 contains glide reflections, and is 2-isohedral keeping chiral pairs distinct. There are also 15 monohedral convex pentagonal tilings, as well as all quadrilaterals and triangles. Topologically equivalent tilings Hexagonal tilings can be made with the identical {6,3} topology as the regular tiling (3 hexagons around every vertex). With isohedral faces, there are 13 variations. Symmetry given assumes all faces are the same color. Colors here represent the lattice positions. Single-color (1-tile) lattices are parallelogon hexagons. Other isohedrally-tiled topological hexagonal tilings are seen as quadrilaterals and pentagons that are not edge-to-edge, but interpreted as colinear adjacent edges: The 2-uniform and 3-uniform tessellations have a rotational degree of freedom which distorts 2/3 of the hexagons, including a colinear case that can also be seen as a non-edge-to-edge tiling of hexagons and larger triangles. It can also be distorted into a chiral 4-colored tri-directional weaved pattern, distorting some hexagons into parallelograms. The weaved pattern with 2 colored faces has rotational 632 (p6) symmetry. A chevron pattern has pmg (22*) symmetry, which is lowered to p1 (°) with 3 or 4 colored tiles. Circle packing The hexagonal tiling can be used as a circle packing, placing equal-diameter circles at the center of every point. Every circle is in contact with 3 other circles in the packing (kissing number). The gap inside each hexagon allows for one circle, creating the densest packing from the triangular tiling, with each circle in contact with a maximum of 6 circles. Related regular complex apeirogons There are 2 regular complex apeirogons, sharing the vertices of the hexagonal tiling. Regular complex apeirogons have vertices and edges, where edges can contain 2 or more vertices. Regular apeirogons p{q}r are constrained by: 1/p + 2/q + 1/r = 1. Edges have p vertices, and vertex figures are r-gonal. The first is made of 2-edges, three around every vertex, the second has hexagonal edges, three around every vertex. A third complex apeirogon, sharing the same vertices, is quasiregular, which alternates 2-edges and 6-edges. See also Hexagonal lattice Hexagonal prismatic honeycomb Tilings of regular polygons List of uniform tilings List of regular polytopes Hexagonal tiling honeycomb Hex map board game design References Coxeter, H.S.M. 
Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs (Chapter 2.1: Regular and uniform tilings, pp. 58–65) John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, External links Euclidean tilings Isogonal tilings Isohedral tilings Regular tilings Regular tessellations
Hexagonal tiling
[ "Physics", "Mathematics" ]
1,737
[ "Regular tessellations", "Planes (geometry)", "Isogonal tilings", "Tessellation", "Euclidean plane geometry", "Euclidean tilings", "Isohedral tilings", "Symmetry" ]
2,864,030
https://en.wikipedia.org/wiki/Truncated%20hexagonal%20tiling
In geometry, the truncated hexagonal tiling is a semiregular tiling of the Euclidean plane. There are 2 dodecagons (12 sides) and one triangle on each vertex. As the name implies, this tiling is constructed by a truncation operation applied to a hexagonal tiling, leaving dodecagons in place of the original hexagons, and new triangles at the original vertex locations. It is given an extended Schläfli symbol of t{6,3}. Conway calls it a truncated hextille, constructed as a truncation operation applied to a hexagonal tiling (hextille). There are 3 regular and 8 semiregular tilings in the plane. Uniform colorings There is only one uniform coloring of a truncated hexagonal tiling. (Naming the colors by indices around a vertex: 122.) Topologically identical tilings The dodecagonal faces can be distorted into different geometries. Related polyhedra and tilings Wythoff constructions from hexagonal and triangular tilings Like the uniform polyhedra there are eight uniform tilings that can be based on the regular hexagonal tiling (or the dual triangular tiling). Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms, 7 of which are topologically distinct. (The truncated triangular tiling is topologically identical to the hexagonal tiling.) Symmetry mutations This tiling is topologically related as a part of a sequence of uniform truncated polyhedra with vertex configurations (3.2n.2n), and [n,3] Coxeter group symmetry. Related 2-uniform tilings Two 2-uniform tilings are related by dissecting the dodecagons into a central hexagon and 6 surrounding triangles and squares. Circle packing The truncated hexagonal tiling can be used as a circle packing, placing equal diameter circles at the center of every point. Every circle is in contact with 3 other circles in the packing (kissing number). This is the lowest density packing that can be created from a uniform tiling. Triakis triangular tiling The triakis triangular tiling is a tiling of the Euclidean plane. It is an equilateral triangular tiling with each triangle divided into three obtuse triangles (angles 30-30-120) from the center point. It is labeled by face configuration V3.12.12 because each isosceles triangle face has two types of vertices: one with 3 triangles, and two with 12 triangles. Conway calls it a kisdeltille, constructed as a kis operation applied to a triangular tiling (deltille). In Japan the pattern is called asanoha for hemp leaf, although the name also applies to other triakis shapes like the triakis icosahedron and triakis octahedron. It is the dual tessellation of the truncated hexagonal tiling which has one triangle and two dodecagons at each vertex. It is one of eight edge tessellations, tessellations generated by reflections across each edge of a prototile. Related duals to uniform tilings It is one of 7 dual uniform tilings in hexagonal symmetry, including the regular duals. See also Tilings of regular polygons List of uniform tilings References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 2.1: Regular and uniform tilings, p. 58-65) Keith Critchlow, Order in Space: A design source book, 1970, p. 69-61, Pattern E, Dual p. 77-76, pattern 1 Dale Seymour and Jill Britton, Introduction to Tessellations, 1989, pp. 50–56, dual p. 117 External links Euclidean tilings Hexagonal tilings Isogonal tilings Semiregular tilings Truncated tilings
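Referring back to the circle packing section above, the packing fraction of this packing can be estimated by giving every vertex one circle of radius half an edge and charging that vertex one third of a triangle plus one twelfth of each of its two dodecagons. The sketch below (assuming unit edge length; an illustration added here, not from the article) reproduces the roughly 0.39 figure that makes this the lowest-density packing obtainable from a uniform tiling.

```python
# Rough numerical check of the packing fraction for the circle packing on the
# 3.12.12 vertices, assuming unit edge length and circles of radius 1/2.
import math

def regular_polygon_area(n, side=1.0):
    """Area of a regular n-gon with the given side length."""
    return n * side**2 / (4.0 * math.tan(math.pi / n))

# Each vertex of the 3.12.12 tiling owns 1/3 of a triangle and 1/12 of each
# of two dodecagons.
area_per_vertex = (regular_polygon_area(3) / 3.0
                   + 2.0 * regular_polygon_area(12) / 12.0)
circle_area = math.pi * 0.5**2

print(f"packing fraction ~ {circle_area / area_per_vertex:.4f}")  # ~ 0.3907
```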
Truncated hexagonal tiling
[ "Physics", "Mathematics" ]
838
[ "Semiregular tilings", "Truncated tilings", "Isogonal tilings", "Tessellation", "Euclidean plane geometry", "Euclidean tilings", "Planes (geometry)", "Symmetry" ]
2,864,170
https://en.wikipedia.org/wiki/Truncated%20trihexagonal%20tiling
In geometry, the truncated trihexagonal tiling is one of eight semiregular tilings of the Euclidean plane. There are one square, one hexagon, and one dodecagon on each vertex. It has Schläfli symbol of tr{3,6}. Uniform colorings There is only one uniform coloring of a truncated trihexagonal tiling, with faces colored by polygon sides. A 2-uniform coloring has two colors of hexagons. 3-uniform colorings can have 3 colors of dodecagons or 3 colors of squares. Related 2-uniform tilings The truncated trihexagonal tiling has three related 2-uniform tilings, one being a 2-uniform coloring of the semiregular rhombitrihexagonal tiling. The first dissects the hexagons into 6 triangles. The other two dissect the dodecagons into a central hexagon and surrounding triangles and squares, in two different orientations. Circle packing The truncated trihexagonal tiling can be used as a circle packing, placing equal diameter circles at the center of every point. Every circle is in contact with 3 other circles in the packing (kissing number). Kisrhombille tiling The kisrhombille tiling or 3-6 kisrhombille tiling is a tiling of the Euclidean plane. It is constructed by congruent 30-60-90 triangles with 4, 6, and 12 triangles meeting at each vertex. Subdividing the faces of these tilings creates the kisrhombille tiling. (Compare the disdyakis hexa-, dodeca- and triacontahedron, three Catalan solids similar to this tiling.) Construction from rhombille tiling Conway calls it a kisrhombille for his kis vertex bisector operation applied to the rhombille tiling. More specifically it can be called a 3-6 kisrhombille, to distinguish it from other similar hyperbolic tilings, like 3-7 kisrhombille. It can be seen as an equilateral hexagonal tiling with each hexagon divided into 12 triangles from the center point. (Alternately it can be seen as a bisected triangular tiling divided into 6 triangles, or as an infinite arrangement of lines in six parallel families.) It is labeled V4.6.12 because each right triangle face has three types of vertices: one with 4 triangles, one with 6 triangles, and one with 12 triangles. Symmetry The kisrhombille tiling triangles represent the fundamental domains of p6m, [6,3] (*632 orbifold notation) wallpaper group symmetry. There are a number of small index subgroups constructed from [6,3] by mirror removal and alternation. [1+,6,3] creates *333 symmetry, shown as red mirror lines. [6,3+] creates 3*3 symmetry. [6,3]+ is the rotational subgroup. The commutator subgroup is [1+,6,3+], which is 333 symmetry. A larger index 6 subgroup, constructed as [6,3*], also becomes (*333), shown in blue mirror lines, and has its own 333 rotational symmetry, index 12. Related polyhedra and tilings There are eight uniform tilings that can be based on the regular hexagonal tiling (or the dual triangular tiling). Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms, 7 of which are topologically distinct. (The truncated triangular tiling is topologically identical to the hexagonal tiling.) Symmetry mutations This tiling can be considered a member of a sequence of uniform patterns with vertex figure (4.6.2p) and corresponding Coxeter-Dynkin diagrams. For p < 6, the members of the sequence are omnitruncated polyhedra (zonohedra), which can be shown as spherical tilings. For p > 6, they are tilings of the hyperbolic plane, starting with the truncated triheptagonal tiling. 
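To illustrate the symmetry mutation sequence just described, the following check (an illustration added here, not part of the article) sums the corner angles of the vertex figure 4.6.2p: the angles close up to exactly 360° only for p = 6, which gives the Euclidean truncated trihexagonal tiling, while smaller p gives spherical omnitruncates and larger p gives hyperbolic tilings.

```python
# Angle-sum check for the vertex figure 4.6.2p: one square, one hexagon and
# one 2p-gon meet at each vertex of the omnitruncated [p,3] pattern.
def interior_angle(n):
    return 180.0 * (n - 2) / n

for p in range(2, 9):
    total = interior_angle(4) + interior_angle(6) + interior_angle(2 * p)
    if abs(total - 360.0) < 1e-9:
        kind = "Euclidean"
    else:
        kind = "spherical" if total < 360.0 else "hyperbolic"
    print(f"p={p}: 4.6.{2*p}  angle sum {total:.1f} degrees -> {kind}")
```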
See also Tilings of regular polygons List of uniform tilings Notes References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, Keith Critchlow, Order in Space: A design source book, 1970, p. 69-61, Pattern G, Dual p. 77-76, pattern 4 Dale Seymour and Jill Britton, Introduction to Tessellations, 1989, , pp. 50–56 External links Euclidean tilings Isogonal tilings Semiregular tilings Truncated tilings
Truncated trihexagonal tiling
[ "Physics", "Mathematics" ]
1,000
[ "Semiregular tilings", "Truncated tilings", "Isogonal tilings", "Tessellation", "Euclidean plane geometry", "Euclidean tilings", "Planes (geometry)", "Symmetry" ]
2,864,244
https://en.wikipedia.org/wiki/Truncated%20square%20tiling
In geometry, the truncated square tiling is a semiregular tiling by regular polygons of the Euclidean plane with one square and two octagons on each vertex. This is the only edge-to-edge tiling by regular convex polygons which contains an octagon. It has Schläfli symbol of t{4,4}. Conway calls it a truncated quadrille, constructed as a truncation operation applied to a square tiling (quadrille). Other names used for this pattern include Mediterranean tiling and octagonal tiling, which is often represented by smaller squares, and nonregular octagons which alternate long and short edges. There are 3 regular and 8 semiregular tilings in the plane. Uniform colorings There are two distinct uniform colorings of a truncated square tiling. (Naming the colors by indices around a vertex (4.8.8): 122, 123.) Circle packing The truncated square tiling can be used as a circle packing, placing equal diameter circles at the center of every point. Every circle is in contact with 3 other circles in the packing (kissing number). Variations One variation on this pattern, often called a Mediterranean pattern, is shown in stone tiles with smaller squares and diagonally aligned with the borders. Other variations stretch the squares or octagons. The Pythagorean tiling alternates large and small squares, and may be seen as topologically identical to the truncated square tiling. The squares are rotated 45 degrees and octagons are distorted into squares with mid-edge vertices. A weaving pattern also has the same topology, with the octagons flattened into rectangles. Related polyhedra and tilings The truncated square tiling is topologically related as a part of a sequence of uniform polyhedra and tilings with vertex figures 4.2n.2n, extending into the hyperbolic plane. The 3-dimensional bitruncated cubic honeycomb projected into the plane shows two copies of a truncated tiling. In the plane it can be represented by a compound tiling, or combined can be seen as a chamfered square tiling. Wythoff constructions from square tiling Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, all 8 forms are distinct. However, treating faces identically, there are only three topologically unique forms: square tiling, truncated square tiling, snub square tiling. Related tilings in other symmetries Tetrakis square tiling The tetrakis square tiling is the tiling of the Euclidean plane dual to the truncated square tiling. It can be constructed from a square tiling with each square divided into four isosceles right triangles from the center point, forming an infinite arrangement of lines. It can also be formed by subdividing each square of a grid into two triangles by a diagonal, with the diagonals alternating in direction, or by overlaying two square grids, one rotated by 45 degrees from the other and scaled by a factor of √2. Conway calls it a kisquadrille, represented by a kis operation that adds a center point and triangles to replace the faces of a square tiling (quadrille). It is also called the Union Jack lattice because of the resemblance to the UK flag of the triangles surrounding its degree-8 vertices. See also Tilings of regular polygons List of uniform tilings Percolation threshold References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 2.1: Regular and uniform tilings, p. 58-65) Dale Seymour and Jill Britton, Introduction to Tessellations, 1989, pp.
50–56 External links Euclidean tilings Isogonal tilings Semiregular tilings Square tilings Truncated tilings
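As a sanity check on the statement that this is the only edge-to-edge tiling by regular polygons containing an octagon, the sketch below (restricted, as an assumption, to three polygons meeting at each vertex) enumerates the corner-angle combinations that include an octagon and fit around a point. It finds 4.8.8 and 3.8.24; of these candidate vertex types, only 4.8.8 extends to a full tiling of the plane, which is the truncated square tiling described above. This is an illustration added here, not part of the original article.

```python
# Enumerate triples (p, q, r) of regular polygons, one of them an octagon,
# whose interior angles sum to 360 degrees around a vertex.
def interior(n):
    return 180.0 * (n - 2) / n

solutions = []
for p in range(3, 50):
    for q in range(p, 50):
        for r in range(q, 50):
            if 8 in (p, q, r) and abs(interior(p) + interior(q) + interior(r) - 360.0) < 1e-9:
                solutions.append((p, q, r))

print(solutions)   # expected: [(3, 8, 24), (4, 8, 8)]
```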
Truncated square tiling
[ "Physics", "Mathematics" ]
805
[ "Semiregular tilings", "Truncated tilings", "Isogonal tilings", "Tessellation", "Euclidean plane geometry", "Euclidean tilings", "Planes (geometry)", "Symmetry" ]
31,534,486
https://en.wikipedia.org/wiki/Advances%20in%20High%20Energy%20Physics
Advances in High Energy Physics is a peer-reviewed open-access scientific journal publishing research on high energy physics. It is published by Hindawi Publishing Corporation. The journal was established in 2007 and publishes original research articles as well as review articles in all fields of high energy physics. The journal is dedicated to both theoretical and experimental research. It is part of the SCOAP3 initiative. References External links English-language journals Hindawi Publishing Corporation academic journals Academic journals established in 2007 Particle physics journals
Advances in High Energy Physics
[ "Physics" ]
103
[ "Particle physics stubs", "Particle physics", "Particle physics journals" ]
31,540,040
https://en.wikipedia.org/wiki/Regulation%20of%20genetic%20engineering
The regulation of genetic engineering varies widely by country. Countries such as the United States, Canada, Lebanon and Egypt use substantial equivalence as the starting point when assessing safety, while many countries such as those in the European Union, Brazil and China authorize GMO cultivation on a case-by-case basis. Many countries allow the import of GM food with authorization, but either do not allow its cultivation (Russia, Norway, Israel) or have provisions for cultivation, but no GM products are yet produced (Japan, South Korea). Most countries that do not allow for GMO cultivation do permit research. Most (85%) of the world's GMO crops are grown in the Americas (North and South). One of the key issues concerning regulators is whether GM products should be labeled. Labeling of GMO products in the marketplace is required in 64 countries. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. A study investigating voluntary labeling in South Africa found that 31% of products labeled as GMO-free had a GM content above 1.0%. In Canada and the US labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. There is no evidence to support the idea that the consumption of approved GM food has a detrimental effect on human health. Some scientists and advocacy groups, such as Greenpeace and World Wildlife Fund, have however called for additional and more rigorous testing for GM food. History The development of a regulatory framework concerning genetic engineering began in 1975, at Asilomar, California. The first use of recombinant DNA (rDNA) technology had just been successfully accomplished by Stanley Cohen and Herbert Boyer two years previously, and the scientific community recognized that, as well as benefits, this technology could also pose some risks. The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology. The Asilomar recommendations were voluntary, but in 1976 the US National Institutes of Health (NIH) formed an rDNA advisory committee. This was followed by other regulatory offices (the United States Department of Agriculture (USDA), Environmental Protection Agency (EPA) and Food and Drug Administration (FDA)), effectively making all rDNA research tightly regulated in the US. In 1982 the Organisation for Economic Co-operation and Development (OECD) released a report into the potential hazards of releasing genetically modified organisms (GMOs) into the environment as the first transgenic plants were being developed. As the technology improved and genetically modified organisms moved from model organisms to potential commercial products, the US established a committee at the Office of Science and Technology Policy (OSTP) to develop mechanisms to regulate the developing technology. 
In 1986 the OSTP assigned regulatory approval of genetically modified plants in the US to the USDA, FDA and EPA. The basic concepts for the safety assessment of foods derived from GMOs have been developed in close collaboration under the auspices of the OECD, the World Health Organization (WHO) and Food and Agriculture Organization (FAO). A first joint FAO/WHO consultation in 1990 resulted in the publication of the report ‘Strategies for Assessing the Safety of Foods Produced by Biotechnology’ in 1991. Building on that, an international consensus was reached by the OECD’s Group of National Experts on Safety in Biotechnology for assessing biotechnology in general, including the field testing of GM crops. That Group met again in Bergen, Norway in 1992 and reached consensus on principles for evaluating the safety of GM food; its report, ‘The safety evaluation of foods derived by modern technology – concepts and principles’, was published in 1993. That report recommends conducting the safety assessment of a GM food on a case-by-case basis through comparison to an existing food with a long history of safe use. This basic concept has been refined in subsequent workshops and consultations organized by the OECD, WHO, and FAO, and the OECD in particular has taken the lead in acquiring data and developing standards for conventional foods to be used in assessing substantial equivalence. The Cartagena Protocol on Biosafety was adopted on 29 January 2000 and entered into force on 11 September 2003. It is an international treaty that governs the transfer, handling, and use of genetically modified (GM) organisms. It is focused on movement of GMOs between countries and has been called a de facto trade agreement. One hundred and seventy-two countries are members of the Protocol and many use it as a reference point for their own regulations. Also in 2003 the Codex Alimentarius Commission of the FAO/WHO adopted a set of "Principles and Guidelines on foods derived from biotechnology" to help countries coordinate and standardize regulation of GM food, ensure public safety and facilitate international trade, and updated its guidelines for the import and export of food in 2004. The European Union first introduced laws requiring GMOs to be labelled in 1997. In 2013, Connecticut became the first state to enact a labeling law in the US, although it would not take effect until other states followed suit. In the laboratory Institutions that conduct certain types of scientific research must obtain permission from government authorities and ethical committees before they conduct any experiments. Universities and research institutes generally have a special committee that is responsible for approving any experiments that involve genetic engineering. Many experiments also need permission from a national regulatory group or are covered by specific legislation. All staff must be trained in the use of GMOs and in some laboratories a biological control safety officer is appointed. All laboratories must gain approval from their regulatory agency to work with GMOs and all experiments must be documented. As of 2008 there have been no major accidents with GMOs in the lab. The legislation covering GMOs was initially created by adapting existing regulations in place for chemicals or other purposes, with many countries later developing specific policies aimed at genetic engineering. These are often derived from regulations and guidelines in place for the non-GMO version of the organism, although they are more severe. 
In many countries now the regulations are diverging, even though many of the risks and procedures are similar. Sometimes even different agencies are responsible, notably in the Netherlands where the Ministry of the Environment covers GMOs and the Ministry of Social Affairs covers the human pathogens they are derived from. There is a near-universal system for assessing the relative risks associated with GMOs and other agents to laboratory staff and the community. They are then assigned to one of four risk categories based on their virulence, the severity of disease, the mode of transmission, and the availability of preventive measures or treatments. There are some differences in how these categories are defined, such as the World Health Organisation (WHO) including dangers to animals and the environment in its assessments. When there are varying levels of virulence, the regulators base their classification on the highest. Accordingly, there are four biosafety levels that a laboratory can fall into, ranging from level 1 (which is suitable for working with agents not associated with disease) to level 4 (working with life-threatening agents). Different countries use different nomenclature to describe the levels and can have different requirements for what can be done at each level. In Europe, the use of living GMOs is regulated by the European Directive on the contained use of genetically modified microorganisms (GMMs). The regulations require risk assessments before use of any contained GMOs is started and assurances that the correct controls are in place. It provides the minimal standards for using GMMs, with individual countries allowed to enforce stronger controls. In the UK, the Genetically Modified Organisms (Contained Use) Regulations 2014 provide the framework researchers must follow when using GMOs. Other legislation may be applicable depending on what research is carried out. For workplace safety these include the Health and Safety at Work Act 1974, the Management of Health and Safety at Work Regulations 1999, the Carriage of Dangerous Goods legislation and the Control of Substances Hazardous to Health Regulations 2002. Environmental risks are covered by Section 108(1) of the Environmental Protection Act 1990 and The Genetically Modified Organisms (Risk assessment) (Records and Exemptions) Regulations 1996. In the US, the National Institutes of Health (NIH) classifies GMOs into four risk groups. Risk group 1 is not associated with any diseases, risk group 2 is associated with diseases that are not serious, risk group 3 is associated with serious diseases where treatments are available and risk group 4 is for serious diseases with no known treatments. In 1992 the Occupational Safety and Health Administration determined that its current legislation already adequately covers the safety of laboratory workers using GMOs. Australia has an exempt dealings classification for genetically modified organisms that pose only a low risk. These include systems using standard laboratory strains as the hosts and recombinant DNA that does not code for a vertebrate toxin or is not derived from a micro-organism that can cause disease in humans. Exempt dealings usually do not require approval from the national regulator. GMOs that pose a low risk if certain management practices are complied with are classified as notifiable low risk dealings. The final classification is for any uses of GMOs that do not meet the previous criteria. 
These are known as licensed dealings and include cloning any genes that code for vertebrate toxins or using hosts that are capable of causing disease in humans. Licensed dealings require the approval of the national regulator. Work with exempt GMOs does not need to be carried out in certified laboratories. All others must be contained in Physical Containment level 1 (PC1) or Physical Containment level 2 (PC2) laboratories. Laboratory work with GMOs classified as low risk, which include knockout mice, is carried out in a PC1 laboratory. This is the case for modifications that do not confer an advantage on the animal or cause it to secrete any infectious agents. If the laboratory strain used is not covered by exempt dealings, or the inserted DNA could code for a pathogenic gene, the work must be carried out in a PC2 laboratory. Release The approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology and the development and release of GMOs vary from country to country, with some of the most marked differences occurring between the United States and Europe. The United States takes a less hands-on approach to the regulation of GMOs than Europe, with the FDA and USDA only looking over the pesticide and plant health facets of GMOs. Despite the overall global increase in the production of GMOs, the European Union has still stalled the full integration of GMOs into its food supply. This has affected various countries, including the United States, when trading with the EU. European Union The European Union enacted regulatory laws in 2003 that provided possibly the most stringent GMO regulations in the world. All GMOs, along with irradiated food, are considered "new food" and subject to extensive, case-by-case, science-based food evaluation by the European Food Safety Authority (EFSA). The criteria for authorization fall into four broad categories: "safety", "freedom of choice", "labelling", and "traceability". The European Parliament's Committee on the Environment, Public Health and Consumer Protection pushed forward and adopted a "safety first" principle regarding the case of GMOs, calling for liability for any negative health consequences from GMOs. The development of GM crop and GM food regulations in the EU has struggled to produce a policy environment that is (a) efficient, (b) predictable, (c) accountable, (d) durable or (e) inter-jurisdictionally aligned. However, although the European Union has had relatively strict regulations regarding genetically modified food, Europe is now allowing newer versions of modified maize and other agricultural produce. Also, the level of GMO acceptance in the European Union varies across its countries, with Spain and Portugal being more permissive of GMOs than France and the Nordic population. One notable exception however is Sweden. In this country, the government has declared that the GMO definition (according to Directive 2001/18/EC) stipulates that foreign DNA needs to be present in an organism for it to qualify as a genetically modified organism. Organisms that have the foreign DNA removed (for example via selective breeding) therefore do not qualify as GMOs, even if gene editing has been used to make the organism. In June 2014 the European Parliament approved rules allowing individual member states to restrict or ban the growth of GM crops within their territory. 
Austria, France, Greece, Hungary, Germany, and Luxembourg had prohibited the growth or sale of bioengineered foods in their territory in 2015. Scotland also announced its rejection. By 2015, sixteen countries declared they wanted to opt out of EU-approved GM crops, including GMOs from major companies like Monsanto, Dow, Syngenta and Pioneer. United States The U.S. regulatory policy is governed by the Coordinated Framework for Regulation of Biotechnology. The policy has three tenets: "(1) U.S. policy would focus on the product of genetic modification (GM) techniques, not the process itself, (2) only regulation grounded in verifiable scientific risks would be tolerated, and (3) GM products are on a continuum with existing products and, therefore, existing statutes are sufficient to review the products." For a genetically modified organism to be approved for release in the U.S., it must be assessed under the Plant Protection Act by the Animal and Plant Health Inspection Service (APHIS) agency within the USDA and may also be assessed by the FDA and the EPA, depending on the intended use of the organism. The USDA evaluates the plant's potential to become a weed, the FDA reviews plants that could enter or alter the food supply, and the EPA regulates genetically modified plants with pesticide properties, as well as agrochemical residues. In 2017 a proposed rule was withdrawn by APHIS after public comment. Agricultural stakeholders especially felt it would have excessively restricted genetic engineering and even new methods of conventional plant breeding. Other countries The level of regulation in other countries lies between that of Europe and the United States. The Common Market for Eastern and Southern Africa (COMESA) is responsible for assessing the safety of GMOs in most of Africa, although the final decision lies with each individual country. India and China are the two largest producers of genetically modified products in Asia. The Office of Agricultural Genetic Engineering Biosafety Administration (OAGEBA) is responsible for regulation in China, while in India it is the Institutional Biosafety Committee (IBSC), Review Committee on Genetic Manipulation (RCGM) and Genetic Engineering Approval Committee (GEAC). Brazil and Argentina are the 2nd and 3rd largest producers of GM food. In Argentina, assessment of GM products for release is provided by the National Agricultural Biotechnology Advisory Committee (environmental impact), the National Service of Health and Agrifood Quality (food safety) and the National Agribusiness Direction (effect on trade), with the final decision made by the Secretariat of Agriculture, Livestock, Fishery and Food. In Brazil the National Biosafety Technical Commission is responsible for assessing environmental and food safety and prepares guidelines for transport, importation and field experiments involving GM products, while the Council of Ministers evaluates the commercial and economic issues with release. Health Canada and the Canadian Food Inspection Agency are responsible for evaluating the safety and nutritional value of genetically modified foods released in Canada. License applications for the release of all genetically modified organisms in Australia are overseen by the Office of the Gene Technology Regulator, while regulation is provided by the Therapeutic Goods Administration for GM medicines or Food Standards Australia New Zealand for GM food. 
The individual state governments can then assess the impact of release on markets and trade and apply further legislation to control approved genetically modified products. In 2019, the Australian Parliament relaxed the definition of GMOs to exclude certain GMOs from GMO regulation and government oversight. In Singapore, synthetic biology products are regulated as if they were genetically modified organisms under the Biological Agents and Toxins Act. For further review see Trump 2017. In Saudi Arabia's Neom project, genetically engineered agriculture is legal, encouraged, and funded by the government as an integral part of the project. Labeling One of the key issues concerning regulators is whether GM products should be labeled. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. A study investigating voluntary labeling in South Africa found that 31% of products labeled as GMO-free had a GM content above 1.0%. In Canada and the United States labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled. In the US state of Oregon, voters rejected Measure 27, which would have required labeling of all genetically modified foods. Japan, Malaysia, New Zealand, and Australia require labeling so consumers can exercise choice between foods that have genetically modified, conventional or organic origins. Trade The Cartagena Protocol sets the requirements for the international trade of GMOs between countries that are signatories to it. Any shipments containing genetically modified organisms that are intended to be used as feed, food or for processing must be identified, and a list of the transgenic events must be available. Substantial equivalence "Substantial equivalence" is a starting point for the safety assessment for GM foods that is widely used by national and international agencies—including the Canadian Food Inspection Agency, Japan's Ministry of Health and Welfare and the U.S. Food and Drug Administration, the United Nations' Food and Agriculture Organization, the World Health Organization and the OECD. A quote from FAO, one of the agencies that developed the concept, is useful for defining it: "Substantial equivalence embodies the concept that if a new food or food component is found to be substantially equivalent to an existing food or food component, it can be treated in the same manner with respect to safety (i.e., the food or food component can be concluded to be as safe as the conventional food or food component)". The concept of substantial equivalence also recognises the fact that existing foods often contain toxic components (usually called antinutrients) and are still able to be consumed safely—in practice there is some tolerable chemical risk taken with all foods, so a comparative method for assessing safety needs to be adopted. For instance, potatoes and tomatoes can contain toxic levels of, respectively, solanine and alpha-tomatine alkaloids. To decide if a modified product is substantially equivalent, the product is tested by the manufacturer for unexpected changes in a limited set of components such as toxins, nutrients, or allergens that are present in the unmodified food. The manufacturer's data is then assessed by a regulatory agency, such as the U.S. Food and Drug Administration. That data, along with data on the genetic modification itself and resulting proteins (or lack of protein), is submitted to regulators. 
If regulators determine that the submitted data show no significant difference between the modified and unmodified products, then the regulators will generally not require further food safety testing. However, if the product has no natural equivalent, or shows significant differences from the unmodified food, or for other reasons that regulators may have (for instance, if a gene produces a protein that had not been a food component before), the regulators may require that further safety testing be carried out. A 2003 review in Trends in Biotechnology identified seven main parts of a standard safety test: Study of the introduced DNA and the new proteins or metabolites that it produces; Analysis of the chemical composition of the relevant plant parts, measuring nutrients, anti-nutrients as well as any natural toxins or known allergens; Assess the risk of gene transfer from the food to microorganisms in the human gut; Study the possibility that any new components in the food might be allergens; Estimate how much of a normal diet the food will make up; Estimate any toxicological or nutritional problems revealed by this data in light of data on equivalent foods; Additional animal toxicity tests if there is the possibility that the food might pose a risk. There has been discussion about applying new biochemical concepts and methods in evaluating substantial equivalence, such as metabolic profiling and protein profiling. These concepts refer, respectively, to the complete measured biochemical spectrum (total fingerprint) of compounds (metabolites) or of proteins present in a food or crop. The goal would be to compare overall the biochemical profile of a new food to an existing food to see if the new food's profile falls within the range of natural variation already exhibited by the profile of existing foods or crops. However, these techniques are not considered sufficiently evaluated, and standards have not yet been developed, to apply them. Genetically modified animals Transgenic animals have genetically modified DNA. Animals are different from plants in a variety of ways—biology, life cycles, or potential environmental impacts. GM plants and animals were being developed around the same time, but due to the complexity of their biology and inefficiency with laboratory equipment use, their appearance in the market was delayed. There are six categories that genetically engineered (GE) animals are approved for: Use for biomedical research. Smaller mammalians can be used as models in scientific research to represent other mammals. Used to develop innovative kinds of fish for environmental monitoring. Used to produce proteins that humans lack. This can be for therapeutic use, for example, treatment of diseases in other mammals. Use for investigating and finding cures for diseases. Can be used for introducing disease resistance in GM breeds. Used to create manufacturing products for industry use. Used for improving food quality. References Genetic engineering Politics and technology Regulation of biotechnologies
Regulation of genetic engineering
[ "Chemistry", "Engineering", "Biology" ]
4,513
[ "Biotechnology law", "Regulation of genetically modified organisms", "Biological engineering", "Regulation of biotechnologies", "Genetic engineering", "Molecular biology" ]
30,498,055
https://en.wikipedia.org/wiki/Univec
UniVec is a database that can be used to remove vector contamination from DNA sequences. See also Plasmid VectorDB References External links The UniVec Database. NCBI. Retrieved 7 November 2013. Biological databases Mobile genetic elements Molecular biology techniques
Univec
[ "Chemistry", "Biology" ]
53
[ "Mobile genetic elements", "Bioinformatics", "Molecular genetics", "Molecular biology techniques", "Molecular biology", "Biological databases" ]
30,498,118
https://en.wikipedia.org/wiki/UTRdb
UTRdb is a database of 5' and 3' untranslated sequences of eukaryotic mRNAs. See also Five prime untranslated region Three prime untranslated region UTRome References External links data Biological databases RNA Gene expression
UTRdb
[ "Chemistry", "Biology" ]
55
[ "Gene expression", "Bioinformatics", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Biological databases" ]
30,498,206
https://en.wikipedia.org/wiki/UTRome
UTRome is a database of three-prime untranslated regions in C. elegans developed by Marco Mangone. See also untranslated region (UTR) UTRdb UTRome.org References External links http://www.UTRome.org Model organism databases RNA Gene expression
UTRome
[ "Chemistry", "Biology" ]
67
[ "Model organism databases", "Gene expression", "Model organisms", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
30,498,279
https://en.wikipedia.org/wiki/Cloudant
Cloudant is an IBM software product, which is primarily delivered as a cloud-based, non-relational, distributed database service of the same name. Cloudant is based on the Apache-backed CouchDB project and the open source BigCouch project. Cloudant's service provides an integrated data management, search, and analytics engine designed for web applications. Cloudant scales databases on the CouchDB framework and provides hosting, administrative tools, analytics and commercial support for CouchDB and BigCouch. Cloudant's distributed CouchDB service is used the same way as standalone CouchDB, with the added advantage of data being redundantly distributed over multiple machines. Cloudant was acquired by IBM from the start-up company of the same name. The acquisition was announced on February 24, 2014, and completed on March 4 of that year. By March 31, 2018, the Cloudant Shared Plan was to be retired and migrated to IBM Cloud. History Cloudant was founded by Alan Hoffman, Adam Kocoloski, and Michael Miller. The three met in the physics department at MIT where they worked with large data sets from experiments such as the Large Hadron Collider and the Relativistic Heavy Ion Collider. In early 2008 their ideas for fixing the "big data problem" caught the attention of Silicon Valley–based Y Combinator, which resulted in $20,000 seed funding. The company also received an early seed round of $1 million from Avalon Ventures in August 2010. Cloudant was designed for cloud computing, automatically distributing data across multiple servers in addition to scaling the database to accommodate web applications. In August 2010, Cloudant released BigCouch for free under an Apache License (2.0). Cloudant offered services including support, consulting services and training. Cloudant delivered its first product in the third quarter of 2010. Cloudant had over 2500 customers for its hosted service as of January 2011. In November 2010, Cloudant was recognized as one of ‘10 Cool Open-Source Startups’ by CRN. Cloudant was regularly recognized in the local Boston startup community, named as one of the ‘Top 5 Database Startups’ and ‘Top Ten Cloud Computing Startups’ in Boston’s popular technology column by Joe Kinsella, ‘High Tech in the Hub.’. On February 24, 2014, IBM announced an agreement to acquire Cloudant. The acquisition closed in March, after which Cloudant joined IBM's Information and Analytics Group. In September 2016, IBM Cloudant completed the donation of the BigCouch project to The Apache Software Foundation, resulting in the release of Apache CouchDB 2.0. CouchDB 2.0 incorporates many of the improvements made by Cloudant and BigCouch to the original CouchDB project, including clustering capabilities, a declarative query language and performance enhancements. Differences with CouchDB Cloudant's hosted database extends CouchDB in several ways: chained MapReduce views; a Java-language view server that allows the use of Java for CouchDB MapReduce analytics; and application programming interface (API) keys for programmatic access to the CouchDB database. See also Apache Software Foundation BigCouch Big data CouchDB Cloud computing Cloud infrastructure Database-centric architecture Data structure NoSQL Real time database References External links Cloudant CouchDB BigCouch Developer Preview: Cloudant Search for CouchDB IBM Information Management Cloud computing providers Cloud infrastructure Distributed file systems IBM acquisitions IBM cloud services 2014 mergers and acquisitions
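Because Cloudant exposes the standard CouchDB HTTP API, a database and a document can be created with ordinary HTTP requests. The short sketch below uses Python's requests library against a placeholder account URL with placeholder API credentials; it illustrates the CouchDB-style endpoints (PUT /{db}, POST /{db}, GET /{db}/{id}) and is not an official Cloudant example.

```python
# Hedged sketch: talk to a Cloudant/CouchDB instance over its HTTP API.
# BASE and AUTH are placeholders, not real credentials or endpoints.
import requests

BASE = "https://ACCOUNT.cloudant.com"    # placeholder account URL
AUTH = ("API_KEY", "API_SECRET")         # placeholder API key pair

# Create a database (CouchDB-style PUT /{db}); a 412 response means it exists.
requests.put(f"{BASE}/books", auth=AUTH)

# Store a JSON document (POST /{db}) and read it back by its generated id.
resp = requests.post(f"{BASE}/books",
                     json={"title": "CouchDB basics", "year": 2014},
                     auth=AUTH)
doc_id = resp.json()["id"]
print(requests.get(f"{BASE}/books/{doc_id}", auth=AUTH).json())
```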
Cloudant
[ "Technology" ]
700
[ "Cloud infrastructure", "IT infrastructure" ]
30,498,690
https://en.wikipedia.org/wiki/Systematic%20layout%20planning
Systematic layout planning (SLP), also referred to as site layout planning, is a tool used to arrange a workplace in a plant by locating areas with high-frequency and logical relationships close to each other. The process permits the quickest material flow in processing the product at the lowest cost and least amount of handling. It is used in construction projects to optimize the location of temporary facilities (such as engineers' caravans, material storage, generators, etc.) during construction to minimize transportation, minimize cost, minimize travel time, and enhance safety. Levels of plant layout design There are four levels of detail in plant layout design. Site layout: shows where the buildings should be located. Block layout: shows the sizes of departments in the buildings. Detailed layout: shows the arrangements of equipment and workstations in the departments. Workstation layout: shows the locations of every part of the workstation. References Industrial engineering
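A minimal way to make the underlying idea concrete is to score a candidate layout by summing trip frequency times distance over all department pairs, so that strongly related areas end up close together. The sketch below uses an invented four-department relationship chart and a unit grid of candidate positions; it illustrates the principle only and is not a full SLP procedure.

```python
# Hypothetical example: score candidate block layouts by total
# (trip frequency x rectilinear distance) and pick the cheapest one.
from itertools import permutations

# trips per day between departments (invented relationship/flow data)
flows = {("receiving", "storage"): 40,
         ("storage", "assembly"): 30,
         ("assembly", "shipping"): 35,
         ("receiving", "assembly"): 5}

slots = [(0, 0), (1, 0), (0, 1), (1, 1)]        # four unit-grid block positions
departments = ["receiving", "storage", "assembly", "shipping"]

def score(assignment):
    pos = dict(zip(assignment, slots))
    return sum(f * (abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]))
               for (a, b), f in flows.items())

best = min(permutations(departments), key=score)
print("best arrangement:", list(zip(best, slots)), "score:", score(best))
```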
Systematic layout planning
[ "Engineering" ]
191
[ "Industrial engineering" ]
3,833,379
https://en.wikipedia.org/wiki/GeSbTe
GeSbTe (germanium-antimony-tellurium or GST) is a phase-change material from the group of chalcogenide glasses used in rewritable optical discs and phase-change memory applications. Its recrystallization time is 20 nanoseconds, allowing bitrates of up to 35 Mbit/s to be written and direct overwrite capability up to 10^6 cycles. It is suitable for land-groove recording formats. It is often used in rewritable DVDs. New phase-change memories are possible using n-doped GeSbTe semiconductor. The melting point of the alloy is about 600 °C (900 K) and the crystallization temperature is between 100 and 150 °C. During writing, the material is erased, initialized into its crystalline state, with low-intensity laser irradiation. The material heats up to its crystallization temperature, but not its melting point, and crystallizes. The information is written in the crystalline phase, by heating spots of it with short (<10 ns), high-intensity laser pulses; the material melts locally and is quickly cooled, remaining in the amorphous phase. As the amorphous phase has lower reflectivity than the crystalline phase, data can be recorded as dark spots on the crystalline background. Recently, novel liquid organogermanium precursors, such as isobutylgermane (IBGe) and tetrakis(dimethylamino)germane (TDMAGe), were developed and used in conjunction with the metalorganics of antimony and tellurium, such as tris-dimethylamino antimony (TDMASb) and di-isopropyl telluride (DIPTe) respectively, to grow GeSbTe and other chalcogenide films of very high purity by metalorganic chemical vapor deposition (MOCVD). Dimethylamino germanium trichloride (DMAGeC) is also reported as a chloride-containing and superior dimethylaminogermanium precursor for Ge deposition by MOCVD. Material properties GeSbTe is a ternary compound of germanium, antimony, and tellurium, with composition GeTe-Sb2Te3. In the GeSbTe system, there is a pseudo-binary line between GeTe and Sb2Te3 upon which most of the alloys lie. Moving down this pseudo-line, it can be seen that as we go from Sb2Te3 to GeTe, the melting point and glass transition temperature of the materials increase, crystallization speed decreases and data retention increases. Hence, in order to get a high data transfer rate, a material with a fast crystallization speed such as Sb2Te3 is needed. This material is not stable because of its low activation energy. On the other hand, materials with good amorphous stability like GeTe have a slow crystallization speed because of their high activation energy. In its stable state, crystalline GeSbTe has two possible configurations: hexagonal and a metastable face-centered cubic (FCC) lattice. When it is rapidly crystallized, however, it was found to have a distorted rocksalt structure. GeSbTe has a glass transition temperature of around 100 °C. GeSbTe also has many vacancy defects in the lattice, of 20 to 25% depending on the specific GeSbTe compound. Hence, Te has an extra lone pair of electrons, which are important for many of the characteristics of GeSbTe. Crystal defects are also common in GeSbTe and due to these defects, an Urbach tail in the band structure is formed in these compounds. GeSbTe is generally p type and there are many electronic states in the band gap accounting for acceptor and donor-like traps. GeSbTe has two stable states, crystalline and amorphous. The phase change mechanism from high resistance amorphous phase to low resistance crystalline phase in nano-timescale and threshold switching are two of the most important characteristics of GeSbTe. 
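As an illustration of the pseudo-binary line just mentioned, the well-known GST compositions can be written as m(GeTe)·n(Sb2Te3). The short sketch below (standard stoichiometry, added here rather than taken from the article) prints the alloys obtained for a few m:n ratios, ending with Ge2Sb2Te5 at 2:1.

```python
# Compositions on the GeTe-Sb2Te3 pseudo-binary line: m GeTe + n Sb2Te3
# gives Ge_m Sb_(2n) Te_(m+3n).
def gst_formula(m, n):
    def part(symbol, count):
        return symbol if count == 1 else f"{symbol}{count}"
    return part("Ge", m) + part("Sb", 2 * n) + part("Te", m + 3 * n)

for m, n in [(1, 2), (1, 1), (2, 1)]:   # moving from the Sb2Te3 end toward GeTe
    print(f"{m} GeTe : {n} Sb2Te3  ->  {gst_formula(m, n)}")
# expected: GeSb4Te7, GeSb2Te4, Ge2Sb2Te5
```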
Applications in phase-change memory The unique characteristic that makes phase-change memory useful as a memory is the ability to effect a reversible phase change when heated or cooled, switching between stable amorphous and crystalline states. These alloys have high resistance in the amorphous state ‘0’ and are semimetals in the crystalline state ‘1’. In the amorphous state, the atoms have short-range atomic order and low free electron density. The alloy also has high resistivity and activation energy. This distinguishes it from the crystalline state having low resistivity and activation energy, long-range atomic order and high free electron density. When used in phase-change memory, a short, high-amplitude electric pulse that takes the material past its melting point and then lets it quench rapidly, switching it from the crystalline to the amorphous phase, is widely termed the RESET current; a relatively longer, low-amplitude pulse that heats the material only to its crystallization point and gives it time to crystallize, allowing the change from amorphous to crystalline, is known as the SET current. The early devices were slow, power consuming and broke down easily due to the large currents. Therefore, they did not succeed as SRAM and flash memory took over. In the 1980s though, the discovery of germanium-antimony-tellurium (GeSbTe) meant that phase-change memory now needed less time and power to function. This resulted in the success of the rewriteable optical disk and created renewed interest in the phase-change memory. The advances in lithography also meant that the previously excessive programming current has now become much smaller as the volume of GeSbTe that changes phase is reduced. Phase-change memory has many near-ideal memory qualities such as non-volatility, fast switching speed, high endurance of more than 10^13 read–write cycles, non-destructive read, direct overwriting and long data retention time of more than 10 years. The one advantage that distinguishes it from other next-generation non-volatile memories like magnetic random access memory (MRAM) is the unique scaling advantage of having better performance with smaller sizes. How far phase-change memory can be scaled is hence limited by lithography, at least down to 45 nm. Thus, it offers the biggest potential of achieving ultra-high memory density cells that can be commercialized. Though phase-change memory offers much promise, there are still certain technical problems that need to be solved before it can reach ultra-high density and be commercialized. The most important challenge for phase-change memory is to reduce the programming current to a level that is compatible with the minimum MOS transistor drive current for high-density integration. Currently, the programming current in phase-change memory is substantially high. This high current limits the memory density of the phase-change memory cells as the current supplied by the transistor is not sufficient due to their high current requirement. Hence, the unique scaling advantage of phase-change memory cannot be fully utilized. A typical phase-change memory device is a layered structure comprising the top electrode, the GeSbTe (GST) layer, the bottom electrode contact (BEC) and the dielectric layers. The programmable volume is the GeSbTe volume that is in contact with the bottom electrode. This is the part that can be scaled down with lithography. The thermal time constant of the device is also important. 
The thermal time constant must be fast enough for GeSbTe to cool rapidly into the amorphous state during RESET but slow enough to allow crystallization to occur during SET state. The thermal time constant depends on the design and material the cell is built. To read, a low current pulse is applied to the device. A small current ensures the material does not heat up. Information stored is read out by measuring the resistance of the device. Threshold switching Threshold switching occurs when GeSbTe goes from a high resistive state to a conductive state at the threshold field of about 56 V/um. This can be seen from the current-voltage (IV) plot, where current is very low in the amorphous state at low voltage until threshold voltage is reached. Current increases rapidly after the voltage snapback. The material is now in the amorphous "ON" state, where the material is still amorphous, but in a pseudo-crystalline electric state. In crystalline state, the IV characteristics is ohmic. There had been debate on whether threshold switching was an electrical or thermal process. There were suggestions that the exponential increase in current at threshold voltage must have been due to generation of carriers that vary exponentially with voltage such as impact ionization or tunneling. Nano-timescale phase change Recently, much research has focused on the material analysis of the phase-change material in an attempt to explain the high speed phase change of GeSbTe. Using EXAFS, it was found that the most matching model for crystalline GeSbTe is a distorted rocksalt lattice and for amorphous a tetrahedral structure. The small change in configuration from distorted rocksalt to tetrahedral suggests that nano-timescale phase change is possible as the major covalent bonds are intact and only the weaker bonds are broken. Using the most possible crystalline and amorphous local structures for GeSbTe, the fact that density of crystalline GeSbTe is less than 10% larger than amorphous GeSbTe, and the fact that free energies of both amorphous and crystalline GeSbTe have to be around the same magnitude, it was hypothesized from density functional theory simulations that the most stable amorphous state was the spinel structure, where Ge occupies tetrahedral positions and Sb and Te occupy octahedral positions, as the ground state energy was the lowest of all the possible configurations. By means of Car-Parrinello molecular dynamics simulations this conjecture have been theoretically confirmed. Nucleation-domination versus growth-domination Another similar material is AgInSbTe. It offers higher linear density, but has lower overwrite cycles by 1-2 orders of magnitude. It is used in groove-only recording formats, often in rewritable CDs. AgInSbTe is known as a growth-dominated material while GeSbTe is known as a nucleation-dominated material. In GeSbTe, the nucleation process of crystallization is long with many small crystalline nuclei being formed before a short growth process where the numerous small crystals are joined. In AgInSbTe, there are only a few nuclei formed in the nucleation stage and these nuclei grow bigger in the longer growth stage such that they eventually form one crystal. References Alloys Chalcogenides DVD Germanium compounds Non-oxide glasses Optical materials Antimony compounds Tellurium compounds
GeSbTe
[ "Physics", "Chemistry" ]
2,196
[ "Materials", "Optical materials", "Chemical mixtures", "Alloys", "Matter" ]
3,834,517
https://en.wikipedia.org/wiki/AgInSbTe
AgInSbTe, or silver-indium-antimony-tellurium, is a phase change material from the group of chalcogenide glasses, used in rewritable optical discs (such as rewritable CDs) and phase-change memory applications. It is a quaternary compound of silver, indium, antimony, and tellurium. During writing, the material is first erased, initialized into its crystalline state, with long, lower-intensity laser irradiation. The material heats up to its crystallization temperature, but not up to its melting point, and crystallizes in a metastable face-centered cubic structure. Then the information is written on the crystalline phase, by heating spots of it with short (<10 ns), high-intensity laser pulses; the material locally melts and is quickly cooled, remaining in the amorphous phase. As the amorphous phase has lower reflectivity than the crystalline phase, the bitstream can be recorded as "dark" amorphous spots on the crystalline background. At low linear velocities, clusters of crystalline material can exist in the amorphous spots. Another similar material is GeSbTe, offering a lower linear density, but with higher overwrite cycles by 1-2 orders of magnitude. It is used in pit-and-groove recording formats, often in rewritable DVDs. References Optical materials Non-oxide glasses Compact disc Chalcogenides Silver compounds Indium compounds Antimony compounds Tellurium compounds
AgInSbTe
[ "Physics", "Chemistry" ]
315
[ "Alloy stubs", "Materials", "Optical materials", "Alloys", "Matter" ]
3,838,003
https://en.wikipedia.org/wiki/Seismic%20anisotropy
Seismic anisotropy is the directional dependence of the velocity of seismic waves in a medium (rock) within the Earth. Description A material is said to be anisotropic if the value of one or more of its properties varies with direction. Anisotropy differs from the property called heterogeneity in that anisotropy is the variation in values with direction at a point while heterogeneity is the variation in values between two or more points. Seismic anisotropy can be defined as the dependence of seismic velocity on direction or upon angle. General anisotropy is described by a 4th order elasticity tensor with 21 independent elements. However, in practice observational studies are unable to distinguish all 21 elements, and anisotropy is usually simplified. In the simplest form, there are two main types of anisotropy, both of them are called transverse isotropy (it is called transverse isotropy because there is isotropy in either the horizontal or vertical plane) or polar anisotropy. The difference between them is in their axis of symmetry, which is an axis of rotational invariance such that if we rotate the formation about the axis, the material is still indistinguishable from what it was before. The symmetry axis is usually associated with regional stress or gravity. Vertical transverse isotropy (VTI), transverse isotropy with a vertical axis of symmetry, is associated with layering and shale and is found where gravity is the dominant factor. Horizontal transverse isotropy (HTI), transverse isotropy with a horizontal axis of symmetry, is associated with cracks and fractures and is found where regional stress is the dominant factor. The transverse anisotropic matrix has the same form as the isotropic matrix, except that it has five non-zero values distributed among 12 non-zero elements. Transverse isotropy is sometimes called transverse anisotropy or anisotropy with hexagonal symmetry. In many cases the axis of symmetry will be neither horizontal nor vertical, in which case it is often called "tilted". History of the recognition of anisotropy Anisotropy was first recognised in the 19th century following the theory of Elastic wave propagation. George Green (1838) and Lord Kelvin (1856) took anisotropy into account in their articles on wave propagation. Anisotropy entered seismology in the late 19th century and was introduced by Maurycy Rudzki. From 1898 till his death in 1916, Rudzki attempted to advance the theory of anisotropy, he attempted to determine the wavefront of a transversely isotropic medium (TI) in 1898 and in 1912 and 1913 he wrote on surface waves in transversely isotropic half space and on Fermat's principle in anisotropic media respectively. With all these, the advancement of anisotropy was still slow and in the first 30 years (1920–1950) of exploration seismology only a few papers were written on the subject. More work was done by several scientists such as Helbig (1956) who observed while doing seismic work on Devonian schists that velocities along the foliation were about 20% higher than those across the foliation. However the appreciation of anisotropy increased with the proposition of a new model for the generation of anisotropy in an originally isotropic background and a new exploration concept by Crampin (1987). 
One of the main points by Crampin was that the polarization of three component shear waves carries unique information about the internal structure of the rock through which they pass, and that shear wave splitting may contain information about the distribution of crack orientations. With these new developments and the acquisition of better and new types of data such as three component 3D seismic data, which clearly show the effects of shear wave splitting, and wide azimuth 3D data which show the effects of azimuthal anisotropy, and the availability of more powerful computers, anisotropy began to have great impact in exploration seismology in the past three decades. Concept of seismic anisotropy Since the understanding of seismic anisotropy is closely tied to the shear wave splitting, this section begins with a discussion of shear wave splitting. Shear waves have been observed to split into two or more fixed polarizations which can propagate in the particular ray direction when entering an anisotropic medium. These split phases propagate with different polarizations and velocities. Crampin (1984) amongst others gives evidence that many rocks are anisotropic for shear wave propagation. In addition, shear wave splitting is almost routinely observed in three-component VSPs. Such shear wave splitting can be directly analyzed only on three component geophones recording either in the subsurface, or within the effective shear window at the free surface if there are no near surface low-velocity layers. Observation of these shear waves show that measuring the orientation and polarization of the first arrival and the delay between these split shear waves reveal the orientation of cracks and the crack density . This is particularly important in reservoir characterization. In a linearly elastic material, which can be described by Hooke's law as one in which each component of stress is dependent on every component of strain, the following relationship exists: where is the stress, is the elastic moduli or stiffness constant, and is the strain. The elastic modulus matrix for an anisotropic case is The above is the elastic modulus for a vertical transverse isotropic medium (VTI), which is the usual case. The elastic modulus for a horizontal transverse isotropic medium (HTI) is: For an anisotropic medium, the directional dependence of the three phase velocities can be written by applying the elastic moduli in the wave equation is; The direction dependent wave speeds for elastic waves through the material can be found by using the Christoffel equation and are given by where is the angle between the axis of symmetry and the wave propagation direction, is mass density and the are elements of the elastic stiffness matrix. The Thomsen parameters are used to simplify these expressions and make them easier to understand. Seismic anisotropy has been observed to be weak, and Thomsen (1986) rewrote the velocities above in terms of their deviation from the vertical velocities as follows; where are the P and S wave velocities in the direction of the axis of symmetry () (in geophysics, this is usually, but not always, the vertical direction). Note that may be further linearized, but this does not lead to further simplification. The approximate expressions for the wave velocities are simple enough to be physically interpreted, and sufficiently accurate for most geophysical applications. These expressions are also useful in some contexts where the anisotropy is not weak. 
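As a numerical illustration of the weak-anisotropy expressions above, the short Python sketch below computes Thomsen's parameters from a set of VTI stiffness constants and evaluates the approximate quasi-P phase velocity as a function of phase angle. The stiffness and density values are illustrative, shale-like numbers chosen only for demonstration; the parameter definitions and the velocity approximation follow Thomsen (1986).

import numpy as np

# Illustrative VTI stiffnesses (GPa) and density (kg/m^3); assumed values only.
c11, c33, c44, c66, c13 = 34.3, 22.7, 5.4, 10.5, 10.7
rho = 2420.0

GPa = 1e9
alpha = np.sqrt(c33 * GPa / rho)   # vertical P-wave velocity, m/s
beta = np.sqrt(c44 * GPa / rho)    # vertical S-wave velocity, m/s

# Thomsen (1986) anisotropy parameters
epsilon = (c11 - c33) / (2.0 * c33)
gamma = (c66 - c44) / (2.0 * c44)
delta = ((c13 + c44)**2 - (c33 - c44)**2) / (2.0 * c33 * (c33 - c44))

# Weak-anisotropy approximation for the quasi-P phase velocity
theta = np.radians(np.arange(0, 91, 15))
v_p = alpha * (1.0 + delta * np.sin(theta)**2 * np.cos(theta)**2
               + epsilon * np.sin(theta)**4)

print(f"alpha = {alpha:.0f} m/s, beta = {beta:.0f} m/s")
print(f"epsilon = {epsilon:.3f}, gamma = {gamma:.3f}, delta = {delta:.3f}")
for ang, v in zip(np.degrees(theta), v_p):
    print(f"theta = {ang:3.0f} deg  ->  vP ~ {v:.0f} m/s")

With these assumed stiffnesses epsilon comes out positive, so the horizontal P velocity exceeds the vertical one, which is the behaviour typically quoted for shales.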
The Thomsen parameters are anisotropic and are three non-dimensional combinations which reduce to zero in isotropic cases, and are defined as Origin of anisotropy Anisotropy has been reported to occur in the Earth's three main layers: the crust, mantle and the core. The origin of seismic anisotropy is non-unique, a range of phenomena may cause Earth materials to display seismic anisotropy. The anisotropy may be strongly dependent on wavelength if it is due to the average properties of aligned or partially aligned heterogeneity. A solid has intrinsic anisotropy when it is homogeneously and sinuously anisotropic down to the smallest particle size, which may be due to crystalline anisotropy. Relevant crystallographic anisotropy can be found in the upper mantle. When an otherwise isotropic rock contains a distribution of dry or liquid-filled cracks which have preferred orientation it is named crack induced anisotropy. The presence of aligned cracks, open or filled with some different material, is an important mechanism at shallow depth, in the crust. It is well known that the small-scale, or microstructural, factors include (e.g. Kern & Wenk 1985; Mainprice et al. 2003): (1) crystal lattice preferred orientation (LPO) of constituent mineral phases; (2) variations in spatial distribution of grains and minerals; (3) grain morphology and (4) aligned fractures, cracks and pores, and the nature of their infilling material (e.g. clays, hydrocarbons, water, etc.). Because of the overall microstructural control on seismic anisotropy, it follows that anisotropy can be diagnostic for specific rock types. Here, we consider whether seismic anisotropy can be used as an indicator of specific sedimentary lithologies within the Earth's crust. In sedimentary rocks, anisotropy develops during and after deposition. For anisotropy to develop, there needs to be some degree of homogeneity or uniformity from point to point in the deposited clastics. During deposition, anisotropy is caused by the periodic layering associated with changes in sediment type which produces materials of different grain size, and also by the directionality of the transporting medium which tends to order the grains under gravity by grain sorting. Fracturing and some diagenetic processes such as compaction and dewatering of clays, and alteration etc. are post depositional processes that can cause anisotropy. The importance of anisotropy in hydrocarbon exploration and production In the past two decades, the seismic anisotropy has dramatically been gaining attention from academic and industry, due to advances in anisotropy parameter estimation, the transition from post stack imaging to pre stack depth migration, and the wider offset and azimuthal coverage of 3D surveys. Currently, many seismic processing and inversion methods utilize anisotropic models, thus providing a significant enhancement over the seismic imaging quality and resolution. The integration of anisotropy velocity model with seismic imaging has reduced uncertainty on internal and bounding-fault positions, thus greatly reduce the risk of investment decision based heavily on seismic interpretation. In addition, the establishment of correlation between anisotropy parameters, fracture orientation, and density, lead to practical reservoir characterization techniques. 
The acquisition of such information, fracture spatial distribution and density, the drainage area of each producing well can be dramatically increased if taking the fractures into account during the drilling decision process. The increased drainage area per well will result in fewer wells, greatly reducing the drilling cost of exploration and production (E&P) projects. The application of the anisotropy in petroleum exploration and production Among several applications of seismic anisotropy, the following are the most important: anisotropic parameter estimation, prestack depth anisotropy migration, and fracture characterization based on anisotropy velocity models. Anisotropy parameter estimation The anisotropy parameter is most fundamental to all other anisotropy application in E&P area. In the early days of seismic petroleum exploration, the geophysicists were already aware of the anisotropy-induced distortion in P-wave imaging (the major of petroleum exploration seismic surveys). Although the anisotropy-induced distortion is less significant since the poststack processing of narrow-azimuth data is not sensitive to velocity. The advancement of seismic anisotropy is largely contributed by the Thomsen's work on anisotropy notation and also by the discovery of the P-wave time-process parameter . These fundamental works enable to parametrize the transverse isotropic (TI) models with only three parameters, while there are five full independent stiff tensor element in transverse isotropic (VTI or HTI) models. This simplification made the measurement of seismic anisotropy a plausible approach. Most anisotropy parameter estimation work is based on shale and silts, which may be due to the fact that shale and silts are the most abundant sedimentary rocks in the Earth's crust. Also in the context of petroleum geology, organic shale is the source rock as well as seal rocks that trap oil and gas. In seismic exploration, shales represent the majority of the wave propagation medium overlying the petroleum reservoir. In conclusion, seismic properties of shale are important for both exploration and reservoir management. Seismic velocity anisotropy in shale can be estimated from several methods, including deviated-well sonic logs, walkway VSP, and core measurement. These methods have their own advantages and disadvantages: the walkway VSP method suffers from scaling issues, and core measure is impractical for shale, since shale is hard to be cored during drilling. Walkway VSP The Walkway VSP array several seismic surface sources at different offset from the well. Meanwhile, a vertical receiver array with constant interval between receivers is mounted in a vertical well. The sound arrival times between multiple surface sources and receivers at multiple depths are recorded during measurement. These arrival times are used to derive the anisotropy parameter based on the following equations where is the arrival time from source with offset, is the arrival time of zero offset, is NMO velocity, is Thompson anisotropy parameter. Core measurement Another technique used to estimate the anisotropy parameter is directly measure them from the core which is extracted through a special hollow drill bit during drill process. Since coring a sample will generate large extra cost, only limited number of core samples can be obtained for each well. 
Thus the anisotropy parameter obtained through core measurement technique only represent the anisotropy property of rock near the borehole at just several specific depth, rending this technique often provides little help on the field seismic survey application. The measurements on each shale plug require at least one week. From the context of this article, wave propagation in a vertically transverse medium can be described with five elastic constants, and ratios among these parameters define the rock anisotropy. This anisotropy parameter can be obtained in the laboratory by measuring the velocity travel speed with transducer ultrasonic systems at variable saturation and pressure conditions. Usually, three directions of wave propagation on core samples are the minimum requirement to estimate the five elastic coefficients of the stiffness tensor. Each direction in core plug measurement yields three velocities (one P and two S). The variation of wave propagation direction can be achieved by either cutting three samples at 0°, 45° and 90° from the cores or by using one core plug with transducers attached at these three angles. Since most shales are very friable and fissured, it is often difficult to cut shale core plug. Its edges break off easily. Thus the cutting sample method can only be used for hard, competent rocks. Another way to get the wave propagation velocity at three directions is to arrange the ultrasonic transducer onto several specific location of the core sampler. This method avoids the difficulties encounter during the cutting of shale core sample. It also reduces the time of measurement by two thirds since three pairs of ultrasonic transducer work at the same time. Once the velocities at three directions are measured by one of the above two methods, the five independent elastic constants are given by the following equations: The P-wave anisotropy of a VTI medium can be described by using Thomsen's parameters . The quantifies the velocity difference for wave propagation along and perpendicular to the symmetry axis, while controls the P-wave propagation for angles near the symmetry axis. Deviated well sonic log The last technique can be used to measure the seismic anisotropy is related to the sonic logging information of a deviated well. In a deviated well, the wave propagation velocity is higher than the wave propagation velocity in a vertical well at the same depth. This difference in velocity between deviated well and vertical well reflects the anisotropy parameters of the rocks near the borehole. The detail of this technique will be shown on an example of this report. Anisotropic prestack depth migration In the situation of complex geology, e.g. faulting, folding, fracturing, salt bodies, and unconformities, pre-stack migration (PreSM) is used due to better resolution under such complex geology. In PreSM, all traces are migrated before being moved to zero-offset. As a result, much more information is used, which results in a much better image, along with the fact that PreSM honours velocity changes more accurately than post-stack migration. The PreSM is extremely sensitive to the accuracy of the velocity field. Thus the inadequacy of isotropic velocity models is not suitable for the pre stack depth migration. P-wave anisotropic prestack depth migration (APSDM) can produce a seismic image that is very accurate in depth and space. As a result, unlike isotropic PSDM, it is consistent with well data and provides an ideal input for reservoir characterization studies. 
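Returning briefly to the core-plug workflow described above: given P- and S-wave velocities measured at 0°, 45° and 90° to the symmetry axis, the five VTI stiffnesses, and from them the anisotropy parameters, can be recovered in a few lines. The velocities and density below are invented for illustration; the 45° relation used for C13 comes from the exact quasi-P phase-velocity expression for a VTI medium.

import math

# Assumed core-plug measurements (illustrative values only)
rho = 2500.0      # kg/m^3
vp0 = 3300.0      # m/s, P velocity along the symmetry axis (0 deg)
vp90 = 3700.0     # m/s, P velocity perpendicular to the axis (90 deg)
vp45 = 3450.0     # m/s, P velocity at 45 deg
vs0 = 1800.0      # m/s, S velocity along the axis
vsh90 = 2000.0    # m/s, SH velocity perpendicular to the axis

# Four stiffnesses follow directly from the 0- and 90-degree measurements
c33 = rho * vp0**2
c11 = rho * vp90**2
c44 = rho * vs0**2
c66 = rho * vsh90**2

# C13 from the exact quasi-P phase velocity at 45 degrees:
#   2*rho*vp45^2 = (c11 + c33 + 2*c44)/2 + sqrt(((c11 - c33)/2)^2 + (c13 + c44)^2)
m = 2.0 * rho * vp45**2 - 0.5 * (c11 + c33 + 2.0 * c44)
c13 = math.sqrt(m**2 - 0.25 * (c11 - c33)**2) - c44

epsilon = (c11 - c33) / (2.0 * c33)
gamma = (c66 - c44) / (2.0 * c44)
delta = ((c13 + c44)**2 - (c33 - c44)**2) / (2.0 * c33 * (c33 - c44))

for name, val in (("C11", c11), ("C33", c33), ("C44", c44), ("C66", c66), ("C13", c13)):
    print(f"{name} = {val / 1e9:6.2f} GPa")
print(f"epsilon = {epsilon:.3f}, gamma = {gamma:.3f}, delta = {delta:.3f}")

Anisotropy parameters estimated in this way feed directly into the anisotropic velocity models used by the depth-migration workflow discussed above.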
However, this accuracy can only be achieved if correct anisotropy parameters are used. These parameters cannot be estimated from seismic data alone. They can only be determined with confidence through analysis of a variety of geoscientific material – borehole data and geological history. During recent years, the industry has started to see the practical use of anisotropy in seismic imaging. We show case studies that illustrate this integration of the geosciences. We show that much better accuracy is being achieved. The logical conclusion is that, this integrated approach should extend the use of anisotropic depth imaging from complex geology only, to routine application on all reservoirs. Fracture characterization After considering applications of anisotropy that improved seismic imaging, two approaches for exploiting anisotropy for the analysis of fractures in the formation are worthy of discussing. Ones uses azimuthal variations in the amplitude versus offset (AVO) signature when the wave is reflected from the top or base of an anisotropic material, and a second exploits the polarizing effect that the fractures have on a transmitted shear-wave. In both cases, the individual fractures are below the resolving power of the seismic signal and it is the cumulative effect of the fracturing that is recorded. Based on the idea behind them, both approaches can be divided into two steps. The first step is to get the anisotropy parameters from seismic signals, and the second steps is to retreat the information of fractures from anisotropy parameters based on the fracture induce anisotropy model. Fractures-azimuthal variations Aligned subseismic-scale fracturing can produce seismic anisotropy (i.e., seismic velocity varies with direction) and leads to measurable directional differences in traveltimes and reflectivity. If the fractures are vertically aligned, they will produce azimuthal anisotropy (the simplest case being horizontal transverse isotropy, or HTI) such that reflectivity of an interface depends on azimuth as well as offset. If either of the media bounding the interface is azimuthally anisotropic, the AVO will have an azimuthal dependence. The P-P wave reflection coefficient have the following relation with the azimuthal if anisotropy exist in the layers: where is the azimuth from data acquisition grid, the terms are coefficients describing anisotropy parameter. Fractures: shear-wave splitting The behavior of shear waves as they pass through anisotropic media has been recognized for many years, with laboratory and field observations demonstrating how the shear wave splits into two polarized components with their planes aligned parallel and perpendicular to the anisotropy. For a fractured medium, the faster shear wave is generally aligned with the strike direction and the time delay between the split shear waves related to the fracture density and path length traveled. For layered medium, the shear wave polarized parallel to the layering arrives first. Examples of the application of anisotropy Example of anisotropy in petroleum E&P Two examples will be discussed in there to show the anisotropy application in Petroleum E&P area. The first related to anisotropy parameter estimation via deviated well sonic logging tool. And the second example reflects the image quality improvement by PreStack Depth Migration technology. Example of deviated well sonic logging In this case, the sonic velocity in a deviated well is obtained by dipole sonic logging tool . 
The formation is mostly composed of shale. In order to use the TI model, several assumptions are made: Rock should be in normally pressured regime. Rock should have similar burial history. Satisfying the above conditions, the following equation hold for a TI model: Where is the deviated angle of the well, and , are anisotropy parameter. The following plot shows typical velocity distribution vs density in a deviated well. The color of each data point represents the frequency of this data point. The red color means a high frequency while the blue color represents a low frequency. The black line shows a typical velocity trend without the effect of anisotropy. Since the existence of anisotropy effect, the sound velocity is higher than the trend line. From the well logging data, the velocity vs plot can be drawn. On the basis of this plot, a no liner regression will give us an estimate of and . The following plot show the non-linear regression and its result. Put the estimated and into the following equation, the correct can be obtained. By doing the above correction calculation, the corrected is plot vs density in the following plot. As be seen in the plot, most of the data point falls on the trend line. It validate the correctness of the estimate of anisotropy parameter. Example of prestack depth migration Imaging In this case, the operator conducted several seismic surveys on a gas field in the north sea over the period of 1993-1998 . The early survey does not take anisotropy into account, while the later survey employs the PreStack Depth Migration imaging. This PSDM was done on a commercial seismic package developed by Total. The following two plots clearly reveal the resolution improvement of the PSDM method. The top plot is a convention 3D survey without anisotropy effect. The bottom plot used PSDM method. As can be seen in the bottom plot, more small structure features are revealed due to the reduce of error and improved resolution. Limitations of seismic anisotropy Seismic anisotropy relies on shear waves, shear waves carry rich information which can sometimes impede its utilization. Shear waves survey for anisotropy requires multi component (usually 3 component) geophones which are oriented at angles, these are more expensive than the widely used vertical oriented single component geophones. However, while expensive 3 component seismometers are much more powerful in their ability to collect valuable information about the Earth that vertical component seismometers simply cannot. While seismic waves do attenuate, large earthquakes (moment magnitude > 5) have the ability to produce observable shear waves. The second law of thermodynamics ensures a higher attenuation of shear wave reflected energy, this tends to impede the utilization of shear wave information for smaller earthquakes. Crustal anisotropy In the Earth's crust, anisotropy may be caused by preferentially aligned joints or microcracks, by layered bedding in sedimentary formations, or by highly foliated metamorphic rocks. Crustal anisotropy resulting from aligned cracks can be used to determine the state of stress in the crust, since in many cases, cracks are preferentially aligned with their flat faces oriented in the direction of minimum compressive stress. In active tectonic areas, such as near faults and volcanoes, anisotropy can be used to look for changes in preferred orientation of cracks that may indicate a rotation of the stress field. Both seismic P-waves and S-waves may exhibit anisotropy. 
For both, the anisotropy may appear as a (continuous) dependence of velocity upon the direction of propagation. For S-waves, it may also appear as a (discrete) dependence of velocity upon the direction of polarization. For a given direction of propagation in any homogeneous medium, only two polarization directions are allowed, with other polarizations decomposing trigonometrically into these two. Hence, shear waves naturally "split" into separate arrivals with these two polarizations; in optics this is called birefringence. Crustal anisotropy is very important in the production of oil reservoirs, as the seismically fast directions can indicate preferred directions of fluid flow. In crustal geophysics, the anisotropy is usually weak; this enables a simplification of the expressions for seismic velocities and reflectivities, as functions of propagation (and polarization) direction. In the simplest geophysically plausible case, that of polar anisotropy, the analysis is most conveniently done in terms of Thomsen Parameters. Mantle anisotropy In the mantle, anisotropy is normally associated with crystals (mainly olivine) aligned with the mantle flow direction called lattice preferred orientation (LPO). Due to their elongate crystalline structure, olivine crystals tend to align with the flow due to mantle convection or small scale convection. Anisotropy has long been used to argue whether plate tectonics is driven from below by mantle convection or from above by the plates, i.e. slab pull and ridge push. The favored methods for detecting seismic anisotropy are shear wave splitting, seismic tomography of surface waves and body waves, and converted-wave scattering in the context of a receiver function. In shear-wave splitting, the S wave splits into two orthogonal polarizations, corresponding to the fastest and slowest wavespeeds in that medium for that propagation direction. The period range for mantle splitting studies is typically 5-25-sec. In seismic tomography, one must have a spatial distribution of seismic sources (earthquakes or man-made blasts) to generate waves at multiple wave-propagation azimuths through a 3-D medium. For receiver functions, the P-to-S converted wave displays harmonic variation with earthquake back azimuth when the material at depth is anisotopic. This method allows determination of layers of anisotropic material at depth beneath a station. In the transition zone, wadsleyite and/or ringwoodite could be aligned in LPO. Below the transition zone, the three main minerals, periclase, silicate perovskite (bridgmanite), and post-perovskite are all anisotropic and could be generating anisotropy observed in the D" region (a couple hundred kilometer thick layer about the core-mantle boundary). References Sources Helbig, K., Thomsen, L., 75-plus years of anisotropy in exploration and reservoir seismics: A historical review of concepts and methods: Geophysics. VOL. 70, No. 6 (November–December 2005): p. 9ND–23ND http://www.geo.arizona.edu/geo5xx/geo596f/Readings/Helbig%20and%20Thomsen,%202005,%20historical%20review%20anisotropy%201.pdf Crampin, S., 1984, Evaluation of anisotropy by shear wave splitting: Applied Seismic Anisotropy: Theory, Background, and Field Studies, Geophysics Reprint series, 20, 23–33. Ikelle, L.T., Amundsen, L., Introduction to Petroleum Seismology, Investigations in Geophysics series No.12. 
Thomsen, L., 1986, Weak elastic anisotropy: Applied Seismic Anisotropy: Theory, Background, and Field Studies, Geophysics Reprint series, 20, 34–46 Anderson et al., Oilfield Anisotropy: Its Origins and Electrical Characteristics: Oil field review, 48–56. https://www.slb.com/~/media/Files/resources/oilfield_review/ors94/1094/p48_56.pdf Thomsen, L., : Geophysics, 51, 1954–1966, Weak elastic anisotropy. Tsvankin, I., : Geophysics, 62, 1292-1309.1997, Anisotropic parameters and P-wave velocity for orthorhombic media. Tsvankin, I., Seismic signatures and analysis of reflection data in anisotropic media: Elsevier Science Publ, 2001,. Stephen A. H. and J-Michael K. GEOPHYSICS, VOL. 68, NO. 4, P1150–1160. Fracture characterization at Valhall: Application of P-wave amplitude variation with offset and azimuth (AVOA) analysis to a 3D ocean-bottom data set Tushar P. and Robert V. SPE 146668. Improved Reservoir Characterization through Estimation of Velocity Anisotropy in Shales. Jeffrey S., Rob R., Jean A., et al. www.cgg.com/technicalDocuments/cggv_0000000409.pdf Reducing Structural Uncertainties Through Anisotropic Prestack Depth Imaging: Examples from the Elgin/Franklin/Glenelg HP/HT Fields Area, Central North Sea Helbig, K., 1984, Shear waves – what they are and how they are and how they can be used: Applied Seismic Anisotropy: Theory, Background, and Field Studies, Geophysics Reprint series, 20, 5–22. External links http://www1.gly.bris.ac.uk/~wookey/MMA/index.htm https://web.archive.org/web/20050909171919/http://geophysics.asu.edu/anisotropy/ http://www.geo.arizona.edu/geo5xx/geo596f/Readings/Helbig%20and%20Thomsen,%202005,%20historical%20review%20anisotropy%201.pdf https://www.slb.com/~/media/Files/resources/oilfield_review/ors94/1094/p48_56.pdf Elasticity (physics) Petroleum geology Geophysics
Seismic anisotropy
[ "Physics", "Chemistry", "Materials_science" ]
6,318
[ "Physical phenomena", "Applied and interdisciplinary physics", "Elasticity (physics)", "Deformation (mechanics)", "Petroleum", "Geophysics", "Petroleum geology", "Physical properties" ]
22,897,257
https://en.wikipedia.org/wiki/Facility%20for%20Rare%20Isotope%20Beams
The Facility for Rare Isotope Beams (FRIB) is a scientific user facility for nuclear science, funded by the U.S. Department of Energy Office of Science (DOE-SC), Michigan State University (MSU), and the State of Michigan. Michigan State University contributed an additional $212 million in various ways, including the land. MSU established and operates FRIB as a user facility for the Office of Nuclear Physics in the U.S. Department of Energy Office of Science. At FRIB, scientists research the properties of rare isotopes to advance knowledge in the areas of nuclear physics, nuclear astrophysics, fundamental interactions of nuclei, and real-world applications of rare isotopes. Construction of the FRIB conventional facilities began in spring 2014 and was completed in 2017. Technical construction started in the fall of 2014 and was completed in January 2022. The total project cost was $730M with project completion in June 2022. FRIB will provide researchers with the technical capabilities to study the properties of rare isotopes (that is, short-lived atomic nuclei not normally found on Earth). Real-world applications of the research include materials science, nuclear medicine, and the fundamental understanding of nuclear material important to nuclear weapons stockpile stewardship. More than 20 working groups specializing in experimental equipment and scientific topics have been organized through the FRIB Users Organization. The FRIB will be capable of expanding the known Chart of the Nuclides from some approximately 3000 identified isotopes to over 6000 potentially identifiable isotopes. It will accelerate beams of known isotopes through a matrix which will disrupt the nuclei, forming a variety of unusual isotopes of short half-life. These will be 'filtered' by directing away the undesired charge/mass isotopes by a magnetic field, leaving a small beam of the desired novel isotope for study. Such beam can also target other known isotopes, fusing with the target, to create still further unknown isotopes, for further study. This will allow expansion of the Chart of the Nuclides towards its outer sides, the so-called Nuclear drip line. It will also allow expansion of the Chart towards heavier isotopes, towards the Island of stability and beyond. The establishment of a Facility for Rare Isotope Beams (FRIB) is the first recommendation in the 2012 National Academies Decadal Study of Nuclear Physics: Nuclear Physics: Exploring the Heart of the Matter. The priority for completion is listed in the 2015 Long Range Plan for Nuclear Science: Implementing Reaching for the Horizon by the DOE/NSF Nuclear Science Advisory Committee. The facility has a robust Health Physics program under the umbrella of the university's Environmental Health and Safety department. Developments On December 11, 2008, the DOE-SC announced the selection of Michigan State University to design and establish FRIB. The project earned Critical Decision 1 (CD-1) approval in September 2010 which established a preferred alternative and the associated established cost and schedule ranges. On August 1, 2013, DOE-SC approved the project baseline (CD-2) and the start of civil construction (CD-3a), pending a notice to proceed. Civil construction could not start under the continuing appropriations resolution, which disallowed new construction starts. On February 25, 2014, the board of the Michigan Strategic Fund met at Michigan State University and approved nearly $91 million to support the construction of FRIB. 
The FRIB marked the official start of civil construction with a groundbreaking ceremony March 17, 2014. In attendance were representatives from the Michigan delegation, State of Michigan, Michigan State University, and the U.S. Department of Energy Office of Science. Technical construction started in October 2014, following a CD-3b approval by DOE-SC. In March 2017, FRIB achieved beneficial occupancy of civil construction, and technical installation activities escalated as a result. In February 2019, FRIB accelerated beams through the first 15 (of 46 total) cryomodules to 10 percent of FRIB's final beam energy. In August 2019, the radio-frequency quadrupole (RFQ) was conditioned above 100 kW, the CW power needed to achieve the FRIB mission goal of accelerating uranium beams. The RFQ prepares the beam for further acceleration in the linac. Construction on two MSU-funded building additions was substantially completed in January 2020. The Cryogenic Assembly Building will be used for cryomodule maintenance and to perform cryogenic-engineering research. The High Rigidity Spectrometer and Isotope Harvesting Vault will house isotope-harvesting research equipment and provide space for experiments. In March 2020, FRIB accelerated an argon-36 beam through 37 of 46 superconducting cryomodules to 204 MeV/nucleon or 57 percent of the speed of light. In September 2020, DOE designated FRIB as a DOE-SC User Facility. U.S. Secretary of Energy Dan Brouillette announced the designation at a special ceremony held outdoors at MSU, under a tent adjacent to FRIB. On February 24, 2021, the FRIB announced that 82 proposals requesting 9,784 hours of beam time and six letters of intent were submitted, covering 16 of the 17 National Academies benchmarks for FRIB, in response to their first call for proposals. These proposals represent FRIB's international user community of more than 1,500 scientists. Respondents include 597 individual scientists, 354 of whom are from the United States. They represent 130 institutions in 30 countries and 26 U.S. states. On January 25, 2022, the FRIB project team delivered the first heavy-ion beam to the focal plane of the FRIB fragment separator, marking technical completion of the FRIB project. On February 1–2, 2022, a review by the DOE-SC Office of Project Assessment reviewed FRIB's readiness for project completion and recommended that FRIB is ready for Critical Decision 4 (Approve Project Completion). Michigan State University announced a ribbon-cutting ceremony for May 2, 2022. On June 22, 2022, the first experiment at FRIB, which studied the beta decay of calcium-48 fragments that are so unstable that they only exist for mere fractions of a second, concluded successfully. FRIB's first scientific user experiment had participation from Argonne National Laboratory, Brookhaven National Laboratory, Florida State University, FRIB, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Louisiana State University, Los Alamos National Laboratory, Mississippi State University, Oak Ridge National Laboratory, and the University of Tennessee. On November 14, 2022, the results of the first experiment were published in Physical Review Letters. Notes See also Radioactive Isotope Beam Factory Facility for Antiproton and Ion Research Grand Accélérateur National d'Ions Lourds On-Line Isotope Mass Separator Rare Isotope Science Project References Archived from the original on March 15, 2016. 
External links Facility for Rare Isotope Beams FRIB Users Organization Particle accelerators Isotopes Nuclear chemistry Nuclear physics Michigan State University Michigan State University campus
Facility for Rare Isotope Beams
[ "Physics", "Chemistry" ]
1,418
[ "Nuclear chemistry", "nan", "Isotopes", "Nuclear physics" ]
22,898,551
https://en.wikipedia.org/wiki/Thermal%20transport%20in%20nanostructures
The transport of heat in solids involves both electrons and vibrations of the atoms (phonons). When the solid is perfectly ordered over hundreds of thousands of atoms, this transport obeys established physics. However, when the size of the ordered regions decreases new physics can arise, thermal transport in nanostructures. In some cases heat transport is more effective, in others it is not. The effect of the limited length of structure In general two carrier types can contribute to thermal conductivity - electrons and phonons. In nanostructures phonons usually dominate and the phonon properties of the structure become of a particular importance for thermal conductivity. These phonon properties include: phonon group velocity, phonon scattering mechanisms, heat capacity, Grüneisen parameter. Unlike bulk materials, nanoscale devices have thermal properties which are complicated by boundary effects due to small size. It has been shown that in some cases phonon-boundary scattering effects dominate the thermal conduction processes, reducing thermal conductivity. Depending on the nanostructure size, the phonon mean free path values (Λ) may be comparable or larger than the object size, . When is larger than the phonon mean free path, Umklapp scattering process limits thermal conductivity (regime of diffusive thermal conductivity). When is comparable to or smaller than the mean free path (which is of the order 1 μm for carbon nanostructures), the continuous energy model used for bulk materials no longer applies and nonlocal and nonequilibrium aspects to heat transfer also need to be considered. In this case phonons in defectless structure could propagate without scattering and thermal conductivity becomes ballistic (similar to ballistic conductivity). More severe changes in thermal behavior are observed when the feature size shrinks further down to the wavelength of phonons. Nanowires Thermal conductivity measurements The first measurement of thermal conductivity in silicon nanowires was published in 2003. Two important features were pointed out: 1) The measured thermal conductivities are significantly lower than that of the bulk Si and, as the wire diameter is decreased, the corresponding thermal conductivity is reduced. 2) As the wire diameter is reduced, the phonon boundary scattering dominates over phonon–phonon Umklapp scattering, which decreases the thermal conductivity with an increase in temperature. For 56 nm and 115 nm wires k ~ T3 dependence was observed, while for 37 nm wire k ~ T2 dependence and for 22 nm wire k ~ T dependence were observed. Chen et al. has shown that the one-dimensional cross-over for 20 nm Si nanowire occurs around 8K, while the phenomenon was observed for temperature values greater than 20K. Therefore, the reason of such behaviour is not in the confinement experienced by phonons so that three-dimensional structures display two-dimensional or one-dimensional behavior. Theoretical models for nanowires Different phonon modes contribution to thermal conductivity Assuming that Boltzmann transport equation is valid, thermal conductivity can be written as: where C is the heat capacity, vg is the group velocity and is the relaxation time. Note that this assumption breaks down when the dimensions of the system are comparable to or smaller than the wavelength of the phonons responsible for thermal transport. In our case, phonon wavelengths are generally in the 1 nm range and the nanowires under consideration are within tens of nanometers range, the assumption is valid. 
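A minimal numerical sketch of the size effect just described: the lattice thermal conductivity is estimated with the kinetic formula k = (1/3)·C·v·Λ, and the bulk phonon mean free path is combined with a boundary-limited path of the order of the wire diameter through Matthiessen's rule. The silicon-like parameter values are rough, gray-model assumptions, not fitted to the measurements discussed here.

# Kinetic-theory estimate of how boundary scattering reduces thermal conductivity
# in thin wires: 1/Lambda_eff = 1/Lambda_bulk + 1/d (Matthiessen's rule).
# Parameter values are rough, silicon-like assumptions for illustration only.

C_vol = 1.66e6      # J/(m^3 K), volumetric heat capacity near room temperature
v_sound = 6400.0    # m/s, effective phonon group velocity
mfp_bulk = 4.0e-8   # m, effective (gray) bulk phonon mean free path, ~40 nm

def kappa(diameter_m):
    """Effective thermal conductivity of a wire of the given diameter."""
    mfp_eff = 1.0 / (1.0 / mfp_bulk + 1.0 / diameter_m)
    return C_vol * v_sound * mfp_eff / 3.0

for d_nm in (22, 37, 56, 115, 1000):
    print(f"d = {d_nm:5d} nm  ->  k ~ {kappa(d_nm * 1e-9):6.1f} W/(m K)")

This single-mean-free-path picture reproduces the qualitative trend (thinner wires conduct less) but understates the suppression actually measured in the thinnest wires, which is part of the motivation for the frequency-dependent treatments and full dispersions discussed below.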
Different phonon mode contributions to heat conduction can be extracted from analysis of the experimental data for silicon nanowires of different diameters to extract the C·vg product for analysis. It was shown that all phonon modes contributing to thermal transport are excited well below the Si Debye temperature (645 K). From the thermal conductivity equation, one can write the product C·vg for each isotropic phonon branch i. where and is the phonon phase velocity, which is less sensitive to phonon dispersions than the group velocity vg. Many models of phonon thermal transport ignores the effects of transverse acoustic phonons (TA) at high frequency due to their small group velocity. (Optical phonon contributions are also ignored for the same reason.) However, upper branch of TA phonons have non-zero group velocity at the Brillouin zone boundary along the Γ-Κ direction and, in fact, behave similarly to the longitudinal acoustic phonons (LA) and can contribute to the heat transport. Then, the possible phonon modes contributing to heat conduction are both LA and TA phonons at low and high frequencies. Using the corresponding dispersion curves, the C·vg product can then be calculated and fitted to the experimental data. The best fit was found when contribution of high-frequency TA phonons is accounted as 70% of the product at room temperature. The remaining 30% is contributed by the LA and TA phonons at low-frequency. Using complete phonon dispersions Thermal conductivity in nanowires can be computed based on complete phonon dispersions instead of the linearlized dispersion relations commonly used to calculate thermal conductivity in bulk materials. Assuming the phonon transport is diffusive and Boltzmann transport equation (BTE) is valid, nanowire thermal conductance G(T) can be defined as: where the variable α represents discrete quantum numbers associated with sub-bands found in one-dimensional phonon dispersion relations, fB represents the Bose-Einstein distribution, vz is the phonon velocity in the z direction and λ is the phonon relaxation length along the direction of the wire length. Thermal conductivity is then expressed as: where S is the cross sectional area of the wire, az is the lattice constant. It was shown that, using this formula and atomistically computed phonon dispersions (with interatomic potentials developed in ), it is possible to predictively calculate lattice thermal conductivity curves for nanowires, in good agreement with experiments. On the other hand, it was not possible to obtain correct results with the approximated Callaway formula. These results are expected to apply to ”nanowhiskers” for which phonon confinement effects are unimportant. Si nanowires wider than ~35 nm are within this category. Very thin nanowires For large diameter nanowires, theoretical models assuming the nanowire diameters are comparable to the mean free path and that the mean free path is independent of phonon frequency have been able to closely match the experimental results. But for very thin nanowires whose dimensions are comparable to the dominant phonon wavelength, a new model is required. The study in has shown that in such cases, the phonon-boundary scattering is dependent on frequency. The new mean free path is then should be used: Here, l is the mean free path (same as Λ). The parameter h is length scale associated with the disordered region, d is the diameter, N(ω) is number of modes at frequency ω, and B is a constant related to the disorder region. 
Thermal conductance is then calculated using the Landauer formula. Carbon nanotubes As nanoscale graphitic structures, carbon nanotubes are of great interest for their thermal properties. The low-temperature specific heat and thermal conductivity show direct evidence of 1-D quantization of the phonon band structure. Modeling of the low-temperature specific heat allows determination of the on-tube phonon velocity, the splitting of phonon subbands on a single tube, and the interaction between neighboring tubes in a bundle. Thermal conductivity measurements Measurements show a room-temperature thermal conductivity of about 3500 W/(m·K) for individual single-wall carbon nanotubes (SWNTs), and of over 3000 W/(m·K) for individual multiwalled carbon nanotubes (MWNTs). It is difficult to replicate these properties on the macroscale due to imperfect contact between individual CNTs, and so tangible objects made from CNTs, such as films or fibres, have reached only up to 1500 W/(m·K) so far. Addition of nanotubes to epoxy resin can double the thermal conductivity for a loading of only 1%, showing that nanotube composite materials may be useful for thermal management applications. Theoretical models for nanotubes Thermal conductivity in CNTs is mainly due to phonons rather than electrons, so the Wiedemann–Franz law is not applicable. In general, the thermal conductivity is a tensor quantity, but for this discussion it is only important to consider the diagonal elements, kzz = Σ C vz² τ, where C is the specific heat, and vz and τ are the group velocity and relaxation time of a given phonon state. At temperatures far below the Debye temperature, the relaxation time is determined by scattering off fixed impurities, defects, sample boundaries, etc., and is roughly constant. Therefore, in ordinary materials, the low-temperature thermal conductivity has the same temperature dependence as the specific heat. However, in anisotropic materials, this relationship does not strictly hold. Because the contribution of each state is weighted by the scattering time and the square of the velocity, the thermal conductivity preferentially samples states with large velocity and scattering time. For instance, in graphite, the thermal conductivity parallel to the basal planes is only weakly dependent on the interlayer phonons. In SWNT bundles, it is likely that k(T) depends only on the on-tube phonons, rather than the intertube modes. Thermal conductivity is of particular interest in low-dimensional systems. For a CNT, represented as a 1-D ballistic electronic channel, the electronic conductance is quantized, with the universal value G0 = 2e²/h. Similarly, for a single ballistic 1-D channel, the thermal conductance is independent of materials parameters, and there exists a quantum of thermal conductance, which is linear in temperature: g0 = π²kB²T/(3h). Possible conditions for observation of this quantum were examined by Rego and Kirczenow. In 1999, Keith Schwab, Erik Henriksen, John Worlock, and Michael Roukes carried out a series of experimental measurements that enabled the first observation of the thermal conductance quantum. The measurements employed suspended nanostructures coupled to sensitive dc SQUID measurement devices. In 2008, a colorized electron micrograph of one of the Caltech devices was acquired for the permanent collection of the Museum of Modern Art in New York. At high temperatures, three-phonon Umklapp scattering begins to limit the phonon relaxation time. Therefore, the phonon thermal conductivity displays a peak and decreases with increasing temperature. 
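To make the origin of this peak concrete, the sketch below combines a temperature-independent boundary-scattering rate with a schematic Umklapp rate in a simple kinetic-theory estimate, k(T) = (1/3)·C(T)·v²·τ(T). The functional forms and every numerical constant are illustrative assumptions chosen only to reproduce the qualitative rise, peak and fall of k(T); they are not fitted to nanotube, graphite or diamond data.

import numpy as np

# Toy model of the thermal-conductivity peak from competing scattering channels.
v = 1.0e4        # m/s, assumed phonon group velocity
d = 100e-9       # m, assumed boundary/feature size
C_max = 1.5e6    # J/(m^3 K), assumed high-temperature volumetric heat capacity
theta = 1000.0   # K, Debye-like temperature scale for Umklapp freeze-out
A_umk = 1.0e10   # 1/(s K), assumed Umklapp prefactor
b = 3.0          # dimensionless constant in the Umklapp exponential

T = np.linspace(10.0, 800.0, 400)

# Crude interpolation between the low-T Debye T^3 law and the high-T plateau
x = T / 300.0
C = C_max * x**3 / (1.0 + x**3)

rate_boundary = v / d                                  # 1/s, temperature independent
rate_umklapp = A_umk * T * np.exp(-theta / (b * T))    # 1/s, frozen out at low T
tau = 1.0 / (rate_boundary + rate_umklapp)             # Matthiessen's rule

k = C * v**2 * tau / 3.0

i_peak = int(np.argmax(k))
print(f"peak k ~ {k[i_peak]:.1f} W/(m K) at T ~ {T[i_peak]:.0f} K")

With these made-up constants the maximum falls at a few hundred kelvin; raising the Umklapp freeze-out temperature pushes the peak upward, which is qualitatively why materials with very high Debye temperatures peak at comparatively high temperatures, as discussed next.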
Umklapp scattering requires production of a phonon beyond the Brillouin zone boundary; because of the high Debye temperature of diamond and graphite, the peak in the thermal conductivity of these materials is near 100 K, significantly higher than for most other materials. In less crystalline forms of graphite, such as carbon fibers, the peak in k(T) occurs at higher temperatures, because defect scattering remains dominant over Umklapp scattering to higher temperature. In low-dimensional systems, it is difficult to conserve both energy and momentum for Umklapp processes, and so it may be possible that Umklapp scattering is suppressed in nanotubes relative to 2-D or 3-D forms of carbon. Berber et al. have calculated the phonon thermal conductivity of isolated nanotubes. The value k(T) peaks near 100 K, and then decreases with increasing temperature. The value of k(T) at the peak (37,000 W/(m·K)) is comparable to the highest thermal conductivity ever measured (41,000 W/(m·K) for an isotopically pure diamond sample at 104 K). Even at room temperature, the thermal conductivity is quite high (6600 W/(m·K)), exceeding the reported room-temperature thermal conductivity of isotopically pure diamond by almost a factor of 2. In graphite, the interlayer interactions quench the thermal conductivity by nearly 1 order of magnitude . It is likely that the same process occurs in nanotube bundles . Thus it is significant that the coupling between tubes in bundles is weaker than expected . It may be that this weak coupling, which is problematic for mechanical applications of nanotubes, is an advantage for thermal applications. Phonon density of states for nanotubes The phonon density of states is to calculated through band structure of isolated nanotubes, which is studied in Saito et al. and Sanchez-Portal et al. When a graphene sheet is ‘‘rolled’’ into a nanotube, the 2-D band structure folds into a large number of 1-D subbands. In a (10,10) tube, for instance, the six phonon bands (three acoustic and three optical) of the graphene sheet become 66 separate 1-D subbands. A direct result of this folding is that the nanotube density of states has a number of sharp peaks due to 1-D van Hove singularities, which are absent in graphene and graphite. Despite the presence of these singularities, the overall density of states is similar at high energies, so that the high temperature specific heat should be roughly equal as well. This is to be expected: the high-energy phonons are more reflective of carbon–carbon bonding than the geometry of the graphene sheet. Thin films Thin films are prevalent in the micro and nanoelectronics industry for the fabrication of sensors, actuators and transistors; thus, thermal transport properties affect the performance and reliability of many structures such as transistors, solid-state lasers, sensors, and actuators. Although these devices are traditionally made from bulk crystalline material (silicon), they often contain thin films of oxides, polysilicon, metal, as well as superlattices such as thin-film stacks of GaAs/AlGaAs for lasers. 
Single-crystal thin films Silicon-on-insulator (SOI) films with silicon thicknesses of 0.05 μm to 10 μm above a buried silicon dioxide layer are increasingly popular for semiconductor devices due to the increased dielectric isolation associated with SOI/ SOI wafers contain a thin-layer of silicon on an oxide layer and a thin-film of single-crystal silicon, which reduces the effective thermal conductivity of the material by up to 50% as compared to bulk silicon, due to phonon-interface scattering and defects and dislocations in the crystalline structure. Previous studies by Asheghi et al., show a similar trend. Other studies of thin-films show similar thermal effects . Superlattices Thermal properties associated with superlattices are critical in the development of semiconductor lasers. Heat conduction of superlattices is less understood than homogeneous thin films. It is theorized that superlattices have a lower thermal conductivity due to impurities from lattice mismatches and at the heterojunctions. Phonon-interface scattering at heterojunctions needs to be considered in this case; fully elastic scattering underestimates the heat conduction, while fully inelastic scattering overestimates the heat conduction. For example, a Si/Ge thin-film superlattice has a greater decrease in thermal conductivity than an AlAs/GaAs film stack due to increased lattice mismatch. A simple estimate of heat conduction of superlattices is: where C1 and C2 are the corresponding heat capacity of film1 and film2 respectively, v1 and v2 are the acoustic propagation velocities in film1 and film2, and d1 and d2 are the thicknesses of film1 and film2. This model neglects scattering within the layers and assumes fully diffuse, inelastic scattering. Polycrystalline films Polycrystalline films are common in semiconductor devices, as the gate electrode of a field-effect transistor is often made of polycrystalline silicon. If the polysilicon grain sizes are small, internal scattering from grain boundaries can overwhelm the effects of film-boundary scattering. Also, grain boundaries contain more impurities, which result in impurity scattering. Likewise, disordered or amorphous films will experience a severe reduction of thermal conductivity, since the small grain size results in numerous grain-boundary scattering effects. Different deposition methods of amorphous films will result in differences in impurities and grain sizes. The simplest approach to modeling phonon scattering at grain boundaries is to increase the scattering rate by introducing this equation: where B is a dimensionless parameter that correlates with the phonon reflection coefficient at the grain boundaries, dG is the characteristic grain size, and v is the phonon velocity through the material. A more formal approach to estimating the scattering rate is: where vG is the dimensionless grain-boundary scattering strength, defined as Here is the cross-section of a grain-boundary area, and νj is the density of the grain boundary area. Measuring thermal conductivity of thin films There are two approaches to experimentally determine the thermal conductivity of thin films. The goal of experimental metrology of thermal conductivity of thin films is to attain an accurate thermal measurement without disturbing the properties of the thin-film. Electrical heating is used for thin films which have a lower thermal conductivity than the substrate; it is fairly accurate in measuring out-of-plane conductivity. 
Often, a resistive heater and thermistor are fabricated on the sample film using a highly conductive metal, such as aluminium. The most straightforward approach would be to apply a steady-state current and measure the change in temperature of adjacent thermistors. A more versatile approach uses an AC signal applied to the electrodes. The third harmonic of the AC signal reveals heating and temperature fluctuations of the material. Laser heating is a non-contact metrology method, which uses picosecond and nanosecond laser pulses to deliver thermal energy to the substrate. Laser heating uses a pump-probe mechanism; the pump beam introduces energy to the thin film, while the probe beam picks up the characteristics of how the energy propagates through the film. Laser heating is advantageous because the energy delivered to the film can be precisely controlled; furthermore, the short heating duration decouples the thermal conductivity of the thin film from that of the substrate. References Nanotechnology
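As a minimal sketch of the steady-state electrical-heating idea described above, the Python snippet below backs out an out-of-plane thermal conductivity from an applied heat flux, the film thickness and the measured temperature rise; the one-dimensional Fourier relation used here and all numerical values are simplifying assumptions, not the analysis of any particular instrument.

# Steady-state, 1-D estimate: k = q * t / dT, with q the heat flux through the film,
# t the film thickness and dT the temperature drop across it. Values are assumed.

def out_of_plane_conductivity(power_w, heater_area_m2, thickness_m, delta_T_K):
    q = power_w / heater_area_m2          # heat flux in W/m^2
    return q * thickness_m / delta_T_K    # W/(m K)

k = out_of_plane_conductivity(power_w=5e-3,           # 5 mW dissipated in the heater
                              heater_area_m2=1e-8,    # 100 um x 100 um heater
                              thickness_m=1e-6,       # 1 um film
                              delta_T_K=3.5)          # measured temperature rise
print(f"estimated out-of-plane k ~ {k:.2f} W/(m K)")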
Thermal transport in nanostructures
[ "Materials_science", "Engineering" ]
3,852
[ "Nanotechnology", "Materials science" ]
22,900,287
https://en.wikipedia.org/wiki/ECAMI
ECAMI (Empresa de Comunicaciones, S.A.) is a renewable energy business based in Nicaragua, focusing on solar photovoltaics, wind power and hydroelectric systems. History ECAMI was founded in 1982 by Luis Lacayo Lacayo, to supply radio communications equipment in rural areas of Nicaragua where infrastructure had been destroyed during the prolonged civil conflict and revolution. Photovoltaics (PV) were the ideal way of powering this equipment, because there was no grid electricity. Many other opportunities for PV became apparent to Lacayo, like home lighting, battery charging, water pumping and refrigeration. Over time, the provision of renewable energy systems became the main activity of ECAMI. Work ECAMI routinely supplies and installs solar-home PV systems in rural areas. ECAMI designs and installs PV-powered mini-grids to provide power for homes, hotels, museums and planned health centers in small communities. Underground distribution systems connect all the users to the supply, with individual current limits to each facility. ECAMI installs PV supply systems for mobile phone masts, with considerable savings in diesel fuel. In Managua, six hotels have been supplied with solar water heating systems by ECAMI. One, with 50 m2 of panel area, supplies 100 rooms, each of which had previously required a 6 kW immersion heater; another, with 16 m2 of panels, supplies 40 rooms. About 150 domestic solar water heaters have also been installed. ECAMI supplies and installs small wind turbines of between 400 W and 5 kW output, and can also install hydroelectric systems. Impact Renewable energy systems installed by ECAMI have decreased the use of CO2 emitting fuels for more than 100,000 people in Nicaragua. ECAMI's systems provide longer hours of emergency service in health centers, the installation of water pumps that bring drinking water to distant communities, access to satellite internet, land irrigation, and longer and more efficient working hours. Memberships ECAMI is a GVEP (Global Village Energy Partnership) partner. Other memberships include the International Solar Energy Society (ISES) and ANPPER, the Nicaraguan Association of Renewable Energy Promoters and Products. ECAMI has work agreements with similar foreign companies, including Curin Corporation (United States), Isratec (Guatemala) and Energy and Systems (Canada). Awards On June 11, 2009, in London, Charles, Prince of Wales presented Max Lacayo with the Ashden Energy Enterprise Award for ECAMI's achievements, particularly for the installation of high-quality photovoltaic systems in rural and off-grid areas. The Ashden Awards are an internationally recognised yardstick for excellence in the field of sustainable energy. References External links ECAMI home page Companies established in 1982 Companies of Nicaragua Nicaraguan brands Photovoltaics manufacturers Renewable energy technology companies
ECAMI
[ "Engineering" ]
578
[ "Photovoltaics manufacturers", "Engineering companies" ]
22,902,373
https://en.wikipedia.org/wiki/Interfacial%20thermal%20resistance
Interfacial thermal resistance, also known as thermal boundary resistance, or Kapitza resistance, is a measure of resistance to thermal flow at the interface between two materials. While these terms may be used interchangeably, Kapitza resistance technically refers to an atomically perfect, flat interface whereas thermal boundary resistance is a more broad term. This thermal resistance differs from contact resistance (not to be confused with electrical contact resistance) because it exists even at atomically perfect interfaces. Owing to differences in electronic and vibrational properties in different materials, when an energy carrier (phonon or electron, depending on the material) attempts to traverse the interface, it will scatter at the interface. The probability of transmission after scattering will depend on the available energy states on side 1 and side 2 of the interface. Assuming a constant thermal flux is applied across an interface, this interfacial thermal resistance will lead to a finite temperature discontinuity at the interface. From an extension of Fourier's law, we can write q = ΔT/R = G·ΔT, where q is the applied flux, ΔT is the observed temperature drop, R is the thermal boundary resistance, and G = 1/R is its inverse, or thermal boundary conductance. Understanding the thermal resistance at the interface between two materials is of primary significance in the study of its thermal properties. Interfaces often contribute significantly to the observed properties of the materials. This is even more critical for nanoscale systems where interfaces could significantly affect the properties relative to bulk materials. Low thermal resistance at interfaces is technologically important for applications where very high heat dissipation is necessary. This is of particular concern to the development of microelectronic semiconductor devices as defined by the International Technology Roadmap for Semiconductors in 2004, where an 8 nm feature size device is projected to generate up to 100000 W/cm2 and would need efficient heat dissipation of an anticipated die level heat flux of 1000 W/cm2, which is an order of magnitude higher than in current devices. On the other hand, applications requiring good thermal isolation, such as jet engine turbines, would benefit from interfaces with high thermal resistance. This would also require material interfaces which are stable at very high temperature. Examples are metal-ceramic composites which are currently used for these applications. High thermal resistance can also be achieved with multilayer systems. As stated above, thermal boundary resistance is due to carrier scattering at an interface. The type of carrier scattered will depend on the materials governing the interfaces. For example, at a metal-metal interface, electron scattering effects will dominate thermal boundary resistance, as electrons are the primary thermal energy carriers in metals. Two widely used predictive models are the acoustic mismatch model (AMM) and the diffuse mismatch model (DMM). The AMM assumes a geometrically perfect interface and phonon transport across it is entirely elastic, treating phonons as waves in a continuum. On the other hand, the DMM assumes scattering at the interface is diffusive, which is accurate for interfaces with characteristic roughness at elevated temperatures. Molecular dynamics (MD) simulations are a powerful tool to investigate interfacial thermal resistance.
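For a sense of scale, the short Python sketch below applies the flux–temperature-drop relation just stated to the kind of die-level heat flux quoted above; the boundary conductance values used are arbitrary assumptions chosen only to illustrate the arithmetic.

# Temperature discontinuity at an interface: dT = q / G, with q the heat flux
# and G the thermal boundary conductance. Numbers below are assumed, not measured.

def interface_temperature_drop(q_w_per_m2, g_w_per_m2k):
    return q_w_per_m2 / g_w_per_m2k

q = 1000e4          # 1000 W/cm^2 expressed in W/m^2
for g in (30e6, 300e6, 4e9):   # a range of boundary conductances in W/(m^2 K)
    dT = interface_temperature_drop(q, g)
    print(f"G = {g:.1e} W/(m^2 K) -> interfacial temperature drop ~ {dT:.3f} K")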
Recent MD studies have demonstrated that the solid-liquid interfacial thermal resistance is reduced on nanostructured solid surfaces by enhancing the solid-liquid interaction energy per unit area, and reducing the difference in vibrational density of states between solid and liquid. Theoretical models The primary model that has historically described Kapitza resistance is the phonon gas model. Within this model are the acoustic mismatch and diffuse mismatch models (AMM and DMM respectively). For both models the interface is assumed to behave exactly as the bulk on either side of the interface (e.g. bulk phonon dispersions, velocities, etc.), with hybrid vibrational modes and the phonons that occupy them being completely neglected. In addition, the AMM and DMM models are based only on elastic phonon transport, usually ignoring electrical contributions, although it is possible to take electron contributions into account within the phonon gas model. The AMM and DMM models should apply for interfaces where at least one of the materials is electrically insulating. The thermal resistance then results from the transfer of phonons across the interface. Energy is transferred when higher-energy phonons, which exist in higher density in the hotter material, propagate to the cooler material, which in turn transmits lower-energy phonons, creating a net energy flux. According to the AMM and DMM models, a crucial factor in determining the thermal resistance at an interface is the overlap of phonon states. However, the models completely disregard the effects of inelastic scattering and multiple phonon interactions. For example, the models only allow a phonon occupying a particular mode frequency to interact with another phonon occupying a mode of exactly the same frequency. In reality, however, this is not the case and the interaction probability of two phonons can be calculated using perturbation theory (quantum mechanics). As an example within the AMM and DMM models, given two materials A and B, if material A has a low population (or no population) of phonons with a certain k value, there will be very few phonons of that wavevector (or equivalently, frequency) to propagate from A to B. Likewise, due to the principle of detailed balance, AMM and DMM predict that very few phonons of that wavevector will propagate in the opposite direction, from B to A, even if material B has a large population of phonons with that wavevector. Thus as the overlap between phonon dispersions is small, there are fewer modes to allow for heat transfer in the material, giving a high thermal interfacial resistance relative to materials with a high degree of overlap. Neither model is very effective for predicting the thermal interface resistance (with the exception of very low temperature); rather, for most materials they act as upper and lower limits on the real behavior. The AMM and DMM differ in the conditions they require for propagation across the interface, because the models differ greatly in their treatment of scattering at the interface. In AMM the interface is assumed to be perfect, resulting in no scattering, thus phonons propagate elastically across the interface. The wavevectors that propagate across the interface are determined by conservation of momentum. In DMM, the opposite extreme is assumed, a perfectly scattering interface. In this case the wavevectors that propagate across the interface are random and independent of incident phonons on the interface.
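The AMM's continuum-wave picture is often illustrated with the normal-incidence acoustic transmission coefficient between two elastic media, t = 4·Z1·Z2/(Z1 + Z2)², with Z = ρ·v the acoustic impedance. The Python sketch below evaluates it for a pair of hypothetical materials; this is the standard acoustic-impedance result used in simple AMM discussions rather than the full angle- and mode-resolved treatment, and the densities and velocities are assumed placeholder values.

# Normal-incidence acoustic (phonon) energy transmission between two media,
# t = 4*Z1*Z2 / (Z1 + Z2)^2, where Z = rho * v is the acoustic impedance.
# Material parameters below are illustrative assumptions.

def acoustic_impedance(density_kg_m3, velocity_m_s):
    return density_kg_m3 * velocity_m_s

def normal_incidence_transmission(z1, z2):
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

z_a = acoustic_impedance(2330.0, 8433.0)   # "material A": silicon-like values, assumed
z_b = acoustic_impedance(5323.0, 5400.0)   # "material B": germanium-like values, assumed
print(f"transmission A -> B at normal incidence ~ {normal_incidence_transmission(z_a, z_b):.2f}")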
For both models the detailed balance must still be obeyed. For both models the basic equations of the phonon gas model apply. The flux of energy from one material to the other in one dimension is obtained by summing, over the phonon modes incident on the interface, the product of the phonon group velocity (which is approximated as the speed of sound in the material for the AMM and DMM models), the number of phonons at a given wavevector, the phonon energy E, and α, the probability of transmission across the interface. The net flux is thus the difference of the energy fluxes flowing in the two directions. Since both fluxes are dependent on T1 and T2, the relationship between the net flux and the temperature difference can be used to determine the thermal interface resistance; the resulting expression involves A, the area of the interface. These basic equations form the basis for both models. n, the number of phonons, is determined based on the dispersion relation for the materials (for example, the Debye model) and Bose–Einstein statistics. Energy is given simply by the de Broglie wavelength equation. The main difference between the two models is the transmission probability, whose determination is more complicated. In each case it is determined by the basic assumptions that form the respective models. The assumption of elastic scattering makes it more difficult for phonons to transmit across the interface, resulting in lower probabilities. As a result, the acoustic mismatch model typically represents an upper limit for thermal interface resistance, while the diffuse mismatch model represents the lower limit. Examples Liquid helium interfaces The presence of thermal interface resistance, corresponding to a discontinuous temperature across an interface, was first proposed from studies of liquid helium in 1936. While this idea was first proposed in 1936, it was not until 1941 that Pyotr Kapitsa (Peter Kapitza) carried out the first systematic study of thermal interface behavior in liquid helium. The first major model for heat transfer at interfaces was the acoustic mismatch model which predicted a T−3 temperature dependence on the interfacial resistance, but this failed to properly model the thermal conductance of helium interfaces, missing it by as much as two orders of magnitude. Another surprising behavior of the thermal resistance was observed in the pressure dependence. Since the speed of sound is a strong function of pressure in liquid helium, the acoustic mismatch model predicts a strong pressure dependence of the interfacial resistance. Studies around 1960 surprisingly showed that the interfacial resistance was nearly independent of pressure, suggesting that other mechanisms were dominant. The acoustic mismatch theory predicted a very high thermal resistance (low thermal conductance) at solid-helium interfaces. This is problematic for researchers working at ultra-cold temperatures because it greatly impedes cooling rates at low temperatures, such as in dilution refrigerators. Fortunately such a large thermal resistance was not observed due to many mechanisms which promoted phonon transport. In liquid helium, Van der Waals forces actually work to solidify the first few monolayers against a solid. This boundary layer functions much like an anti-reflection coating in optics, so that phonons which would typically be reflected from the interface actually transmit across it. This also helps to understand the pressure independence of the thermal conductance.
The final mechanism contributing to the anomalously low thermal resistance of liquid helium interfaces is the effect of surface roughness, which is not accounted for in the acoustic mismatch model. For a more detailed theoretical model of this aspect see the paper by A. Khater and J. Szeftel. Like electromagnetic waves which produce surface plasmons on rough surfaces, phonons can also induce surface waves. When these waves eventually scatter, they provide another mechanism for heat to transfer across the interface. Similarly, phonons are also capable of producing evanescent waves in a total internal reflection geometry. As a result, when these waves are scattered in the solid, additional heat is transferred from the helium beyond the prediction of the acoustic mismatch theory. For a more complete review on this topic see the review by Swartz. Notable room temperature thermal conductance In general there are two types of heat carriers in materials: phonons and electrons. The free electron gas found in metals is a very good conductor of heat and dominates thermal conductivity. All materials, though, exhibit heat transfer by phonon transport, so heat flows even in dielectric materials such as silica. Interfacial thermal conductance is a measure of how efficiently heat carriers flow from one material to another. The lowest room-temperature thermal conductance measured to date is that of the Bi/hydrogen-terminated diamond interface, with a thermal conductance of 8.5 MW m−2 K−1. As a metal, bismuth contains many electrons which serve as the primary heat carriers. Diamond, on the other hand, is a very good electrical insulator (although it has a very high thermal conductivity), and so electron transport between the materials is nil. Further, these materials have very different lattice parameters, so phonons do not efficiently couple across the interface. Finally, the Debye temperatures of the two materials are significantly different. As a result, bismuth, which has a low Debye temperature, has many phonons at low frequencies. Diamond, on the other hand, has a very high Debye temperature and most of its heat-carrying phonons are at frequencies much higher than are present in bismuth. Moving up in thermal conductance, most phonon-mediated interfaces (dielectric-dielectric and metal-dielectric) have thermal conductances between 80 and 300 MW m−2 K−1. The largest phonon-mediated thermal conductance measured to date is between TiN (titanium nitride) and MgO. These systems have very similar lattice structures and Debye temperatures. While there are no free electrons to enhance the thermal conductance of the interface, the similar physical properties of the two crystals facilitate a very efficient phonon transmission between the two materials. At the highest end of the spectrum, one of the highest thermal conductances measured is between aluminum and copper. At room temperature, the Al-Cu interface has a conductance of 4 GW m−2 K−1. The high thermal conductance of the interface should not be unexpected given the high electrical conductivity of both materials. Interfacial resistance in carbon nanotubes The superior thermal conductivity of carbon nanotubes makes them excellent candidates for making composite materials, but interfacial resistance impacts the effective thermal conductivity. This area is not well studied and only a few studies have been done to understand the basic mechanism of this resistance. References Heat transfer Heat conduction
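One common way to put conductance values such as those quoted above in perspective is the Kapitza length, the thickness of bulk material that presents the same thermal resistance as the interface, L_K = k/G. The Python sketch below evaluates it for the two extreme conductances mentioned in the text; the bulk conductivity used for the comparison is an assumed, roughly silicon-like reference value.

# Kapitza length: L_K = k_bulk / G, the thickness of bulk material whose thermal
# resistance equals that of the interface. k_bulk below is an assumed reference value.

def kapitza_length(k_bulk_w_mk, g_w_m2k):
    return k_bulk_w_mk / g_w_m2k

k_bulk = 150.0                       # W/(m K), assumed reference conductivity
for label, g in [("Bi/diamond (8.5 MW m-2 K-1)", 8.5e6),
                 ("Al/Cu (4 GW m-2 K-1)", 4.0e9)]:
    L = kapitza_length(k_bulk, g)
    print(f"{label}: equivalent bulk thickness ~ {L*1e9:8.1f} nm")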
Interfacial thermal resistance
[ "Physics", "Chemistry" ]
2,698
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics", "Heat conduction" ]
24,410,049
https://en.wikipedia.org/wiki/Laplace%E2%80%93Carson%20transform
In mathematics, the Laplace–Carson transform, named after Pierre Simon Laplace and John Renshaw Carson, is an integral transform with significant applications in the fields of physics and engineering, particularly in the field of railway engineering. Definition Let V(t) be a function and p a complex variable. The Laplace–Carson transform is defined as V*(p) = p ∫_0^∞ V(t) e^(−pt) dt. The inverse Laplace–Carson transform is V(t) = (1/(2πi)) ∫_{a0−i∞}^{a0+i∞} e^(pt) (V*(p)/p) dp, where a0 is a real-valued constant and i∞ refers to the imaginary axis, which indicates the integral is carried out along a straight line parallel to the imaginary axis lying to the right of all the singularities of the expression V*(p)/p. See also Laplace transform References Integral transforms Differential equations Fourier analysis Transforms
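Because the Laplace–Carson transform is the ordinary Laplace transform multiplied by the transform variable, small cases are easy to check symbolically. The SymPy sketch below does this for a few simple functions; it is an illustrative verification under that relationship, not code from any reference implementation.

# The Laplace-Carson transform of f(t) is p * L{f}(p), where L is the Laplace transform.
import sympy as sp

t, p = sp.symbols("t p", positive=True)

def laplace_carson(f):
    F = sp.laplace_transform(f, t, p, noconds=True)  # ordinary Laplace transform
    return sp.simplify(p * F)

print(laplace_carson(sp.Integer(1)))   # -> 1
print(laplace_carson(t))               # -> 1/p
print(laplace_carson(sp.exp(-3*t)))    # -> p/(p + 3)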
Laplace–Carson transform
[ "Mathematics" ]
133
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Differential equations", "Equations", "Mathematical relations", "Transforms" ]
24,415,026
https://en.wikipedia.org/wiki/Hopkinson%20and%20Imperial%20Chemical%20Industries%20Professor%20of%20Applied%20Thermodynamics
The Hopkinson and Imperial Chemical Industries Professorship of Applied Thermodynamics at the University of Cambridge was established on 10 February 1950, largely from the endowment fund of the proposed Hopkinson Professorship in Thermodynamics and a gift from ICI Limited of £50,000, less tax, spread over the seven years from 1949 to 1955. The professorship is assigned primarily to the Faculty of Engineering. The chair is named in honour of John Hopkinson, whose widow originally endowed a lectureship in thermodynamics in the hope that it would eventually be upgraded to a professorship. List of Hopkinson and Imperial Chemical Industries Professors of Applied Thermodynamics 1951 - 1980 Sir William Rede Hawthorne 1980 - 1983 John Arthur Shercliff 1985 - 1997 Kenneth Noel Corbett Bray 1998 - 2015 John Bernard Young 2015–present Epaminondas Mastorakos References Engineering education in the United Kingdom Imperial Chemical Industries Applied Thermodynamics, Hopkinson and Imperial Chemical Industries School of Technology, University of Cambridge Applied Thermodynamics, Hopkinson and Imperial Chemical Industries 1950 establishments in the United Kingdom
Hopkinson and Imperial Chemical Industries Professor of Applied Thermodynamics
[ "Physics", "Chemistry" ]
225
[ "Professorships in thermodynamics", "Thermodynamics" ]
24,417,538
https://en.wikipedia.org/wiki/Multiscale%20Electrophysiology%20Format
Multiscale Electrophysiology Format (MEF) was developed to handle the large amounts of data produced by large-scale electrophysiology in human and animal subjects. MEF can store time series data with samples up to 24 bits in length, and employs lossless range-encoded difference compression. Subject-identifying information in the file header can be encrypted using 128-bit AES encryption in order to comply with HIPAA requirements for patient privacy when transmitting data across an open network. Compressed data is stored in independent blocks to allow direct access to the data, facilitate parallel processing and limit the effects of potential damage to files. Data fidelity is ensured by a 32-bit cyclic redundancy check in each compressed data block using the Koopman polynomial (0xEB31D82E), which has a Hamming distance of 4 for data blocks up to 114 kbits. A formal specification and source code are available online. MEF_import is an EEGLAB plugin to import MEF data into EEGLAB. See also Range encoding AES encryption CRC-32 MED Format official website References Sources Martin, GNN. Range encoding: an algorithm for removing redundancy from a digitised message. Video & Data Recording Conference, Southampton, 1979. Koopman, P. 32-Bit Cyclic Redundancy Codes for Internet Applications. The International Conference on Dependable Systems and Networks (June 2002). 459. Electrophysiology Neurophysiology Neurotechnology Bioinformatics Health standards Computer file formats
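The "difference" half of range-encoded difference compression can be illustrated in a few lines of Python: consecutive samples are replaced by their differences, which are typically small and therefore compress well, and the original signal is recovered exactly by a running sum. This is a generic sketch of delta encoding only; MEF's actual block layout, range coder and header handling are not represented here.

# Generic lossless delta (difference) encoding/decoding of an integer time series.
# Illustrates the idea behind "difference compression"; not the MEF implementation.

def delta_encode(samples):
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)   # store the change from the previous sample
        prev = s
    return out

def delta_decode(deltas):
    total = 0
    out = []
    for d in deltas:
        total += d             # running sum reconstructs the original samples
        out.append(total)
    return out

signal = [1024, 1027, 1031, 1030, 1018, 1003]
deltas = delta_encode(signal)
assert delta_decode(deltas) == signal   # lossless round trip
print(deltas)                           # [1024, 3, 4, -1, -12, -15]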
Multiscale Electrophysiology Format
[ "Engineering", "Biology" ]
317
[ "Bioinformatics", "Biological engineering" ]
20,430,653
https://en.wikipedia.org/wiki/Chemi-ionization
Chemi-ionization is the formation of an ion through the reaction of a gas phase atom or molecule with another atom or molecule when the collision energy is below the energy required to ionize the reagents. The reaction may involve a reagent in an excited state or may result in the formation of a new chemical bond. Chemi-ionization can proceed through the Penning, associative, dissociative or rearrangement ionization reactions. The term includes reactions that produce a free electron or a pair of ions (positive and negative). This process is helpful in mass spectrometry because it creates unique bands that can be used to identify molecules. This process is extremely common in nature as it is considered the primary initial reaction in flames. Definitions In the literature, the term "chemi-ionization" is used inconsistently. Berry broadly defined chemi-ionization as "processes that lead to the formation of free charges, electrons and ions under the conditions of chemical reactions". Fontijn defined chemi-ionization more narrowly as reactions "in which the number of elementary charge carriers is increased as a direct result of the formation of new chemical bonds". Fontijn explicitly specified that the number of charge carriers increases, but Berry's definition includes the Penning ionization. In a 1977 review of ionization in collisions of atomic particles at low kinetic energies, Leonas and Kalinin stated that the ionization processes in which collisional energies are below the ionization potentials are called chemi-ionization processes. The IUPAC defined chemi-ionization in the context of mass spectrometry as "ionization of an atom or molecule by interaction with another internally excited atom or molecule". The IUPAC definition includes only reactions that involve an atom or a molecule in an excited state. Also, IUPAC mentioned that chemi-ionization includes reactions in which chemical bonds are not changed. The older IUPAC definition (1973) did not require the interaction to be with an atom or a molecule in an excited state, but mentioned that it is generally excited. Also, the older definition stated that the ionization is the result of a collision, while the new definition refers to the ionization of one of the interacting species. Energy requirements A certain amount of energy, which may be quite large, is required to remove an electron from an atom or a molecule in its ground state. In chemi-ionization processes, the energy consumed by the ionization must be stored in atoms or molecules in the form of potential energy, or can be obtained from an accompanying exothermic chemical change (for example, from the formation of a new chemical bond). In atoms or molecules, the energy can be stored in the form of an excited state. In molecules, it can alternatively be stored in the form of vibrational excitation. In exothermic chemical reactions, the released energy can be acquired by the molecule in the form of internal vibrational excitation and then cause ionization if the released energy is large enough.
Reactions Chemi-ionization reactions include: A* + B → A + B⁺ + e⁻ (Penning ionization); A + B → AB⁺ + e⁻ (associative ionization); A + B → A⁺ + B⁻ (ionization by electron transfer); A + BC → AB⁺ + C + e⁻ (rearrangement ionization); A + BC → A + B + C⁺ + e⁻ (dissociative ionization). Reactions involving a reagent in an excited state Chemi-ionization can be represented by G* + M → M⁺• + e⁻ + G, where G is the excited state species (indicated by the superscripted asterisk), and M is the species that is ionized by the loss of an electron to form the radical cation (indicated by the superscripted "plus-dot"). Astrophysical implications Chemi-ionization has been postulated to occur in the hydrogen-rich atmospheres surrounding stars. This type of reaction would lead to many more excited hydrogen atoms than some models account for. This affects our ability to determine the proper optical qualities of solar atmospheres with modeling. In flames The most common example of chemi-ionization occurs in hydrocarbon flames. The reaction can be represented as O + CH → HCO⁺ + e⁻. This reaction is present in any hydrocarbon flame and can account for the deviation of the observed ion concentrations from those expected at thermodynamic equilibrium. History The term chemi-ionization was coined by Hartwell F. Calcote in 1948 in the Third Symposium on Combustion and Flame, and Explosion Phenomena. The Symposium performed much of the early investigation into this phenomenon in the 1950s. The majority of the research on this topic was performed in the 1960s and '70s. Chemi-ionization is currently seen in many different ionization techniques used for mass spectrometry. See also Penning ionization Associative ionization Charge-exchange ionization Auger effect References Bibliography Ion source
Chemi-ionization
[ "Physics" ]
1,153
[ "Ion source", "Mass spectrometry", "Spectrum (physical sciences)" ]
20,433,319
https://en.wikipedia.org/wiki/Materiomics
Materiomics is the holistic study of material systems. Materiomics examines links between physicochemical material properties and material characteristics and function. The focus of materiomics is system functionality and behavior, rather than a piecewise collection of properties, a paradigm similar to systems biology. While typically applied to complex biological systems and biomaterials, materiomics is equally applicable to non-biological systems. Materiomics investigates the material properties of natural and synthetic materials by examining fundamental links between processes, structures and properties at multiple scales, from nano to macro, by using systematic experimental, theoretical or computational methods. The term has been independently proposed with slightly different definitions in 2004 by T. Akita et al. (AIST/Japan), in 2008 by Markus J. Buehler (MIT/USA), and Clemens van Blitterswijk, Jan de Boer and Hemant Unadkat (University of Twente/The Netherlands) in analogy to genomics, the study of an organism's entire genome. Similarly, materiomics refers to the study of the processes, structures and properties of materials from a fundamental, systematic perspective by incorporating all relevant scales, from nano to macro, in the synthesis and function of materials and structures. The integrated view of these interactions at all scales is referred to as a material's materiome. New techniques for evaluating materials at the tissue level, such as reference point indentation (RPI) and raman spectroscopy are lending insight into the nature of these highly complex, functional relationships. Materiomics is related to proteomics, where the difference is the focus on material properties, stability, failure and mechanistic insight into multi-scale phenomena. See also Genomics Bionanotechnology Universality-diversity paradigm Notes Other References : Buehler, M.J., Materiomics: Materials Science of Biological Protein Materials, from Nano to Macro. The A to Z of Materials. (February, 2010). Going nature one better (MIT News Release, October 22, 2010). M.J. Buehler, Tu(r)ning weakness to strength, Vol. 5(5), pp. 379–383, 2010. D.I. Spivak, T. Giesa, E. Wood, M.J. Buehler, Category Theoretic Analysis of Hierarchical Protein Materials and Social Networks, PLoS ONE, Vol. 6(9), pp. e23911, 2011. T. Giesa, D. Spivak, M.J. Buehler, Reoccurring Patterns in Hierarchical Protein Materials and Music: The Power of Analogies, BioNanoScience, Vol. 1(4), pp. 153–161, 2011, Materiomics: High-Throughput Screening of Biomaterial Properties Biomateriomics Master of Materiomics: study programme at Hasselt University, Flanders, Belgium Doumen, S., Baeten, D., Notermans, J., Denolf, K., Vandewal, K., Vandamme, D., Nesladek, M., Graulus, G.-J., Vanpoucke, D.E.P., Van Bael, M., Vanderzande, D., & Hardy, A. (2023). De ontwikkeling van een interdisciplinair futureproof curriculum in de bètawetenschappen [The development of an interdisciplinary futureproof curriculum in the beta sciences]. TH&MA Hoger Onderwijs, 30(3), 31-37. Genomics Materials science Nanotechnology
Materiomics
[ "Physics", "Materials_science", "Engineering" ]
767
[ "Nanotechnology", "Applied and interdisciplinary physics", "Materials science", "nan" ]
20,433,613
https://en.wikipedia.org/wiki/Universality%E2%80%93diversity%20paradigm
The universality–diversity paradigm (UDP) is the analysis of biological materials based on the universality and diversity of its fundamental structural elements and functional mechanisms. The analysis of biological systems based on this classification has been a cornerstone of modern biology. Example: proteins For example, proteins constitute the elementary building blocks of a vast variety of biological materials such as cells, spider silk or bone, where they create extremely robust, multi-functional materials by self-organization of structures over many length- and time scales, from nano to macro. Some of the structural features are commonly found in many different tissues, that is, they are highly conserved. Examples of such universal building blocks include alpha-helices, beta-sheets or tropocollagen molecules. In contrast, other features are highly specific to tissue types, such as particular filament assemblies, beta-sheet nanocrystals in spider silk or tendon fascicles. This coexistence of universality and diversity is an overarching feature in biological materials and a crucial component of materiomics. It might provide guidelines for bioinspired and biomimetic material development, where this concept is translated into the use of inorganic or hybrid organic-inorganic building blocks. See also Bionics Materiomics Nanotechnology Phylogenetics References Going nature one better (MIT News Release, October 22, 2010). S. Cranford, M. Buehler, Materiomics: biological protein materials, from nano to macro, Nanotechnology, Science and Applications, Vol. 3, pp. 127–148, 2010. S. Cranford, M.J. Buehler, Biomateriomics, 2012 (Springer, New York). Biology Materials science Biological matter Natural materials
Universality–diversity paradigm
[ "Physics", "Materials_science", "Engineering" ]
359
[ "Applied and interdisciplinary physics", "Natural materials", "Materials science", "Materials", "nan", "Matter" ]
20,436,586
https://en.wikipedia.org/wiki/Sergei%20Tyablikov
Sergei Vladimirovich Tyablikov (; September 7, 1921 – March 17, 1968) was a Soviet theoretical physicist known for his significant contributions to statistical mechanics, solid-state physics, and for the development of the double-time Green function's formalism. Biography Tyablikov was born in Klin, Russia. In 1944 he graduated from the Faculty of Physics at the Moscow State University (MSU) and started his postgraduate study with Anatoly Vlasov and later with Nikolay Bogoliubov at the Department of Theoretical Physics. In 1947 he obtained PhD degree (Candidate of Sciences) with PhD Thesis on the subject of crystallization theory and was appointed to the Steklov Institute of Mathematics, where he continued to work for the rest of his life. In 1954 he defended at the MSU his doctoral dissertation "Studies of the Polaron Theory" and obtained the degree of Doktor nauk (Doctor of Science, similar to Habilitation). Since 1962 he was the Head of the Division of Statistical Mechanics in the Steklov Institute of Mathematics. In the period 1966-1968, Sergei Tyablikov also worked at the Joint Institute for Nuclear Research, where he was the first Head of the Statistical Mechanics and Theory of Condensed Matter Group at the Laboratory of Theoretical Physics. Research work During postgraduate study in 1944—1947 he worked on theory of crystallization, where he applied such methods as diagonalization of bilinear forms in Bose or Fermi operators, etc., which later became a common tool for theoretical physicists. After finishing PhD he started to work on the problem of a particle interacting with a quantum field. This problem is directly related to polaron theory, the effect of impurities on the energy spectrum of superfluids, and other problems in condensed matter physics. He was involved in the development of operator form of perturbation theory, approximate second quantization, adiabatic approximation for systems with translational invariance, and other theoretical physics methods which play an important role in the theory of many-particle systems. Since 1948 in collaboration with Nikolay Bogoliubov he started to work on quantum theory of ferromagnetism and antiferromagnetism. In 1948 they developed a consistent theoretical polar model of metals. Later Tyablikov developed the first consistent quantum theory of magnetic anisotropy. His particularly important contribution to antiferromagnetism was in the development of the method of quantum temperature Green's functions. In 1959, Sergei Tyablikov and Nikolay Bogoliubov published the paper  which strongly influenced the development of the many-body physics and specifically the quantum theory of magnetism. He also co-authored with V.L. Bonch-Bruevich the book The Green Function Method in Statistical Mechanics, the first book with a consistent exposition of the method of Green's functions. Publications Books Bonch-Bruevich V. L., Tyablikov S. V. (1962): The Green Function Method in Statistical Mechanics. North Holland Publishing Co. Tyablikov S. V. (1995): Methods in the Quantum Theory of Magnetism. (Translated to English) Springer; 1st edition. . . Selected papers References Sergei Vladimirovich Tyablikov Soviet Physics Uspekhi 11(4), 606—607 (January–February 1969). Biography of S. V. Tyablikov (1921-1968) at the Joint Institute for Nuclear Research. 1921 births 1968 deaths People from Klin Quantum physicists Soviet physicists Moscow State University alumni Theoretical physicists
Sergei Tyablikov
[ "Physics" ]
734
[ "Theoretical physics", "Theoretical physicists", "Quantum mechanics", "Quantum physicists" ]
20,437,320
https://en.wikipedia.org/wiki/Definite%20quadratic%20form
In mathematics, a definite quadratic form is a quadratic form over some real vector space that has the same sign (always positive or always negative) for every non-zero vector of . According to that sign, the quadratic form is called positive-definite or negative-definite. A semidefinite (or semi-definite) quadratic form is defined in much the same way, except that "always positive" and "always negative" are replaced by "never negative" and "never positive", respectively. In other words, it may take on zero values for some non-zero vectors of . An indefinite quadratic form takes on both positive and negative values and is called an isotropic quadratic form. More generally, these definitions apply to any vector space over an ordered field. Associated symmetric bilinear form Quadratic forms correspond one-to-one to symmetric bilinear forms over the same space. A symmetric bilinear form is also described as definite, semidefinite, etc. according to its associated quadratic form. A quadratic form and its associated symmetric bilinear form are related by the following equations: The latter formula arises from expanding Examples As an example, let , and consider the quadratic form where and and are constants. If and the quadratic form is positive-definite, so Q evaluates to a positive number whenever If one of the constants is positive and the other is 0, then is positive semidefinite and always evaluates to either 0 or a positive number. If and or vice versa, then is indefinite and sometimes evaluates to a positive number and sometimes to a negative number. If and the quadratic form is negative-definite and always evaluates to a negative number whenever And if one of the constants is negative and the other is 0, then is negative semidefinite and always evaluates to either 0 or a negative number. In general a quadratic form in two variables will also involve a cross-product term in ·: This quadratic form is positive-definite if and negative-definite if and and indefinite if It is positive or negative semidefinite if with the sign of the semidefiniteness coinciding with the sign of This bivariate quadratic form appears in the context of conic sections centered on the origin. If the general quadratic form above is equated to 0, the resulting equation is that of an ellipse if the quadratic form is positive or negative-definite, a hyperbola if it is indefinite, and a parabola if The square of the Euclidean norm in -dimensional space, the most commonly used measure of distance, is In two dimensions this means that the distance between two points is the square root of the sum of the squared distances along the axis and the axis. Matrix form A quadratic form can be written in terms of matrices as where is any ×1 Cartesian vector in which at least one element is not 0; is an symmetric matrix; and superscript denotes a matrix transpose. If is diagonal this is equivalent to a non-matrix form containing solely terms involving squared variables; but if has any non-zero off-diagonal elements, the non-matrix form will also contain some terms involving products of two different variables. Positive or negative-definiteness or semi-definiteness, or indefiniteness, of this quadratic form is equivalent to the same property of , which can be checked by considering all eigenvalues of or by checking the signs of all of its principal minors. Optimization Definite quadratic forms lend themselves readily to optimization problems. 
Suppose the matrix quadratic form is augmented with linear terms, as where is an ×1 vector of constants. The first-order conditions for a maximum or minimum are found by setting the matrix derivative to the zero vector: giving assuming is nonsingular. If the quadratic form, and hence , is positive-definite, the second-order conditions for a minimum are met at this point. If the quadratic form is negative-definite, the second-order conditions for a maximum are met. An important example of such an optimization arises in multiple regression, in which a vector of estimated parameters is sought which minimizes the sum of squared deviations from a perfect fit within the dataset. See also Isotropic quadratic form Positive-definite function Positive-definite matrix Polarization identity Notes References . Quadratic forms Linear algebra
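As a concrete illustration of the definiteness test and the optimization step described above, the Python sketch below checks the signs of the eigenvalues of a symmetric matrix A and, for the positive-definite case, locates the minimizer of x^T A x + b^T x at x = -A^{-1} b / 2; the specific matrix and vector are arbitrary example values.

# Classify a quadratic form x^T A x by the eigenvalues of the symmetric matrix A,
# and minimize x^T A x + b^T x when A is positive-definite (minimizer: -A^{-1} b / 2).
import numpy as np

def classify(A, tol=1e-12):
    w = np.linalg.eigvalsh(A)             # eigenvalues of a symmetric matrix
    if np.all(w > tol):   return "positive-definite"
    if np.all(w < -tol):  return "negative-definite"
    if np.all(w >= -tol): return "positive-semidefinite"
    if np.all(w <= tol):  return "negative-semidefinite"
    return "indefinite"

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                # example symmetric matrix
b = np.array([-4.0, 2.0])                 # example linear-term vector
print(classify(A))                        # -> positive-definite

x_star = -0.5 * np.linalg.solve(A, b)     # stationary point of x^T A x + b^T x
print(x_star)                             # minimizer since A is positive-definite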
Definite quadratic form
[ "Mathematics" ]
900
[ "Algebra", "Linear algebra", "Quadratic forms", "Number theory" ]
20,443,048
https://en.wikipedia.org/wiki/BSRIA
BSRIA (it takes its name from the initial letters of the Building Services Research and Information Association) is a UK-based testing, instrumentation, research and consultancy organisation, providing specialist services in construction and building services engineering. It is a not-for-profit, member-based association, with over 650 member companies; related services are delivered by a trading company, BSRIA Limited. Any profits made are invested in its research programme, producing best practice guidance. BSRIA is a full member of the Construction Industry Council. Structure BSRIA had a turnover of £11.8 million in 2010/11. It employs over 180 people at its UK head office in Bracknell as well as regionally based engineers in the UK and offices in France, Spain, Germany, China, Japan, Brazil and North America. BSRIA's mission is "to enable the building services and construction industries and their clients to enhance the value of the built environment, by improving the quality of their products and services, the efficiency of their provision and the effectiveness of their operation." History of BSRIA BSRIA was formed in 1955 as the Heating and Ventilating Research Council, later to become the Heating and Ventilating Research Association. As the industry became increasingly linked with other services so its research association and professional body saw the need to widen their remit. In 1975 the 'building services' scope was adopted, marked by the formation of the Building Services Research and Information Association, commonly shortened to BSRIA, and, in 1976, the formation of the Chartered Institution of Building Services, renamed the Chartered Institution of Building Services Engineers (CIBSE) in 1985. As the Association's activities developed to meet the needs of an integrated construction industry and to provide more than just research and information, the full name became less relevant. When new government rules required it to split research and other activities into two companies, BSRIA started formal use of the abbreviation. Trading activities, including research, are now managed through a trading company, BSRIA Limited, which is a wholly owned subsidiary of the Building Services Research and Information Association, which is a company limited by guarantee. Thus, members – largely companies active in designing and delivering building services – join the Building Services Research and Information Association, and services are provided by BSRIA Limited. Timeline history 1955 (29 Dec) – The inaugural General Meeting of the Heating and Ventilating Research Council held at the Institution of Mechanical Engineers. The Chairman (C.S.K. Benham) summed up the proceedings by " Thank you, Gentlemen. Now we exist " 1956 – With almost 200 members, 3 research staff are appointed to work in rented premises at British Coal Utilisation Research Association, Leatherhead 1959 – The Heating and Ventilating Research Association (HVRA) was incorporated to take over the assets, liabilities and undertaking of the Heating and Ventilating Research Council, which was dissolved 1964 – Laboratories double in size. 1967 – Test Division established. 1973 – Establishment of Member Services and Technical Division to reflect 'customer / contractor ' ethos. The 'Application Guide' is launched to reflect members interest in the application of research. 1975 – 'Building Services Research and Information Association' name formally adopted. Instrument Hire service established with 120 hirings. 
First Statistics Bulletin published. 1986–87 – "Research Clubs" established to match DoE funds for projects. BEMS (Building Energy Management Systems) Centre established as autonomous centre of expertise. Air Infiltration Centre renamed to Air Infiltration and Ventilation Centre to reflect widening role. 1989 – EuroCentre established to help industry take advantage of the single European market. 1993 – Second site established at Crowthorne giving 50% more space. New radiator test room for testing to new European standard. 2000 – The Association establishes a trading company – BSRIA Ltd – to undertake all the trading activities, including research. 2003 – Acquisition of new offices adjacent to existing laboratories to accommodate staff relocated from Crowthorne. 2006 – Offices set up in France, Spain and Germany. 2008 – New subsidiary, BSRIA Construction Consulting (Beijing), established. Acquisition of market research business 'Proplan' to develop market intelligence in controls, fire protection and security. 2011 – New subsidiary, BSRIA Cert established to provide independent certification of products and services. Founding members The following companies were the founding members of BSRIA who remain as members now (original company names updated to current): BRE Chartered Institution of Building Services Engineers (CIBSE) Comyn Ching & Co. (Solray) Ltd Crown House Technologies Ltd EMCOR Group (UK) Plc Faber Maunsell (now AECOM's UK subsidiary) Flakt Woods Ltd Gratte Brothers Ltd Haden Young (part of Balfour Beatty) Harry Taylor Hoare Lea Honeywell Control Systems Ltd Inviron Ltd Jacobs Babtie Lennox Europe C H Lindsey & Son London South Bank University MJN Colston Ltd Pearce Buckle (Design Engineers) Ltd R W Gregory LLP Roger Preston & Partners Rosser & Russell Building Services Ltd Skanska TPS WSP Group plc BSRIA now has over 600 corporate members. References External links BSRIA Modern Building Services – related website Heating and Ventilating – related website Video clips Air tightness testing Bracknell Building engineering organizations Construction trade groups based in the United Kingdom Engineering research institutes Heating, ventilation, and air conditioning Organisations based in Berkshire Organizations established in 1955 Science and technology in Berkshire 1955 establishments in the United Kingdom
BSRIA
[ "Engineering" ]
1,105
[ "Building engineering", "Building engineering organizations", "Engineering research institutes" ]
20,445,291
https://en.wikipedia.org/wiki/Potassium%20spatial%20buffering
Potassium spatial buffering is a mechanism for the regulation of extracellular potassium concentration by astrocytes. Other mechanisms for astrocytic potassium clearance are carrier-operated or channel-operated potassium chloride uptake. The repolarization of neurons tends to raise potassium concentration in the extracellular fluid. If a significant rise occurs, it will interfere with neuronal signaling by depolarizing neurons. Astrocytes have large numbers of potassium ion channels facilitating the removal of potassium ions from the extracellular fluid. They are taken up at one region of the astrocyte and then distributed throughout the cytoplasm of the cell, and further to its neighbors via gap junctions. This keeps extracellular potassium at levels that prevent interference with the normal propagation of an action potential. Potassium spatial buffering Glial cells, once believed to have a passive role in the CNS, are active regulators of numerous functions in the brain, including clearance of the neurotransmitter from the synapses, guidance during neuronal migration, control of neuronal synaptic transmission, and maintaining an ideal ionic environment for active communication between neurons in the central nervous system. Neurons are surrounded by extracellular fluid rich in sodium ions and poor in potassium ions. The concentrations of these ions are reversed inside the cells. Due to the difference in concentration, there is a chemical gradient across the cell membrane, which leads to sodium influx and potassium efflux. When the action potential takes place, a considerable change in extracellular potassium concentration occurs due to the limited volume of the CNS extracellular space. The change in potassium concentration in the extracellular space impacts a variety of neuronal processes, such as maintenance of membrane potential, activation and inactivation of voltage gated channels, synaptic transmission, and electrogenic transport of neurotransmitters. Changes in the extracellular potassium concentration from its resting level of about 3 mM can affect neural activity. Therefore, there are diverse cellular mechanisms for tight control of potassium ions, the most widely accepted being the K+ spatial buffering mechanism. Orkand and his colleagues, who first theorized spatial buffering, stated: "if a Glial cell becomes depolarized by K+ that has accumulated in the clefts, the resulting current carries K+ inward in the high [K+] region and out again, through electrically coupled Glial cells in low [K+] regions". In the model presented by Orkand and his colleagues, glial cells take up potassium ions and carry them from regions of high concentration to regions of low concentration, keeping the potassium concentration in the extracellular space low. Glial cells are well suited for the transport of potassium ions since they are unusually permeable to potassium ions and can carry them over long distances, owing to their elongated shape or to their being coupled to one another. Potassium regulatory mechanisms Potassium buffering can be broadly categorized into two categories: potassium uptake and potassium spatial buffering. For potassium uptake, excess potassium ions are temporarily taken into glial cells through transporters or potassium channels. In order to preserve electroneutrality, potassium influx into glial cells is accompanied by an influx of chloride or an efflux of sodium. It is expected that when potassium accumulates within glial cells, water influx and swelling occur.
For potassium spatial buffering, functionally coupled glial cells with high potassium permeability transfer potassium ions from regions of elevated potassium concentration to regions of lower potassium concentration. The potassium current is driven by the difference between the glial syncytium membrane potential and the local potassium equilibrium potential. When the potassium concentration in one region increases, there is a net driving force causing potassium to flow into the glial cells. The entry of potassium causes a local depolarization that propagates electrotonically through the glial cell network, which causes a net driving force of potassium out of the glial cells elsewhere. This process causes dispersion of local potassium with little net gain of potassium ions within the glial cells, which in turn prevents swelling. Glial cell depolarization caused by neuronal activity releases potassium toward the bloodstream; although this was once widely hypothesized to be the cause of vessel relaxation, it was found to have little effect on neurovascular coupling. Despite the efficiency of potassium spatial buffering mechanisms, in certain regions of the CNS potassium buffering seems more dependent on active uptake mechanisms than on spatial buffering. Therefore, the exact role of glial potassium spatial buffering in the various regions of the brain still remains uncertain. Kir channel The high permeability of glial cell membranes to potassium ions is a result of the expression of high densities of potassium-selective channels with high open-probability at resting membrane potentials. Kir channels, inwardly rectifying potassium channels, allow passage of potassium ions inward much more readily than outward. They also display a variable conductance that positively correlates with extracellular potassium concentration: the higher the potassium concentration outside the cell, the higher the conductance. Kir channels are categorized into seven major subfamilies, Kir1 to Kir7, with a variety of gating mechanisms. Kir3 and Kir6 are primarily activated by intracellular G-proteins. Because they have a relatively low open-probability compared to the other families, they have little impact on potassium buffering. Kir1 and Kir7 are mainly expressed in epithelial cells, such as those in the kidney, choroid plexus, or retinal pigment epithelium, and have no impact on spatial buffering. Kir2 channels, however, are expressed in brain neurons and glial cells. Kir4 and Kir5 are, along with Kir2, located in Muller glia and play important roles in potassium siphoning. There are some discrepancies among studies on the expression of these channels in the stated locations. Panglial syncytium The panglial syncytium is a large network of interconnected glial cells, which are extensively linked by gap junctions. The panglial syncytium spreads through the central nervous system, where it provides metabolic and osmotic support, as well as ionic regulation of myelinated axons in white matter tracts. The three types of macroglial cells within the panglial syncytium network are astrocytes, oligodendrocytes, and ependymocytes. Originally it was believed that there were homologous gap junctions between oligodendrocytes. It was later found through ultrastructural analysis that gap junctions do not directly link adjacent oligodendrocytes; rather, each oligodendrocyte forms gap junctions with adjacent astrocytes, providing a secondary pathway to nearby oligodendrocytes.
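The driving force described above, the difference between the glial membrane potential and the local potassium equilibrium potential, can be made concrete with the Nernst equation. The Python sketch below computes E_K for a few extracellular potassium levels; the intracellular concentration, temperature and example membrane potential are assumed textbook-style values, not measurements from any of the studies cited.

# Nernst potential for K+: E_K = (R*T / (z*F)) * ln([K+]_out / [K+]_in).
# Concentrations and the example membrane potential are assumed illustrative values.
import math

R, F = 8.314, 96485.0      # J/(mol K), C/mol
T, z = 310.0, 1            # body temperature in K, K+ valence
K_in = 130.0               # mM, assumed intracellular potassium

def nernst_potential_mV(K_out_mM):
    return 1000.0 * (R * T) / (z * F) * math.log(K_out_mM / K_in)

V_m = -85.0                # mV, assumed glial membrane potential
for K_out in (3.0, 6.0, 12.0):       # rising extracellular potassium
    E_K = nernst_potential_mV(K_out)
    print(f"[K+]out = {K_out:4.1f} mM: E_K = {E_K:6.1f} mV, "
          f"driving force (V_m - E_K) = {V_m - E_K:6.1f} mV")

Where extracellular potassium rises, E_K moves above the assumed membrane potential and the driving force turns inward, matching the description of potassium entering the glia in regions of elevated concentration.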
With direct gap junctions between myelin sheaths and surrounding astrocytes, excess potassium and osmotic water enter the astrocyte syncytium directly, where they passively spread downstream to astrocyte endfoot processes at capillaries and the glia limitans. Potassium siphoning Potassium spatial buffering that occurs in the retina, where the Muller cell is the principal glial cell type, is called potassium siphoning. Muller cells have an important role in retinal physiology. They maintain retinal cell metabolism and are critical in maintaining potassium homeostasis in the extracellular space during neuronal activity. Like other cells responsible for spatial buffering, Muller cells are distinctively permeable to potassium ions through Kir channels. As in other glial cells, the high selectivity of Muller cell membranes to potassium ions is due to the high density of Kir channels. Potassium conductance is unevenly distributed in Muller cells. By focally increasing potassium along amphibian Muller cells and recording the resulting depolarization, it was observed that the potassium conductance is concentrated in the endfoot process, with 94% of the total potassium conductance localized to this small subcellular domain. This observation led to the hypothesis that excess potassium in the extracellular space is "siphoned" by the Muller cells to the vitreous humor. Potassium siphoning is a specialized form of the spatial buffering mechanism in which excess potassium ions are emptied into the large reservoir of the vitreous humor. A similar distribution pattern of Kir channels could be found in amphibians. History The existence of potassium siphoning was first reported in a 1966 study by Orkand et al. In the study, the optic nerve of Necturus was dissected to document the long-distance movement of potassium after nerve stimulation. Following low-frequency stimulation of 0.5 Hz at the retinal end of the dissected optic nerve, a depolarization of 1–2 mV was measured in astrocytes at the opposite end of the nerve bundle, up to several millimeters from the electrode. With higher-frequency stimulation, a higher plateau of depolarization was observed. Therefore, they hypothesized that the potassium released into the extracellular compartment during axonal activity entered and depolarized nearby astrocytes and was transported away by an as yet unexplained mechanism, which caused depolarization of astrocytes distant from the site of stimulation. The proposed model was actually inappropriate since at the time neither gap junctions nor the syncytium among glial cells were known, and the optic nerve of Necturus is unmyelinated, which means that potassium efflux occurred directly into the periaxonal extracellular space, where potassium ions would be directly absorbed by the abundant astrocytes around the axons. Diseases In patients with tuberous sclerosis complex (TSC), abnormalities occur in astrocytes, which contributes to the pathogenesis of the neurological dysfunction in this disease. TSC is a multisystem genetic disease caused by a mutation in either the TSC1 or TSC2 gene. It results in disabling neurological symptoms such as mental retardation, autism, and seizures. Glial cells have the important physiological roles of regulating neuronal excitability and preventing epilepsy. Astrocytes maintain homeostasis of excitatory substances, such as extracellular potassium, by immediate uptake through specific potassium channels and sodium–potassium pumps. Potassium is also regulated by spatial buffering via astrocyte networks in which astrocytes are coupled through gap junctions.
Mutations in the TSC1 or TSC2 gene often result in decreased expression of the astrocytic connexin protein Cx43. With impaired gap junction coupling between astrocytes, a myriad of abnormalities in potassium buffering occur, resulting in increased extracellular potassium concentration, which may predispose to neuronal hyperexcitability and seizures. In an animal model study, connexin43-deficient mice showed a decreased threshold for the generation of epileptiform events. The study also demonstrated the role of gap junctions in accelerating potassium clearance, limiting potassium accumulation during neuronal firing, and redistributing potassium. Demyelinating diseases of the central nervous system, such as neuromyelitis optica, often compromise molecular components of the panglial syncytium, which blocks potassium spatial buffering. Without a potassium buffering mechanism, potassium-induced osmotic swelling of myelin occurs, myelin is destroyed, and axonal saltatory conduction ceases. References Cellular processes Central nervous system Glial cells Human cells
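The driving force for spatial buffering described above — the difference between the glial membrane potential and the local potassium equilibrium potential — can be illustrated with the Nernst equation. The following Python sketch is purely illustrative; the ion concentrations and membrane potential are assumed typical values, not figures taken from the text.

```python
import math

def nernst_potential_mV(k_out_mM, k_in_mM, temp_K=310.0, z=1):
    """Nernst equilibrium potential E = (RT/zF) * ln([K+]out / [K+]in), in millivolts."""
    R = 8.314      # gas constant, J/(mol K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * temp_K) / (z * F) * math.log(k_out_mM / k_in_mM)

v_m = -85.0                                    # assumed glial membrane potential, mV
e_k_rest = nernst_potential_mV(5.0, 120.0)     # assumed resting [K+]out / [K+]in, mM
e_k_active = nernst_potential_mV(10.0, 120.0)  # locally elevated [K+]out during activity

print(f"E_K at rest:           {e_k_rest:7.1f} mV, driving force Vm - E_K = {v_m - e_k_rest:6.1f} mV")
print(f"E_K at elevated [K+]o: {e_k_active:7.1f} mV, driving force Vm - E_K = {v_m - e_k_active:6.1f} mV")
# A negative driving force (Vm below E_K) corresponds to net K+ entry into the glial
# cell at the site of elevated extracellular potassium, as described for spatial buffering.
```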
Potassium spatial buffering
[ "Biology" ]
2,313
[ "Cellular processes" ]
1,417,264
https://en.wikipedia.org/wiki/Dication
A dication is any cation, of general formula X2+, formed by the removal of two electrons from a neutral species. Diatomic dications corresponding to stable neutral species (e.g. formed by removal of two electrons from H2) often decay quickly into two singly charged particles (H+), due to the loss of electrons in bonding molecular orbitals. Energy levels of diatomic dications can be studied with good resolution by measuring the yield of pairs of zero-kinetic-energy electrons from double photoionization of a molecule as a function of the photoionizing wavelength (threshold photoelectrons coincidence spectroscopy – TPEsCO). The dication is kinetically stable. An example of a stable diatomic dication which is not formed by oxidation of a neutral diatomic molecule is the dimercury dication Hg22+. An example of a polyatomic dication is S82+, formed by oxidation of S8 and unstable with respect to further oxidation over time to form SO2. Many organic dications can be detected in mass spectrometry, for example (a complex) and the acetylene dication. The adamantyl dication has been synthesized. Divalent metals Some metals are commonly found as dications in salts, or dissolved in water. Examples include the alkaline earth metals (Be2+, Mg2+, Ca2+, Sr2+, Ba2+, Ra2+); later 3d transition metals (V2+, Cr2+, Mn2+, Fe2+, Co2+, Ni2+, Cu2+, Zn2+); group 12 elements (Zn2+, Cd2+, Hg2+); and the heavy members of the carbon group (Sn2+, Pb2+). Presence in space Multiply-charged atoms are quite common in the Solar System in the so-called solar wind. Among these, the most abundant dication is He2+. However, molecular dications, in particular CO22+, had long gone unobserved, though predicted to be present, for instance at Mars. Indeed, owing to its symmetry and strong double bonds, this ion is more stable (has a longer lifetime) than other dications. In 2020, the molecular dication CO22+ was confirmed to be present in the atmosphere of Mars and around Comet 67P. References Cations
Dication
[ "Physics", "Chemistry" ]
507
[ "Cations", "Ions", "Matter" ]
1,417,355
https://en.wikipedia.org/wiki/Massively%20parallel%20quantum%20chemistry
Massively Parallel Quantum Chemistry (MPQC) is an ab initio computational chemistry software program. Three features distinguish it from other quantum chemistry programs such as Gaussian and GAMESS: it is open-source, has an object-oriented design, and is created from the beginning as a parallel processing program. It is available in Ubuntu and Debian. MPQC provides implementations for a number of important methods for calculating electronic structure, including Hartree–Fock, Møller–Plesset perturbation theory (including its explicitly correlated linear R12 versions), and density functional theory. See also List of quantum chemistry and solid state physics software References External links MPQC Homepage Computational chemistry software Free software programmed in C++ Free chemistry software Chemistry software for Linux
Massively parallel quantum chemistry
[ "Chemistry" ]
164
[ "Free chemistry software", "Computational chemistry software", "Chemistry software", "Computational chemistry", "Chemistry software for Linux" ]
1,418,413
https://en.wikipedia.org/wiki/Phugoid
In aviation, a phugoid or fugoid () is an aircraft motion in which the vehicle pitches up and climbs, and then pitches down and descends, accompanied by speeding up and slowing down as it goes "downhill" and "uphill". This is one of the basic flight dynamics modes of an aircraft (others include short period, roll subsidence, dutch roll, and spiral divergence). Detailed description The phugoid has a nearly constant angle of attack but varying pitch, caused by a repeated exchange of airspeed and altitude. It can be excited by an elevator singlet (a short, sharp deflection followed by a return to the centered position) resulting in a pitch increase with no change in trim from the cruise condition. As speed decays, the nose drops below the horizon. Speed increases, and the nose climbs above the horizon. Periods can vary from under 30 seconds for light aircraft to minutes for larger aircraft. Microlight aircraft typically show a phugoid period of 15–25 seconds, and it has been suggested that birds and model airplanes show convergence between the phugoid and short period modes. A classical model for the phugoid period can be simplified to about (0.85 times the speed in knots) seconds, but this only really works for larger aircraft. Phugoids are often demonstrated to student pilots as an example of the speed stability of the aircraft and the importance of proper trimming. When it occurs, it is considered a nuisance, and in lighter airplanes (typically showing a shorter period) it can be a cause of pilot-induced oscillation. The phugoid, for moderate amplitude, occurs at an effectively constant angle of attack, although in practice the angle of attack actually varies by a few tenths of a degree. This means that the stalling angle of attack is never exceeded, and it is possible (in the <1g section of the cycle) to fly at speeds below the known stalling speed. Free flight models with badly unstable phugoid typically stall or loop, depending on thrust. An unstable or divergent phugoid is caused, mainly, by a large difference between the incidence angles of the wing and tail. A stable, decreasing phugoid can be attained by building a smaller stabilizer on a longer tail, or, at the expense of pitch and yaw "static" stability, by shifting the center of gravity to the rear. Aerodynamically efficient aircraft typically have low phugoid damping. The term "phugoid" was coined by Frederick W. Lanchester, the British aerodynamicist who first characterized the phenomenon. He derived the word from the Greek words and to mean "flight-like" but recognized the diminished appropriateness of the derivation given that meant flight in the sense of "escape" (as in the word "fugitive") rather than vehicle flight. Aviation accidents In 1972, an Aero Transporti Italiani Fokker F-27 Friendship, en route from Rome Fiumicino to Foggia, climbing through 13,500 feet, entered an area of poor weather with local thunderstorm activity. At almost 15,000 feet the aircraft suddenly lost 1,200 feet of altitude and its speed dropped. It developed phugoid oscillations from which the pilots could not recover. The aircraft struck the ground at a speed of 340 knots, causing the death of the three crew members and all fifteen passengers. In the 1975 Tan Son Nhut C-5 accident, USAF C-5 68-0218 with flight controls damaged by failure of the rear cargo/pressure door, encountered phugoid oscillations while the crew was attempting a return to base and crash-landed in a rice paddy adjacent to the airport. 
Of the 328 people on board, 153 died, making it the deadliest accident involving a US military aircraft. In 1985, Japan Air Lines Flight 123 lost all hydraulic controls after its vertical stabiliser blew off due to an aft pressure bulkhead failure, and went into phugoid motion. While the crew were able to maintain near-level flight through the use of engine power, the plane lost height over a mountain range northwest of Tokyo before crashing into Mount Takamagahara. With 520 deaths, it remains the deadliest single-aircraft disaster in history. In 1989, United Airlines Flight 232 suffered an uncontained engine failure in the #2 (tail) engine, which caused total hydraulic system failure. The crew flew the aircraft with throttle only. Suppressing the phugoid tendency was particularly difficult. The pilots reached Sioux Gateway Airport but crashed during the landing attempt. All four cockpit crewmembers (one an assisting DC-10 captain on the flight as a passenger) and a majority of the passengers survived. Another aircraft that lost all hydraulics and experienced phugoid was a DHL operated Airbus A300B4 that was hit by a surface-to-air missile fired by Iraqi militants in the 2003 Baghdad DHL attempted shootdown incident. This was the first time that a crew landed an air transport aircraft safely by only adjusting engine thrust. The 2003 crash of the Helios solar-powered aircraft was precipitated by reacting to an inappropriately diagnosed phugoid oscillation that ultimately made the aircraft structure exceed design loads. Chesley "Sully" Sullenberger, Captain of US Airways Flight 1549 that ditched in the Hudson River on January 15, 2009, said in a Google talk that the landing could have been less violent had the anti-phugoid software installed on the Airbus A320-214 not prevented him from manually getting maximum lift during the four seconds before water impact. See also Index of aviation articles Maneuvering Characteristics Augmentation System Projectile motion References External links Analysis of phugoid motion Aerodynamics Flight control systems
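As a rough illustration of the period estimates mentioned above, the Python sketch below evaluates Lanchester's classical textbook approximation for the phugoid period, T = π·√2·V/g, at a few arbitrary example airspeeds. This simple constant-angle-of-attack, no-drag estimate is not the same as the knots-based rule of thumb quoted for larger aircraft, and neither replaces a proper longitudinal-dynamics analysis.

```python
import math

G = 9.80665              # standard gravity, m/s^2
MS_PER_KNOT = 0.514444   # metres per second in one knot

def phugoid_period_lanchester(speed_ms):
    """Lanchester's classical estimate of the phugoid period: T = pi * sqrt(2) * V / g."""
    return math.pi * math.sqrt(2) * speed_ms / G

for knots in (80, 150, 250):   # arbitrary example airspeeds
    t = phugoid_period_lanchester(knots * MS_PER_KNOT)
    print(f"{knots:4d} kt -> Lanchester phugoid period ~ {t:5.1f} s")
```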
Phugoid
[ "Chemistry", "Engineering" ]
1,180
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
1,418,996
https://en.wikipedia.org/wiki/Grog%20%28clay%29
Grog, also known as firesand and chamotte, is a raw material usually made from crushed and ground potsherds, reintroduced into crude clay to temper it before making ceramic ware. It has a high percentage of silica and alumina. It is normally available as a powder or chippings, and is an important ingredient in Coade stone. Production It can be produced by firing selected fire clays to high temperatures before grinding and screening to specific particle sizes. An alternate method of production uses pitchers. The particle size distribution is generally coarser than that of the other raw materials used to prepare clay bodies. It tends to be porous and have low density. Properties Grog is composed of 40% minimum alumina, 30% minimum silica, 4% maximum iron(III) oxide, up to 2% calcium oxide and magnesium oxide combined. Its melting point is approximately . Its water absorption is at most 7%. Its thermal expansion coefficient is 5.2 mm/m and thermal conductivity is 0.8 W/(m·K) at 100 °C and 1.0 W/(m·K) at 1000 °C. It is not easily wetted by steel. Applications Grog is used in pottery and sculpture to add a gritty, rustic texture called "tooth"; it reduces shrinkage and aids even drying. This prevents defects such as cracking, crow's feet, patterning, and lamination. The coarse particles open the green clay body to allow gases to escape. Grog adds structural strength to hand-built and thrown pottery during shaping, although it can diminish fired strength. The finer the particles, the closer the clay bond, and the denser and stronger the fired product. "The strength in the dry state increases with grog down as fine as that passing the 100-mesh sieve, but decreases with material passing the 200-mesh sieve." About 20% grog is added to crude clay (in the dry form) before mixing with water. Adding grog to clay serves two primary functions: 1) It helps prevent cracking of the clay when the ceramic piece is being worked and when it dries, by reducing its plasticity; 2) it protects the ceramic piece from thermal shock while firing, particularly at a sudden rise or drop in temperature, which, without grog, can cause breakage. Substitutes for grog used in pottery are dried and sifted horse manure, or sand collected from dry riverbeds (which has been sifted through a screen), or finely ground schist. Others make use of volcanic ash. Some natural clays already contain an admixture of some "natural temper," for which reason the potters who use such clay do not add any temper of their own. In Central and Southern Europe, grog is used to create fire-resistant chamotte-type bricks and mortar for construction of fireplaces, old-style and industrial furnaces, and as a component of high-temperature sealants and adhesives. A typical example of domestic use is a pizza stone made from chamotte. Because the stone can absorb heat, pizza or bread can be baked on it in a regular domestic oven. The advantage is supposed to be a more even heat. A normal commercial domestic oven cools down when the door is opened. The stone, however, remains hot, creating a more even bake. Another advantage is that the stone can absorb some moisture, making for a drier bake. Archaeology In archaeology, "grog" is crushed, fired pottery of any type that is added as a temper to unfired clay. Several pottery types from the European Bronze Age are typologized on the basis of their grog inclusions. 
The practice of adding grog to clay as a temper was widespread throughout many cultures and is mentioned in the writings of Hai Gaon (939–1038), who wrote in his commentary on the Mishnah, compiled in 189 CE: "ḥarsit [= grog], that which they grind [of potsherds] and make therewith clay is called [in Hebrew] ḥarsit." See also Grogg References . . External links What is Grog in Pottery? Ceramic materials Refractory materials Silicates
Grog (clay)
[ "Physics", "Engineering" ]
882
[ "Refractory materials", "Materials", "Ceramic materials", "Ceramic engineering", "Matter" ]
1,419,800
https://en.wikipedia.org/wiki/Cryogenic%20Dark%20Matter%20Search
The Cryogenic Dark Matter Search (CDMS) is a series of experiments designed to directly detect particle dark matter in the form of Weakly Interacting Massive Particles (or WIMPs). Using an array of semiconductor detectors at millikelvin temperatures, CDMS has at times set the most sensitive limits on the interactions of WIMP dark matter with terrestrial materials (as of 2018, CDMS limits are not the most sensitive). The first experiment, CDMS I, was run in a tunnel under the Stanford University campus. It was followed by the CDMS II experiment in the Soudan Mine. The most recent experiment, SuperCDMS (or SuperCDMS Soudan), was located deep underground in the Soudan Mine in northern Minnesota and collected data from 2011 through 2015. The series of experiments continues with SuperCDMS SNOLAB, an experiment located at the SNOLAB facility near Sudbury, Ontario, in Canada that started construction in 2018 and is expected to start data taking in the early 2020s. Background Observations of the large-scale structure of the universe show that matter is aggregated into very large structures that have not had time to form under the force of their own self-gravitation. It is generally believed that some form of missing mass is responsible for increasing the gravitational force at these scales, although this mass has not been directly observed. This is a problem; normal matter in space will heat up until it gives off light, so if this missing mass exists, it is generally assumed to be in a form that is not commonly observed on Earth. A number of proposed candidates for the missing mass have been put forward over time. Early candidates included heavy baryons that would have had to be created in the Big Bang, but more recent work on nucleosynthesis seems to have ruled most of these out. Another candidate is a new type of particle known as a weakly interacting massive particle, or "WIMP". As the name implies, WIMPs interact weakly with normal matter, which explains why they are not easily visible. Detecting WIMPs thus presents a problem; if the WIMPs are very weakly interacting, detecting them will be extremely difficult. Detectors like CDMS and similar experiments measure huge numbers of interactions within their detector volume in order to find the extremely rare WIMP events. Detection technology The CDMS detectors measure the ionization and phonons produced by every particle interaction in their germanium and silicon crystal substrates. These two measurements determine the energy deposited in the crystal in each interaction, but also give information about what kind of particle caused the event. The ratio of ionization signal to phonon signal differs for particle interactions with atomic electrons ("electron recoils") and atomic nuclei ("nuclear recoils"). The vast majority of background particle interactions are electron recoils, while WIMPs (and neutrons) are expected to produce nuclear recoils. This allows WIMP-scattering events to be identified even though they are rare compared to the vast majority of unwanted background interactions. From supersymmetry, the probability of a spin-independent interaction between a WIMP and a nucleus would be related to the number of nucleons in the nucleus. Thus, a WIMP would be more likely to interact with a germanium detector than a silicon detector, since germanium is a much heavier element. Neutrons would be able to interact with both silicon and germanium detectors with similar probability. 
By comparing rates of interactions between silicon and germanium detectors, CDMS is able to determine the probability of interactions being caused by neutrons. CDMS detectors are disks of germanium or silicon, cooled to millikelvin temperatures by a dilution refrigerator. The extremely low temperatures are needed to limit thermal noise which would otherwise obscure the phonon signals of particle interactions. Phonon detection is accomplished with superconducting transition edge sensors (TESs) read out by SQUID amplifiers, while ionization signals are read out using a FET amplifier. CDMS detectors also provide data on the phonon pulse shape, which is crucial in rejecting near-surface background events. History Bolometric detection of neutrinos with semiconductors at low temperature was first proposed by Blas Cabrera, Lawrence M. Krauss, and Frank Wilczek, and a similar method was proposed for WIMP detection by Mark Goodman and Edward Witten. CDMS I collected WIMP search data in a shallow underground site (called SUF, Stanford Underground Facility) at Stanford University 1998–2002. CDMS II operated (with collaboration from the University of Minnesota) in the Soudan Mine from 2003 to 2009 (data taking 2006–2008). The newest experiment, SuperCDMS (or SuperCDMS Soudan), with interleaved electrodes, more mass, and even better background rejection, was taking data at Soudan 2011–2015. The series of experiments continues with SuperCDMS SNOLAB, currently (2018) under construction in SNOLAB and to be completed in the early 2020s. The series of experiments also includes the CDMSlite experiment, which used SuperCDMS detectors at Soudan in an operating mode (called CDMSlite-mode) that was meant to be sensitive specifically to low-mass WIMPs. As the CDMS experiment has multiple different detector technologies in use, in particular two types of detectors based on germanium or silicon, respectively, the experiments derived from some specific configuration of the CDMS detectors and the different data sets thus collected are sometimes given names like CDMS Ge, CDMS Si, CDMS II Si, et cetera. Results On December 17, 2009, the collaboration announced the possible detection of two candidate WIMPs, one on August 8, 2007, and the other on October 27, 2007. Due to the low number of events, the team could not exclude false positives from background noise such as neutron collisions. It is estimated that such noise would produce two or more events 25% of the time. Polythene absorbers were fitted to reduce any neutron background. A 2011 analysis with lower energy thresholds looked for evidence for low-mass WIMPs (M < 9 GeV). Their limits rule out hints claimed by a new germanium experiment called CoGeNT and the long-standing DAMA/NaI, DAMA/LIBRA annual modulation result. Further analysis of data, published in Physical Review Letters in May 2013, revealed 3 WIMP detections with an expected background of 0.7, with masses expected from WIMPs, including neutralinos. There is a 0.19% chance that these are anomalous background noise, giving the result a 99.8% (3 sigmas) confidence level. Whilst not conclusive evidence for WIMPs, this provides strong weight to the theories. This signal was observed by the CDMS II experiment and it is called the CDMS Si signal (sometimes the experiment is also called CDMS Si) because it was observed by the silicon detectors. 
SuperCDMS search results from October 2012 to June 2013 were published in June 2014, finding 11 events in the signal region for WIMP mass less than 30 GeV, and setting an upper limit for the spin-independent cross section disfavoring a recent CoGeNT low-mass signal. SuperCDMS SNOLAB A second generation of SuperCDMS is planned for SNOLAB. This is expanded from SuperCDMS Soudan in every way: The individual detector discs are 100 mm/3.9″ diameter × 33.3 mm/1.3″ thick, 225% the volume of the 76.2 mm/3″ diameter × 25.4 mm/1″ thick discs in Soudan. There are more of them, with room for 31 "towers" of six discs each, although operation will begin with only four towers. The detector is better shielded, by both its deeper location in SNOLAB, and greater attention to radiopurity in construction. The increase in detector mass is not quite as large, because about 25% of the detectors will be made of silicon, which only weighs 44% as much. Filling all 31 towers at this ratio would result in about 222 kg. Although the project has suffered repeated delays (earlier plans hoped for construction to begin in 2014 and 2016), it remains active, with space allocated in SNOLAB and a scheduled construction start in early 2018. The construction of SuperCDMS at SNOLAB started in 2018, with operations beginning in the early 2020s. The project budget at the time was US$34 million. In May 2021, the SuperCDMS SNOLAB detector was under construction, with early science (or prototyping, or preliminary studies) ongoing with prototype/testing hardware, both at the SNOLAB location and at other locations. The full detector was expected to be ready for science data taking at the end of 2023, and the science operations were expected to last 4 years (with two separate runs), 2023–2027, with possible extensions and developments beyond 2027. In May 2022, SuperCDMS SNOLAB detector installation was in progress, with a plan to start a commissioning run in 2023, a first science run with the full detector payload in early 2024, and a first result in early 2025. In June 2023, SuperCDMS SNOLAB installation was in full swing. Commissioning was expected to start in 2024. GEODM proposal A third generation of SuperCDMS is envisioned, although still in the early planning phase. GEODM (GErmanium Observatory for Dark Matter), with roughly 1500 kg of detector mass, has expressed interest in the SNOLAB "Cryopit" location. Increasing the detector mass only makes the detector more sensitive if the unwanted background detections do not increase as well; thus each generation must be cleaner and better shielded than the one before. The purpose of building in ten-fold stages like this is to develop the necessary shielding techniques before finalizing the GEODM design. References External links SuperCDMS web site Experiments for dark matter search Fermilab experiments
Cryogenic Dark Matter Search
[ "Physics" ]
2,037
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
1,420,406
https://en.wikipedia.org/wiki/Sex%20linkage
Sex linked describes the sex-specific patterns of inheritance and presentation when a gene mutation (allele) is present on a sex chromosome (allosome) rather than a non-sex chromosome (autosome). In humans, these are termed X-linked recessive, X-linked dominant and Y-linked. The inheritance and presentation of all three differ depending on the sex of both the parent and the child. This makes them characteristically different from autosomal dominance and recessiveness. There are many more X-linked conditions than Y-linked conditions, since humans have several times as many genes on the X chromosome as on the Y chromosome. Only females are able to be carriers for X-linked conditions; males will always be affected by any X-linked condition, since they have no second X chromosome with a healthy copy of the gene. As such, X-linked recessive conditions affect males much more commonly than females. In X-linked recessive inheritance, a son born to a carrier mother and an unaffected father has a 50% chance of being affected, while a daughter has a 50% chance of being a carrier; however, a fraction of carriers may display a milder (or even full) form of the condition due to a phenomenon known as skewed X-inactivation, in which the normal process of inactivating half of the female body's X chromosomes preferentially targets a certain parent's X chromosome (the father's in this case). If the father is affected, the son will not be affected, as he does not inherit the father's X chromosome, but the daughter will always be a carrier (and may occasionally present with symptoms due to the aforementioned skewed X-inactivation). In X-linked dominant inheritance, a son or a daughter born to an affected mother and an unaffected father each has a 50% chance of being affected (though a few X-linked dominant conditions are embryonic lethal for the son, making them appear to only occur in females). If the father is affected, the son will always be unaffected, but the daughter will always be affected. A Y-linked condition will only be inherited from father to son and will always affect every generation. The inheritance patterns are different in animals that use sex-determination systems other than XY. In the ZW sex-determination system used by birds, the mammalian pattern is reversed, since the male is the homogametic sex (ZZ) and the female is heterogametic (ZW). In classical genetics, a mating experiment called a reciprocal cross is performed to test if an animal's trait is sex-linked. X-linked dominant inheritance Each child of a mother affected with an X-linked dominant trait has a 50% chance of inheriting the mutation and thus being affected with the disorder. If only the father is affected, 100% of the daughters will be affected, since they inherit their father's X chromosome, and 0% of the sons will be affected, since they inherit their father's Y chromosome. There are fewer X-linked dominant conditions than X-linked recessive, because dominance in X-linkage requires the condition to present in females with only a fraction of the reduction in gene expression of autosomal dominance, since roughly half (or as many as 90% in some cases) of a particular parent's X chromosomes are inactivated in females. 
Examples Alport syndrome Coffin–Lowry syndrome (CLS) Fragile X syndrome Idiopathic hypoparathyroidism Incontinentia pigmenti Rett syndrome (RS) Vitamin D resistant rickets (X-linked hypophosphatemia) X-linked recessive inheritance Females possessing one X-linked recessive mutation are considered carriers and will generally not manifest clinical symptoms of the disorder, although differences in X chromosome inactivation can lead to varying degrees of clinical expression in carrier females since some cells will express one X allele and some will express the other. All males possessing an X-linked recessive mutation will be affected, since males have only a single X chromosome and therefore have only one copy of X-linked genes. All offspring of a carrier female have a 50% chance of inheriting the mutation if the father does not carry the recessive allele. All female children of an affected father will be carriers (assuming the mother is not affected or a carrier), as daughters possess their father's X chromosome. If the mother is not a carrier, no male children of an affected father will be affected, as males only inherit their father's Y chromosome. The incidence of X-linked recessive conditions in females is the square of that in males: for example, if 1 in 20 males in a human population are red–green color blind, then 1 in 400 females in the population are expected to be color-blind (1/20)*(1/20). Examples Aarskog–Scott syndrome Adrenoleukodystrophy (ALD) Bruton's agammaglobulinemia Color blindness Complete androgen insensitivity syndrome Congenital aqueductal stenosis (hydrocephalus) Duchenne muscular dystrophy Fabry disease Glucose-6-phosphate dehydrogenase deficiency Haemophilia A and B Hunter syndrome Inherited nephrogenic diabetes insipidus Menkes disease (kinky hair syndrome) Ornithine carbamoyltransferase deficiency Wiskott–Aldrich syndrome Y-linked Various failures in the SRY genes Sex-linked traits in other animals White eyes in Drosophila melanogaster flies was one of the earliest sex-linked genes discovered. Fur color in domestic cats: the gene that causes orange pigment is on the X chromosome; thus a Calico or tortoiseshell cat, with both black (or gray) and orange pigment, is nearly always female. The first sex-linked gene ever discovered was the "lacticolor" X-linked recessive gene in the moth Abraxas grossulariata by Leonard Doncaster. Related terms It is important to distinguish between sex-linked characters, which are controlled by genes on sex chromosomes, and two other categories. Sex-influenced traits Sex-influenced or sex-conditioned traits are phenotypes affected by whether they appear in a male or female body. Even in a homozygous dominant or recessive female the condition may not be expressed fully. Example: baldness in humans. Sex-limited traits These are characters only expressed in one sex. They may be caused by genes on either autosomal or sex chromosomes. Examples: female sterility in Drosophila; and many polymorphic characters in insects, especially in relation to mimicry. Closely linked genes on autosomes called "supergenes" are often responsible for the latter. See also X-linked dominant inheritance X-linked recessive inheritance References Genetics linkage
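The population-level arithmetic for X-linked recessive traits can be made concrete with a short Python sketch. Under idealized Hardy–Weinberg assumptions (ignoring complications such as skewed X-inactivation and new mutations), the male incidence equals the recessive allele frequency q and the female incidence equals q²; the allele frequency used here is the 1-in-20 red–green colour-blindness figure from the example above, and any other value could be substituted.

```python
def x_linked_recessive_incidence(allele_freq):
    """Expected incidences for an X-linked recessive allele under Hardy-Weinberg assumptions."""
    male_affected = allele_freq                  # males are hemizygous: one copy suffices
    female_affected = allele_freq ** 2           # females need two copies
    female_carriers = 2 * allele_freq * (1 - allele_freq)
    return male_affected, female_affected, female_carriers

q = 1 / 20  # example allele frequency: red-green colour blindness
m, f, c = x_linked_recessive_incidence(q)
print(f"Affected males:   {m:.4f}  (1 in {1/m:.0f})")
print(f"Affected females: {f:.4f}  (1 in {1/f:.0f})")
print(f"Carrier females:  {c:.4f}")
```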
Sex linkage
[ "Biology" ]
1,419
[ "Genetics", "Sex" ]
28,974,107
https://en.wikipedia.org/wiki/Nano-JASMINE
The Nano-Japan Astrometry Satellite Mission for Infrared Exploration (Nano-JASMINE) is an astrometric microsatellite developed by the National Astronomical Observatory of Japan, with contributions by the University of Tokyo's Intelligent Space Systems Laboratory (ISSL). , the satellite was planned for launch together with CHEOPS (Characterizing Exoplanets Satellite) in 2019. However, this launch took place in December 2019 without Nano-JASMINE as one of the three piggyback payloads. Some sources named 2022 as the launch year of the satellite. By 2023, the launch had been cancelled and the satellite is now displayed in Kakamigahara Air and Space Museum. Spacecraft Nano-JASMINE is a microsatellite measuring and weighing approximately . It carries a small, Ritchey–Chrétien telescope that will make observations in the infrared spectrum, allowing for easier observation toward the centre of the Milky Way. Its exterior is covered with Gallium arsenide (GaAs) solar cells providing approximately 20 watts of power. Due to limited bandwidth, Nano-JASMINE will employ a Star Image Extractor (SIE) for onboard raw image processing that will extract and transmit only specific object data. Overview Nano-JASMINE is Japan's first and the world's third astrometric survey spacecraft, following Hipparcos (1989) and Gaia (2013), both launched by the European Space Agency (ESA). It is the pathfinder in a planned series of three spacecraft of increasing size and capability; the second is (originally and officially still called "Small-JASMINE") with a telescope, and the third is JASMINE with an telescope. The spacecraft is designed to have an astrometric accuracy (2–3 mas (milli-arcsecond) for stars brighter than 7.5 magnitude) comparable to Hipparcos (1 mas). Nano-JASMINE should be able to detect approximately four times the number of stars as Hipparcos. Given the time difference between these missions, combining the data sets of Nano-JASMINE and Hipparcos will constrain the positions of stars whose current positions are poorly known owing to uncertainty in their motion since being measured by Hipparcos, and should provide an order-of-magnitude increase in the accuracy of proper motion measurements (approximately 0.1 mas/year; 0.2 mas/year for stars brighter than 9 magnitude). Nano-JASMINE had been scheduled for launch aboard a Tsyklon-4 launch vehicle from the Brazilian Space Agency's Alcântara Launch Center (CLA). The launch was originally contracted for August 2011, but was delayed to the November 2013 to March 2014 time frame. Various issues have held back its launch, first due to delays in both the construction of the launch site and development of the launch vehicle, and later due to Brazil backing out of the Tsyklon-4 partnership with Ukraine leading to the rocket's indefinite hold. In March 2015, talks to arrange a flight for Nano-JASMINE began between NAOJ and ESA. It was to be launched as a piggyback payload with CHEOPS on a Soyuz launch vehicle in 2019. As of late 2020, the launch of Nano-JASMINE was scheduled for 2022. In 2023, the launch was cancelled and the satellite was put on permanent display. Nano-JASMINE will be succeeded by a larger spacecraft, JASMINE (formerly "Small-JASMINE"), which is planned to be launched in 2028 by an Epsilon launch vehicle. References External links JASMINE project website at JASMINE-Galaxy.org Nano-JASMINE website by the University of Tokyo Space astrometry missions Parallax Proposed spacecraft Satellites of Japan Space telescopes 2020s in spaceflight
Nano-JASMINE
[ "Astronomy" ]
746
[ "Space telescopes", "Space astrometry missions" ]
28,975,107
https://en.wikipedia.org/wiki/Eurocopter%20X%C2%B3
The Eurocopter X³ (X-Cubed) is a retired experimental high-speed compound helicopter developed by Airbus Helicopters (formerly Eurocopter). A technology demonstration platform for "high-speed, long-range hybrid helicopter" or H³ concept, the X³ achieved in level flight on 7 June 2013, setting an unofficial helicopter speed record. In June 2014, it was placed in a French air museum in the village of Saint-Victoret. Design and development Technology The X³ demonstrator is based on the Eurocopter AS365 Dauphin helicopter, with the addition of short span wings each fitted with a tractor propeller, having a different pitch to counter the torque effect of the main rotor. Conventional helicopters use tail rotors to counter the torque effect. The tractor propellers are gear driven from the two main turboshaft engines which also drive the five-bladed main rotor system, taken from a Eurocopter EC155. Test pilots describe the X³ flight as smooth, but the X³ does not have passive or active anti-vibration systems and can fly without stability augmentation systems, unlike the Sikorsky X2. The helicopter is designed to prove the concept of a high-speed helicopter which depends on slowing the rotor speed (by 15%) to avoid drag from the advancing blade tip, and to avoid retreating blade stall by unloading the rotor while a small wing provides 40–80% lift instead. The X³ can hover with a pitch attitude between minus 10 and plus 15 degrees. Its bank range is 40 degrees in hover, and is capable of flying at bank angles of 120 to 140 degrees. During testing the aircraft demonstrated a rate of climb of 5,500 feet per minute and high-G turn rates of 2Gs at 210 knots. Performance The X³ first flew on 6 September 2010 from French Direction générale de l'armement facility at Istres-Le Tubé Air Base. On 12 May 2011 the X³ demonstrated a speed of while using less than 80 percent of available power. In May 2012, it was announced that the Eurocopter X³ development team had received the American Helicopter Society's Howard Hughes Award for 2012. Eurocopter demonstrated the X³ in the United States during the summer of 2012, the aircraft logging 55 flight hours, with a number of commercial and military operators being given the opportunity to fly the aircraft. With an aerodynamic fairing installed on the rotor head, the X³ demonstrated a speed of in level flight and in a shallow dive on 7 June 2013, beating the Sikorsky X2's unofficial record set in September 2010, and thus becoming the world's fastest non-jet augmented compound helicopter. Variants Eurocopter suggested that a production H³ application could appear as soon as 2020. The company had also previously expressed an interest in offering an H³ technology based solution for the United States' Future Vertical Lift program, with EADS North America submitting bid to build a technology demonstrator under the US Army's Joint Multi Role (JMR) program, but later withdrew due to cost and because Eurocopter might have to transfer X³ intellectual property to the US, and Eurocopter chose to focus on the Armed Aerial Scout instead. Ultimately the company was not downselected for the JMR effort, and the AAS program was cancelled. Eurocopter saw the offshore oil market and Search and rescue community as potential customers for X³ technology. An X³-based unpressurised compound helicopter called LifeRCraft is also among the projects planned under the European Union's €4 billion ($5.44 billion) Clean Sky 2 research program as one of two high-speed rotorcraft flight demonstrators. 
Airbus began development of the hybrid composite helicopter with a 4.6-litre V-8 piston engine in 2014, froze the design in 2016 to start building in 2017, and had plans to fly it in 2019. The X³ was moved to Musée de l’air et de l’espace in 2014 for public display. RACER The Airbus RACER (Rapid And Cost-Effective Rotorcraft) is a development revealed at the June 2017 Paris air show, final assembly was planned to start in 2019 for a 2020 first flight. Cruising up to , it aims for a 25% cost reduction per distance over a conventional helicopter. Specifications See also References External links . . . . Video X³ Video X³, Cockpit Making of . Airbus Helicopters aircraft Compound helicopters Experimental helicopters 2010s French helicopters High-wing aircraft Slowed rotor Twin-turbine helicopters Aircraft first flown in 2010
Eurocopter X³
[ "Engineering" ]
938
[ "Slowed rotor", "Aerospace engineering" ]
28,980,648
https://en.wikipedia.org/wiki/Hall%E2%80%93Higman%20theorem
In mathematical group theory, the Hall–Higman theorem, due to Philip Hall and Graham Higman (1956), describes the possibilities for the minimal polynomial of an element of prime power order for a representation of a p-solvable group. Statement Suppose that G is a p-solvable group with no normal p-subgroups, acting faithfully on a vector space over a field of characteristic p. If x is an element of order p^n of G then the minimal polynomial is of the form (X − 1)^r for some r ≤ p^n. The Hall–Higman theorem states that one of the following 3 possibilities holds: r = p^n; or p is a Fermat prime and the Sylow 2-subgroups of G are non-abelian and r ≥ p^n − p^(n−1); or p = 2 and the Sylow q-subgroups of G are non-abelian for some Mersenne prime q = 2^m − 1 less than 2^n and r ≥ 2^n − 2^(n−m). Examples The group SL2(F3) is 3-solvable (in fact solvable) and has an obvious 2-dimensional representation over a field of characteristic p = 3, in which the elements of order 3 have minimal polynomial (X − 1)^2, with r = 3 − 1 = 2. References Theorems in group theory Number theory
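The SL2(F3) example can be checked numerically. The short Python sketch below verifies, for the unipotent matrix [[1, 1], [0, 1]] over F3 (one concrete choice of an order-3 element, picked here for illustration rather than named in the article), that the element has order 3 and that its minimal polynomial is (X − 1)^2, i.e. r = 2 = 3 − 1.

```python
def mat_mul(a, b, p=3):
    """Multiply two 2x2 matrices with entries reduced modulo the prime p."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
x = [[1, 1], [0, 1]]          # an element of order 3 in SL2(F3) (example choice)

# Order of x: the smallest k >= 1 with x^k = I.
power, order = x, 1
while power != I:
    power = mat_mul(power, x)
    order += 1
print("order of x:", order)                     # expected: 3

# Minimal polynomial: x != I but (x - I)^2 = 0, so it is (X - 1)^2 and r = 2.
x_minus_I = [[(x[i][j] - I[i][j]) % 3 for j in range(2)] for i in range(2)]
print("(x - I)^2 =", mat_mul(x_minus_I, x_minus_I))   # expected: [[0, 0], [0, 0]]
```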
Hall–Higman theorem
[ "Mathematics" ]
269
[ "Discrete mathematics", "Number theory" ]
35,727,637
https://en.wikipedia.org/wiki/Rack%20phase%20difference
Rack Phase Difference (RPD) is a difference in the elevation between rack teeth of the chords of any single leg of a jackup rig with open truss-type legs. This type of jackup vessel operates with a rack and pinion drive system, as opposed to the pin-hole system found on jackup rigs with tubular legs. The legs are mostly triangular, though some rectangular designs can be found. The chords are connected via a network of bracings to reinforce the leg structure. When a jackup positions its spudcan - i.e. the shoe mounted at the bottom of the leg - onto the seabed, the mass of the vessel pressing down onto the seabed will cause a reaction force. If the seabed is inclined, or the spudcan does not travel straight through the soil due to the consistency of the soil layers encountered during leg penetration, the forces acting on the chords will be unequal, which will cause a chord - and the rack attached to it - to move ever so slightly up or down. This displacement is the Rack Phase Value (RPV), and can be either positive or negative. The sum of the absolute values of the RPVs is the Rack Phase Difference. This relative displacement between the racks causes additional loading on the leg members, inducing additional stresses at: the rack and pinion teeth that are meshing during the jacking operation, which increases rack and pinion wear; the rack teeth and the upper and lower leg guides, which increases rack wear on the tips; and the welded connections between bracings and chords. The RPD limit should be clearly defined by the designer of the jack-up rig, and is subject to narrow design and as-built tolerances. The better the design tolerances are followed during construction, the lower the RPD will likely be during jacking operations, and the less wear the system will encounter. If the RPD exceeds the allowable value, the excessive loading may result in failure of structural members, including: cracks in welds between structural members; buckling of bracings; shearing off of rack and pinion teeth; deformation of the leg guide plate supporting structure; and damage to the drives (e.g. internal planetary gear damage, damage to bull gears, shearing off of shaft keys, ...). Reasons Typical causes of RPD include eccentricity of the spudcan centre due to uneven ground conditions, and sliding of the leg. References Oilfield terminology Oil platforms
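As a small illustration of the definition (RPD as the sum of the absolute Rack Phase Values), here is a hedged Python sketch. The chord elevations, the choice of reference datum, and the allowable limit are made-up example numbers; real rigs follow the designer's specified datum and limits.

```python
def rack_phase_difference(chord_elevations_mm):
    """RPD for one leg: sum of absolute Rack Phase Values (RPVs).

    Each RPV is taken here as a chord's rack elevation relative to the mean
    elevation of the leg's chords (one possible reference convention).
    """
    mean = sum(chord_elevations_mm) / len(chord_elevations_mm)
    rpvs = [e - mean for e in chord_elevations_mm]
    return rpvs, sum(abs(v) for v in rpvs)

# Example: a triangular leg with three chords (elevations in millimetres).
rpvs, rpd = rack_phase_difference([12.0, -4.0, -8.0])
allowable_rpd_mm = 35.0   # purely illustrative limit
print("RPVs:", rpvs, "RPD:", rpd, "within limit:", rpd <= allowable_rpd_mm)
```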
Rack phase difference
[ "Chemistry", "Engineering" ]
486
[ "Oil platforms", "Petroleum technology", "Natural gas technology", "Structural engineering" ]
35,728,290
https://en.wikipedia.org/wiki/Algebraic%20semantics%20%28computer%20science%29
In computer science, algebraic semantics is a form of axiomatic semantics based on algebraic laws for describing and reasoning about program specifications in a formal manner. Syntax The syntax of an algebraic specification is formulated in two steps: (1) defining a formal signature of data types and operation symbols, and (2) interpreting the signature through sets and functions. Definition of a signature The signature of an algebraic specification defines its formal syntax. The word "signature" is used like the concept of "key signature" in musical notation. A signature consists of a set of data types, known as sorts, together with a family of sets, each set containing operation symbols (or simply symbols) that relate the sorts. We use to denote the set of operation symbols relating the sorts to the sort . For example, for the signature of integer stacks, we define two sorts, namely, and , and the following family of operation symbols: where denotes the empty string. Set-theoretic interpretation of signature An algebra interprets the sorts and operation symbols as sets and functions. Each sort is interpreted as a set , which is called the carrier of of sort , and each symbol in is mapped to a function , which is called an operation of . With respect to the signature of integer stacks, we interpret the sort as the set of integers, and interpret the sort as the set of integer stacks. We further interpret the family of operation symbols as the following functions: Semantics Semantics refers to the meaning or behavior. An algebraic specification provides both the meaning and behavior of the object in question. Equational axioms The semantics of an algebraic specifications is defined by axioms in the form of conditional equations. With respect to the signature of integer stacks, we have the following axioms: For any and , where "" indicates "not found". Mathematical semantics The mathematical semantics (also known as denotational semantics) of a specification refers to its mathematical meaning. The mathematical semantics of an algebraic specification is the class of all algebras that satisfy the specification. In particular, the classic approach by Goguen et al. takes the initial algebra (unique up to isomorphism) as the "most representative" model of the algebraic specification. Operational semantics The operational semantics of a specification means how to interpret it as a sequence of computational steps. We define a ground term as an algebraic expression without variables. The operational semantics of an algebraic specification refers to how ground terms can be transformed using the given equational axioms as left-to-right rewrite rules, until such terms reach their normal forms, where no more rewriting is possible. Consider the axioms for integer stacks. Let "" denote "rewrites to". Canonical property An algebraic specification is said to be confluent (also known as Church-Rosser) if the rewriting of any ground term leads to the same normal form. It is said to be terminating if the rewriting of any ground term will lead to a normal form after a finite number of steps. The algebraic specification is said to be canonical (also known as convergent) if it is both confluent and terminating. In other words, it is canonical if the rewriting of any ground term leads to a unique normal form after a finite number of steps. Given any canonical algebraic specification, the mathematical semantics agrees with the operational semantics. 
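The rewriting view of the operational semantics can be sketched in code. In the Python fragment below, ground terms are nested tuples and the rewrite rules are the usual stack axioms (pop(empty) → empty, pop(push(x, s)) → s, top(empty) → "not found", top(push(x, s)) → x); the concrete operation names and the "not found" error value are illustrative assumptions rather than a transcription of any particular published specification.

```python
EMPTY = ("empty",)
ERROR = "not found"          # stands in for the error constant of the specification

def push(x, s):
    return ("push", x, s)

def rewrite(term):
    """Apply the stack axioms bottom-up over the term (one full innermost pass)."""
    if not isinstance(term, tuple):
        return term
    op, *args = term
    args = [rewrite(a) for a in args]          # rewrite subterms first
    if op == "pop":
        (s,) = args
        if s == EMPTY:
            return EMPTY
        if isinstance(s, tuple) and s[0] == "push":
            return s[2]
    if op == "top":
        (s,) = args
        if s == EMPTY:
            return ERROR
        if isinstance(s, tuple) and s[0] == "push":
            return s[1]
    return (op, *args)

def normalize(term):
    """Rewrite until a normal form is reached (a canonical spec guarantees termination)."""
    prev, cur = None, term
    while cur != prev:
        prev, cur = cur, rewrite(cur)
    return cur

# top(pop(push(1, push(2, empty)))) rewrites to 2
print(normalize(("top", ("pop", push(1, push(2, EMPTY))))))
```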
As a result, canonical algebraic specifications have been widely applied to address program correctness issues. For example, numerous researchers have applied such specifications to the testing of observational equivalence of objects in object-oriented programming. See Chen and Tse as a secondary source that provides a historical review of prominent research from 1981 to 2013. See also Algebraic semantics (mathematical logic) OBJ (programming language) Joseph Goguen References Formal methods Logic in computer science Formal specification languages Programming language semantics
Algebraic semantics (computer science)
[ "Mathematics", "Engineering" ]
770
[ "Software engineering", "Mathematical logic", "Logic in computer science", "Formal methods" ]
35,728,942
https://en.wikipedia.org/wiki/Spieker%20center
In geometry, the Spieker center is a special point associated with a plane triangle. It is defined as the center of mass of the perimeter of the triangle. The Spieker center of a triangle is the center of gravity of a homogeneous wire frame in the shape of . The point is named in honor of the 19th-century German geometer Theodor Spieker. The Spieker center is a triangle center and it is listed as the point X(10) in Clark Kimberling's Encyclopedia of Triangle Centers. Location The following result can be used to locate the Spieker center of any triangle. The Spieker center of triangle is the incenter of the medial triangle of . That is, the Spieker center of is the center of the circle inscribed in the medial triangle of . This circle is known as the Spieker circle. The Spieker center is also located at the intersection of the three cleavers of triangle . A cleaver of a triangle is a line segment that bisects the perimeter of the triangle and has one endpoint at the midpoint of one of the three sides. Each cleaver contains the center of mass of the boundary of , so the three cleavers meet at the Spieker center. To see that the incenter of the medial triangle coincides with the intersection point of the cleavers, consider a homogeneous wireframe in the shape of triangle consisting of three wires in the form of line segments having lengths . The wire frame has the same center of mass as a system of three particles of masses placed at the midpoints of the sides . The centre of mass of the particles at and is the point which divides the segment in the ratio . The line is the internal bisector of . The centre of mass of the three particle system thus lies on the internal bisector of . Similar arguments show that the center mass of the three particle system lies on the internal bisectors of and also. It follows that the center of mass of the wire frame is the point of concurrence of the internal bisectors of the angles of the triangle , which is the incenter of the medial triangle . Properties Let be the Spieker center of triangle . The trilinear coordinates of are The barycentric coordinates of are is the radical center of the three excircles. is the cleavance center of triangle is collinear with the incenter (), the centroid (), and the Nagel point () of triangle . Moreover, Thus on a suitably scaled and positioned number line, , , , and . lies on the Kiepert hyperbola. is the point of concurrence of the lines where are similar, isosceles and similarly situated triangles constructed on the sides of triangle as bases, having the common base angle References Triangle centers
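The characterization of the Spieker center as the perimeter centroid lends itself to a quick numerical check. In the Python sketch below, the point is computed in two independent ways for an arbitrary example triangle: as the side-length-weighted average of the side midpoints (the perimeter's center of mass) and from the barycentric coordinates (b + c : c + a : a + b); the vertex coordinates are illustrative choices only.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def spieker_center(A, B, C):
    """Spieker center as the center of mass of the triangle's perimeter."""
    a, b, c = dist(B, C), dist(C, A), dist(A, B)   # side lengths opposite A, B, C
    mA = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)    # midpoint of side a, etc.
    mB = ((C[0] + A[0]) / 2, (C[1] + A[1]) / 2)
    mC = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
    per = a + b + c
    return tuple((a * mA[i] + b * mB[i] + c * mC[i]) / per for i in range(2))

def spieker_from_barycentrics(A, B, C):
    """Same point from the barycentric coordinates (b + c : c + a : a + b)."""
    a, b, c = dist(B, C), dist(C, A), dist(A, B)
    w = (b + c, c + a, a + b)
    s = sum(w)
    return tuple((w[0] * A[i] + w[1] * B[i] + w[2] * C[i]) / s for i in range(2))

A, B, C = (0.0, 0.0), (6.0, 0.0), (1.0, 4.0)        # arbitrary example triangle
print(spieker_center(A, B, C))
print(spieker_from_barycentrics(A, B, C))            # agrees with the first result
```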
Spieker center
[ "Physics", "Mathematics" ]
584
[ "Point (geometry)", "Triangle centers", "Points defined for a triangle", "Geometric centers", "Symmetry" ]
35,730,183
https://en.wikipedia.org/wiki/Parasitic%20mass
In a rocket, weapon, or transportation system (such as personal rapid transit), parasitic mass is the mass of all components of the system that are not considered payloads. A typical engineering objective is to drive the parasitic mass towards zero. Efficiency gains are achieved as the parasitic mass is reduced. References Aircraft weight measurements
Parasitic mass
[ "Physics", "Engineering" ]
65
[ "Aircraft weight measurements", "Mass", "Matter", "Aerospace engineering" ]
35,732,417
https://en.wikipedia.org/wiki/Evectant
In mathematical invariant theory, an evectant is a contravariant constructed from an invariant by acting on it with a differential operator called an evector. Evectants and evectors were introduced by . References Invariant theory
Evectant
[ "Physics", "Mathematics" ]
47
[ "Symmetry", "Algebra stubs", "Group actions", "Invariant theory", "Algebra" ]
35,733,284
https://en.wikipedia.org/wiki/Chirp%20mass
In astrophysics, the chirp mass of a compact binary system determines the leading-order orbital evolution of the system as a result of energy loss from emitting gravitational waves. Because the gravitational wave frequency is determined by orbital frequency, the chirp mass also determines the frequency evolution of the gravitational wave signal emitted during a binary's inspiral phase. In gravitational wave data analysis, it is easier to measure the chirp mass than the two component masses alone. Definition from component masses A two-body system with component masses and has a chirp mass of The chirp mass may also be expressed in terms of the total mass of the system and other common mass parameters: the reduced mass : the mass ratio : or the symmetric mass ratio : The symmetric mass ratio reaches its maximum value when , and thus the geometric mean of the component masses : If the two component masses are roughly similar, then the latter factor is close to so . This multiplier decreases for unequal component masses but quite slowly. E.g. for a 3:1 mass ratio it becomes , while for a 10:1 mass ratio it is Orbital evolution In general relativity, the phase evolution of a binary orbit can be computed using a post-Newtonian expansion, a perturbative expansion in powers of the orbital velocity . The first order gravitational wave frequency, , evolution is described by the differential equation , where and are the speed of light and Newton's gravitational constant, respectively. If one is able to measure both the frequency and frequency derivative of a gravitational wave signal, the chirp mass can be determined. To disentangle the individual component masses in the system one must additionally measure higher order terms in the post-Newtonian expansion. Mass-redshift degeneracy One limitation of the chirp mass is that it is affected by redshift; what is actually derived from the observed gravitational waveform is the product where is the redshift. This redshifted chirp mass is larger than the source chirp mass, and can only be converted to a source chirp mass by finding the redshift . This is usually resolved by using the observed amplitude to find the chirp mass divided by distance, and solving both equations using Hubble's law to compute the relationship between distance and redshift. Xian Chen has pointed out that this assumes non-cosmological redshifts (peculiar velocity and gravitational redshift) are negligible, and questions this assumption. If a binary pair of stellar-mass black holes merge while closely orbiting a supermassive black hole (an extreme mass ratio inspiral), the observed gravitational wave would experience significant gravitational and doppler redshift, leading to a falsely low redshift estimate, and therefore a falsely high mass. He suggests that there are plausible reasons to suspect that the SMBH's accretion disc and tidal forces would enhance the merger rate of black hole binaries near it, and the consequent falsely high mass estimates would explain the unexpectedly large masses of observed black hole mergers. (The question would be best resolved by a lower-frequency gravitational wave detector such as LISA which could observe the extreme mass ratio inspiral waveform.) See also Reduced mass Two-body problem in general relativity Note References - Gravitational-wave astronomy
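The chirp mass is straightforward to evaluate numerically. The Python sketch below implements the standard definition, chirp mass = (m1·m2)^(3/5) / (m1 + m2)^(1/5), and the equivalent form η^(3/5)·M in terms of the total mass M and symmetric mass ratio η = m1·m2/M²; the component masses used (in solar masses) are arbitrary illustrative values.

```python
def chirp_mass(m1, m2):
    """Chirp mass of a binary: (m1*m2)**(3/5) / (m1 + m2)**(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def chirp_mass_from_eta(m1, m2):
    """Equivalent form using total mass M and symmetric mass ratio eta = m1*m2/M**2."""
    M = m1 + m2
    eta = m1 * m2 / M ** 2
    return eta ** 0.6 * M

m1, m2 = 36.0, 29.0          # example component masses in solar masses
print(f"chirp mass       : {chirp_mass(m1, m2):.2f} Msun")
print(f"via eta and M    : {chirp_mass_from_eta(m1, m2):.2f} Msun")
print(f"equal-mass 30+30 : {chirp_mass(30.0, 30.0):.2f} Msun")   # ~0.87 * 30
```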
Chirp mass
[ "Physics", "Astronomy" ]
674
[ "Astronomical sub-disciplines", "Gravitational-wave astronomy", "Astrophysics" ]
35,734,781
https://en.wikipedia.org/wiki/Alikhanian%E2%80%93Alikhanov%20spectrometer
The Alikhanian–Alikhanov spectrometer was a large solenoid-based instrument constructed by brothers Abraham Alikhanov and Artem Alikhanian at the Aragats scientific station in Armenia. The spectrometer was unique in the world. It had the largest volume of magnetic field (1.0 × 0.3 × 0.15 m), with an intensity of up to 20 kGauss, and was packed with four- and five-layer thin-walled proportional counters of 4.6 mm diameter and 30–35 cm length, through which the coordinates of the trajectories of cosmic rays were determined with an accuracy of about 1 mm. The spectrometer, which had a high resolution (the maximum measurable momentum in the 20 kGauss field was 150 GeV/c), was used to determine the momentum and mass of cosmic particles. Abraham Alikhanov and Artem Alikhanian believed that the spectrum of elementary particles was richer and more varied than was thought at the time. (By 1951 the only known hadrons were the proton, neutron and pion, and the only known leptons were the electron, muon and neutrino.) Experiments with the spectrometer led to the discovery of protons in cosmic rays (Alikhanian et al., 1945) and of narrow air showers (Alikhanian, Asatiani, 1945). Using the Alikhanyan–Alikhanov magnetic spectrometer, N. Kocharian obtained the energy spectra of muons and protons with energies up to several GeV (1952). However, only some of the many peaks in the mass distributions measured at Aragats were later verified to be "real" particles and became known as π- and K-mesons. The Alikhanian–Alikhanov mass spectrometer became the prototype for hodoscopic devices that have played a major role in nuclear physics research. The memorial Alikhanian–Alikhanov magnetic spectrometer is situated on Mt. Aragats, Armenia. References Armenian inventions Spectrometers
Alikhanian–Alikhanov spectrometer
[ "Physics", "Chemistry" ]
420
[ "Spectrometers", "Spectroscopy", "Spectrum (physical sciences)" ]
35,736,414
https://en.wikipedia.org/wiki/Fr%C3%A9chet%20inequalities
In probabilistic logic, the Fréchet inequalities, also known as the Boole–Fréchet inequalities, are rules implicit in the work of George Boole and explicitly derived by Maurice Fréchet that govern the combination of probabilities about logical propositions or events logically linked together in conjunctions (AND operations) or disjunctions (OR operations) as in Boolean expressions or fault or event trees common in risk assessments, engineering design and artificial intelligence. These inequalities can be considered rules about how to bound calculations involving probabilities without assuming independence or, indeed, without making any dependence assumptions whatsoever. The Fréchet inequalities are closely related to the Boole–Bonferroni–Fréchet inequalities, and to Fréchet bounds. If A1, ..., An are logical propositions or events, the Fréchet inequalities are: Probability of a logical conjunction (&): max(0, P(A1) + ... + P(An) − (n − 1)) ≤ P(A1 & ... & An) ≤ min(P(A1), ..., P(An)); Probability of a logical disjunction (∨): max(P(A1), ..., P(An)) ≤ P(A1 ∨ ... ∨ An) ≤ min(1, P(A1) + ... + P(An)); where P( ) denotes the probability of an event or proposition. In the case where there are only two events, say A and B, the inequalities reduce to: Probability of a logical conjunction (&): max(0, P(A) + P(B) − 1) ≤ P(A & B) ≤ min(P(A), P(B)); Probability of a logical disjunction (∨): max(P(A), P(B)) ≤ P(A ∨ B) ≤ min(1, P(A) + P(B)). The inequalities bound the probabilities of the two kinds of joint events given the probabilities of the individual events. For example, if A is "has lung cancer", and B is "has mesothelioma", then A & B is "has both lung cancer and mesothelioma", and A ∨ B is "has lung cancer or mesothelioma or both diseases", and the inequalities relate the risks of these events. Note that logical conjunctions are denoted in various ways in different fields, including AND, &, ∧ and graphical AND-gates. Logical disjunctions are likewise denoted in various ways, including OR, |, ∨, and graphical OR-gates. If events are taken to be sets rather than logical propositions, the set-theoretic versions of the Fréchet inequalities are: Probability of an intersection of events: max(0, P(A1) + ... + P(An) − (n − 1)) ≤ P(A1 ∩ ... ∩ An) ≤ min(P(A1), ..., P(An)); Probability of a union of events: max(P(A1), ..., P(An)) ≤ P(A1 ∪ ... ∪ An) ≤ min(1, P(A1) + ... + P(An)). Numerical examples If the probability of an event A is P(A) = a = 0.7, and the probability of the event B is P(B) = b = 0.8, then the probability of the conjunction, i.e., the joint event A & B, is surely in the interval [0.5, 0.7]. Likewise, the probability of the disjunction A ∨ B is surely in the interval [0.8, 1]. These intervals are contrasted with the results obtained from the rules of probability assuming independence, where the probability of the conjunction is P(A & B) = a × b = 0.7 × 0.8 = 0.56, and the probability of the disjunction is P(A ∨ B) = a + b − a × b = 0.94. When the marginal probabilities are very small (or large), the Fréchet intervals are strongly asymmetric about the analogous results under independence. For example, suppose P(A) = 0.000002 = 2 × 10^−6 and P(B) = 0.000003 = 3 × 10^−6. Then the Fréchet inequalities say P(A & B) is in the interval [0, 2 × 10^−6], and P(A ∨ B) is in the interval [3 × 10^−6, 5 × 10^−6]. If A and B are independent, however, the probability of A & B is 6 × 10^−12, which is, comparatively, very close to the lower limit (zero) of the Fréchet interval. Similarly, the probability of A ∨ B is about 5 × 10^−6 (more precisely 4.999994 × 10^−6), which is very close to the upper limit of the Fréchet interval. This is what justifies the rare-event approximation often used in reliability theory. Proofs The proofs are elementary. Recall that P(A ∨ B) = P(A) + P(B) − P(A & B), which implies P(A) + P(B) − P(A ∨ B) = P(A & B). Because all probabilities are no bigger than 1, we know P(A ∨ B) ≤ 1, which implies that P(A) + P(B) − 1 ≤ P(A & B). 
Because all probabilities are also positive we can similarly say 0 ≤ P(A & B), so max(0, P(A) + P(B) − 1) ≤ P(A & B). This gives the lower bound on the conjunction. To get the upper bound, recall that P(A & B) = P(A|B) P(B) = P(B|A) P(A). Because P(A|B) ≤ 1 and P(B|A) ≤ 1, we know P(A & B) ≤ P(A) and P(A & B) ≤ P(B). Therefore, P(A & B) ≤ min(P(A), P(B)), which is the upper bound. The best-possible nature of these bounds follows from observing that they are realized by some dependency between the events A and B. Comparable bounds on the disjunction are similarly derived. Extensions When the input probabilities are themselves interval ranges, the Fréchet formulas still work as a probability bounds analysis. Hailperin considered the problem of evaluating probabilistic Boolean expressions involving many events in complex conjunctions and disjunctions. Some have suggested using the inequalities in various applications of artificial intelligence and have extended the rules to account for various assumptions about the dependence among the events. The inequalities can also be generalized to other logical operations, including even modus ponens. When the input probabilities are characterized by probability distributions, analogous operations that generalize logical and arithmetic convolutions without assumptions about the dependence between the inputs can be defined based on the related notion of Fréchet bounds. Quantum Fréchet bounds Similar bounds hold also in quantum mechanics in the case of separable quantum systems and that entangled states violate these bounds. Consider a composite quantum system. In particular, we focus on a composite quantum system AB made by two finite subsystems denoted as A and B. Assume that we know the density matrix of the subsystem A, i.e., that is a trace-one positive definite matrix in (the space of Hermitian matrices of dimension ), and the density matrix of subsystem B denoted as We can think of and as the marginals of the subsystems A and B. From the knowledge of these marginals, we want to infer something about the joint in We restrict our attention to joint that are separable. A density matrix on a composite system is separable if there exist and which are mixed states of the respective subsystems such that where Otherwise is called an entangled state. For separable density matrices in the following Fréchet like bounds hold: The inequalities are matrix inequalities, denotes the tensor product and the identity matrix of dimension . It is evident that structurally the above inequalities are analogues of the classical Fréchet bounds for the logical conjunction. It is also worth to notice that when the matrices and are restricted to be diagonal, we obtain the classical Fréchet bounds. The upper bound is known in Quantum Mechanics as reduction criterion for density matrices; it was first proven by and independently formulated by. The lower bound has been obtained in that provides a Bayesian interpretation of these bounds. Numerical examples We have observed when the matrices and are all diagonal, we obtain the classical Fréchet bounds. To show that, consider again the previous numerical example: then we have: which means: It is worth to point out that entangled states violate the above Fréchet bounds. Consider for instance the entangled density matrix (which is not separable): which has marginal Entangled states are not separable and it can easily be verified that since the resulting matrices have one negative eigenvalue. 
Another example of violation of probabilistic bounds is provided by the famous Bell's inequality: entangled states exhibit a form of stochastic dependence stronger than the strongest classical dependence: and in fact they violate Fréchet like bounds. See also Probabilistic logic Logical conjunction Logical disjunction Fréchet bounds Boole's inequality Bonferroni inequalities Probability bounds analysis Probability of the union of pairwise independent events References Articles containing proofs Probabilistic inequalities Statistical inequalities Probability bounds analysis
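As an illustrative addendum (not part of the article itself), the classical two-event and n-event Fréchet bounds stated above are simple to evaluate numerically. The following Python sketch uses hypothetical function names; it simply encodes the max/min expressions and reproduces the article's numerical examples.

```python
def frechet_conjunction(probs):
    """Best-possible bounds on P(A1 & ... & An) with no dependence assumption."""
    lower = max(0.0, sum(probs) - (len(probs) - 1))
    upper = min(probs)
    return lower, upper

def frechet_disjunction(probs):
    """Best-possible bounds on P(A1 or ... or An) with no dependence assumption."""
    lower = max(probs)
    upper = min(1.0, sum(probs))
    return lower, upper

if __name__ == "__main__":
    # Two-event example from the article: P(A) = 0.7, P(B) = 0.8
    print(frechet_conjunction([0.7, 0.8]))    # (0.5, 0.7)
    print(frechet_disjunction([0.7, 0.8]))    # (0.8, 1.0)
    # Rare-event example: the conjunction interval collapses toward zero
    print(frechet_conjunction([2e-6, 3e-6]))  # (0.0, 2e-06)
    print(frechet_disjunction([2e-6, 3e-6]))  # (3e-06, 5e-06)
```

These intervals can be compared with the point values 0.56 and 0.94 obtained in the article under an independence assumption.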
Fréchet inequalities
[ "Mathematics" ]
1,787
[ "Theorems in statistics", "Statistical inequalities", "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)", "Articles containing proofs" ]
35,737,249
https://en.wikipedia.org/wiki/Poincar%C3%A9%E2%80%93Birkhoff%20theorem
In symplectic topology and dynamical systems, Poincaré–Birkhoff theorem (also known as Poincaré–Birkhoff fixed point theorem and Poincaré's last geometric theorem) states that every area-preserving, orientation-preserving homeomorphism of an annulus that rotates the two boundaries in opposite directions has at least two fixed points. History The Poincaré–Birkhoff theorem was discovered by Henri Poincaré, who published it in a 1912 paper titled "Sur un théorème de géométrie", and proved it for some special cases. The general case was proved by George D. Birkhoff in his 1913 paper titled "Proof of Poincaré's geometric theorem". References Further reading M. Brown; W. D. Neumann. "Proof of the Poincaré-Birkhoff fixed-point theorem". Michigan Math. J. Vol. 24, 1977, p. 21–31. P. Le Calvez; J. Wang. "Some remarks on the Poincaré–Birkhoff theorem". Proc. Amer. Math. Soc. Vol. 138, No.2, 2010, p. 703–715. J. Franks. "Generalizations of the Poincaré-Birkhoff Theorem", Annals of Mathematics, Second Series, Vol. 128, No. 1 (Jul., 1988), pp. 139–151. Symplectic topology Dynamical systems Fixed-point theorems
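As a purely illustrative companion to the statement of the theorem (not drawn from the article), one can check the conclusion numerically on a toy area-preserving twist map of the annulus. The sketch below uses hypothetical function names; the map (θ, r) → (θ + r, r) on −1 ≤ r ≤ 1 has unit Jacobian determinant and rotates the two boundary circles in opposite directions, and every point on the circle r = 0 is fixed, so the theorem's guarantee of at least two fixed points is satisfied.

```python
import numpy as np

def twist_map(theta, r):
    """Area-preserving twist map of the annulus -1 <= r <= 1.
    The boundary r = +1 rotates forward, r = -1 rotates backward."""
    return (theta + r) % (2 * np.pi), r

def find_fixed_points(n_theta=200, n_r=201, tol=1e-9):
    """Grid search for points mapped (almost) exactly to themselves."""
    fixed = []
    for theta in np.linspace(0, 2 * np.pi, n_theta, endpoint=False):
        for r in np.linspace(-1.0, 1.0, n_r):
            t2, r2 = twist_map(theta, r)
            # angular distance on the circle, plus radial displacement
            d_theta = min(abs(t2 - theta), 2 * np.pi - abs(t2 - theta))
            if d_theta < tol and abs(r2 - r) < tol:
                fixed.append((theta, r))
    return fixed

if __name__ == "__main__":
    pts = find_fixed_points()
    print(len(pts), "fixed points found; all lie on the circle r = 0:",
          all(abs(r) < 1e-9 for _, r in pts))
```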
Poincaré–Birkhoff theorem
[ "Physics", "Mathematics" ]
310
[ "Theorems in mathematical analysis", "Fixed-point theorems", "Theorems in topology", "Mechanics", "Dynamical systems" ]
35,737,867
https://en.wikipedia.org/wiki/Applied%20Thermal%20Engineering
Applied Thermal Engineering is a peer-reviewed scientific journal covering all aspects of the thermal engineering of advanced processes, including process integration, intensification, and development, together with the application of thermal equipment in conventional process plants, which includes its use for heat recovery. The editor-in-chief is C.N. Markides. The journal was established in 1981 as Journal of Heat Recovery Systems and renamed to Heat Recovery Systems and CHP in 1987. It obtained its current title in 1996. According to the Journal Citation Reports, the journal has a 2021 impact factor of 6.465. References External links Heat transfer Energy and fuel journals Engineering journals Elsevier academic journals Academic journals established in 1981 English-language journals
Applied Thermal Engineering
[ "Physics", "Chemistry", "Environmental_science" ]
144
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Environmental science journals", "Energy and fuel journals", "Thermodynamics" ]
37,083,035
https://en.wikipedia.org/wiki/Cilevirus
Cilevirus is a genus of viruses in the family Kitaviridae. Plants serve as natural hosts. There are two species: Citrus leprosis virus C and Citrus leprosis virus C2. History This genus was created in 2006 by Locali-Fabris et al. Structure Viruses in Cilevirus are non-enveloped, with bacilliform geometries. These viruses are about 50 nm wide and 150 nm long. Genomes are linear and segmented, bipartite, around 28.75kb in length. The genome is bipartite with two segments of 8745 nucleotides (RNA 1) and 4986 nucleotides (RNA 2) in length. The 5' terminals of both segments have a cap structure and there are polyadenosine tails at their 3' terminals. RNA 1 contains two open reading frames (ORFs) which encode 286 and 29 kilodalton (kDa) proteins. The 286 kDa protein is a polyprotein involved in virus replication and has four conserved domains: methyltransferase, protease, helicase and an RNA-dependent RNA polymerase. RNA 2 encodes four ORFs which correspond to 15, 61, 32 and 24 kDa proteins. The 32 kDa protein is involved in cell-to-cell movement of the virus but the functions of the other proteins are unknown. Life cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. The virus exits the host cell by tubule-guided viral movement. Plants serve as the natural host. The virus is transmitted via a vector (mites of the genus Brevipalpus). Transmission routes are vector-borne. Clinical This virus causes Citrus leprosis disease and is transmitted by species of the mite genus Brevipalpus (Acari: Tenuipalpidae). This disease is endemic in Brazil and has recently spread to Central America. Its spread there represents a threat to the citrus industry in the United States. References External links Viralzone: Cilevirus ICTV Positive-sense single-stranded RNA viruses Riboviria Virus genera
Cilevirus
[ "Biology" ]
466
[ "Viruses", "Riboviria" ]
37,084,359
https://en.wikipedia.org/wiki/List%20of%20genetic%20codes
While there is much commonality, different parts of the tree of life use slightly different genetic codes. When translating from genome to protein, the use of the correct genetic code is essential. The mitochondrial codes are the relatively well-known examples of variation. The translation table list below follows the numbering and designation by NCBI. Four novel alternative genetic codes were discovered in bacterial genomes by Shulgina and Eddy using their codon assignment software Codetta, and validated by analysis of tRNA anticodons and identity elements; these codes are not currently adopted at NCBI, but are numbered here 34-37, and specified in the table below. The standard code The vertebrate mitochondrial code The yeast mitochondrial code The mold, protozoan, and coelenterate mitochondrial code and the mycoplasma/spiroplasma code The invertebrate mitochondrial code The ciliate, dasycladacean and hexamita nuclear code The deleted kinetoplast code; cf. table 4. deleted, cf. table 1. The echinoderm and flatworm mitochondrial code The euplotid nuclear code The bacterial, archaeal and plant plastid code The alternative yeast nuclear code The ascidian mitochondrial code The alternative flatworm mitochondrial code The Blepharisma nuclear code The chlorophycean mitochondrial code (none) (none) (none) (none) The trematode mitochondrial code The Scenedesmus obliquus mitochondrial code The Thraustochytrium mitochondrial code The Pterobranchia mitochondrial code The candidate division SR1 and gracilibacteria code The Pachysolen tannophilus nuclear code The karyorelict nuclear code The Condylostoma nuclear code The Mesodinium nuclear code The peritrich nuclear code The Blastocrithidia nuclear code The Balanophoraceae plastid code (not shown on web) The Cephalodiscidae mitochondrial code The Enterosoma code The Peptacetobacter code The Anaerococcus and Onthovivens code The Absconditabacterales code The alternative translation tables (2 to 37) involve codon reassignments that are recapitulated in the DNA and RNA codon tables. Table summary Comparison of alternative translation tables for all codons (using IUPAC amino acid codes): Notes Three translation tables have a peculiar status: Table 7 is now merged into translation table 4. Table 8 is merged to table 1; all plant chloroplast differences due to RNA edit. Table 32 is not shown on the web page, but is present in the ASN.1 format "gc.prt" release. Other mechanisms also play a part in protein biosynthesis, such as post-transcriptional modification. References See also Genetic codes: list of alternative codons External links NCBI List of Alternative Codes Further reading Codes Gene expression
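To make the idea of codon reassignment concrete, the following sketch (an illustration added here, not a reproduction of the NCBI tables) represents an alternative code as a small set of overrides applied to a base table. Only a handful of standard-code entries are included for brevity, and the override set shown is the well-known one for the vertebrate mitochondrial code (table 2): AGA and AGG become stop codons, ATA encodes methionine, and TGA encodes tryptophan.

```python
# A few entries of the standard genetic code (table 1): DNA codons to one-letter amino acids.
STANDARD_CODE_SUBSET = {
    "ATG": "M", "ATA": "I", "TGG": "W", "TGA": "*",  # '*' marks a stop codon
    "AGA": "R", "AGG": "R", "TTT": "F", "GGC": "G",
}

# Codon reassignments of the vertebrate mitochondrial code (translation table 2)
# relative to the standard code.
VERTEBRATE_MITO_OVERRIDES = {"AGA": "*", "AGG": "*", "ATA": "M", "TGA": "W"}

def translate(seq, table):
    """Translate a DNA sequence codon by codon with the given (partial) table."""
    protein = []
    for i in range(0, len(seq) - len(seq) % 3, 3):
        codon = seq[i:i + 3].upper()
        protein.append(table.get(codon, "?"))  # '?' = codon not in this partial table
    return "".join(protein)

if __name__ == "__main__":
    mito_table = {**STANDARD_CODE_SUBSET, **VERTEBRATE_MITO_OVERRIDES}
    seq = "ATGATATGAAGA"
    print(translate(seq, STANDARD_CODE_SUBSET))  # MI*R  (standard code)
    print(translate(seq, mito_table))            # MMW*  (vertebrate mitochondrial code)
```

The same override pattern extends to any of the other tables listed above once their reassignments are entered.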
List of genetic codes
[ "Chemistry", "Biology" ]
610
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
37,087,918
https://en.wikipedia.org/wiki/Electronic%20beam%20curing
Electronic Beam curing (EBC) is a surface curing process in the manufacture of high pressure laminate (HPL) boards. The process applies color to a single sheet of Kraft paper which is adhered to a HPL board in such a way that it will keep its color durably while remaining scratch-resistant. Unlike other HPL creation methods, EBC does not use heat. The EBC machine used in the curing process gives the finished HPL boards a surface that is both color-fade resistant and has a high resistance to damage. The process works by first mixing color pastes to the desired color and then applying it to a single sheet of kraft paper. This substrate is then put in the EBC machine together with a protective foil. In the machine this sheet is then shot with electrons at such high velocity that the color impregnated paper hardens almost instantly. After being stored in a temperature-controlled room for a short duration the sheets are ready to be adhered to unfinished HPL boards in a process called dry forming. References Composite materials Curing agents
Electronic beam curing
[ "Physics" ]
223
[ "Materials", "Composite materials", "Matter" ]
37,089,288
https://en.wikipedia.org/wiki/Artificial%20butter%20flavoring
Artificial butter flavoring is a flavoring used to give a food the taste and smell of butter. It may contain diacetyl, acetylpropionyl, or acetoin, three natural compounds in butter that contribute to its characteristic taste and smell. Manufacturers of margarines or similar oil-based products typically add it (along with beta carotene for the yellow color) to make the final product butter-flavored, because it would otherwise be relatively tasteless. Butter-flavoring controversy The lung disease bronchiolitis obliterans is attributed to prolonged exposure to diacetyl, e.g. in an industrial setting. Workers in several factories that manufacture artificial butter flavoring have been diagnosed with bronchiolitis obliterans, a rare and serious disease of the lungs. The disease has been called "popcorn worker's lung" or "popcorn lung" because it was first seen in former workers of a microwave popcorn factory in Missouri, but NIOSH refers to it by the more general term "flavorings-related lung disease". It has also been called "flavorings-related bronchiolitis obliterans" or diacetyl-induced bronchiolitis obliterans. People who work with flavorings that include diacetyl are at risk for flavorings-related lung disease, including those who work in popcorn factories, restaurants, other snack food factories, bakeries, candy factories, margarine and cooking spread factories, and coffee processing facilities. In the year 2000, eight cases of bronchiolitis obliterans were detected in former employees of a microwave popcorn plant. Many of these individuals had initially been misdiagnosed as having other pulmonary diseases such as COPD and asthma. NIOSH investigated the worksite and suggested that artificial butter flavoring containing diacetyl was the most likely causative agent for the cases of bronchiolitis obliterans. Follow up investigations at the plant revealed that 25% of employees had abnormal spirometry exams. The plant effectively implemented changes reducing air concentrations of diacetyl by 1 to 3 orders of magnitude in the years following. A stabilization of respiratory symptoms was seen after this point in those who had been exposed to high levels of diacetyl. However, declines in lung function as measured by spirometry continued. Other studies also found cases of bronchiolitis obliterans in workers at 4 other microwave popcorn production facilities. Additionally further studies have demonstrated a large increase in abnormal spirometry values in workers exposed to flavoring chemicals with a clear dose-response relationship. In 2006, the International Brotherhood of Teamsters and the United Food and Commercial Workers petitioned the U.S. OSHA to promulgate an emergency temporary standard to protect workers from the deleterious health effects of inhaling diacetyl vapors. The petition was followed by a letter of support signed by more than 30 prominent scientists. On January 21, 2009, OSHA issued an advance notice of proposed rulemaking for regulating exposure to diacetyl. The notice requests respondents to provide input regarding adverse health effects, methods to evaluate and monitor exposure, the training of workers. That notice also solicited input regarding exposure and health effects of acetoin, acetaldehyde, acetic acid and furfural. Two bills in the California Legislature seek to ban the use of diacetyl. 
In 2012, Wayne Watson, a regular microwavable popcorn consumer for years, was awarded US$7.27 million in damages from a federal jury in Denver, which decided his lung disease was caused by the chemicals in microwave popcorn and that the popcorn's manufacturer, Gilster-Mary Lee Corporation, and the grocery store that sold it should have warned him of its dangers. Regulation The European Commission has declared diacetyl is legal for use as a flavouring substance in all EU states. As a diketone, diacetyl is included in the EU's flavouring classification Flavouring Group Evaluation 11 (FGE.11). A Scientific Panel of the EU Commission evaluated six flavouring substances (not including diacetyl) from FGE.11 in 2004. As part of this study, the panel reviewed available studies on several other flavourings in FGE.11, including diacetyl. Based on the available data, the panel reiterated the finding that there were no safety concerns for diacetyl's use as a flavouring. In 2007, the European Food Safety Authority (EFSA), the EU's food safety regulatory body, stated its scientific panel on food additives and flavourings (AFC) was evaluating diacetyl along with other flavourings as part of a larger study. In 2007, the Flavor and Extract Manufacturers Association recommended reducing diacetyl in butter flavorings. Manufacturers of butter flavored popcorn including Pop Weaver, Trail's End, and ConAgra Foods (maker of Orville Redenbacher's and Act II) began removing diacetyl as an ingredient from their products. A 2010 U.S. OSHA Safety and Health Information Bulletin and companion Worker Alert recommend employers use safety measures to minimize exposure to diacetyl or its substitutes. References Flavors Butter Dairy products Cooking fats Colloids Spreads (food) Condiments
Artificial butter flavoring
[ "Physics", "Chemistry", "Materials_science" ]
1,096
[ "Chemical mixtures", "Condensed matter physics", "Colloids" ]
6,817,401
https://en.wikipedia.org/wiki/Absolute%20horizon
In general relativity, an absolute horizon is a boundary in spacetime, defined with respect to the external universe, inside which events cannot affect an external observer. Light emitted inside the horizon can never reach the observer, and anything that passes through the horizon from the observer's side is never seen again by the observer. An absolute horizon is thought of as the boundary of a black hole. In the context of black holes, the absolute horizon is generally referred to as an event horizon, though this is often used as a more general term for all types of horizons. The absolute horizon is just one type of horizon. For example, important distinctions must be made between absolute horizons and apparent horizons; the notion of a horizon in general relativity is subtle, and depends on fine distinctions. Definition An absolute horizon is only defined in an asymptotically flat spacetime – a spacetime which approaches flat space as one moves far away from any massive bodies. Examples of asymptotically flat spacetimes include Schwarzschild and Kerr black holes. The FRW universe – which is believed to be a good model for our universe – is generally not asymptotically flat. Nonetheless, we can think of an isolated object in an FRW universe as being nearly an isolated object in an asymptotically flat universe. The particular feature of asymptotic flatness which is needed is a notion of "future null infinity". This is the set of points which are approached asymptotically by null rays (light rays, for example) which can escape to infinity. This is the technical meaning of "external universe". These points are only defined in an asymptotically flat universe. An absolute horizon is defined as the past null cone of future null infinity. Nature of the absolute horizon The definition of an absolute horizon is sometimes referred to as teleological, meaning that it cannot be known where the absolute horizon is without knowing the entire evolution of the universe, including the future. This is both an advantage and a disadvantage. The advantage is that this notion of a horizon is mathematically convenient and does not depend on the observer, unlike apparent horizons, for example. The disadvantage is that it requires the full history (all the way into the future) of the spacetime to be known, thus making event horizons unsuitable for empirical tests. In the case of numerical relativity, where a spacetime is simply being evolved into the future, only a finite portion of the spacetime can be known. See also Causal structure Cauchy horizon Cosmological horizon Ergosphere Killing horizon Naked singularity Particle horizon Photon sphere Reissner–Nordström solution Schwarzschild metric References Further reading This is a popular book, aimed at the lay reader, containing good discussion of horizons and black holes. Concepts in astrophysics General relativity
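For the simplest asymptotically flat example mentioned above, the Schwarzschild black hole, the absolute horizon sits at the Schwarzschild radius r_s = 2GM/c². The short sketch below is an illustration added here (not part of the article); the constants and the example masses are round approximate values.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg (approximate)

def schwarzschild_radius(mass_kg):
    """Radius of the absolute (event) horizon of a non-rotating black hole."""
    return 2 * G * mass_kg / C**2

if __name__ == "__main__":
    for label, m in [("1 solar mass", M_SUN),
                     ("10 solar masses", 10 * M_SUN),
                     ("~4 million solar masses (a galactic-center-scale black hole)", 4e6 * M_SUN)]:
        print(f"{label}: r_s ≈ {schwarzschild_radius(m):.3e} m")
```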
Absolute horizon
[ "Physics" ]
576
[ "General relativity", "Concepts in astrophysics", "Astrophysics", "Theory of relativity" ]
6,818,518
https://en.wikipedia.org/wiki/Sustained%20Spheromak%20Physics%20Experiment
The Sustained Spheromak Physics Experiment (SSPX) is a program at Lawrence Livermore National Laboratory in the United States established to investigate spheromak plasma. A spheromak device produces a plasma in magnetohydrodynamic equilibrium mainly through self-induced plasma currents, as opposed to a tokamak device which depends on large externally generated magnetic fields. The series of experiments examines the potential for a spheromak device to contain fusion fuel. According to a 1999 abstract, "The Sustained Spheromak Physics Experiment, SSPX, will study spheromak physics with particular attention to energy confinement and magnetic fluctuations in a spheromak sustained by electrostatic helicity injection." See also Magnetohydrodynamics Magnetic helicity Magnetic reconnection Turbulence References External links Science@Livermore - Press release Fusion Energy Program publications Romero-Talamas, Investigations of Spheromak plasma dynamics, Ph.D. thesis Selected abstracts: Romero-Talamas, Spheromak formation and sustainment studies Wang, Large-amplitude electron density Hooper, Sustained Spheromak Physics Experiment Lawrence Livermore National Laboratory Magnetic confinement fusion devices Plasma physics facilities
Sustained Spheromak Physics Experiment
[ "Physics", "Chemistry" ]
245
[ "Plasma physics", "Particle traps", "Plasma physics stubs", "Magnetic confinement fusion devices", "Plasma physics facilities" ]
6,822,551
https://en.wikipedia.org/wiki/Surface%20engineering
Surface engineering is the sub-discipline of materials science which deals with the surface of solid matter. It has applications to chemistry, mechanical engineering, and electrical engineering (particularly in relation to semiconductor manufacturing). Solids are composed of a bulk material covered by a surface. The surface which bounds the bulk material is called the surface phase. It acts as an interface to the surrounding environment. The bulk material in a solid is called the bulk phase. The surface phase of a solid interacts with the surrounding environment. This interaction can degrade the surface phase over time. Environmental degradation of the surface phase over time can be caused by wear, corrosion, fatigue and creep. Surface engineering involves altering the properties of the surface phase in order to reduce the degradation over time. This is accomplished by making the surface robust to the environment in which it will be used. It provides a cost-effective material for robust design. A spectrum of topics that represent the diverse nature of the field of surface engineering includes plating technologies, nano and emerging technologies, and surface engineering characterization and testing. Applications Surface engineering techniques are being used in the automotive, aerospace, missile, power, electronic, biomedical, textile, petroleum, petrochemical, chemical, steel, cement, machine tools and construction industries, including road surfacing. Surface engineering techniques can be used to develop a wide range of functional properties, including physical, chemical, electrical, electronic, magnetic, mechanical, wear-resistant and corrosion-resistant properties at the required substrate surfaces. Almost all types of materials, including metals, ceramics, polymers, and composites can be coated on similar or dissimilar materials. It is also possible to form coatings of newer materials (e.g., metallic glass, beta-C3N4), graded deposits, multi-component deposits etc. The advanced materials and deposition processes, including recent developments in ultra-hard materials like BAM (an Al-Mg-B compound), are fully covered in a recent book [R. Chattopadhyay, Green Tribology, Green Surface Engineering and Global Warming, ASM International, USA, 2014]. In 1995, surface engineering was a £10 billion market in the United Kingdom. Coatings, used to make surfaces robust against wear and corrosion, accounted for approximately half the market. In recent years, there has been a paradigm shift in surface engineering from age-old electroplating to processes such as vapor phase deposition, diffusion, and thermal spray and welding, using heat sources such as laser, plasma, solar beam, microwave, friction, pulsed combustion, ion, electron, pulsed arc, spark and induction [R. Chattopadhyay, Advanced Thermally Assisted Surface Engineering Processes, Springer, New York, USA, 2004]. It is estimated that loss due to wear and corrosion in the US is approximately $500 billion. In the US, there are around 9524 establishments (including automotive, aircraft, power and construction industries) that depend on engineered surfaces, with support from 23,466 industries. There are around 65 academic institutions world-wide engaged in surface engineering research and education. Surface cleaning techniques Surface cleaning, synonymously referred to as dry cleaning, is a mechanical cleaning technique used to reduce superficial soil, dust, grime, insect droppings, accretions, or other surface deposits. 
(Dry cleaning, as the term is used in paper conservation, does not employ the use of organic solvents.) Surface cleaning may be used as an independent cleaning technique, as one step (usually the first) in a more comprehensive treatment, or as a prelude to further treatments (e.g., aqueous immersion) which may cause dirt to set irreversibly in paper fibers. Purpose The purpose of surface cleaning is to reduce the potential for damage to paper artifacts by removing foreign material which can be abrasive, acidic, hygroscopic, or degradative. The decision to remove surface dirt is also for aesthetic reasons when it interferes with the visibility of the imagery or information. A decision must be made balancing the probable care of each object against the possible problems related to surface cleaning. Environmental benefits The application of surface engineering to components leads to improved lifetime (e.g., by corrosion resistance) and improved efficiency (e.g., by reducing friction) which directly reduces the emissions corresponding to those components. Applying innovative surface engineering technologies to the energy sector has the potential of reducing annual -eq emissions by up to 1.8 Gt in 2050 and 3.4 Gt in 2100. This corresponds to 7% and 8.5% annual reduction in the energy sector in 2050 and 2100, respectively. Despite those benefits, a major environmental drawback is the dissipative losses occurring throughout the life cycle of the components, and the associated environmental impacts of them. In thermal spray surface engineering applications, the majority of those dissipative losses occur at the coating stage (up to 39%), where part of the sprayed powders do not adhere to the substrate. See also References R. Chattopadhyay, ’Advanced Thermally Assisted Surface Engineering Processes’ Kluwer Academic Publishers, MA, US (now Springer, NY), 2004, , E-. R. Chattopadhyay, ’Surface Wear- Analysis, Treatment, & Prevention’, ASM-International, Materials Park, OH, US, 2001, . Sanjay Kumar Thakur and R. Gopal Krishnan, ’Advances in Applied Surface Engineering’, Research Publishing Services, Singapore, 2011, . External links Institute of Surface Chemistry and Catalysis Ulm University Engineering disciplines Building engineering Materials science
Surface engineering
[ "Physics", "Materials_science", "Engineering" ]
1,140
[ "Applied and interdisciplinary physics", "Building engineering", "Materials science", "Civil engineering", "nan", "Architecture" ]
21,447,866
https://en.wikipedia.org/wiki/Complete%20spatial%20randomness
Complete spatial randomness (CSR) describes a point process whereby point events occur within a given study area in a completely random fashion. It is synonymous with a homogeneous spatial Poisson process. Such a process is modeled using only one parameter ρ, i.e. the density of points within the defined area. The term complete spatial randomness is commonly used in Applied Statistics in the context of examining certain point patterns, whereas in most other statistical contexts it is referred to as a spatial Poisson process. Model Data in the form of a set of points, irregularly distributed within a region of space, arise in many different contexts; examples include locations of trees in a forest, of nests of birds, of nuclei in tissue, of ill people in a population at risk. We call any such data-set a spatial point pattern and refer to the locations as events, to distinguish these from arbitrary points of the region in question. The hypothesis of complete spatial randomness for a spatial point pattern asserts that the number of events in any region follows a Poisson distribution with given mean count per uniform subdivision. The events of a pattern are independently and uniformly distributed over space; in other words, the events are equally likely to occur anywhere and do not interact with each other. "Uniform" is used in the sense of following a uniform probability distribution across the study region, not in the sense of “evenly” dispersed across the study region. There are no interactions amongst the events, and the intensity of events does not vary over the plane. For example, the independence assumption would be violated if the existence of one event either encouraged or inhibited the occurrence of other events in the neighborhood. Distribution The probability of finding exactly k points within the area A with event density ρ is therefore: P(k) = (ρA)^k e^(−ρA) / k!. The first moment of which, the average number of points in the area, is simply ρA. This value is intuitive as it is the Poisson rate parameter. The probability of locating the Nth nearest neighbor of any given point, at some radial distance r, is: P_N(r) = (d / (N − 1)!) λ^N r^(dN − 1) e^(−λ r^d), where d is the number of dimensions, λ is a density-dependent parameter given by λ = ρ π^(d/2) / Γ(d/2 + 1), and Γ is the gamma function, which when its argument is an integer, is simply the factorial function – i.e. Γ(N) = (N − 1)! for integral N. The expected value of the distance to the Nth nearest neighbor can be derived via the use of the gamma function using statistical moments. The first moment is the mean distance between randomly distributed particles in d dimensions. Applications The study of CSR is essential for the comparison of measured point data from experimental sources. As a statistical testing method, the test for CSR has many applications in the social sciences and in astronomical examinations. CSR is often the standard against which data sets are tested. Roughly described, one approach to test the CSR hypothesis is the following: Use statistics that are a function of the distance from every event to the next nearest event. Firstly focus on a specific event and formulate a method for testing whether the event and the next nearest event are significantly close (or distant). Next consider all events and formulate a method for testing whether the average distance from every event to the next nearest event is significantly short (or long). In cases where computing test statistics analytically is difficult, numerical methods, such as Monte Carlo simulation, are employed by simulating a stochastic process a large number of times. 
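As a companion to the Monte Carlo approach sketched above, the following illustrative Python snippet (added here; the function names are hypothetical and not from any standard package) simulates a homogeneous Poisson process on the unit square and compares the observed mean nearest-neighbor distance against the two-dimensional CSR expectation 1/(2√ρ).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_csr(density, area=1.0):
    """Homogeneous Poisson process on the unit square: Poisson count, uniform locations."""
    n = rng.poisson(density * area)
    return rng.uniform(0.0, 1.0, size=(n, 2))

def mean_nearest_neighbor_distance(points):
    """Average distance from each event to its nearest other event."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)
    return dists.min(axis=1).mean()

if __name__ == "__main__":
    density = 200.0                       # expected events per unit area
    observed = np.mean([mean_nearest_neighbor_distance(simulate_csr(density))
                        for _ in range(200)])
    # Expected mean nearest-neighbor distance under CSR in 2D; boundary effects
    # bias the simulated value slightly upward on a finite window.
    expected = 1.0 / (2.0 * np.sqrt(density))
    print(f"simulated mean NN distance: {observed:.4f}, CSR expectation: {expected:.4f}")
```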
References Further reading External links Improvement of Inter-event Distance Tests of Randomness in Spatial Point Processes Spatial analysis Point processes Statistical randomness Spatial processes
Complete spatial randomness
[ "Physics", "Mathematics" ]
688
[ "Point (geometry)", "Spatial analysis", "Point processes", "Space", "Spacetime" ]
30,501,764
https://en.wikipedia.org/wiki/Auriscalpium%20vulgare
Auriscalpium vulgare, commonly known as the pinecone mushroom, the cone tooth, or the ear-pick fungus, is a species of fungus in the family Auriscalpiaceae of the order Russulales. It was first described in 1753 by Carl Linnaeus, who included it as a member of the tooth fungi genus Hydnum, but British mycologist Samuel Frederick Gray recognized its uniqueness and in 1821 transferred it to the genus Auriscalpium that he created to contain it. The fruit bodies (mushrooms) grow on conifer litter or on conifer cones that may be partially or completely buried in soil. The dark brown cap of the small, spoon-shaped mushroom is covered with fine brown hairs, and reaches a diameter of up to . On the underside of the cap are a crowded array of tiny tooth-shaped protrusions ("teeth") up to 3 mm long; they are initially whitish to purplish-pink before turning brown in age. The dark brown and hairy stem, up to long and 2 mm thick, attaches to one edge of the cap. The mushroom produces a white spore print out of roughly spherical spores. High humidity is essential for optimum fruit body development, and growth is inhibited by either too much or too little light. Fruit bodies change their geotropic response three times during their development, which helps ensure that the teeth ultimately point downward for optimum spore release. The pure culture, cell division and the ultrastructure of A. vulgares hyphae and mycelia have been studied and described in search of potentially useful characters for phylogenetic analysis. When grown in culture, the fungus can be induced to produce fruit bodies under suitable conditions. The fungus is widely distributed in Europe, Central America, North America, and temperate Asia. Although common, its small size and nondescript colors lead it to be easily overlooked in the pine woods where it grows. A. vulgare is not generally considered edible, owing to its tough texture. Taxonomy The species was first described in the scientific literature by Carl Linnaeus under the name Hydnum auriscalpium in his 1753 Species Plantarum. Linnaeus placed three other tooth fungi in the genus Hydnum: H. imbricatum, H. repandum, and H. tomentosum. In 1821, Samuel Frederick Gray considered H. auriscalpium to be sufficiently distinct from the other Hydnum species to warrant the creation of a new genus, Auriscalpium, to contain it. In the process, its name was changed to Auriscalpium vulgare. Otto Kuntze and Howard James Banker later independently sought to restore Linnaeus' species name, but the resulting combination (Auriscalpium auriscalpium) is a tautonym and disallowed under the rules for botanical nomenclature (ICBN 2005 rule 23.4), and these combinations are therefore no longer validly published. Other names given to the fungus and now considered synonyms include Hydnum fechtneri, named by Josef Velenovský in 1922, and later combinations based on this name. A. vulgare is the type species of the widely distributed genus of eight species that it belongs to. Despite vast differences in appearance and morphology, A. vulgare is related to such varied taxa as the gilled fungi of Lentinus, the poroid genus Albatrellus, the coral-like Clavicorona, and fellow tooth fungus Hericium. The relationship of all of these taxa—members of the family Auriscalpiaceae of the order Russulales—has been demonstrated through molecular phylogenetics. Auriscalpium vulgare is commonly known as the "pinecone mushroom", the "cone tooth", "pine cone tooth", or the "ear-pick fungus". 
Gray called it the "common earpick-stool"; it was also referred to as the "fir-cone Hydnum", when it was still considered to be a member of that genus. The specific epithet vulgare means "common". The generic name Auriscalpium is Latin for "ear pick" and refers to a small, scoop-shaped instrument used to remove foreign matter from the ear. Description The fruit body of A. vulgare is fibrous when fresh and becomes stiff when dry. It is a small species rarely exceeding in height, with a cap usually smaller than an adult's fingernails: —although it has been known to reach up to . It is semicircular or kidney-shaped, flat on the lower surface and rounded on the top. The surface is at first much like the stem: covered with bristles and dark chestnut brown. It becomes smooth with maturity and can darken to the point of being almost black. The cap margin is usually buff to light brown–roughly the same color as the spines and lighter in color than the center. It becomes rolled inward (revolute) and often wavy in maturity. The spines on the underside of the cap are a few millimeters long and cylindrical down to their sharp tips. White to light brown when young, they later become covered with a white spore mass and then turn an ashy gray. Occasionally, fruit bodies are produced that lack a cap entirely. Auriscalpium vulgare usually has a single stem, but occasionally several stems arise from a thick common base. It attaches to the side of the cap and is cylindrical or slightly flattened with a bulbous base, 2–8 cm tall and 1–3 wide. Its surface is covered with hairy fibers, and its mature color is a dark chestnut brown. The cap flesh is composed of two distinct layers: a thin, compact, black-brown and hairy upper layer, and a thick, soft, white to light brown lower layer that is made of thin, thread-like filaments arranged in a roughly parallel fashion. The stem is similarly divided, with a thin, dark and hairy cortical layer covered by hairs, which encircles inner ochre-colored flesh. A drop of potassium hydroxide applied to the surface of the mushroom will cause it to instantly stain black. The mushroom has no distinct taste or odor, is generally considered inedible because of its toughness and diminutive size. An 1887 textbook noted, however, that it was "commonly eaten in France and Italy". Microscopic characteristics Spore deposits are white. Viewed under a light microscope, the spores appear hyaline (translucent), covered with minute wart-like bumps, and are spherical or nearly so, with dimensions of 4.6–5.5 by 4–5 μm. They are amyloid (reacting to Melzer's reagent) and cyanophilous (staining in methyl blue). The basidia (spore-bearing cells of the hymenium) are four-spored with basal clamps, and measure 15–24 by 3–4 μm, and sterigmata (extensions of the basidia that bear the spores) are swollen at the base and roughly 3 μm long. The hyphal system is dimitic, comprising both generative (undifferentiated) and skeletal (structural) hyphae. The thin-walled generative hyphae are hyaline, and have clamp connections; the thick-walled skeletal hyphae are thicker overall and lack such connections. The cortex (the tougher outer layer of flesh) is made of parallel unbranched generative hyphae that are brown, thick-walled, clumped together, and frequently clamped. The internal flesh is made of interwoven generative and skeletal hyphae. Gloeoplerous hyphae (containing oily or granular contents) are also present, protruding into the hymenium as club-like or sharp-pointed gloeocystidia. 
The hyphae of basidiomycetous fungi are partitioned by cross-walls called septa, and these septa have pores that permit the passage of cytoplasm or protoplasm between adjacent hyphal compartments. In an effort to determine ultrastructural characters useful for systematic and phylogenetic analyses of the Agaricomycotina, Gail Celio and colleagues used electron microscopy to examine both the structure of the septal pore, and nuclear division in A. vulgare. They determined that septa found in hyphae of the hymenium have bell-shaped pore "caps" with multiple perforations. Each cap extends along the length of the septum, along with a zone surrounding the pore that is free of organelles. Due to the scarcity of similar data from other Agaricomycotina species, it is unknown whether the extended septal pore cap margins of A. vulgare are phylogenetically informative. Regarding nuclear division, the process of metaphase I of meiosis is similar to the metaphase of mitosis. Spherical spindle pole bodies containing electron-opaque inclusions are set within gaps on opposite ends of the nuclear membrane. This membrane has occasional gaps but is largely continuous. Fragments of endoplasmic reticulum occur near the spindle pole bodies, but do not form a cap. Fruit body development Fruit body primordia first appear between the scales of the cones, and require 9 to 35 days to reach their final height. They consist of an inner core of thin-walled generative hyphae enclosed by an outer coat of skeletal hyphae. Immature fruit bodies are white and delicate, but gradually become brown as they mature. Because the cap is grown from the stem tip after it bends, cap development interrupts stem growth, and this shift to centrifugal growth (that is, growth outward from the stem) results in the typical kidney-shaped or semicircular cap. Although the fruit body takes at least 9 days to mature, spores production begins within 48–72 hours of the start of cap growth. Spines start out as minute protuberances on the part of the stem adjoining the undersurface of the cap. As the cap enlarges, these spines are spread horizontally, and more protuberances are formed, which elongate vertically downwards. When grown in favorable conditions of high water availability and humidity, the fruit body can proliferate by growing additional (secondary) fruit bodies on all parts of its upper and lower surfaces. These secondary growths typically number between four and seven; some may be aborted as the nutrients from the pine cone substrate are depleted, resulting in stems lacking caps. In one instance, a complete secondary proliferation was noted (i.e., growing from a primary proliferation) that developed completely so as to produce viable spores. Humidity is a limiting factor for optimum fruit body development. Removal of incompletely mature laboratory-grown specimens from a relative humidity (R.H.) of over 98% to one of 65–75% causes the fruit bodies to brown and stop growing. When transferred to an even lower R.H. of about 50%, the stems quickly begin to collapse. Light also affects fruit body development: both continuous illumination and complete darkness inhibit growth. When a stem is developing, the fungus is negatively geotropic, so that if the axis of the stem is tilted by 90 degrees, it will return to a vertical position within 24 hours. The extending hyphae that form the cap are themselves diageotropic—they will grow at right angles to the direction of gravity. 
Finally, the spines are positively geotropic, and will re-orient themselves to point downward if the mushroom orientation changes. Because the second (cap formation) and third (spine formation) geotropic responses overlap, there is a brief period where two different geotropic responses are operating simultaneously. These geotropic transitions help ensure that the final alignment results in optimum spore dispersal. Similar species Similar species include Strobilurius trullisatus, which also fruits on Douglas-fir cones. Baeospora myosura fruits on spruce cones, and Mycena purpureofusca on pine cones. Habitat and distribution Auriscalpium vulgare is a saprobic species. Its mushrooms grow solitary or clustered on fallen pine cones, especially those that are fully or partially buried. It typically favors Scots Pine (Pinus sylvestris), but has also been reported on spruce cones, and in California grows primarily on Douglas-fir cones. One author noted finding the mushroom on spruce needles on top of squirrel dens where cone bracts were present in the forest floor. In a study conducted in the Laojun Mountain region of Yunnan, China, A. vulgare was found to be one of the most dominant species collected from mixed forest at an altitude of . A study on the effect of slash and burn practices in northeast India showed that the fungus prefers to fruit on burned cones of the Khasi Pine, and that the number of fruit bodies on unburned cones increases with cone girth. The fungus is widely distributed in Europe, Central and North America, temperate Asia, and Turkey. In North America, its range extends from Canada to the Trans-Mexican Volcanic Belt south of Mexico City. The mushroom is common, appearing in the summer and autumn, although it is easily overlooked because of its small size and nondescript coloration. A. vulgare is the only representative of its genus in temperate areas of the Northern Hemisphere. Growth in culture Auriscalpium vulgare can be grown in pure culture on agar-containing plates supplemented with nutrients. The colonies that grow are white to pale cream, and cover the agar surface within six weeks from the initial inoculation. The mycelium is made of bent-over hyphae, without any aerial hyphae (hyphae that extend above the surface of the agar). Typically, two indistinct zones develop at about 6 mm and 15 mm from the initial inoculum spot, with each zone roughly 4 mm wide. The zones appear somewhat lighter in color because the hyphae are more closely packed and form crystalline substances that deposit into the agar. The mature mycelium consists of thin-walled, densely packed hyphae that are 1.5–3.2 μm in diameter. They are often gnarled or somewhat spiral (subhelicoid), and frequently branched at an angle of about 45°, with a clamp at the base of the branch. They contain amorphous granules that appear refractive when viewed under phase contrast microscopy, and their walls are often encrusted with tiny granules. Gloeocystidia (thin-walled cystidia with refractive, frequently granular contents) are common; they measure 50–85 by 6.5–8.5 μm, and are club-shaped (sometimes elongated), thin-walled, and often have one or two lobes with rounded tips. Containing foamy and pale yellow contents, they are a refractive yellow color under phase contrast. Initially they are erect but they soon fall under their own weight to lie on the agar surface. Crystalline deposits are abundant as small, randomly scattered plate-like or star-like crystals. 
Fruiting begins about six weeks after the initial inoculation on the agar plate, but only when portions of fruit bodies (spines or stem sections) are used as the inoculum to initiate growth; the use of mycelium as the inoculum precludes subsequent fruiting. Mature fruit bodies grow very close to the initial site of inoculation—within 3 mm—and take about 60 days to mature after they first start to form. Edibility The mushroom is generally considered inedible because of its toughness and diminutive size. A 1887 textbook claims that it was "commonly eaten in France and Italy". References External links AFTOL Images and details of ultrastructural characters Russulales Inedible fungi Fungi described in 1753 Taxa named by Carl Linnaeus Fungi of Asia Fungi of Central America Fungi of Europe Fungi of North America Taxa named by Samuel Frederick Gray Fungus species
Auriscalpium vulgare
[ "Biology" ]
3,328
[ "Fungi", "Fungus species" ]
30,504,688
https://en.wikipedia.org/wiki/Flow%20process%20chart
The flow process chart is a graphical and symbolic representation of the activities performed on the work piece during the operation in industrial engineering. History The first structured method for documenting process flow, e.g., in flow shop scheduling, the flow process chart, was introduced by Frank and Lillian Gilbreth to members of ASME in 1921 as the presentation "Process Charts, First Steps in Finding the One Best Way to Do Work". The Gilbreths' tools quickly found their way into industrial engineering curricula. In the early 1930s, an industrial engineer, Allan H. Mogensen, began training business people in the use of some of the tools of industrial engineering at his Work Simplification Conferences in Lake Placid, New York. A 1944 graduate of Mogensen's class, Art Spinanger, took the tools back to Procter and Gamble, where he developed their Deliberate Methods Change Program. Another 1944 graduate, Ben S. Graham, Director of Formcraft Engineering at Standard Register Corporation, adapted the flow process chart to information processing with his development of the multi-flow process chart to display multiple documents and their relationships. In 1947, ASME adopted a symbol set derived from the Gilbreths' original work as the ASME Standard for Process Charts. Symbols Operation: to change the physical or chemical characteristics of the material. Inspection: to check the quality or the quantity of the material. Move: transporting the material from one place to another. Delay: when material cannot go to the next activity. Storage: when the material is kept in a safe location. When to use it It is used when observing a physical process, to record actions as they happen, and thus get an accurate description of the process. It is used when analyzing the steps in a process, to help identify and eliminate waste—thus, it is a tool for efficiency planning. It is used when the process is mostly sequential, containing few decisions. See also Business process mapping Control flow diagram Data flow diagram Flowchart Functional flow block diagram Workflow References External links Industrial engineering Charts
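The five ASME symbols lend themselves to a very simple data representation. Below is an illustrative sketch (not part of the ASME standard; the names and the example chart are hypothetical) that records the steps of a process with their activity types and tallies them, which is the usual first step when hunting for delays and transport waste.

```python
from collections import Counter

# The five ASME flow-process-chart activity types
OPERATION, INSPECTION, MOVE, DELAY, STORAGE = (
    "operation", "inspection", "move", "delay", "storage")

def summarize(chart):
    """Tally each activity type and total distance moved for a flow process chart.

    `chart` is a list of (description, activity_type, distance_m) tuples."""
    counts = Counter(step[1] for step in chart)
    total_distance = sum(step[2] for step in chart if step[1] == MOVE)
    return counts, total_distance

if __name__ == "__main__":
    chart = [
        ("machine part",        OPERATION,  0),
        ("carry to inspection", MOVE,       15),
        ("check dimensions",    INSPECTION, 0),
        ("wait for forklift",   DELAY,      0),
        ("move to warehouse",   MOVE,       40),
        ("store on shelf",      STORAGE,    0),
    ]
    counts, distance = summarize(chart)
    print(dict(counts))                 # e.g. {'operation': 1, 'move': 2, ...}
    print("distance moved:", distance, "m")
```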
Flow process chart
[ "Engineering" ]
417
[ "Industrial engineering" ]
30,508,447
https://en.wikipedia.org/wiki/ARkStorm
The ARkStorm (for Atmospheric River 1,000) is a hypothetical megastorm, whose proposal is based on repeated historical occurrences of atmospheric rivers and other major rain events first developed and published by the Multi-Hazards Demonstration Project (MHDP) of the United States Geological Survey (USGS) in 2010. An updated model was published as ARkStorm 2.0 in 2022. ARkStorm 1.0 (2010 Study) The ARkStorm 1.0 scenario describes an extreme storm that devastates much of California, causing up to $725 billion in losses (mostly due to flooding and erosion), and affecting a quarter of California's homes. The scenario projects impacts of a storm that would be significantly less intense (25 days of rain) than the California storms that occurred between December 1861 and January 1862 (43 days). That event dumped nearly of rain in parts of California. USGS sediment research in the San Francisco Bay Area, Santa Barbara Basin, Sacramento Valley, and the Klamath Mountain region found that "megastorms" have occurred in the years: 212, 440, 603, 1029, , 1418, 1605, 1750, 1810, and, most recently, 1861–1862. Based on the intervals of these known occurrences, ranging from 51 to 426 years, for a historic recurrence of, on average, every 100–200 years. Geologic evidence indicates that several of the previous events were more intense than the one in 1861–1862, particularly those in 440, 1418, 1605, and 1750, each of which deposited a layer of silt in the Santa Barbara Basin more than one inch (2.5 cm) thick. The largest event was the one in 1605, which left a layer of silt two inches (5 cm) thick, indicating that this flood was at least 50% more powerful than any of the others recorded. Description The conditions built into the scenario are "two super-strong atmospheric rivers, just four days apart, one in Northern California and one in Southern California, and one of them stalled for an extra day". The ARkStorm 1.0 scenario would have the following effects: The Central Valley would experience flooding long and at least wide. Serious flooding also would occur in Orange County, Los Angeles County, San Diego, the San Francisco Bay area, and other coastal communities. Wind speeds in some places would reach . Hundreds of landslides would damage roads, highways, and homes. Property damage would exceed $300 billion, most from flooding. Demand surge (an increase in labor rates and other repair costs after major natural disasters) could increase property losses by 20 percent. Agricultural losses and other costs to repair lifelines, drain flooded islands, and repair damage from landslides, could bring the total direct property loss to nearly $400 billion. Power, water, sewer, and other lifelines would experience damage that could take weeks or months to restore. Up to 1.5 million residents in the inland region and delta counties would need to evacuate due to flooding. Business interruption costs could reach $325 billion, in addition to the $400 billion required for property repair costs, meaning that an ARkStorm scenario is projected to cost $750 billion (~$1 trillion in 2022 dollars), nearly three times the amount of damage predicted by the next "Big One", a hypothetical Southern California earthquake with roughly the same annual occurrence probability. 
ARkStorm 2.0 (2022 update) This update, with parts of the research on impacts still ongoing, has examined how climate change is expected to increase the risk of severe flooding from a hypothetical ARkStorm, with runoff 200% to 400% above historical values for the Sierra Nevada in part due to a decrease in the portion of precipitation that falls as snow, as well as an increase in the amount of water that storms can carry. The likelihood of the event outlined in the ARkStorm scenario is now once every 25–50 years, with projected economic losses of over $1 trillion (or more than five times that of Hurricane Katrina). Implications Current flood maps in the U.S. rarely take recent projections from projects like ARkStorm into account, especially FEMA's maps, which many decision-makers have relied on. Land owners, flood insurers, governments and media outlets often use maps like FEMA's that still fail to represent many significant risks due to: 1) using only historical data (instead of incorporating climate change models), 2) the omission of heavy rainfall events, and 3) lack of modeling of flooding in urban areas. More robust and up-to-date models, like the First Street Foundation's riskfactor.com, should better represent true flood risk though it is unclear if that model, for example, incorporates any ARkStorm science. Government agencies may decide how much risk to accept, and how much risk to mitigate. The Netherlands' approach to flood control, for example, plans for 1 in 10,000 year events in heavily-populated areas and 1 in 4,000 year events in less well-populated areas. See also Extreme weather Lists of floods in the United States North American Monsoon Pineapple Express The Big One (earthquake) References External links USGS Multi-Hazards Demonstration Project: ARkStorm: West Coast Storm Scenario (including video) USGS Newsroom: ARkStorm: California's other "Big One" Weather Underground – The ARkStorm: California's coming great deluge High Country News: The other Big One, Judith Lewis Water Education Foundation, Mar-Apr 2011: Plausible and Inevitable: The ARkStorm Scenario, by Gary Pitzer Megastorms Could Drown Massive Portions of California January 5, 2012 Scientific American Environment of California Natural disasters in California Weather hazards Storm
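The recurrence figures quoted above translate directly into simple exceedance probabilities over a planning horizon. The snippet below is a small added illustration (not from the USGS scenario documents): assuming independent years and an annual occurrence probability, it computes the chance of at least one such storm within a given number of years.

```python
def prob_at_least_one(annual_probability, years):
    """P(at least one event in `years`) assuming independent years."""
    return 1.0 - (1.0 - annual_probability) ** years

if __name__ == "__main__":
    # ARkStorm 1.0 framed the event as roughly a 1-in-100 to 1-in-200 year storm;
    # ARkStorm 2.0 revises this toward 1-in-25 to 1-in-50 under climate change.
    for label, p in [("1-in-200", 1 / 200), ("1-in-100", 1 / 100),
                     ("1-in-50", 1 / 50), ("1-in-25", 1 / 25)]:
        print(f"{label}: {prob_at_least_one(p, 30):.0%} chance over a 30-year horizon")
```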
ARkStorm
[ "Physics" ]
1,162
[ "Weather", "Physical phenomena", "Weather hazards" ]
923,301
https://en.wikipedia.org/wiki/Seismic%20retrofit
Seismic retrofitting is the modification of existing structures to make them more resistant to seismic activity, ground motion, or soil failure due to earthquakes. With better understanding of seismic demand on structures and with recent experiences with large earthquakes near urban centers, the need of seismic retrofitting is well acknowledged. Prior to the introduction of modern seismic codes in the late 1960s for developed countries (US, Japan etc.) and late 1970s for many other parts of the world (Turkey, China etc.), many structures were designed without adequate detailing and reinforcement for seismic protection. In view of the imminent problem, various research work has been carried out. State-of-the-art technical guidelines for seismic assessment, retrofit and rehabilitation have been published around the world – such as the ASCE-SEI 41 and the New Zealand Society for Earthquake Engineering (NZSEE)'s guidelines. These codes must be regularly updated; the 1994 Northridge earthquake brought to light the brittleness of welded steel frames, for example. The retrofit techniques outlined here are also applicable for other natural hazards such as tropical cyclones, tornadoes, and severe winds from thunderstorms. Whilst current practice of seismic retrofitting is predominantly concerned with structural improvements to reduce the seismic hazard of using the structures, it is similarly essential to reduce the hazards and losses from non-structural elements. It is also important to keep in mind that there is no such thing as an earthquake-proof structure, although seismic performance can be greatly enhanced through proper initial design or subsequent modifications. Strategies Seismic retrofit (or rehabilitation) strategies have been developed in the past few decades following the introduction of new seismic provisions and the availability of advanced materials (e.g. fiber-reinforced polymers (FRP), fiber reinforced concrete and high strength steel). Increasing the global capacity (strengthening). This is typically done by the addition of cross braces or new structural walls. Reduction of the seismic demand by means of supplementary damping and/or use of base isolation systems. Increasing the local capacity of structural elements. This strategy recognises the inherent capacity within the existing structures, and therefore adopts a more cost-effective approach to selectively upgrade local capacity (deformation/ductility, strength or stiffness) of individual structural components. Selective weakening retrofit. This is a counter-intuitive strategy to change the inelastic mechanism of the structure, while recognising the inherent capacity of the structure. Allowing sliding connections such as passageway bridges to accommodate additional movement between seismically independent structures. Addition of seismic friction dampers to simultaneously add damping and a selectable amount of additional stiffness. Recently more holistic approaches to building retrofitting are being explored, including combined seismic and energy retrofitting. Such combined strategies aim to exploit cost savings by applying energy retrofitting and seismic strengthening interventions at once, hence improving the seismic and thermal performance of buildings. Performance objectives In the past, seismic retrofit was primarily applied to achieve public safety, with engineering solutions limited by economic and political considerations. 
However, with the development of Performance-based earthquake engineering (PBEE), several levels of performance objectives are gradually recognised: Public safety only. The goal is to protect human life, ensuring that the structure will not collapse upon its occupants or passersby, and that the structure can be safely exited. Under severe seismic conditions the structure may be a total economic write-off, requiring tear-down and replacement. Structure survivability. The goal is that the structure, while remaining safe for exit, may require extensive repair (but not replacement) before it is generally useful or considered safe for occupation. This is typically the lowest level of retrofit applied to bridges. Structure functionality. Primary structure undamaged and the structure is undiminished in utility for its primary application. A high level of retrofit, this ensures that any required repairs are only "cosmetic" – for example, minor cracks in plaster, drywall and stucco. This is the minimum acceptable level of retrofit for hospitals. Structure unaffected. This level of retrofit is preferred for historic structures of high cultural significance. Techniques Common seismic retrofitting techniques fall into several categories: External post-tensioning The use of external post-tensioning for new structural systems have been developed in the past decade. Under the PRESS (Precast Seismic Structural Systems), a large-scale U.S./Japan joint research program, unbonded post-tensioning high strength steel tendons have been used to achieve a moment-resisting system that has self-centering capacity. An extension of the same idea for seismic retrofitting has been experimentally tested for seismic retrofit of California bridges under a Caltrans research project and for seismic retrofit of non-ductile reinforced concrete frames. Pre-stressing can increase the capacity of structural elements such as beam, column and beam-column joints. External pre-stressing has been used for structural upgrade for gravity/live loading since the 1970s. Base isolators Base isolation is a collection of structural elements of a building that should substantially decouple the building's structure from the shaking ground thus protecting the building's integrity and enhancing its seismic performance. This earthquake engineering technology, which is a kind of seismic vibration control, can be applied both to a newly designed building and to seismic upgrading of existing structures. Normally, excavations are made around the building and the building is separated from the foundations. Steel or reinforced concrete beams replace the connections to the foundations, while under these, the isolating pads, or base isolators, replace the material removed. While the base isolation tends to restrict transmission of the ground motion to the building, it also keeps the building positioned properly over the foundation. Careful attention to detail is required where the building interfaces with the ground, especially at entrances, stairways and ramps, to ensure sufficient relative motion of those structural elements. Supplementary dampers Supplementary dampers absorb the energy of motion and convert it to heat, thus damping resonant effects in structures that are rigidly attached to the ground. In addition to adding energy dissipation capacity to the structure, supplementary damping can reduce the displacement and acceleration demand within the structures. 
In some cases, the threat of damage does not come from the initial shock itself, but rather from the periodic resonant motion of the structure that repeated ground motion induces. In the practical sense, supplementary dampers act similarly to Shock absorbers used in automotive suspensions. Tuned mass dampers Tuned mass dampers (TMD) employ movable weights on some sort of springs. These are typically employed to reduce wind sway in very tall, light buildings. Similar designs may be employed to impart earthquake resistance in eight to ten story buildings that are prone to destructive earthquake induced resonances. Slosh tank A slosh tank is a large container of low viscosity fluid (usually water) that may be placed at locations in a structure where lateral swaying motions are significant, such as the roof, and tuned to counter the local resonant dynamic motion. During a seismic (or wind) event the fluid in the tank will slosh back and forth with the fluid motion usually directed and controlled by internal baffles – partitions that prevent the tank itself becoming resonant with the structure, see Slosh dynamics. The net dynamic response of the overall structure is reduced due to both the counteracting movement of mass, as well as energy dissipation or vibration damping which occurs when the fluid's kinetic energy is converted to heat by the baffles. Generally the temperature rise in the system will be minimal and is passively cooled by the surrounding air. One Rincon Hill in San Francisco is a skyscraper with a rooftop slosh tank which was designed primarily to reduce the magnitude of lateral swaying motion from wind. A slosh tank is a passive tuned mass damper. In order to be effective the mass of the liquid is usually on the order of 1% to 5% of the mass it is counteracting, and often this requires a significant volume of liquid. In some cases these systems are designed to double as emergency water cisterns for fire suppression. Active control system Very tall buildings ("skyscrapers"), when built using modern lightweight materials, might sway uncomfortably (but not dangerously) in certain wind conditions. A solution to this problem is to include at some upper story a large mass, constrained, but free to move within a limited range, and moving on some sort of bearing system such as an air cushion or hydraulic film. Hydraulic pistons, powered by electric pumps and accumulators, are actively driven to counter the wind forces and natural resonances. These may also, if properly designed, be effective in controlling excessive motion – with or without applied power – in an earthquake. In general, though, modern steel frame high rise buildings are not as subject to dangerous motion as are medium rise (eight to ten story) buildings, as the resonant period of a tall and massive building is longer than the approximately one second shocks applied by an earthquake. Ad hoc addition of structural support/reinforcement The most common form of seismic retrofit to lower buildings is adding strength to the existing structure to resist seismic forces. The strengthening may be limited to connections between existing building elements or it may involve adding primary resisting elements such as walls or frames, particularly in the lower stories. Common retrofit measures for unreinforced masonry buildings in the Western United States include the addition of steel frames, the addition of reinforced concrete walls, and in some cases, the addition of base isolation. 
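The tuned mass damper and slosh tank passages above describe passive absorbers that are "tuned" to a building's resonant sway. As a minimal quantitative illustration of what that tuning involves, the sketch below applies the classical Den Hartog rules for a passive absorber attached to a single undamped building mode. The modal mass, frequency, the 3% mass ratio and the function name are illustrative assumptions, not values or methods taken from this article; real retrofit designs are governed by detailed dynamic analysis.

```python
# Illustrative sketch only: classical Den Hartog tuning of a passive tuned mass
# damper (or the effective liquid mass of a slosh tank) for one building mode.
# All numbers below are made up for the example.
import math

def den_hartog_tmd(modal_mass_kg, modal_freq_hz, mass_ratio):
    """Return absorber mass, tuned frequency, damping ratio, stiffness and
    damping coefficient for an undamped primary mode under harmonic forcing."""
    mu = mass_ratio                                        # absorber mass / modal mass
    m_d = mu * modal_mass_kg                               # absorber (or water) mass
    f_d = modal_freq_hz / (1.0 + mu)                       # optimal frequency ratio 1/(1+mu)
    zeta = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal damping ratio
    omega_d = 2.0 * math.pi * f_d
    k_d = m_d * omega_d ** 2                               # equivalent spring stiffness
    c_d = 2.0 * zeta * m_d * omega_d                       # viscous damping coefficient
    return m_d, f_d, zeta, k_d, c_d

# Example: a 1.2e6 kg modal mass swaying at 0.35 Hz, with a 3% mass ratio
# (the text above quotes 1% to 5% of the counteracted mass for slosh tanks).
m_d, f_d, zeta, k_d, c_d = den_hartog_tmd(1.2e6, 0.35, 0.03)
print(f"absorber mass = {m_d:.0f} kg, tuned to {f_d:.3f} Hz, "
      f"zeta = {zeta:.3f}, k = {k_d:.0f} N/m, c = {c_d:.0f} N*s/m")
```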
Connections between buildings and their expansion additions Frequently, building additions will not be strongly connected to the existing structure, but simply placed adjacent to it, with only minor continuity in flooring, siding, and roofing. As a result, the addition may have a different resonant period than the original structure, and they may easily detach from one another. The relative motion will then cause the two parts to collide, causing severe structural damage. Seismic modification will either tie the two building components rigidly together so that they behave as a single mass or it will employ dampers to expend the energy from relative motion, with appropriate allowance for this motion, such as increased spacing and sliding bridges between sections. Exterior reinforcement of building Exterior concrete columns Historic buildings, made of unreinforced masonry, may have culturally important interior detailing or murals that should not be disturbed. In this case, the solution may be to add a number of steel, reinforced concrete, or poststressed concrete columns to the exterior. Careful attention must be paid to the connections with other members such as footings, top plates, and roof trusses. Infill shear trusses Shown here is an exterior shear reinforcement of a conventional reinforced concrete dormitory building. In this case, there was sufficient vertical strength in the building columns and sufficient shear strength in the lower stories that only limited shear reinforcement was required to make it earthquake resistant for this location near the Hayward fault. Massive exterior structure In other circumstances, far greater reinforcement is required. In the structure shown at right – a parking garage over shops – the placement, detailing, and painting of the reinforcement becomes itself an architectural embellishment. Typical retrofit solutions Soft-story failure This collapse mode is known as soft story collapse. In many buildings the ground level is designed for different uses than the upper levels. Low rise residential structures may be built over a parking garage which have large doors on one side. Hotels may have a tall ground floor to allow for a grand entrance or ballrooms. Office buildings may have retail stores on the ground floor with continuous display windows. Traditional seismic design assumes that the lower stories of a building are stronger than the upper stories; where this is not the case—if the lower story is less strong than the upper structure—the structure will not respond to earthquakes in the expected fashion. Using modern design methods, it is possible to take a weak lower story into account. Several failures of this type in one large apartment complex caused most of the fatalities in the 1994 Northridge earthquake. Typically, where this type of problem is found, the weak story is reinforced to make it stronger than the floors above by adding shear walls or moment frames. Moment frames consisting of inverted U bents are useful in preserving lower story garage access, while a lower cost solution may be to use shear walls or trusses in several locations, which partially reduce the usefulness for automobile parking but still allow the space to be used for other storage. Beam-column joint connections Beam-column joint connections are a common structural weakness in dealing with seismic retrofitting. Prior to the introduction of modern seismic codes in early 1970s, beam-column joints were typically non-engineered or designed. 
Laboratory testing has confirmed the seismic vulnerability of these poorly detailed and under-designed connections. Failure of beam-column joint connections can typically lead to catastrophic collapse of a frame building, as often observed in recent earthquakes. For reinforced concrete beam-column joints, various retrofit solutions have been proposed and tested in the past 20 years. Philosophically, the various seismic retrofit strategies discussed above can be implemented for reinforced concrete joints. Concrete or steel jacketing was a popular retrofit technique until the advent of composite materials such as carbon fiber-reinforced polymer (FRP). Composite materials such as carbon FRP and aramid FRP have been extensively tested for use in seismic retrofit with some success. One novel technique includes the use of selective weakening of the beam and added external post-tensioning to the joint in order to achieve flexural hinging in the beam, which is more desirable in terms of seismic design. Widespread weld failures at beam-column joints of low-to-medium rise steel buildings during the 1994 Northridge earthquake, for example, exposed the structural deficiencies of these 'modern-designed' post-1970s welded moment-resisting connections. A subsequent SAC research project documented, tested and proposed several retrofit solutions for these welded steel moment-resisting connections. Various retrofit solutions have been developed for these welded joints – such as a) weld strengthening and b) addition of a steel haunch or 'dog-bone' shaped flange. Following the Northridge earthquake, a number of steel moment-frame buildings were found to have experienced brittle fractures of beam-to-column connections. Discovery of these unanticipated brittle fractures of framing connections was alarming to engineers and the building industry. Starting in the 1960s, engineers began to regard welded steel moment-frame buildings as being among the most ductile systems contained in the building code. Many engineers believed that steel moment-frame buildings were essentially invulnerable to earthquake-induced damage and thought that should damage occur, it would be limited to ductile yielding of members and connections. Observation of damage sustained by buildings in the 1994 Northridge earthquake indicated that contrary to the intended behavior, in many cases, brittle fractures initiated within the connections at very low levels of plastic demand. In September 1994, the SAC Joint Venture, AISC, AISI, and NIST jointly convened an international workshop in Los Angeles to coordinate the efforts of various participants and to lay the foundation for systematic investigation and resolution of the problem. In September 1995 the SAC Joint Venture entered into a contractual agreement with FEMA to conduct Phase II of the SAC Steel Project. Under Phase II, SAC continued its extensive problem-focused study of the performance of moment-resisting steel frames and connections of various configurations, with the ultimate goal of developing seismic design criteria for steel construction. As a result of these studies it is now known that the typical moment-resisting connection detail employed in steel moment-frame construction prior to the 1994 Northridge earthquake had a number of features that rendered it inherently susceptible to brittle fracture.
Shear failure within floor diaphragm Floors in wooden buildings are usually constructed upon relatively deep spans of wood, called joists, covered with a diagonal wood planking or plywood to form a subfloor upon which the finish floor surface is laid. In many structures these are all aligned in the same direction. To prevent the beams from tipping over onto their side, blocking is used at each end, and for additional stiffness, blocking or diagonal wood or metal bracing may be placed between beams at one or more points in their spans. At the outer edge it is typical to use a single depth of blocking and a perimeter beam overall. If the blocking or nailing is inadequate, each beam can be laid flat by the shear forces applied to the building. In this position they lack most of their original strength and the structure may further collapse. As part of a retrofit the blocking may be doubled, especially at the outer edges of the building. It may be appropriate to add additional nails between the sill plate of the perimeter wall erected upon the floor diaphragm, although this will require exposing the sill plate by removing interior plaster or exterior siding. As the sill plate may be quite old and dry and substantial nails must be used, it may be necessary to pre-drill a hole for the nail in the old wood to avoid splitting. When the wall is opened for this purpose it may also be appropriate to tie vertical wall elements into the foundation using specialty connectors and bolts glued with epoxy cement into holes drilled in the foundation. Sliding off foundation and "cripple wall" failure Single or two-story wood-frame domestic structures built on a perimeter or slab foundation are relatively safe in an earthquake, but in many structures built before 1950 the sill plate that sits between the concrete foundation and the floor diaphragm (perimeter foundation) or studwall (slab foundation) may not be sufficiently bolted in. Additionally, older attachments (without substantial corrosion-proofing) may have corroded to a point of weakness. A sideways shock can slide the building entirely off of the foundations or slab. Often such buildings, especially if constructed on a moderate slope, are erected on a platform connected to a perimeter foundation through low stud-walls called "cripple wall" or pin-up. This low wall structure itself may fail in shear or in its connections to itself at the corners, leading to the building moving diagonally and collapsing the low walls. The likelihood of failure of the pin-up can be reduced by ensuring that the corners are well reinforced in shear and that the shear panels are well connected to each other through the corner posts. This requires structural grade sheet plywood, often treated for rot resistance. This grade of plywood is made without interior unfilled knots and with more, thinner layers than common plywood. New buildings designed to resist earthquakes will typically use OSB (oriented strand board), sometimes with metal joins between panels, and with well attached stucco covering to enhance its performance. In many modern tract homes, especially those built upon expansive (clay) soil the building is constructed upon a single and relatively thick monolithic slab, kept in one piece by high tensile rods that are stressed after the slab has set. This poststressing places the concrete under compression – a condition under which it is extremely strong in bending and so will not crack under adverse soil conditions. 
Multiple piers in shallow pits Some older low-cost structures are elevated on tapered concrete pylons set into shallow pits, a method frequently used to attach outdoor decks to existing buildings. This is seen in conditions of damp soil, especially in tropical conditions, as it leaves a dry ventilated space under the house, and in far northern conditions of permafrost (frozen mud) as it keeps the building's warmth from destabilizing the ground beneath. During an earthquake, the pylons may tip, spilling the building to the ground. This can be overcome by using deep-bored holes to contain cast-in-place reinforced pylons, which are then secured to the floor panel at the corners of the building. Another technique is to add sufficient diagonal bracing or sections of concrete shear wall between pylons. Reinforced concrete column burst Reinforced concrete columns typically contain large diameter vertical rebar (reinforcing bars) arranged in a ring, surrounded by lighter-gauge hoops of rebar. Upon analysis of failures due to earthquakes, it has been realized that the weakness was not in the vertical bars, but rather in inadequate strength and quantity of hoops. Once the integrity of the hoops is breached, the vertical rebar can flex outward, stressing the central column of concrete. The concrete then simply crumbles into small pieces, now unconstrained by the surrounding rebar. In new construction a greater amount of hoop-like structures are used. One simple retrofit is to surround the column with a jacket of steel plates formed and welded into a single cylinder. The space between the jacket and the column is then filled with concrete, a process called grouting. Where soil or structure conditions require such additional modification, additional pilings may be driven near the column base and concrete pads linking the pilings to the pylon are fabricated at or below ground level. In the example shown not all columns needed to be modified to gain sufficient seismic resistance for the conditions expected. (This location is about a mile from the Hayward Fault Zone.) Reinforced concrete wall burst Concrete walls are often used at the transition between elevated road fill and overpass structures. The wall is used both to retain the soil and so enable the use of a shorter span and also to transfer the weight of the span directly downward to footings in undisturbed soil. If these walls are inadequate they may crumble under the stress of an earthquake's induced ground motion. One form of retrofit is to drill numerous holes into the surface of the wall, and secure short L-shaped sections of rebar to the surface of each hole with epoxy adhesive. Additional vertical and horizontal rebar is then secured to the new elements, a form is erected, and an additional layer of concrete is poured. This modification may be combined with additional footings in excavated trenches and additional support ledgers and tie-backs to retain the span on the bounding walls. Damage to masonry (infill) walls In masonry structures, brick building structures have been reinforced with coatings of glass fiber and appropriate resin (epoxy or polyester). In lower floors these may be applied over entire exposed surfaces, while in upper floors this may be confined to narrow areas around window and door openings. This application provides tensile strength that stiffens the wall against bending away from the side with the application. 
The efficient protection of an entire building requires extensive analysis and engineering to determine the appropriate locations to be treated. In reinforced concrete buildings, masonry infill walls are considered non-structural elements, but damage to infills can lead to large repair costs and change the behaviour of a structure, even leading to aforementioned soft-storey or beam-column joint shear failures. Local failure of the infill panels due to in and out-of-plane mechanisms, but also due to their combination, can lead to a sudden drop in capacity and hence cause global brittle failure of the structure. Even at lower intensity earthquakes, damage to infilled frames can lead to high economic losses and loss of life. To prevent masonry infill damage and failure, typical retrofit strategies aim to strengthen the infills and provide adequate connection to the frame. Examples of retrofit techniques for masonry infills include steel reinforced plasters, engineered cementitious composites, thin layers fibre-reinforced polymers (FRP), and most recently also textile-reinforced mortars (TRM). Lift Where moist or poorly consolidated alluvial soil interfaces in a "beach like" structure against underlying firm material, seismic waves traveling through the alluvium can be amplified, just as are water waves against a sloping beach. In these special conditions, vertical accelerations up to twice the force of gravity have been measured. If a building is not secured to a well-embedded foundation it is possible for the building to be thrust from (or with) its foundations into the air, usually with severe damage upon landing. Even if it is well-founded, higher portions such as upper stories or roof structures or attached structures such as canopies and porches may become detached from the primary structure. Good practices in modern, earthquake-resistant structures dictate that there be good vertical connections throughout every component of the building, from undisturbed or engineered earth to foundation to sill plate to vertical studs to plate cap through each floor and continuing to the roof structure. Above the foundation and sill plate the connections are typically made using steel strap or sheet stampings, nailed to wood members using special hardened high-shear strength nails, and heavy angle stampings secured with through bolts, using large washers to prevent pull-through. Where inadequate bolts are provided between the sill plates and a foundation in existing construction (or are not trusted due to possible corrosion), special clamp plates may be added, each of which is secured to the foundation using expansion bolts inserted into holes drilled in an exposed face of concrete. Other members must then be secured to the sill plates with additional fittings. Soil One of the most difficult retrofits is that required to prevent damage due to soil failure. Soil failure can occur on a slope, a slope failure or landslide, or in a flat area due to liquefaction of water-saturated sand and/or mud. Generally, deep pilings must be driven into stable soil (typically hard mud or sand) or to underlying bedrock or the slope must be stabilized. For buildings built atop previous landslides the practicality of retrofit may be limited by economic factors, as it is not practical to stabilize a large, deep landslide. The likelihood of landslide or soil failure may also depend upon seasonal factors, as the soil may be more stable at the beginning of a wet season than at the beginning of the dry season. 
Such a "two season" Mediterranean climate is seen throughout California. In some cases, the best that can be done is to reduce the entrance of water runoff from higher, stable elevations by capturing and bypassing through channels or pipes, and to drain water infiltrated directly and from subsurface springs by inserting horizontal perforated tubes. There are numerous locations in California where extensive developments have been built atop archaic landslides, which have not moved in historic times but which (if both water-saturated and shaken by an earthquake) have a high probability of moving en masse, carrying entire sections of suburban development to new locations. While the most modern of house structures (well tied to monolithic concrete foundation slabs reinforced with post tensioning cables) may survive such movement largely intact, the building will no longer be in its proper location. Utility pipes and cables: risks Natural gas and propane supply pipes to structures often prove especially dangerous during and after earthquakes. Should a building move from its foundation or fall due to cripple wall collapse, the ductile iron pipes transporting the gas within the structure may be broken, typically at the location of threaded joints. The gas may then still be provided to the pressure regulator from higher pressure lines and so continue to flow in substantial quantities; it may then be ignited by a nearby source such as a lit pilot light or arcing electrical connection. There are two primary methods of automatically restraining the flow of gas after an earthquake, installed on the low pressure side of the regulator, and usually downstream of the gas meter. A caged metal ball may be arranged at the edge of an orifice. Upon seismic shock, the ball will roll into the orifice, sealing it to prevent gas flow. The ball may later be reset by the use of an external magnet. This device will respond only to ground motion. A flow-sensitive device may be used to close a valve if the flow of gas exceeds a set threshold (very much like an electrical circuit breaker). This device will operate independently of seismic motion, but will not respond to minor leaks which may be caused by an earthquake. It appears that the most secure configuration would be to use one of each of these devices in series. Tunnels Unless the tunnel penetrates a fault likely to slip, the greatest danger to tunnels is a landslide blocking an entrance. Additional protection around the entrance may be applied to divert any falling material (similar as is done to divert snow avalanches) or the slope above the tunnel may be stabilized in some way. Where only small- to medium-sized rocks and boulders are expected to fall, the entire slope may be covered with wire mesh, pinned down to the slope with metal rods. This is also a common modification to highway cuts where appropriate conditions exist. Underwater tubes The safety of underwater tubes is highly dependent upon the soil conditions through which the tunnel was constructed, the materials and reinforcements used, and the maximum predicted earthquake expected, and other factors, some of which may remain unknown under current knowledge. BART tube A tube of particular structural, seismic, economic, and political interest is the BART (Bay Area Rapid Transit) transbay tube. This tube was constructed at the bottom of San Francisco Bay through an innovative process. Rather than pushing a shield through the soft bay mud, the tube was constructed on land in sections. 
Each section consisted of two inner train tunnels of circular cross section, a central access tunnel of rectangular cross section, and an outer oval shell encompassing the three inner tubes. The intervening space was filled with concrete. At the bottom of the bay a trench was excavated and a flat bed of crushed stone prepared to receive the tube sections. The sections were then floated into place and sunk, then joined with bolted connections to previously placed sections. An overfill was then placed atop the tube to hold it down. Once completed from San Francisco to Oakland, the tracks and electrical components were installed. The predicted response of the tube during a major earthquake was likened to be as that of a string of (cooked) spaghetti in a bowl of gelatin dessert. To avoid overstressing the tube due to differential movements at each end, a sliding slip joint was included at the San Francisco terminus under the landmark Ferry Building. The engineers of the construction consortium PBTB (Parsons Brinckerhoff-Tudor-Bechtel) used the best estimates of ground motion available at the time, now known to be insufficient given modern computational analysis methods and geotechnical knowledge. Unexpected settlement of the tube has reduced the amount of slip that can be accommodated without failure. These factors have resulted in the slip joint being designed too short to ensure survival of the tube under possible (perhaps even likely) large earthquakes in the region. To correct this deficiency the slip joint must be extended to allow for additional movement, a modification expected to be both expensive and technically and logistically difficult. Other retrofits to the BART tube include vibratory consolidation of the tube's overfill to avoid potential liquefying of the overfill, which has now been completed. (Should the overfill fail there is a danger of portions of the tube rising from the bottom, an event which could potentially cause failure of the section connections.) Bridge retrofit Bridges have several failure modes. Expansion rockers Many short bridge spans are statically anchored at one end and attached to rockers at the other. This rocker gives vertical and transverse support while allowing the bridge span to expand and contract with temperature changes. The change in the length of the span is accommodated over a gap in the roadway by comb-like expansion joints. During severe ground motion, the rockers may jump from their tracks or be moved beyond their design limits, causing the bridge to unship from its resting point and then either become misaligned or fail completely. Motion can be constrained by adding ductile or high-strength steel restraints that are friction-clamped to beams and designed to slide under extreme stress while still limiting the motion relative to the anchorage. Deck rigidity Suspension bridges may respond to earthquakes with a side-to-side motion exceeding that which was designed for wind gust response. Such motion can cause fragmentation of the road surface, damage to bearings, and plastic deformation or breakage of components. Devices such as hydraulic dampers or clamped sliding connections and additional diagonal reinforcement may be added. Lattice girders, beams, and ties Lattice girders consist of two "I"-beams connected with a criss-cross lattice of flat strap or angle stock. These can be greatly strengthened by replacing the open lattice with plate members. This is usually done in concert with the replacement of hot rivets with bolts. 
Hot rivets Many older structures were fabricated by inserting red-hot rivets into pre-drilled holes; the soft rivets are then peened using an air hammer on one side and a bucking bar on the head end. As these cool slowly, they are left in an annealed (soft) condition, while the plate, having been hot rolled and quenched during manufacture, remains relatively hard. Under extreme stress the hard plates can shear the soft rivets, resulting in failure of the joint. The solution is to burn out each rivet with an oxygen torch. The hole is then prepared to a precise diameter with a reamer. A special locator bolt, consisting of a head, a shaft matching the reamed hole, and a threaded end is inserted and retained with a nut, then tightened with a wrench. As the bolt has been formed from an appropriate high-strength alloy and has also been heat-treated, it is not subject to either the plastic shear failure typical of hot rivets nor the brittle fracture of ordinary bolts. Any partial failure will be in the plastic flow of the metal secured by the bolt; with proper engineering any such failure should be non-catastrophic. Fill and overpass Elevated roadways are typically built on sections of elevated earth fill connected with bridge-like segments, often supported with vertical columns. If the soil fails where a bridge terminates, the bridge may become disconnected from the rest of the roadway and break away. The retrofit for this is to add additional reinforcement to any supporting wall, or to add deep caissons adjacent to the edge at each end and connect them with a supporting beam under the bridge. Another failure occurs when the fill at each end moves (through resonant effects) in bulk, in opposite directions. If there is an insufficient founding shelf for the overpass, then it may fall. Additional shelf and ductile stays may be added to attach the overpass to the footings at one or both ends. The stays, rather than being fixed to the beams, may instead be clamped to them. Under moderate loading, these keep the overpass centered in the gap so that it is less likely to slide off its founding shelf at one end. The ability for the fixed ends to slide, rather than break, will prevent the complete drop of the structure if it should fail to remain on the footings. Viaducts Large sections of roadway may consist entirely of viaduct, sections with no connection to the earth other than through vertical columns. When concrete columns are used, the detailing is critical. Typical failure may be in the toppling of a row of columns due either to soil connection failure or to insufficient cylindrical wrapping with rebar. Both failures were seen in the 1995 Great Hanshin earthquake in Kobe, Japan, where an entire viaduct, centrally supported by a single row of large columns, was laid down to one side. Such columns are reinforced by excavating to the foundation pad, driving additional pilings, and adding a new, larger pad, well connected with rebar alongside or into the column. A column with insufficient wrapping bar, which is prone to burst and then hinge at the bursting point, may be completely encased in a circular or elliptical jacket of welded steel sheet and grouted as described above. Sometimes viaducts may fail in the connections between components. This was seen in the failure of the Cypress Freeway in Oakland, California, during the Loma Prieta earthquake. 
This viaduct was a two-level structure, and the upper portions of the columns were not well connected to the lower portions that supported the lower level; this caused the upper deck to collapse upon the lower deck. Weak connections such as these require additional external jacketing – either through external steel components or by a complete jacket of reinforced concrete, often using stub connections that are glued (using epoxy adhesive) into numerous drilled holes. These stubs are then connected to additional wrappings, external forms (which may be temporary or permanent) are erected, and additional concrete is poured into the space. Large connected structures similar to the Cypress Viaduct must also be properly analyzed in their entirety using dynamic computer simulations. Residential retrofit Side-to-side forces cause most earthquake damage. Bolting of the mudsill to the foundation and application of plywood to cripple walls are a few basic retrofit techniques which homeowners may apply to wood-framed residential structures to mitigate the effects of seismic activity. The City of San Leandro created guidelines for these procedures, as outlined in the following pamphlet. Public awareness and initiative are critical to the retrofit and preservation of existing building stock, and such efforts as those of the Association of Bay Area Governments are instrumental in providing informational resources to seismically active communities. Wood frame structure Most houses in North America are wood-framed structures. Wood is one of the best materials for earthquake-resistant construction since it is lightweight and more flexible than masonry. It is easy to work with and less expensive than steel, masonry, or concrete. In older homes the most significant weaknesses are the connection from the wood-framed walls to the foundation and the relatively weak "cripple-walls." (Cripple walls are the short wood walls that extend from the top of the foundation to the lowest floor level in houses that have raised floors.) Adding connections from the base of the wood-framed structure to the foundation is almost always an important part of a seismic retrofit. Bracing the cripple-walls to resist side-to-side forces is essential in houses with cripple walls; bracing is usually done with plywood. Oriented strand board (OSB) does not perform as consistently as plywood, and is not the favored choice of retrofit designers or installers. Retrofit methods in older wood-frame structures may consist of the following, and other methods not described here. The lowest plate rails of walls (usually called "mudsills" or "foundation sills" in North America) are bolted to a continuous foundation, or secured with rigid metal connectors bolted to the foundation so as to resist side-to-side forces. Cripple walls are braced with plywood. Selected vertical elements (typically the posts at the ends of plywood wall bracing panels) are connected to the foundation. These connections are intended to prevent the braced walls from rocking up and down when subjected to back-and-forth forces at the top of the braced walls, not to resist the wall or house "jumping" off the foundation (which almost never occurs). In two-story buildings using "platform framing" (sometimes called "western" style construction, where walls are progressively erected upon the lower story's upper diaphragm, unlike "eastern" or balloon framing), the upper walls are connected to the lower walls with tension elements. 
In some cases, connections may be extended vertically to include retention of certain roof elements. This sort of strengthening is usually very costly with respect to the strength gained. Vertical posts are secured to the beams or other members they support. This is particularly important where loss of support would lead to collapse of a segment of a building. Connections from posts to beams cannot resist appreciable side-to-side forces; it is much more important to strengthen around the perimeter of a building (bracing the cripple-walls and supplementing foundation-to-wood-framing connections) than it is to reinforce post-to-beam connections. Wooden framing is efficient when combined with masonry, if the structure is properly designed. In Turkey, the traditional houses (bagdadi) are made with this technology. In El Salvador, wood and bamboo are used for residential construction. Reinforced and unreinforced masonry In many parts of developing countries such as Pakistan, Iran and China, unreinforced or in some cases reinforced masonry is the predominantly form of structures for rural residential and dwelling. Masonry was also a common construction form in the early part of the 20th century, which implies that a substantial number of these at-risk masonry structures would have significant heritage value. Masonry walls that are not reinforced are especially hazardous. Such structures may be more appropriate for replacement than retrofit, but if the walls are the principal load bearing elements in structures of modest size they may be appropriately reinforced. It is especially important that floor and ceiling beams be securely attached to the walls. Additional vertical supports in the form of steel or reinforced concrete may be added. In the western United States, much of what is seen as masonry is actually brick or stone veneer. Current construction rules dictate the amount of tie–back required, which consist of metal straps secured to vertical structural elements. These straps extend into mortar courses, securing the veneer to the primary structure. Older structures may not secure this sufficiently for seismic safety. A weakly secured veneer in a house interior (sometimes used to face a fireplace from floor to ceiling) can be especially dangerous to occupants. Older masonry chimneys are also dangerous if they have substantial vertical extension above the roof. These are prone to breakage at the roofline and may fall into the house in a single large piece. For retrofit, additional supports may be added; however, it is extremely expensive to strengthen an existing masonry chimney to conform with contemporary design standards. It is best to simply remove the extension and replace it with lighter materials, with special metal flue replacing the flue tile and a wood structure replacing the masonry. This may be matched against existing brickwork by using very thin veneer (similar to a tile, but with the appearance of a brick). See also Destructive testing Earthquake engineering Earthquake Engineering Research Institute Earthquake simulation Mitigation of seismic motion OpenSees – Open System for Earthquake Engineering Simulation San Francisco–Oakland Bay Bridge Western span retrofit Eastern span replacement Seismic hazard Seismic performance Superadobe Tsunami-proof building Vibration control References External links Retrofit Solutions for New Zealand – Retrofit Solutions for New Zealand – research group dedicated to seismic retrofit. Contacts and publications are available. 
ABAG Home Quake Safety Toolkit From ABAG, the Association of Bay Area Governments, their web site includes much valuable information and interactive analysis tools. If you know or can reasonably estimate in the worst case the expected shaking index for your area you can still use the included home safety evaluation quiz, even if you are not located within the San Francisco Bay Area. There are other sections generally applicable for any potential level of seismic activity, such as securing furnishings. This is an especially valuable reference for any resident of an area subject to seismic activity Standard engineered plan sets for residential seismic retrofit Also from ABAG, these will require further approval by the local building official. Extensive article including some structural retrofits and a comparison of various natural gas safety shutoffs: The Homeowner's Guide to Earthquake Safety (BYU) Infrastructure Risk Research Project at The University of British Columbia, Vancouver, Canada How the City of San Leandro can help you strengthen your home ... San Leandro, California pamphlet illustrating simple house structural improvements that the homeowner can perform. Seismic Rehabilitation Handbook FEMA Seismic Retrofit Cost Calculator Performance of the Built Environment (Loma Prieta Earthquake), U. S. Geological Survey Professional Paper 1152–A Raising the Bar: Engineering the New East Span of the Bay Bridge Highlights Large Scale Bridge Building & Engineering Techniques in a Seismically Active Quake Zone Earthquake and seismic risk mitigation Earthquake engineering Construction
Seismic retrofit
[ "Engineering" ]
8,956
[ "Structural engineering", "Construction", "Civil engineering", "Earthquake engineering", "Earthquake and seismic risk mitigation" ]
924,869
https://en.wikipedia.org/wiki/Longitude%20of%20the%20ascending%20node
The longitude of the ascending node, also known as the right ascension of the ascending node, is one of the orbital elements used to specify the orbit of an object in space. Denoted with the symbol Ω, it is the angle from a specified reference direction, called the origin of longitude, to the direction of the ascending node (☊), as measured in a specified reference plane. The ascending node is the point where the orbit of the object passes through the plane of reference. Types Commonly used reference planes and origins of longitude include: For geocentric orbits, Earth's equatorial plane as the reference plane, and the First Point of Aries (FPA) as the origin of longitude. In this case, the longitude is also called the right ascension of the ascending node (RAAN). The angle is measured eastwards (or, as seen from the north, counterclockwise) from the FPA to the node. An alternative is the local time of the ascending node (LTAN), based on the local mean time at which the spacecraft crosses the equator. Similar definitions exist for satellites around other planets (see planetary coordinate systems). For heliocentric orbits, the ecliptic as the reference plane, and the FPA as the origin of longitude. The angle is measured counterclockwise (as seen from north of the ecliptic) from the First Point of Aries to the node. For orbits outside the Solar System, the plane tangent to the celestial sphere at the point of interest (called the plane of the sky) as the reference plane, and north (i.e. the perpendicular projection of the direction from the observer to the north celestial pole onto the plane of the sky) as the origin of longitude. The angle is measured eastwards (or, as seen by the observer, counterclockwise) from north to the node. In the case of a binary star known only from visual observations, it is not possible to tell which node is ascending and which is descending. In this case the orbital parameter which is recorded is simply labeled longitude of the node, ☊, and represents the longitude of whichever node has a longitude between 0 and 180 degrees. Calculation from state vectors In astrodynamics, the longitude of the ascending node can be calculated from the specific relative angular momentum vector h as follows: n = k × h = (−hy, hx, 0); Ω = arccos(nx / |n|) if ny ≥ 0, and Ω = 2π − arccos(nx / |n|) if ny < 0. Here, n = ⟨nx, ny, nz⟩ is a vector pointing towards the ascending node. The reference plane is assumed to be the xy-plane, and the origin of longitude is taken to be the positive x-axis. k is the unit vector (0, 0, 1), which is the normal vector to the xy reference plane. For non-inclined orbits (with inclination equal to zero), ☊ is undefined. For computation it is then, by convention, set equal to zero; that is, the ascending node is placed in the reference direction, which is equivalent to letting n point towards the positive x-axis. See also Equinox Kepler orbits List of orbits Orbital node Perturbation of the orbital plane can cause precession of the ascending node. References Orbits Angle
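The calculation-from-state-vectors convention described above can be made concrete with a short numerical sketch. The fragment below is an illustrative assumption of one way to implement it (the function name and the sample position/velocity vectors are invented for the example); it follows the stated conventions that the reference plane is the xy-plane, k = (0, 0, 1), and Ω is set to zero for non-inclined orbits.

```python
# Minimal sketch: longitude of the ascending node from a state vector (r, v).
# Conventions match the text above: xy reference plane, origin of longitude
# along +x, k = (0, 0, 1). Sample numbers are illustrative only.
import numpy as np

def longitude_of_ascending_node(r, v):
    h = np.cross(r, v)             # specific relative angular momentum
    k = np.array([0.0, 0.0, 1.0])  # normal to the reference plane
    n = np.cross(k, h)             # node vector, points toward the ascending node
    n_norm = np.linalg.norm(n)
    if n_norm == 0.0:              # non-inclined orbit: Omega undefined, set to 0
        return 0.0
    omega = np.arccos(n[0] / n_norm)
    if n[1] < 0.0:                 # quadrant correction using the y-component of n
        omega = 2.0 * np.pi - omega
    return omega                   # radians, measured eastwards from +x

# Example with made-up position (km) and velocity (km/s) vectors:
r = np.array([6524.8, 6862.9, 6448.3])
v = np.array([4.901, 5.533, -1.976])
print(np.degrees(longitude_of_ascending_node(r, v)))
```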
Longitude of the ascending node
[ "Physics" ]
671
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Wikipedia categories named after physical quantities", "Angle" ]
22,906,493
https://en.wikipedia.org/wiki/Benzene-1%2C2-dithiol
Benzene-1,2-dithiol is the organosulfur compound with the formula C6H4(SH)2. This colourless viscous liquid consists of a benzene ring with a pair of adjacent thiol groups. The conjugate base of this diprotic compound serves as a chelating agent in coordination chemistry and a building block for the synthesis of other organosulfur compounds. Synthesis The compound is prepared by ortho-lithiation of benzenethiol using butyl lithium (BuLi) followed by sulfidation: C6H5SH + 2 BuLi → C6H4SLi-2-Li + 2 BuH C6H4SLi-2-Li + S → C6H4(SLi)2 C6H4(SLi)2 + 2 HCl → C6H4(SH)2 + 2 LiCl The compound was first prepared from 2-aminobenzenethiol via diazotization. Alternatively, it forms from 1,2-dibromobenzene. Reactions Oxidation mainly affords the polymeric disulfide. Reaction with metal dihalides and metal oxides gives the dithiolate complexes of the formula LnM(S2C6H4), where LnM represents a variety of metal centers, e.g. (C5H5)2Ti. Ketones and aldehydes condense to give the heterocycles called dithianes: C6H4(SH)2 + RR'CO → C6H4(S2)CRR' + H2O Related compounds 3,4-Toluenedithiol, also called dimercaptotoluene (CAS#496-74-2), behaves similarly to 1,2-benzenedithiol but is a solid at ambient temperatures (m.p. 135-137 °C). Alkene-1,2-dithiols are unstable, although metal complexes of alkene-1,2-dithiolates, called dithiolene complexes, are well known. References Thiols Benzene derivatives Foul-smelling chemicals
Benzene-1,2-dithiol
[ "Chemistry" ]
409
[ "Organic compounds", "Thiols" ]
22,906,936
https://en.wikipedia.org/wiki/Ruled%20variety
In algebraic geometry, a variety X over a field k is ruled if it is birational to the product of the projective line with some variety over k. A variety X is uniruled if it is covered by a family of rational curves. (More precisely, a variety X is uniruled if there is a variety Y and a dominant rational map Y × P^1 ⇢ X which does not factor through the projection to Y.) The concept arose from the ruled surfaces of 19th-century geometry, meaning surfaces in affine space or projective space which are covered by lines. Uniruled varieties can be considered to be relatively simple among all varieties, although there are many of them. Properties Every uniruled variety over a field of characteristic zero has Kodaira dimension −∞. The converse is a conjecture which is known in dimension at most 3: a variety of Kodaira dimension −∞ over a field of characteristic zero should be uniruled. A related statement is known in all dimensions: Boucksom, Demailly, Păun and Peternell showed that a smooth projective variety X over a field of characteristic zero is uniruled if and only if the canonical bundle of X is not pseudo-effective (that is, not in the closed convex cone spanned by effective divisors in the Néron-Severi group tensored with the real numbers). As a very special case, a smooth hypersurface of degree d in P^n over a field of characteristic zero is uniruled if and only if d ≤ n, by the adjunction formula. (In fact, a smooth hypersurface of degree d ≤ n in P^n is a Fano variety and hence is rationally connected, which is stronger than being uniruled.) A variety X over an uncountable algebraically closed field k is uniruled if and only if there is a rational curve passing through every k-point of X. By contrast, there are varieties over the algebraic closure k of a finite field which are not uniruled but have a rational curve through every k-point. (The Kummer variety of any non-supersingular abelian surface over the algebraic closure of F_p with p odd has these properties.) It is not known whether varieties with these properties exist over the algebraic closure of the rational numbers. Uniruledness is a geometric property (it is unchanged under field extensions), whereas ruledness is not. For example, the conic x^2 + y^2 + z^2 = 0 in P^2 over the real numbers R is uniruled but not ruled. (The associated curve over the complex numbers C is isomorphic to P^1 and hence is ruled.) In the positive direction, every uniruled variety of dimension at most 2 over an algebraically closed field of characteristic zero is ruled. Smooth cubic 3-folds and smooth quartic 3-folds in P^4 over C are uniruled but not ruled. Positive characteristic Uniruledness behaves very differently in positive characteristic. In particular, there are uniruled (and even unirational) surfaces of general type: an example is the surface x^(p+1) + y^(p+1) + z^(p+1) + w^(p+1) = 0 in P^3 over F_p, for any prime number p ≥ 5. So uniruledness does not imply that the Kodaira dimension is −∞ in positive characteristic. A variety X is separably uniruled if there is a variety Y with a dominant separable rational map Y × P^1 ⇢ X which does not factor through the projection to Y. ("Separable" means that the derivative is surjective at some point; this would be automatic for a dominant rational map in characteristic zero.) A separably uniruled variety has Kodaira dimension −∞. The converse is true in dimension 2, but not in higher dimensions. For example, there is a smooth projective 3-fold over the algebraic closure of F_2 which has Kodaira dimension −∞ but is not separably uniruled.
It is not known whether every smooth Fano variety in positive characteristic is separably uniruled. Notes References Algebraic geometry
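As a worked illustration of the adjunction-formula criterion quoted in the Properties section above, the following short derivation sketches why a smooth degree-d hypersurface in P^n is uniruled exactly when d ≤ n. It relies on two standard facts not spelled out in this article (the canonical bundle of projective space and the adjunction formula), so it should be read as a sketch rather than as text from the source.

```latex
% Sketch: adjunction for a smooth hypersurface X of degree d in P^n.
% Assumed standard facts: K_{P^n} = O(-n-1) and the adjunction formula.
\[
K_X \;=\; \left( K_{\mathbf{P}^n} \otimes \mathcal{O}_{\mathbf{P}^n}(d) \right)\Big|_X
      \;=\; \mathcal{O}_X(d - n - 1).
\]
% If d <= n, then d - n - 1 < 0, so K_X is anti-ample and in particular not
% pseudo-effective; by the Boucksom-Demailly-Paun-Peternell criterion cited
% above, X is uniruled (indeed X is Fano, hence rationally connected).
% If d >= n + 1, then K_X is trivial or ample, hence pseudo-effective, and X
% is not uniruled in characteristic zero.
```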
Ruled variety
[ "Mathematics" ]
842
[ "Fields of abstract algebra", "Algebraic geometry" ]
22,908,095
https://en.wikipedia.org/wiki/SOPHIE%20%C3%A9chelle%20spectrograph
The SOPHIE (Spectrographe pour l’Observation des Phénomènes des Intérieurs stellaires et des Exoplanètes, literally meaning "spectrograph for the observation of the phenomena of the stellar interiors and of the exoplanets") échelle spectrograph is a high-resolution echelle spectrograph installed on the 1.93m reflector telescope at the Haute-Provence Observatory located in south-eastern France. The purpose of this instrument is asteroseismology and extrasolar planet detection by the radial velocity method. It builds upon and replaces the older ELODIE spectrograph. This instrument was made available for use by the general astronomical community October 2006. Characteristics The electromagnetic spectrum wavelength range is from 387.2 to 694.3 nanometers. The spectrograph is fed from the Cassegrain focus through either one of two separate optical fiber sets, yielding two different spectral resolutions (HE and HR modes). The instrument is entirely computer-controlled. A standard data reduction pipeline automatically processes the data upon every CCD readout cycle. HR mode is the high resolution mode. This mode incorporates a 40 micrometre exit slit to achieve high spectral resolution of R = 75000. HE mode is the high efficiency mode. This mode is used when a higher throughput is desired particularly in the case of faint objects spectral resolution is set to R = 40000. The R2 échelle diffraction grating has 52.65 grooves per millimeter and was manufactured by Richardson Gratings. It is blazed at 65° and its size is 20.4 cm x 40.8 cm. It is mounted in a fixed configuration. The spectrum is projected onto the E2V Technologies type 44-82 CCD detector of 4096 x 2048 pixels kept at a constant temperature of –100 °C. This grating yields 41 spectral orders, of which 39 are currently extracted, to obtain wavelengths between 387.2 nm and 694.3 nm. Performance In HE mode, a signal-to-noise ratio (per pixel) of 27 was reached in 90 min for an object of magnitude 14.5 in the V band. The stability of the instrument can be described by the lowest dispersion possible for radial velocity observations, in m/s. In HR mode the short term stability has been measured to be 1.3 m/s, while it is 2 m/s for longer timescales. See also CORALIE spectrograph HARPS spectrograph References External links SOPHIE Home Page Spectrographs Astronomical instruments Exoplanet search projects
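To put the quoted resolving powers and the 1-2 m/s stability figures side by side, the sketch below converts a resolving power R into the velocity width of one resolution element (Δv = c/R) and into a wavelength width at a reference wavelength. The 550 nm reference wavelength is an arbitrary illustrative choice, and the final ratio is simply the quoted short-term stability divided by that element width; it says nothing about SOPHIE's actual calibration or data-reduction pipeline.

```python
# Sketch: what a resolving power R means in velocity and wavelength terms.
# The 550 nm reference wavelength is an illustrative assumption.
C_LIGHT = 299_792_458.0  # speed of light, m/s

def resolution_element(R, wavelength_nm=550.0):
    dv = C_LIGHT / R          # velocity width of one resolution element, m/s
    dlam = wavelength_nm / R  # wavelength width of one element, nm
    return dv, dlam

for mode, R in (("HR", 75_000), ("HE", 40_000)):
    dv, dlam = resolution_element(R)
    print(f"{mode} mode, R = {R}: one element is about {dv:,.0f} m/s "
          f"or {dlam * 1000:.1f} pm at 550 nm")

# The quoted 1.3 m/s short-term stability in HR mode thus corresponds to
# locating line centroids to roughly 1/3000 of a resolution element:
print(1.3 / (C_LIGHT / 75_000))
```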
SOPHIE échelle spectrograph
[ "Physics", "Chemistry", "Astronomy" ]
537
[ "Exoplanet search projects", "Spectrum (physical sciences)", "Spectrographs", "Astronomical instruments", "Astronomy projects", "Spectroscopy" ]
22,910,171
https://en.wikipedia.org/wiki/European%20Summer%20School%20in%20Logic%2C%20Language%20and%20Information
The European Summer School in Logic, Language and Information (ESSLLI) is an annual academic conference organized by the European Association for Logic, Language and Information. The focus of study is the "interface between linguistics, logic and computation, with special emphasis on human linguistic and cognitive ability". The conference is held over two weeks of the European Summer, and offers about 50 courses at introductory and advanced levels. It attracts around 500 participants from all over the world. Venues See also Dynamic semantics Generalized quantifier Type theory References Bibliography Program for ESSLLI 2019: Riga Program for ESSLLI 2015: Barcelona Program for ESSLLI 2014: Tübingen Program for ESSLLI 2013: Düsseldorf Program for ESSLLI 2012: Opole Program for ESSLLI 2011: Ljubljana Program for ESSLLI 2010: Copenhagen Program for ESSLLI 2009: Bordeaux Program for ESSLLI 2008: Hamburg Program for ESSLLI 2007: Dublin Program for ESSLLI 2006: Málaga Program for ESSLLI 2005: Edinburgh External links Association for Logic, Language and Information – official home page Computer science conferences Information technology organizations based in Europe Linguistics conferences Logic conferences Mathematical logic organizations Philosophical logic Philosophy education Summer schools
European Summer School in Logic, Language and Information
[ "Mathematics", "Technology" ]
241
[ "Mathematical logic", "Computer science conferences", "Computer conference stubs", "Computer science", "Computing stubs", "Mathematical logic organizations" ]
22,910,426
https://en.wikipedia.org/wiki/Association%20for%20Logic%2C%20Language%20and%20Information
The Association for Logic, Language and Information (FoLLI) is an international, especially European, learned society. It was founded in 1991 "to advance the practicing of research and education on the interfaces between Logic, Linguistics, Computer Science and Cognitive Science and related disciplines." The academic journal Journal of Logic, Language and Information (JoLLI) is published under its auspices; it co-ordinates summer schools such as the European Summer School in Logic, Language and Information (ESSLLI), the North American Summer School in Logic, Language, and Information (NASSLLI), and the International Conference and Second East-Asian School on Logic, Language and Computation (EASLLC); and it awards the E. W. Beth Dissertation Prize to outstanding dissertations in the fields of Logic, Language, and Information. Governance The current president of FoLLI is Larry Moss (since 2020). The current management board consists of Larry Moss (president), Sonja Smets (vice president), Natasha Alechina (secretary), Nina Gierasimczuk (treasurer), Valentin Goranko (senior member), Darja Fiser, Benedikt Löwe, Louise McNally, and Pritty Patel-Grosz. Past Presidents include Johan van Benthem (1991–1995), Wilfrid Hodges (1995–1996), Erhard Hinrichs (1997–1998), Paul Gochet (1999–2001), Hans Uszkoreit (2002–2003), Luigia Carlucci Aiello (2004–2007), Michael Moortgat (2007–2012), Ann Copestake (2012–2016), and Valentin Goranko (2016–2020). See also Dynamic semantics Generalized quantifier Information theory Type theory References Bibliography Program for ESSLLI 2012: Opole Program for ESSLLI 2009: Bordeaux Program for ESSLLI 2008: Hamburg Program for ESSLLI 2007: Dublin Program for ESSLLI 2006: Málaga Program for ESSLLI 2005: Edinburgh External links Association for Logic, Language and Information – FoLLI official home page Academic conferences Computer science organizations Mathematical logic organizations Philosophical logic Philosophy organizations Organizations established in 1991 Linguistics organizations 1991 establishments in France
Association for Logic, Language and Information
[ "Mathematics", "Technology" ]
455
[ "Computer science", "Mathematical logic", "Mathematical logic organizations", "Computer science organizations" ]
22,913,171
https://en.wikipedia.org/wiki/Duane%E2%80%93Hunt%20law
The Duane–Hunt law, named after the American physicists William Duane and Franklin L. Hunt, gives the maximum frequency of X-rays that can be emitted by Bremsstrahlung in an X-ray tube by accelerating electrons through an excitation voltage V into a metal target. The maximum frequency νmax is given by νmax = eV / h, which corresponds to a minimum wavelength λmin = hc / (eV), where h is the Planck constant, e is the charge of the electron, and c is the speed of light. This can also be written as λmin ≈ 1239.8 pm divided by the voltage in kilovolts. The process of X-ray emission by incoming electrons is also known as the inverse photoelectric effect. Explanation In an X-ray tube, electrons are accelerated in a vacuum by an electric field and shot into a piece of metal called the "target". X-rays are emitted as the electrons slow down (decelerate) in the metal. The output spectrum consists of a continuous spectrum of X-rays, with additional sharp peaks at certain energies (see graph on right). The continuous spectrum is due to bremsstrahlung, while the sharp peaks are characteristic X-rays associated with the atoms in the target. The spectrum has a sharp cutoff at low wavelength (high frequency), which is due to the limited energy of the incoming electrons. For example, if each electron in the tube is accelerated through 60 kV, then it will acquire a kinetic energy of 60 keV, and when it strikes the target it can create X-ray photons with energy of at most 60 keV, by conservation of energy. (This upper limit corresponds to the electron coming to a stop by emitting just one X-ray photon. Usually the electron emits many photons, and each has an energy less than 60 keV.) A photon with energy of 60 keV or less has a wavelength of 21 pm or more, so the X-ray spectrum has exactly that cutoff, as seen in the graph. This cutoff applies to both the continuous (bremsstrahlung) spectrum and the characteristic sharp peaks: There is no X-ray of any kind beyond the cutoff. However, the cutoff is most obvious for the continuous spectrum. The exact formula for the cutoff comes from setting equal the kinetic energy of the electron, eV, and the energy of the X-ray photon, hνmax = hc / λmin. References X-rays
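A quick numerical check of the 60 kV example above; this is an illustrative sketch using rounded CODATA constants, not part of the original article:

```python
# Duane-Hunt cutoff: lambda_min = h*c / (e*V)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
e = 1.602176634e-19  # elementary charge, C

def cutoff_wavelength(voltage_volts):
    """Shortest bremsstrahlung wavelength (in metres) for a given tube voltage."""
    return h * c / (e * voltage_volts)

V = 60e3  # 60 kV accelerating voltage
lam = cutoff_wavelength(V)
print(f"lambda_min = {lam * 1e12:.1f} pm")  # about 20.7 pm, i.e. roughly 21 pm
```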
Duane–Hunt law
[ "Physics" ]
480
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
22,914,364
https://en.wikipedia.org/wiki/Phosphatidylinositol%205-phosphate
Phosphatidylinositol 5-phosphate (PtdIns5P) is a phosphoinositide, one of the phosphorylated derivatives of phosphatidylinositol (PtdIns), that are well-established membrane-anchored regulatory molecules. Phosphoinositides participate in signaling events that control cytoskeletal dynamics, intracellular membrane trafficking, cell proliferation and many other cellular functions. Generally, phosphoinositides transduce signals by recruiting specific phosphoinositide-binding proteins to intracellular membranes. Phosphatidylinositol 5-phosphate is one of the 7 known cellular phosphoinositides with less understood functions. It is phosphorylated on position D-5 of the inositol head group, which is attached via phosphodiester linkage to diacylglycerol (with varying chemical composition of the acyl chains, frequently 1-stearoyl-2-arachidonoyl chain). In quiescent cells, on average, PtdIns5P is of similar or higher abundance as compared to PtdIns3P and ~20-100-fold below the levels of PtdIns4P (Phosphatidylinositol 4-phosphate and PtdIns(4,5)P2 (Phosphatidylinositol 4,5-bisphosphate). Notably, steady-state PtdIns5P levels are more than 5-fold higher than those of PtdIns(3,5)P2. PtdIns5P was first demonstrated by HPLC (high pressure liquid chromatography) in mouse fibroblasts as a substrate for PtdIns(4,5)P2 synthesis by type II PIP kinases (1-phosphatidylinositol-5-phosphate 4-kinase). In many cell types, however, PtdIns5P is not detected by HPLC due to technical limitations associated with its poor separation from the abundant PtdIns4P. Rather, PtdIns5P is measured by the "mass assay", where PtdIns5P (as a part of the extracted cellular lipids) is converted in vitro by purified PtdIns5P 4-kinase to PtdIns(4,5)P2 that is subsequently quantified. Based on studies with the mass assay and an improved HPLC technique, PtdIns5P is detected in all studied mammalian cells. Most of the cellular PtdIns5P is found on cytoplasmic membranes whereas a smaller fraction resides in the nucleus. The cytoplasmic and nuclear pools have distinct functions and regulation. Metabolism Cellular PtdIns5P could be produced by D-5-phosphorylation of phosphatidylinositol or by dephosphorylation of PtdIns(3,5)P2 or PtdIns(4,5)P2. Each of these possibilities is experimentally supported. PtdIns5P is synthesized in vitro by PIKfyve, an enzyme principally responsible for PtdIns(3,5)P2 production, as well as by [PIP5K]s. A major role for PIKfyve in synthesis of cellular PtdIns5P is suggested by data for reduced PtdIns5P mass levels upon heterologous overexpression of the enzymatically inactive PIKfyve point-mutant (PIKfyveK1831E) and PIKfyve silencing by small interfering RNAs. Such a role is reinforced by data in transgenic fibroblasts with one genetically disrupted PIKfyve allele, demonstrating equal reduction of steady-state levels of PtdIns5P and PtdIns(3,5)P2. Likewise, similar reduction of PtdIns5P and PtdIns(3,5)P2 is found in fibroblasts with knockout of the PIKfyve activator ArPIKfyve/VAC14. This experimental evidence coupled with the fact that the cellular levels of PtdIns5P exceed more than 5-fold those of PtdIns(3,5)P2 indicate a predominant role of PIKfyve in maintenance of the steady-state PtdIns5P levels via D-5 phosphorylation of phosphatidylinositol. A role for the myotubularin protein family in PtdIns5P production has been proposed based on dephosphorylation of PtdIns(3,5)P2 by overexpressed myotubularin 1. 
Concordantly, genetic ablation of the myotubularin-related protein 2 (MTMR2) causes elevation of cellular PtdIns(3,5)P2 and a decrease of PtdIns5P. The low cellular levels of PtdIns(3,5)P2 suggest that myotubularin phosphatase activity plays a minor role in maintaining the steady-state PtdIns5P levels. Importantly, PtdIns(3,5)P2 is synthesized from PtdIns3P by the PIKfyve complex that includes ArPIKfyve and Sac3/Fig4. Noteworthy, the PIKfyve complex underlies both PtdIns(3,5)P2 synthesis from and turnover to PtdIns3P. The relative proportion of PtdIns(3,5)P2 turnover by myotubularin phosphatases versus that by Sac3 is unknown. PtdIns5P can also be produced by dephosphorylation of PtdIns(4,5)P2. Such phosphatase activity is shown for Shigella flexneri effector IpgD and two mammalian phosphatases – PtdIns(4,5)P2 4-phosphatase type I and type II. In myoblast, PtdIns5P is rapidly metabolized by the PI5P 4-kinase α into PI(4,5)P2 which accumulates at the plasma membrane thereby facilitating the formation of podosome-like protrusions, playing a crucial role in the spatiotemporal regulation of myoblast fusion. Currently, there is no known mammalian phosphatase to specifically dephosphorylate PtdIns5P. The pathway for PtdIns5P clearance involves synthesis of PtdIns(4,5)P2. Functions The levels of PtdIns5P change significantly in response to physiological and pathological stimuli. Insulin, thrombin, T-cell activation, and cell transformation with nucleophosmin anaplastic lymphoma tyrosine kinase (NPM-ALK), cause elevation of cellular PtdIns5P levels. In contrast, hypoosmotic shock and histamine treatment decrease the levels of PtdIns5P. In T-cells, two “downstream of tyrosine kinase” proteins DOK1 and DOK2 are proposed as PtdIns5P-binding proteins and effectors. As the other phosphoinositides, PtdIns5P is also present in the nucleus of mammalian cells. The nuclear PtdIns5P pool is controlled by the nuclear type I PtdIns(4,5)P2 4-phosphatase that, in conjunction with the PIPKIIbeta kinase, plays a role in UV stress, apoptosis and cell cycle progression. The function of PtdIns5P in nuclear signaling likely involves ING2, a member of the ING family. The proteins of this family associate with and modulate the activity of histone acetylases and deacetylases as well as induce apoptosis through p53 acetylation. The ING2 interacts with PtdIns5P via its plant homeodomain (PHD) finger motif. In summary, the available evidence indicates that PIKfyve activity is the major source of steady-state cellular PtdIns5P. Under certain conditions, PtdIns5P is produced by dephosphorylation of bis-phosphoinositides. PtdIns5P is involved in regulation of both basic cellular functions and responses to a multitude of physiological and pathological stimuli by yet- to- be specified molecular mechanisms. References Cell signaling Phospholipids
Phosphatidylinositol 5-phosphate
[ "Chemistry" ]
1,799
[ "Phospholipids", "Signal transduction" ]
34,543,357
https://en.wikipedia.org/wiki/Bilbao%20Crystallographic%20Server
Bilbao Crystallographic Server is an open access website offering online crystallographic databases and programs aimed at analyzing, calculating and visualizing problems of structural and mathematical crystallography, solid state physics and structural chemistry. Initiated in 1997 by the Materials Laboratory of the Department of Condensed Matter Physics at the University of the Basque Country, Bilbao, Spain, the Bilbao Crystallographic Server is developed and maintained by academics. Information on contents and an overview of tools hosted Focusing on crystallographic data and applications of group theory in solid state physics, the server is built on a core of databases and contains different shells. Space Groups Retrieval Tools The set of databases includes data from International Tables of Crystallography, Vol. A: Space-Group Symmetry, and the data of maximal subgroups of space groups as listed in International Tables of Crystallography, Vol. A1: Symmetry relations between space groups. A k-vector database with Brillouin zone figures and classification tables of the k-vectors for space groups is also available via the KVEC tool. Magnetic Space Groups In 2011, magnetic space group data compiled from the works of H.T. Stokes & B.J. Campbell and of D. Litvin (general positions/symmetry operations and Wyckoff positions for different settings, along with systematic absence rules) were also incorporated into the server, and a new shell has been dedicated to the related tools (MGENPOS, MWYCKPOS, MAGNEXT). Group-Subgroup Relations of Space Groups This shell contains applications which are essential for problems involving group-subgroup relations between space groups. Given the space group types of G and H and their index, the program SUBGROUPGRAPH provides graphs of maximal subgroups for a group-subgroup pair G > H, all the different subgroups H and their distribution into conjugacy classes. The Wyckoff position splitting rules for a group-subgroup pair are calculated by the program WYCKSPLIT. Representation Theory Applications The fourth shell includes programs on representation theory of space and point groups. REPRES constructs little group and full group irreducible representations for a given space group and a k-vector; CORREL deals with the correlations between the irreducible representations of group-subgroup related space groups. The program POINT lists character tables of crystallographic point groups, Kronecker multiplication tables of their irreducible representations and further useful symmetry information. Solid State Theory Applications This shell is related to solid state physics and structural chemistry. The program PSEUDO performs an evaluation of the pseudosymmetry of a given structure with respect to supergroups of its space group. AMPLIMODES performs the symmetry-mode analysis of any distorted structure of displacive type. The analysis consists of decomposing the symmetry-breaking distortion present in the distorted structure into contributions from different symmetry-adapted modes. Given the high and low symmetry structures, the program calculates the amplitudes and polarization vectors of the distortion modes of different symmetry frozen in the structure. The program SAM calculates symmetry-adapted modes for the centre of the Brillouin zone and classifies them according to their infrared and Raman activity. NEUTRON computes the phonon extinction rules in inelastic neutron scattering. Its results are also relevant for diffuse-scattering experiments.
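As a rough illustration of the kind of symmetry data the retrieval tools above provide: a space-group operation with rotation part W and translation part w maps a fractional coordinate x to a symmetry-equivalent position x' = W·x + w (mod 1). The operation below, (-x, -y, z+1/2), is a generic textbook screw-axis example, not output taken from the server:

```python
import numpy as np

W = np.array([[-1, 0, 0],
              [ 0, -1, 0],
              [ 0, 0, 1]])        # rotation part of the operation
w = np.array([0.0, 0.0, 0.5])     # translation part

x = np.array([0.10, 0.25, 0.40])  # fractional coordinate of an atom
x_equiv = (W @ x + w) % 1.0       # equivalent position, folded back into the unit cell
print(x_equiv)                    # -> [0.9  0.75 0.9 ]
```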
Structure Utilities A set of structure utilities has been included for various applications such as: the transformation of unit cells (CELLTRAN) or complete structures (TRANSTRU); strain tensor calculation (STRAIN); assignment of Wyckoff Positions (WPASSIGN); equivalent descriptions of a given structure (EQUIVSTRU); comparison of different structures with support for the affine normalizers of monoclinic space groups. STRUCTURE RELATIONS calculates the possible transformation matrices for a given pair of group-subgroup related structures. Incommensurate Crystal Structures Database The Bilbao Crystallographic Server also hosts the B-IncStrDB: Bilbao Incommensurate Crystal Structures Database, a database for incommensurately modulated and composite structures. Scientific Research In addition to receiving citations from scientific articles and theses, the Bilbao Crystallographic Server also actively publishes research reports in internationally reviewed articles, as well as hosting/participating in international workshops, summer schools and conferences. A list of these publications and events are accessible from the server's web page.. Development History and People The Bilbao Crystallographic Server came to life in 1997 as a scientific project by the Departments of Condensed Matter Physics and Applied Physics II of the University of the Basque Country (EHU) under the supervision of J. Manuel Perez-Mato (EHU) and Mois I. Aroyo (EHU), in coordination with Gotzon Madariaga (EHU) and Hans Wondratschek (Karlsruhe Institute of Technology, Germany) with funding from the Basque government and several ministries of the Spanish government. The initial code was written by then Ph.D. students Eli Kroumova (EHU) and Svet Ivantchev (EHU) and the very first shells related to retrieval tools, group-subgroup relations and space group representations have soon appeared online. Afterwards, in collaboration with Harold T. Stokes and Dorian M. Hatch from Brigham Young University, USA, the server extended its services to include symmetry modes analysis. Asen K. Kirov, a Ph.D. student from Sofia University, Bulgaria contributed to the server, working on programs dedicated to irreducible representations and extinction rules. In 2001, Ph.D. student Cesar Capillas began his research on the server and became the main developer and system administrator focusing on structure relations, such as pseudosymmetry and phase transitions. Danel Orobengoa, also a Ph.D. student, joined the developer team in 2005 and worked mainly on symmetry modes, k-vector classification tables and non-characteristic orbits (in collaboration with Massimo Nespolo of the Nancy-Université, France), writing his Ph.D. thesis on the applications of the server for ferroic materials. In 2009, Ph.D. student Gemma de la Flor and post-doc Emre S. Tasci were recruited for the development team: de la Flor working mainly on the identification and interpretation of symmetry operations, structure comparison and Tasci becoming the new system administrator and main developer, focusing in the structure relations concerning phase transitions. The Bilbao Crystallographic Server team took its current (2012) line-up in 2010 with the addition of Ph.D. student Samuel Vidal Gallego, his main research field being the magnetic space groups. References External links Crystallography Science software Crystallographic databases
Bilbao Crystallographic Server
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,359
[ "Crystallographic databases", "Crystallography", "Condensed matter physics", "Materials science" ]
25,832,512
https://en.wikipedia.org/wiki/Lightfastness
Lightfastness is a property of a colourant such as a dye or pigment that describes its resistance to fading when exposed to light. Dyes and pigments are used, for example, for dyeing fabrics, plastics or other materials and for manufacturing paints or printing inks. The bleaching of the color is caused by the action of ultraviolet radiation on the chemical structure of the molecules that give the object its color. The part of a molecule responsible for its color is called the chromophore. Light encountering a painted surface can either alter or break the chemical bonds of the pigment, causing the colors to bleach or change in a process known as photodegradation. Materials that resist this effect are said to be lightfast. The electromagnetic spectrum of the sun contains wavelengths from gamma rays to radio waves. The high energy of ultraviolet radiation in particular accelerates the fading of the dye. The photon energy of UVA radiation, which is not absorbed by atmospheric ozone, can exceed the dissociation energy of the carbon-carbon single bond, resulting in the cleavage of the bond and fading of the color. Inorganic colourants are considered to be more lightfast than organic colourants. Black colourants are usually considered the most lightfast. Lightfastness is measured by exposing a sample to a light source for a predefined period of time and then comparing it to an unexposed sample. Chemical processes During fading, colourant molecules undergo various chemical processes. When a UV photon reacts with a molecule acting as a colourant, the molecule is excited from the ground state to an excited state. The excited molecule is highly reactive and unstable. During the quenching of the molecule from the excited state to the ground state, atmospheric triplet oxygen reacts with the colourant molecule to form singlet oxygen and a superoxide radical. The singlet oxygen and the superoxide radical resulting from the reaction are both highly reactive and capable of destroying the colourants. Photolysis Photolysis, i.e., photochemical decomposition, is a chemical reaction in which the compound is broken down by photons. This decomposition occurs when a photon of sufficient energy encounters a bond in the colorant molecule with a suitable dissociation energy. The reaction causes homolytic cleavage in the chromophoric system, resulting in the fading of the colourant. Photo-oxidation Photo-oxidation, i.e., photochemical oxidation. A colorant molecule, when excited by a photon of sufficient energy, undergoes an oxidation process. In the process the chromophoric system of the colorant molecule reacts with the atmospheric oxygen to form a non-chromophoric system, resulting in fading. Colorants which contain a carbonyl group as the chromophore are particularly vulnerable to oxidation. Photoreduction Photo-reduction, i.e., photochemical reduction. A colorant molecule with an unsaturated double bond (typical of alkenes) or triple bond (typical of alkynes) acting as a chromophore undergoes reduction in the presence of hydrogen and photons of sufficient energy, forming a saturated chromophoric system. Saturation reduces the length of the chromophoric system, resulting in the fading of the colorant. Photosensitization Photosensitization, i.e., photochemical sensitization. Exposing dyed cellulosic material, such as plant-based fibers, to sunlight allows dyes to remove hydrogen from the cellulose, resulting in photoreduction on the cellulosic substrate. 
Simultaneously, the colorant will undergo oxidation in the presence of atmospheric oxygen, resulting in photo-oxidation of the colourant. These processes result in both fading of the colorant and strength loss of the substrate. Phototendering Phototendering, i.e., photochemical tendering. As a result of UV light, the substrate material supplies hydrogen to the colourant molecules, reducing the colorant molecule. As the hydrogen is removed, the material undergoes oxidation. Standards and measure scales Some organizations publish standards for rating the lightfastness of pigments and materials. Testing is typically done by controlled exposure to sunlight, or to artificial light generated by a xenon arc lamp. Watercolors, inks, pastels, and colored pencils are particularly susceptible to fading over time, so choosing lightfast pigments is especially important in these media. The best-known scales for measuring lightfastness are the Blue Wool Scale, the grey scale and the scale defined by ASTM (the American Society for Testing and Materials). On the Blue Wool Scale, lightfastness is rated from 1 to 8, with 1 being very poor and 8 being excellent. On the grey scale, lightfastness is rated from 1 to 5, with 1 being very poor and 5 being excellent. On the ASTM scale, lightfastness is rated from I to V, where I is excellent lightfastness (corresponding to ratings 7–8 on the Blue Wool Scale) and V is very poor lightfastness (corresponding to Blue Wool Scale rating 1). The actual fading depends on the strength of the solar radiation, so lightfastness is relative to geographic location, season, and exposure direction. The following table lists indicative relations between the lightfastness ratings on the different scales, relative to time in direct sunlight and under normal conditions of display: away from a window, under indirect sunlight and properly framed behind a UV protective glass. Test procedure The relative amount of fading can be measured and studied by using standard test strips. In the workflow of the Blue Wool test, one reference strip set shall be stored protected from any exposure to light. Simultaneously, another equivalent test strip set is exposed under a light source defined in the standard. For example, if the lightfastness of the colourant is indicated to be 5 on the Blue Wool scale, it can be expected to fade by a similar amount as strip number 5 in the Blue Wool test strip set. The success of the test can be confirmed by comparing the test strip set with the reference set that was stored protected from the light. In graphical industry In printing, organic pigments are mainly used in the inks, so the shifting or bleaching of the color of a printing product due to the presence of UV light is usually just a matter of time. The use of organic pigments is justified primarily by their lower cost compared to inorganic pigments. The particle size of inorganic pigments is often larger than that of organic pigments, thus inorganic pigments are often not suitable for use in offset printing. In screen printing, the particle size of the pigment is not the limiting factor. Thus it is the preferred printing method for printing jobs requiring extreme lightfastness. The thickness of the ink layer affects the lightfastness through the amount of pigment laid on the substrate. The ink layer printed by screen printing is thicker than that printed by offset printing. In other words, it contains more pigment per area. 
This leads to better lightfastness even though the printing ink used in both methods would be based on the same pigment. When mixing printing inks, the ink with the weaker lightfastness defines the lightfastness of the whole mixed color. The fading of one of the pigments leads to a tone shift towards the component with better lightfastness. If it is required that there will be something visible from the printing, even though its dominant pigment would fade, then a small amount of pigment with excellent lightfastness can be mixed with it. See also Blue Wool Scale – a measure of dye permanence Color fastness – resistance to fading of textile colors Fugitive pigment – pigments that are susceptible to fading or altering over time References External links Doing your own lightfastness tests Pupulandia: Onko taide ikuista – tai kuuluuko sen olla? Pigments Color Photochemistry Properties of textiles
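A back-of-the-envelope check of the earlier claim that UVA photon energies are comparable to the carbon-carbon single-bond dissociation energy; the bond energy used here is an approximate literature value assumed for illustration:

```python
# Photon energy per mole for UVA wavelengths vs. an approximate C-C bond energy
h = 6.626e-34      # Planck constant, J*s
c = 3.0e8          # speed of light, m/s
N_A = 6.022e23     # Avogadro constant, 1/mol
E_CC = 347e3       # approximate C-C single bond dissociation energy, J/mol

for lam_nm in (315, 350, 400):           # UVA range is roughly 315-400 nm
    E = N_A * h * c / (lam_nm * 1e-9)    # energy of one mole of photons, J/mol
    print(f"{lam_nm} nm -> {E / 1000:.0f} kJ/mol (C-C bond ~ {E_CC / 1000:.0f} kJ/mol)")
```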
Lightfastness
[ "Chemistry" ]
1,617
[ "nan" ]
25,839,999
https://en.wikipedia.org/wiki/Computer-automated%20design
Design automation usually refers to electronic design automation, or to design automation in the sense of a product configurator. Extending Computer-Aided Design (CAD), automated design and Computer-Automated Design (CAutoD) are more concerned with a broader range of applications, such as automotive engineering, civil engineering, composite material design, control engineering, dynamic system identification and optimization, financial systems, industrial equipment, mechatronic systems, steel construction, structural optimisation, and the invention of novel systems. The concept of CAutoD perhaps first appeared in 1963, in the IBM Journal of Research and Development, where a computer program was written to search for logic circuits having certain constraints on hardware design, and to evaluate these logics in terms of their discriminating ability over samples of the character set they are expected to recognize. More recently, traditional CAD simulation is seen to be transformed to CAutoD by biologically-inspired machine learning, including heuristic search techniques such as evolutionary computation, and swarm intelligence algorithms. Guiding designs by performance improvements To meet the ever-growing demand for quality and competitiveness, iterative physical prototyping is now often replaced by 'digital prototyping' of a 'good design', which aims to meet multiple objectives such as maximised output, energy efficiency, highest speed and cost-effectiveness. The design problem concerns both finding the best design within a known range (i.e., through 'learning' or 'optimisation') and finding a new and better design beyond the existing ones (i.e., through creation and invention). This is equivalent to a search problem in an almost certainly multidimensional (multivariate), multi-modal space with a single (or weighted) objective or multiple objectives. Normalized objective function: cost vs. fitness Using single-objective CAutoD as an example, if the objective function, expressed either as a cost function or, inversely, as a fitness function, is differentiable under practical constraints in the multidimensional space, the design problem may be solved analytically. Finding the parameter sets that result in a zero first-order derivative and that satisfy the second-order derivative conditions would reveal all local optima. Then comparing the values of the performance index of all the local optima, together with those of all boundary parameter sets, would lead to the global optimum, whose corresponding 'parameter' set will thus represent the best design. However, in practice, the optimization usually involves multiple objectives and the matters involving derivatives are a lot more complex. Dealing with practical objectives In practice, the objective value may be noisy or even non-numerical, and hence its gradient information may be unreliable or unavailable. This is particularly true when the problem is multi-objective. At present, many designs and refinements are mainly made through a manual trial-and-error process with the help of a CAD simulation package. Usually, such a posteriori learning or adjustments need to be repeated many times until a ‘satisfactory’ or ‘optimal’ design emerges. Exhaustive search In theory, this adjustment process can be automated by computerised search, such as exhaustive search. As this is an exponential algorithm, it may not deliver solutions in practice within a limited period of time. 
Search in polynomial time One approach to virtual engineering and automated design is evolutionary computation such as evolutionary algorithms. Evolutionary algorithms To reduce the search time, the biologically-inspired evolutionary algorithm (EA) can be used instead, which is a (non-deterministic) polynomial algorithm. The EA-based multi-objective "search team" can be interfaced with an existing CAD simulation package in a batch mode. The EA encodes the design parameters (encoding being necessary if some parameters are non-numerical) to refine multiple candidates through parallel and interactive search. In the search process, 'selection' is performed using 'survival of the fittest' a posteriori learning. To obtain the next 'generation' of possible solutions, some parameter values are exchanged between two candidates (by an operation called 'crossover') and new values introduced (by an operation called 'mutation'). This way, the evolutionary technique makes use of past trial information in a manner similar to that of the human designer. EA-based optimal designs can start from the designer's existing design database, or from an initial generation of candidate designs obtained randomly. A number of finely evolved top-performing candidates will represent several automatically optimized digital prototypes. There are websites that demonstrate interactive evolutionary algorithms for design: one allows users to evolve 3D objects online and have them 3D printed, while another does the same for 2D images. See also Electronic design automation Design Automation Design Automation Conference Generative design Genetic algorithm (GA) applications - automated design References External links An online interactive GA based CAutoD demonstrator. Learn step by step or watch global convergence in 2-parameter CAutoD Design Computer-aided design Applications of evolutionary algorithms Evolutionary computation
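A minimal sketch of the evolutionary loop described above (selection, crossover and mutation over encoded design parameters). The objective function and parameter bounds here are placeholders standing in for a CAD-simulation score, not taken from any particular package:

```python
import random

def fitness(design):
    # Placeholder objective: in practice this would call a CAD/simulation package in batch mode.
    return -sum((x - 0.7) ** 2 for x in design)

def evolve(pop_size=30, n_params=4, generations=50, mutation_rate=0.1):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)            # "survival of the fittest"
        parents = pop[: pop_size // 2]                 # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)
            child = a[:cut] + b[cut:]                  # crossover
            child = [x + random.gauss(0, 0.05) if random.random() < mutation_rate else x
                     for x in child]                   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In a real CAutoD setting the fitness call would be replaced by a batch invocation of the CAD simulation package, as the text above describes.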
Computer-automated design
[ "Engineering", "Biology" ]
1,006
[ "Computer-aided design", "Design engineering", "Evolutionary computation", "Bioinformatics", "Design" ]
33,058,043
https://en.wikipedia.org/wiki/Yttrium%20%2890Y%29%20clivatuzumab%20tetraxetan
Yttrium (90Y) clivatuzumab tetraxetan (trade name hPAM4-Cide) is a humanized monoclonal antibody-drug conjugate designed for the treatment of pancreatic cancer. The antibody part, clivatuzumab (targeted at MUC1), is conjugated with tetraxetan, a chelator for yttrium-90, a radioisotope which destroys the tumour cells. The drug was developed by Immunomedics, Inc. In March 2016 the phase III PANCRIT-1 trial in metastatic pancreatic cancer was terminated early due to lack of improvement of overall survival. References Monoclonal antibodies for tumors Antibody-drug conjugates Radiopharmaceuticals Yttrium compounds
Yttrium (90Y) clivatuzumab tetraxetan
[ "Chemistry", "Biology" ]
200
[ "Antibody-drug conjugates", "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
33,062,050
https://en.wikipedia.org/wiki/%C5%9Aleszy%C5%84ski%E2%80%93Pringsheim%20theorem
In mathematics, the Śleszyński–Pringsheim theorem is a statement about convergence of certain continued fractions. It was discovered by Ivan Śleszyński and Alfred Pringsheim in the late 19th century. It states that if a_n and b_n, for n = 1, 2, 3, ..., are real numbers and |b_n| ≥ |a_n| + 1 for all n, then the continued fraction a_1/(b_1 + a_2/(b_2 + a_3/(b_3 + ...))) converges absolutely to a number f satisfying 0 < |f| < 1, meaning that the series f = Σ_{n ≥ 1} (A_n/B_n − A_{n−1}/B_{n−1}), where A_n/B_n are the convergents of the continued fraction, converges absolutely. See also Convergence problem Notes and references Continued fractions Theorems in real analysis
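A small numerical illustration of the theorem as reconstructed above: when the coefficients satisfy |b_n| ≥ |a_n| + 1, the convergents settle down to a limit of absolute value less than 1. The coefficients below are arbitrary choices satisfying that hypothesis:

```python
from fractions import Fraction

# Continued fraction a1/(b1 + a2/(b2 + ...)) with |b_n| >= |a_n| + 1 for every n
a = [Fraction(1), Fraction(-2), Fraction(3), Fraction(1), Fraction(-1)] * 4
b = [Fraction(2), Fraction(3), Fraction(4), Fraction(2), Fraction(2)] * 4

# Convergents A_n / B_n via the standard three-term recurrences
A_prev, A = Fraction(1), Fraction(0)   # A_{-1}, A_0
B_prev, B = Fraction(0), Fraction(1)   # B_{-1}, B_0
for a_n, b_n in zip(a, b):
    A_prev, A = A, b_n * A + a_n * A_prev
    B_prev, B = B, b_n * B + a_n * B_prev
    print(float(A / B))                # successive convergents; the limit has |f| < 1
```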
Śleszyński–Pringsheim theorem
[ "Mathematics" ]
104
[ "Theorems in mathematical analysis", "Mathematical analysis", "Continued fractions", "Mathematical analysis stubs", "Theorems in real analysis", "Number theory" ]
20,446,105
https://en.wikipedia.org/wiki/Marine%20optical%20buoy
The marine optical buoy (MOBY) measures light at and very near the sea surface in a specific location over a long period of time, serving as part of an ocean color observation system. Satellites are another component of the system, providing global coverage through remote sensing; however, satellites measure light above the Earth's atmosphere, becoming subject to interference from the atmosphere itself and other light sources. The Marine Optical Buoy helps alleviate that interference and thus improve the quality of the overall ocean color observation system. Physical description MOBY is a buoy 15 meters tall floating vertically in the water with approximately 3 meters above the surface and 12 meters below. A float canister is at water level, measuring approximately 2 meters high and 1.5 meters in diameter above the water, 1 meter in diameter below the water. Above the float canister are four solar panels and an antenna column. From the bottom of the float canister, a central column descends to a 2-meter-high, 1-meter-diameter instrument canister. Along the central column are three standoff arms measuring 3 meters long, 2.5 meters long, and 2 meters long, respectively. The standoff arms can be relocated up and down the central column during maintenance. Light collectors are at the ends of the standoff arms and at the top of the antenna column. The antenna column includes Global Positioning System (GPS), very high frequency (VHF), and cellular telephone antennas. Computers, communications, and control electronics occupy the float canister. A marine optical system (MOS), a power system, and batteries occupy the instrument canister. The MOS includes spectrographs with charge-coupled device (CCD) detectors, an optical multiplexer, and fiber optic sensor lines to the light collectors. MOBY has a tether to another buoy that is moored to the sea floor at a depth of about 1200 meters. MOBY is located at , west of Lanai, in the lee of the Hawaiian Islands. Function Light from the Sun crosses space, enters and travels through the Earth's atmosphere, then enters the Earth's oceans. In the atmosphere and in the oceans, this light reflects from, refracts around, and absorbs into molecules and other objects. Some of this light leaves the water to again travel through the atmosphere and out into space, carrying the color of whatever it struck. At the sea surface, light coming down through the atmosphere enters the collector at the top of MOBY's antenna column. Each of MOBY's three submerged standoff arms has a pair of light collectors: one on top of the arm to collect downward moving light; and one underneath the arm to collect upward moving reflected light. Light entering the collectors travels through optical fibers and the optical multiplexer to the CCD detectors and spectrographs. The spectrographs record the light signals, and a computer stores the measurement data. The communications system aboard MOBY daily transmits much of the light measurement data to operators on shore. There is one Marine Optical Buoy operating in the water, and another in maintenance on shore. Every 3 to 4 months, a team exchanges the two buoys. The team calibrates each MOBY while it is in maintenance, both before deploying the buoy and after recovering it. Additionally, a team visits the MOBY in the water monthly, to clean algae, barnacles, and other organisms off the light collectors; and to generate independent comparison data using portable reference light sources. 
Each MOBY has internal reference light sources, as well, for continuous but not independent comparison. The MOBY calibration data traces to National Institute of Standards and Technology (NIST) radiometric standards directly, as opposed to using intermediate standards. Contribution MOBY has generated calibrated measurements of ocean color at the sea surface since 1996. MOBY served as the primary sea surface calibration for satellite borne sensors such as the sea-viewing wide field-of-view sensor (SeaWiFS) and the moderate-resolution imaging spectroradiometer (MODIS). MOBY has contributed to the calibration of the Ocean Color and Temperature Sensor (OCTS), the polarization detection environmental radiometer (POLDER), and the Modular Optoelectronic Scanner (IRS1-MOS). Long term sensors on the sea surface, such as MOBY, help improve the quality of the global ocean color observation system. References Further reading External links MOBY @ MLML NIST Optical Technology Division News: Measuring Global Carbon Concentrations NOAA's MOBY/MOCE project overview BOUSSOLE: Buoy for the acquisition of long-term optical time series IOCCG publications and reports MOBY on NOSA Satellite meteorology Oceanographic instrumentation Physical oceanography Radiometry
Marine optical buoy
[ "Physics", "Technology", "Engineering" ]
978
[ "Telecommunications engineering", "Oceanographic instrumentation", "Applied and interdisciplinary physics", "Measuring instruments", "Physical oceanography", "Radiometry" ]
20,453,649
https://en.wikipedia.org/wiki/Quantum%20pseudo-telepathy
Quantum pseudo-telepathy describes the use of quantum entanglement to eliminate the need for classical communications. A nonlocal game is said to display quantum pseudo-telepathy if players who can use entanglement can win it with certainty while players without it can not. The prefix pseudo refers to the fact that quantum pseudo-telepathy does not involve the exchange of information between any parties. Instead, quantum pseudo-telepathy removes the need for parties to exchange information in some circumstances. Quantum pseudo-telepathy is generally used as a thought experiment to demonstrate the non-local characteristics of quantum mechanics. However, quantum pseudo-telepathy is a real-world phenomenon which can be verified experimentally. It is thus an especially striking example of an experimental confirmation of Bell inequality violations. The magic square game A simple magic square game demonstrating nonclassical correlations was introduced by P.K. Aravind based on a series of papers by N. David Mermin and Asher Peres and Adán Cabello that developed simplifying demonstrations of Bell's theorem. The game has been reformulated to demonstrate quantum pseudo-telepathy. Game rules This is a cooperative game featuring two players, Alice and Bob, and a referee. The referee asks Alice to fill in one row, and Bob one column, of a 3×3 table with plus and minus signs. Their answers must respect the following constraints: Alice's row must contain an even number of minus signs, Bob's column must contain an odd number of minus signs, and they both must assign the same sign to the cell where the row and column intersects. If they manage they win, otherwise they lose. Alice and Bob are allowed to elaborate a strategy together, but crucially are not allowed to communicate after they know which row and column they will need to fill in (as otherwise the game would be trivial). Classical strategy It is easy to see that if Alice and Bob can come up with a classical strategy where they always win, they can represent it as a 3×3 table encoding their answers. But this is not possible, as the number of minus signs in this hypothetical table would need to be even and odd at the same time: every row must contain an even number of minus signs, making the total number of minus signs even, and every column must contain an odd number of minus signs, making the total number of minus signs odd. With a bit further analysis one can see that the best possible classical strategy can be represented by a table where each cell now contains both Alice and Bob's answers, that may differ. It is possible to make their answers equal in 8 out of 9 cells, while respecting the parity of Alice's rows and Bob's columns. This implies that if the referee asks for a row and column whose intersection is one of the cells where their answers match they win, and otherwise they lose. Under the usual assumption that the referee asks for them uniformly at random, the best classical winning probability is 8/9. Pseudo-telepathic strategies Use of quantum pseudo-telepathy would enable Alice and Bob to win the game 100% of the time without any communication once the game has begun. This requires Alice and Bob to possess two pairs of particles with entangled states. These particles must have been prepared before the start of the game. One particle of each pair is held by Alice and the other by Bob, so they each have two particles. 
When Alice and Bob learn which column and row they must fill, each uses that information to select which measurements they should make on their particles. The result of the measurements will appear to each of them to be random (and the observed partial probability distribution of either particle will be independent of the measurement performed by the other party), so no real "communication" takes place. However, the process of measuring the particles imposes sufficient structure on the joint probability distribution of the results of the measurement such that if Alice and Bob choose their actions based on the results of their measurement, then there will exist a set of strategies and measurements allowing the game to be won with probability 1. Note that Alice and Bob could be light years apart from one another, and the entangled particles will still enable them to coordinate their actions sufficiently well to win the game with certainty. Each round of this game uses up one entangled state. Playing N rounds requires that N entangled states (2N independent Bell pairs, see below) be shared in advance. This is because each round needs 2 bits of information to be measured (the third entry is determined by the first two, so measuring it isn't necessary), which destroys the entanglement. There is no way to reuse old measurements from earlier games. The trick is for Alice and Bob to share an entangled quantum state and to use specific measurements on their components of the entangled state to derive the table entries. A suitable correlated state consists of a pair of entangled Bell states: |φ⟩ = (1/√2)(|+⟩_a ⊗ |+⟩_b + |−⟩_a ⊗ |−⟩_b) ⊗ (1/√2)(|+⟩_c ⊗ |+⟩_d + |−⟩_c ⊗ |−⟩_d), here |+⟩ and |−⟩ are eigenstates of the Pauli operator Sx with eigenvalues +1 and −1, respectively, whilst the subscripts a, b, c, and d identify the components of each Bell state, with a and c going to Alice, and b and d going to Bob. The symbol ⊗ represents a tensor product. Observables for these components can be written as products of the Pauli matrices Sx = [[0, 1], [1, 0]], Sy = [[0, −i], [i, 0]] and Sz = [[1, 0], [0, −1]]. Products of these Pauli spin operators can be used to fill the 3×3 table such that each row and each column contains a mutually commuting set of observables with eigenvalues +1 and −1, and with the product of the observables in each row being the identity operator, and the product of observables in each column equating to minus the identity operator. This is a so-called Mermin–Peres magic square. One such table (with I the 2×2 identity) is: first row I⊗Sz, Sz⊗I, Sz⊗Sz; second row Sx⊗I, I⊗Sx, Sx⊗Sx; third row −Sx⊗Sz, −Sz⊗Sx, Sy⊗Sy. Effectively, while it is not possible to construct a 3×3 table with entries +1 and −1 such that the product of the elements in each row equals +1 and the product of elements in each column equals −1, it is possible to do so with the richer algebraic structure based on spin matrices. The play proceeds by having each player make one measurement on their part of the entangled state per round of play. Each of Alice's measurements will give her the values for a row, and each of Bob's measurements will give him the values for a column. It is possible to do that because all observables in a given row or column commute, so there exists a basis in which they can be measured simultaneously. For Alice's first row she needs to measure both her particles in the Sz eigenbasis, for the second row she needs to measure them in the Sx eigenbasis, and for the third row she needs to measure them in an entangled basis. 
For Bob's first column he needs to measure his first particle in the Sx eigenbasis and the second in the Sz eigenbasis, for the second column he needs to measure his first particle in the Sz eigenbasis and the second in the Sx eigenbasis, and for his third column he needs to measure both his particles in a different entangled basis, the Bell basis. As long as the table above is used, the measurement results are guaranteed to always multiply out to +1 for Alice along her row, and −1 for Bob down his column. Of course, each completely new round requires a new entangled state, as different rows and columns are not compatible with each other. Current research It has been demonstrated that the above-described game is the simplest two-player game of its type in which quantum pseudo-telepathy allows a win with probability one. Other games in which quantum pseudo-telepathy occurs have been studied, including larger magic square games, graph colouring games giving rise to the notion of quantum chromatic number, and multiplayer games involving more than two participants. In July 2022 a study reported the experimental demonstration of quantum pseudotelepathy via playing the nonlocal version of the Mermin-Peres magic square game. Greenberger–Horne–Zeilinger game The Greenberger–Horne–Zeilinger (GHZ) game is another example of quantum pseudo-telepathy. Classically, the game has 0.75 winning probability. However, with a quantum strategy, the players can achieve a winning probability of 1, meaning they always win. In the game there are three players, Alice, Bob, and Carol playing against a referee. The referee poses a binary question to each player (either 0 or 1). The three players each respond with an answer again in the form of either 0 or 1. Therefore, when the game is played the three questions of the referee x, y, z are drawn from the 4 options (0,0,0), (0,1,1), (1,0,1), (1,1,0). For example, if question triple (0,1,1) is chosen, then Alice receives bit 0, Bob receives bit 1, and Carol receives bit 1 from the referee. Based on the question bit received, Alice, Bob, and Carol each respond with an answer a, b, c, also in the form of 0 or 1. The players can formulate a strategy together prior to the start of the game. However, no communication is allowed during the game itself. The players win if a ⊕ b ⊕ c = x ∨ y ∨ z, where ∨ indicates the OR condition and ⊕ indicates summation of answers modulo 2. In other words, the sum of the three answers has to be even if x = y = z = 0. Otherwise, the sum of answers has to be odd. Classical strategy Classically, Alice, Bob, and Carol can employ a deterministic strategy that always ends up with an odd sum (e.g. Alice always outputs 1, Bob and Carol always output 0). The players win 75% of the time and only lose if the questions are (0,0,0). This is the best classical strategy: only 3 out of 4 winning conditions can be satisfied simultaneously. Let a0, a1 be Alice's responses to questions 0 and 1 respectively, b0, b1 be Bob's responses to questions 0, 1, and c0, c1 be Carol's responses to questions 0, 1. We can write all constraints that satisfy the winning conditions as a0 ⊕ b0 ⊕ c0 = 0, a0 ⊕ b1 ⊕ c1 = 1, a1 ⊕ b0 ⊕ c1 = 1, a1 ⊕ b1 ⊕ c0 = 1. Suppose that there is a classical strategy that satisfies all four winning conditions, so that all four equations hold true. Through observation, each term appears twice on the left hand side. Hence, summing the four equations, the left side sum = 0 mod 2. However, the right side sum = 1 mod 2. The contradiction shows that all four winning conditions cannot be simultaneously satisfied. Quantum strategy When Alice, Bob, and Carol decide to adopt a quantum strategy they share a tripartite entangled state (|000⟩ + |111⟩)/√2, known as the GHZ state. If question 0 is received, the player makes a measurement in the X basis {(|0⟩ + |1⟩)/√2, (|0⟩ − |1⟩)/√2}. 
If question 1 is received, the player makes a measurement in the Y basis {(|0⟩ + i|1⟩)/√2, (|0⟩ − i|1⟩)/√2}. In both cases, the players give answer 0 if the result of the measurement is the first state of the pair, and answer 1 if the result is the second state of the pair. With this strategy the players win the game with probability 1. See also Quantum game theory Quantum refereed game GHZ state – an entangled 3-particle state. EPR paradox Kochen–Specker theorem Quantum information science Qubit Tsirelson's bound Wheeler–Feynman absorber theory Notes External links Understanding and simulating quantum pseudo-telepathy Quantum Pseudo-Telepathy Concepts in physics Quantum information science Quantum measurement Thought experiments in quantum mechanics Game theory
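A brute-force check of both claims in the GHZ section, under the conventions assumed in the reconstruction above (answer 0 for the +1 eigenvalue, answer 1 for the −1 eigenvalue). It confirms the best classical win rate of 0.75 and that the GHZ state gives the right answer parity for every question triple:

```python
import itertools
import numpy as np

questions = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def wins(q, answers):
    x, y, z = q
    return (sum(answers) % 2) == (x | y | z)

# Classical bound: enumerate all deterministic strategies (a0, a1, b0, b1, c0, c1)
best = 0.0
for s in itertools.product((0, 1), repeat=6):
    a, b, c = s[0:2], s[2:4], s[4:6]
    rate = np.mean([wins(q, (a[q[0]], b[q[1]], c[q[2]])) for q in questions])
    best = max(best, rate)
print("best classical win rate:", best)          # 0.75

# Quantum strategy: GHZ state, measure X for question 0 and Y for question 1
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)                 # (|000> + |111>) / sqrt(2)

for q in questions:
    O = np.array([[1]], dtype=complex)
    for bit in q:
        O = np.kron(O, X if bit == 0 else Y)     # joint observable for this question triple
    parity = np.real(ghz.conj() @ O @ ghz)       # +1 <-> even answer sum, -1 <-> odd answer sum
    ok = np.isclose(parity, (-1) ** (q[0] | q[1] | q[2]))
    print(q, "won with certainty:", bool(ok))
```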
Quantum pseudo-telepathy
[ "Physics", "Mathematics" ]
2,311
[ "Quantum game theory", "Quantum mechanics", "Game theory", "Quantum measurement", "nan", "Thought experiments in quantum mechanics" ]
20,455,314
https://en.wikipedia.org/wiki/Coherent%20electromagnetic%20radio%20tomography
The Coherent Electromagnetic Radio Tomography (CERTO) is a radio beacon which measures ionospheric parameters in coordination with ground receivers. CERTO provides global ionospheric maps to aid prediction of radio wave scattering. CERTO was developed by the Naval Research Lab and is one of the 4 experiment packages aboard the PicoSAT satellite. CERTO provides near–real-time measurements of the ionosphere. CERTO was used for the Equatorial Vortex Experiment in 2013. Specifications NSSDC ID: 2001-043B-01A Mission: PicoSAT 9 References NASA: Picosat Experiment 2001-43B Kirtland AFB CERTO Space science experiments Ionosphere Satellite meteorology Radio technology
Coherent electromagnetic radio tomography
[ "Physics", "Astronomy", "Technology", "Engineering" ]
149
[ "Information and communications technology", "Telecommunications engineering", "Plasma physics", "Astronomy stubs", "Astrophysics", "Radio technology", "Astrophysics stubs", "Plasma physics stubs" ]
20,455,445
https://en.wikipedia.org/wiki/Ionospheric%20Occultation%20Experiment
The Ionospheric Occultation Experiment (IOX) was a remote sensing satellite package that used a dual frequency Global Positioning System (GPS) receiver to measure properties of the ionosphere. IOX demonstrated remote sensing techniques for future United States Department of Defense space systems and helped to improve operational models for ionospheric and thermospheric forecasts. IOX was developed by the United States Air Force Space and Missile Systems Center and was one of four experiment packages on PicoSAT, which was launched by an Athena rocket in September 2001. Specifications NSSDC ID: 2001-043B-02 Mission: PicoSAT 9 Further reading Official Abstract University Corporation for Atmospheric Research: Further Overview of IOX and GPS Receiver August 2002 Workshop Harvard University: IOX Abstract References NASA: Experiment Package 2001-043B-02 Space science experiments Ionosphere Satellite meteorology
Ionospheric Occultation Experiment
[ "Physics", "Astronomy" ]
179
[ "Plasma physics", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Plasma physics stubs" ]
20,455,598
https://en.wikipedia.org/wiki/Polymer%20Battery%20Experiment
The Polymer Battery Experiment (PBEX) demonstrates the charging and discharging characteristics of polymer batteries in the space environment. PBEX validates use of lightweight, flexible battery technology to decrease cost and weight for future military and commercial space systems. PBEX was developed by Johns Hopkins University and is one of four On Orbit Mission Control (OOMC) packages on PicoSat 9: Polymer Battery Experiment Ionospheric Occultation Experiment Coherent Electromagnetic Radio Tomography Optical Precision Platform Experiment Specifications NSSDC ID: 2001-043B-03 Mission: PicoSAT 9 Sources NASA: Picosat Experiment Package 2001-043B-03 Mainpage See also Batteries in space References External links NASA: PicoSAT 9 Mainpage NASA: Coherent Electromagnetic Radio Tomography Mainpage NASA: Ionospheric Occultation Experiment Mainpage Electric battery Space science experiments Chemistry experiments
Polymer Battery Experiment
[ "Chemistry" ]
180
[ "nan" ]
31,545,601
https://en.wikipedia.org/wiki/Elgiloy
Elgiloy (Co-Cr-Ni Alloy) is a "super-alloy" consisting of 39-41% cobalt, 19-21% chromium, 14-16% nickel, 11.3-20.5% iron, 6-8% molybdenum, 1.5-2.5% manganese and 0.15% max. carbon. It is used to make springs that are corrosion resistant and exhibit high strength, ductility, and good fatigue life. These same properties led to it being used for control cables in the Lockheed SR-71 Blackbird airplane, as they needed to cope with repeated stretching and contracting. Elgiloy meets specifications AMS 5876, AMS 5833, and UNS R30003. Due to its chemical composition, Elgiloy is highly resistant to sulfide stress corrosion cracking and pitting, and can operate at temperatures up to 454 °C. Elgiloy is a trade name for this super alloy. Phynox is another trade name for the same super alloy. See also List of named alloys References External links Superalloys Metallurgy
Elgiloy
[ "Chemistry", "Materials_science", "Engineering" ]
233
[ "Alloy stubs", "Metallurgy", "Materials science", "Superalloys", "Alloys", "nan" ]
31,546,299
https://en.wikipedia.org/wiki/Quantum%20finance
Quantum finance is an interdisciplinary research field, applying theories and methods developed by quantum physicists and economists in order to solve problems in finance. It is a branch of econophysics. Quantum computing is now being used for a number of financial applications, including fraud detection, stock price prediction, portfolio optimization, and product recommendation. Quantum continuous model Most quantum option pricing research typically focuses on the quantization of the classical Black–Scholes–Merton equation from the perspective of continuous equations like the Schrödinger equation. Emmanuel Haven builds on the work of Zeqian Chen and others, but considers the market from the perspective of the Schrödinger equation. The key message in Haven's work is that the Black–Scholes–Merton equation is really a special case of the Schrödinger equation where markets are assumed to be efficient. The Schrödinger-based equation that Haven derives has a parameter ħ (not to be confused with the complex conjugate of h) that represents the amount of arbitrage that is present in the market resulting from a variety of sources including non-infinitely fast price changes, non-infinitely fast information dissemination and unequal wealth among traders. Haven argues that by setting this value appropriately, a more accurate option price can be derived, because in reality, markets are not truly efficient. This is one of the reasons why it is possible that a quantum option pricing model could be more accurate than a classical one. Belal E. Baaquie has published many papers on quantum finance and even written a book that brings many of them together. Core to Baaquie's research and others like Matacz are Richard Feynman's path integrals. Baaquie applies path integrals to several exotic options and presents analytical results comparing his results to the results of Black–Scholes–Merton equation showing that they are very similar. Edward Piotrowski et al. take a different approach by changing the Black–Scholes–Merton assumption regarding the behavior of the stock underlying the option. Instead of assuming it follows a Wiener–Bachelier process, they assume that it follows an Ornstein–Uhlenbeck process. With this new assumption in place, they derive a quantum finance model as well as a European call option formula. Other models such as Hull–White and Cox–Ingersoll–Ross have successfully used the same approach in the classical setting with interest rate derivatives. Andrei Khrennikov builds on the work of Haven and others and further bolsters the idea that the market efficiency assumption made by the Black–Scholes–Merton equation may not be appropriate. To support this idea, Khrennikov builds on a framework of contextual probabilities using agents as a way of overcoming criticism of applying quantum theory to finance. Luigi Accardi and Andreas Boukas again quantize the Black–Scholes–Merton equation, but in this case, they also consider the underlying stock to have both Brownian and Poisson processes. Quantum binomial model Chen published a paper in 2001, where he presents a quantum binomial options pricing model or simply abbreviated as the quantum binomial model. Metaphorically speaking, Chen's quantum binomial options pricing model (referred to hereafter as the quantum binomial model) is to existing quantum finance models what the Cox–Ross–Rubinstein classical binomial options pricing model was to the Black–Scholes–Merton model: a discretized and simpler version of the same result. 
These simplifications make the respective theories not only easier to analyze but also easier to implement on a computer. Multi-step quantum binomial model In the multi-step model the quantum pricing formula is: , which is the equivalent of the Cox–Ross–Rubinstein binomial options pricing model formula as follows: . This shows that assuming stocks behave according to Maxwell–Boltzmann statistics, the quantum binomial model does indeed collapse to the classical binomial model. Quantum volatility is as follows as per Keith Meyer: . Bose–Einstein assumption Maxwell–Boltzmann statistics can be replaced by the quantum Bose–Einstein statistics resulting in the following option price formula: . The Bose–Einstein equation will produce option prices that will differ from those produced by the Cox–Ross–Rubinstein option pricing formula in certain circumstances. This is because the stock is being treated like a quantum boson particle instead of a classical particle. Quantum algorithm for the pricing of derivatives Patrick Rebentrost showed in 2018 that an algorithm exists for quantum computers capable of pricing financial derivatives with a square root advantage over classical methods. This development marks a shift from using quantum mechanics to gain insight into functional finance, to using quantum systems- quantum computers, to perform those calculations. In 2020 David Orrell proposed an option-pricing model based on a quantum walk which can run on a photonics device. Criticism In their review of Baaquie's work, Arioli and Valente point out that, unlike Schrödinger's equation, the Black-Scholes-Merton equation uses no imaginary numbers. Since quantum characteristics in physics like superposition and entanglement are a result of the imaginary numbers, Baaquie's numerical success must result from effects other than quantum ones. Rickles critiques Baaquies's work on economics grounds: empirical economic data are not random so they don't need a quantum randomness explanation. References Further reading Belal E. Baaquie, Quantum Finance: Path Integrals and Hamiltonians for Options and Interest Rates, Cambridge University Press (Cambridge, UK, 2004) Belal E. Baaquie, Mathematical Methods and Quantum Mathematics for Economics and Finance, Springer (Singapore, 2020) Applied and interdisciplinary physics Mathematical finance Quantum information science Schools of economic thought Statistical mechanics Interdisciplinary subfields of economics
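For orientation, a standard textbook implementation of the classical Cox–Ross–Rubinstein model that, per the text above, the quantum binomial model reduces to under Maxwell–Boltzmann statistics. The parameters are purely illustrative, and this is not Chen's quantum pricing formula (whose exact form is not reproduced here):

```python
from math import comb, exp, sqrt

def crr_call_price(S0, K, r, sigma, T, N):
    """European call under the Cox-Ross-Rubinstein binomial model."""
    dt = T / N
    u = exp(sigma * sqrt(dt))        # up factor per step
    d = 1 / u                        # down factor per step
    p = (exp(r * dt) - d) / (u - d)  # risk-neutral probability of an up move
    payoff = sum(comb(N, j) * p**j * (1 - p)**(N - j)
                 * max(S0 * u**j * d**(N - j) - K, 0.0)
                 for j in range(N + 1))
    return exp(-r * T) * payoff

# Illustrative parameters only; with many steps the price approaches the Black-Scholes value (~10.45)
print(crr_call_price(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, N=200))
```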
Quantum finance
[ "Physics", "Mathematics" ]
1,195
[ "Applied mathematics", "Statistical mechanics", "Applied and interdisciplinary physics", "Mathematical finance" ]
31,547,791
https://en.wikipedia.org/wiki/Open%20Compute%20Project
The Open Compute Project (OCP) is an organization that facilitates the sharing of data center product designs and industry best practices among companies. Founded in 2011, OCP has significantly influenced the design and operation of large-scale computing facilities worldwide. As of July 2024, over 300 companies across the world are members of OCP, including Arm, Meta, IBM, Wiwynn, Intel, Nokia, Google, Microsoft, Seagate Technology, Dell, Rackspace, Hewlett Packard Enterprise, NVIDIA, Cisco, Goldman Sachs, Fidelity, Lenovo, Accton Technology Corporation and Alibaba Group. Structure The Open Compute Project Foundation is a 501(c)(6) non-profit incorporated in the state of Delaware, United States. OCP has multiple committees, including the board of directors, advisory board and steering committee to govern its operations. As of July 2020, there are seven members who serve on the board of directors which is made up of one individual member and six organizational members. Mark Roenigk (Facebook) is the Foundation's president and chairman. Andy Bechtolsheim is the individual member. In addition to Mark Roenigk who represents Facebook, other organizations on the Open Compute board of directors include Intel (Rebecca Weekly), Microsoft (Kushagra Vaid), Google (Partha Ranganathan), and Rackspace (Jim Hawkins). A current list of members can be found on the opencompute.org website. History The Open Compute Project began in Facebook as an internal project in 2009 called "Project Freedom". The hardware designs and engineering team were led by Amir Michael (Manager, Hardware Design) and sponsored by Jonathan Heiliger (VP, Technical Operations) and Frank Frankovsky (Director, Hardware Design and Infrastructure). The three would later open source the designs of Project Freedom and co-found the Open Compute Project. The project was announced at a press event at Facebook's headquarters in Palo Alto on April 7, 2011. OCP projects The Open Compute Project Foundation maintains a number of OCP projects, such as: Server designs Two years after the Open Compute Project had started, with regards to a more modular server design, it was admitted that "the new design is still a long way from live data centers". However, some aspects published were used in Facebook's Prineville data center to improve energy efficiency, as measured by the power usage effectiveness index defined by The Green Grid. Efforts to advance server compute node designs included one for Intel processors and one for AMD processors. In 2013, Calxeda contributed a design with ARM architecture processors. Since then, several generations of OCP server designs have been deployed: Wildcat (Intel), Spitfire (AMD), Windmill (Intel E5-2600), Watermark (AMD), Winterfell (Intel E5-2600 v2) and Leopard (Intel E5-2600 v3). OCP Accelerator Module OCP Accelerator Module (OAM) is a design specification for hardware architectures that implement artificial intelligence systems that require high module-to-module bandwidth. OAM is used in some of AMD's Instinct accelerator modules. Rack and Power designs The designs for a mechanical mounting system have been published, so that open racks have the same outside width (600 mm) and depth as standard 19-inch racks, but are designed to mount wider chassis with a 537 mm width (21 inches). This allows more equipment to fit in the same volume and improves air flow. Compute chassis sizes are defined in multiples of an OpenU or OU, which is 48 mm, slightly taller than the typical 44mm rack unit. 
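The difference between the OpenU and the standard rack unit can be illustrated with a small calculation. The snippet below is a simple sketch: the 44.45 mm figure for a standard rack unit (1.75 inches) and the chosen unit counts are assumptions for the example, not values taken from an OCP specification.

```python
OPENU_MM = 48.0        # OCP OpenU height, as described above
RACK_UNIT_MM = 44.45   # standard 19-inch rack unit (1.75 in); assumed for comparison

def stack_height_mm(units, unit_height_mm):
    """Total vertical space consumed by a stack of `units` chassis slots."""
    return units * unit_height_mm

for units in (1, 2, 42):
    ou = stack_height_mm(units, OPENU_MM)
    ru = stack_height_mm(units, RACK_UNIT_MM)
    print(f"{units:>2} units: {ou:7.1f} mm in OpenU vs {ru:7.1f} mm in standard rack units")
```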
The most current base mechanical specifications were defined and published by Meta as the Open Rack V3 Base Specification in 2022, with significant contributions from Google and Rittal. At the time the base specification was released, Meta also defined in greater depth the specifications for the rectifiers and power shelf. Specifications for the power monitoring interface (PMI), a communications interface enabling upstream communications between the rectifiers and the battery backup unit (BBU), were published by Meta that same year, with Delta Electronics as the main technical contributor to the BBU spec. Since 2022, however, the power demands of AI in the data center have driven higher power requirements to serve the newer data center processors that have since been released. Meta is currently in the process of updating its Open Rack v3 rectifier, power shelf, battery backup and power management interface specifications to account for these newer, more powerful AI architectures. In May 2024, at an Open Compute regional summit, Meta and Rittal outlined their plans for development of their High Power Rack (HPR) ecosystem in conjunction with rack, power and cable partners, increasing the power capacity of the rack to 92 kilowatts or more and enabling the higher power needs of the latest generation of processors. At the same meeting, Delta Electronics and Advanced Energy presented their progress in developing new Open Compute standards specifying power shelf and rectifier designs for these HPR applications. Rittal also outlined its collaboration with Meta in designing airflow containment, busbar designs and grounding schemes to meet the new HPR requirements. Data storage Open Vault storage building blocks offer high disk densities, with 30 drives in a 2U Open Rack chassis designed for easy disk drive replacement. The 3.5-inch disks are stored in two drawers, five across and three deep in each drawer, with connections via serial attached SCSI. This storage is also called Knox, and there is also a cold storage variant in which idle disks power down to reduce energy consumption. Another design concept was contributed in 2012 by Hyve Solutions, a division of Synnex. At the OCP Summit 2016, Facebook, together with Taiwanese ODM Wistron's spin-off Wiwynn, introduced Lightning, a flexible NVMe JBOF (just a bunch of flash) based on the existing Open Vault (Knox) design. Energy efficient data centers The OCP has published data center designs for energy efficiency. These include power distribution at 277 VAC, which eliminates one transformer stage in typical data centers, a single-voltage (12.5 VDC) power supply designed to work with 277 VAC input, and 48 VDC battery backup. Open networking switches On May 8, 2013, an effort to define an open network switch was announced. The plan was to allow Facebook to load its own operating system software onto the switch. Press reports predicted that more expensive and higher-performance switches would continue to be popular, while less expensive products treated more like a commodity (using the buzzword "top-of-rack") might adopt the proposal. The first attempt at an open networking switch by Facebook was designed together with Taiwanese ODM Accton using the Broadcom Trident II chip and is called Wedge; the Linux OS that it runs is called FBOSS. Later switch contributions include "6-pack" and Wedge-100, based on Broadcom Tomahawk chips. 
Similar switch hardware designs have been contributed by Accton Technology Corporation (and its Edgecore Networks subsidiary), Mellanox Technologies, Interface Masters Technologies and Agema Systems. These switches are capable of running Open Network Install Environment (ONIE)-compatible network operating systems such as Cumulus Linux, Switch Light OS by Big Switch Networks, or PICOS by Pica8. A similar project for a custom switch for the Google platform had been rumored, and evolved to use the OpenFlow protocol. Servers Sub-project for Mezzanine (NIC) The OCP NIC 3.0 specification 1v00 was released in late 2019, establishing three form factors: SFF, TSFF, and LFF. Litigation In March 2015, BladeRoom Group Limited and Bripco (UK) Limited sued Facebook, Emerson Electric Co. and others, alleging that Facebook had disclosed BladeRoom and Bripco's trade secrets for prefabricated data centers in the Open Compute Project. Facebook petitioned for the lawsuit to be dismissed, but this was rejected in 2017. A confidential mid-trial settlement was agreed in April 2018. See also References External links Data Centers Prineville Data Center Forest City Data Center Altoona Data Center Luleå Data Center (Sweden) Fort Worth Data Center Clonee Data Center (Ireland) Videos Hot Chips 23, 2011 2.5 Hour Tutorial Facebook V1 Open Compute Server Open Compute starts at 5:40 Case Studies Game publisher builds a cost-efficient, scalable data center and reduces operational complexities with OCP. Open-source hardware Facebook 2011 software Data centers Data management Servers (computing) Distributed data storage Distributed data storage systems Applications of distributed computing Cloud storage Computer networking Science and technology in the San Francisco Bay Area
Open Compute Project
[ "Technology", "Engineering" ]
1,796
[ "Computer networking", "Computer engineering", "Data centers", "Data management", "Computer science", "Data", "Computers" ]
31,548,061
https://en.wikipedia.org/wiki/Affinity%20magnetic%20separation
Affinity magnetic separation (AMS) is a laboratory tool that can efficiently isolate bacterial cells out of body fluid or cultured cells. It can also be used as a method of quantifying the pathogenicity of food, blood or feces. Another laboratory separation tool is immunomagnetic separation (IMS), which is more suitable for the isolation of eukaryotic cells. Technique Host recognition by bacteriophages occurs via bacteria-binding proteins that have strong binding affinities to specific protein or carbohydrate structures on the surface of the bacterial host. Bacteria-binding proteins derived from bacteriophages and coated onto paramagnetic beads will bind to specific cell components present on the surface of the host, thus capturing the cells and facilitating the concentration of these bead-attached cells. The concentration is achieved by a magnet placed on the side of the test tube, which draws the beads to it. Due to the phage-ligand technology, AMS is superior to antibody-based immunomagnetic separation (IMS) for sorting bacterial cells. References Laboratory techniques Molecular biology
Affinity magnetic separation
[ "Chemistry", "Biology" ]
229
[ "Biochemistry", "nan", "Molecular biology" ]
31,550,360
https://en.wikipedia.org/wiki/PhEVER
PhEVER is a database of homologous gene families between viral sequences and sequences from cellular organisms. See also Phylogenetics References External links https://web.archive.org/web/20101105222933/http://pbil.univ-lyon1.fr/databases/phever/ Genetics databases Phylogenetics Virology
PhEVER
[ "Biology" ]
76
[ "Bioinformatics", "Phylogenetics", "Taxonomy (biology)" ]
30,511,252
https://en.wikipedia.org/wiki/Nanocrystal%20solar%20cell
Nanocrystal solar cells are solar cells based on a substrate with a coating of nanocrystals. The nanocrystals are typically based on silicon, CdTe or CIGS and the substrates are generally silicon or various organic conductors. Quantum dot solar cells are a variant of this approach which take advantage of quantum mechanical effects to extract further performance. Dye-sensitized solar cells are another related approach, but in this case the nano-structuring is a part of the substrate. Previous fabrication methods relied on expensive molecular beam epitaxy processes, but colloidal synthesis allows for cheaper manufacturing. A thin film of nanocrystals is obtained by a process known as "spin-coating". This involves placing an amount of the quantum dot solution onto a flat substrate, which is then rotated very quickly. The solution spreads out uniformly, and the substrate is spun until the required thickness is achieved. Quantum dot based photovoltaic cells based on dye-sensitized colloidal TiO2 films were investigated in 1991 and were found to exhibit promising efficiency of converting incident light energy to electrical energy, and to be incredibly encouraging due to the low cost of materials used. A single-nanocrystal (channel) architecture in which an array of single particles between the electrodes, each separated by ~1 exciton diffusion length, was proposed to improve the device efficiency and research on this type of solar cell is being conducted by groups at Stanford, Berkeley and the University of Tokyo. Although research is still in its infancy, nanocrystal photovoltaics may offer advantages such as flexibility (quantum dot-polymer composite photovoltaics) lower costs, clean power generation and an efficiency of 65%, compared to around 20 to 25% for first-generation, crystalline silicon-based photovoltaics in the future. It is argued that many measurements of the efficiency of the nanocrystal solar cell are incorrect and that nanocrystal solar cells are not suitable for large scale manufacturing. Recent research has experimented with lead selenide (PbSe) semiconductor, as well as with cadmium telluride photovoltaics (CdTe), which has already been well established in the production of second-generation thin film solar cells. Other materials are being researched as well. Other third generation solar cells Photoelectrochemical cell Polymer solar cell Perovskite solar cell See also Nanocrystalline silicon Nanoparticle References External links Science News Online, Quantum-Dots Leap: Tapping tiny crystals' inexplicable light-harvesting talent, June 3, 2006. InformationWeek, Nanocrystal Discovery Has Solar Cell Potential, January 6, 2006. Berkeley Lab, Berkeley Lab Air-stable Inorganic Nanocrystal Solar Cells Processed from Solution, 2005. ScienceDaily, Sunny Future For Nanocrystal Solar Cells, October 23, 2005. Solar cells Nanomaterials Quantum electronics
Nanocrystal solar cell
[ "Physics", "Materials_science" ]
599
[ "Quantum electronics", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Nanomaterials" ]
30,511,786
https://en.wikipedia.org/wiki/Coupled%20mode%20theory
Coupled mode theory (CMT) is a perturbational approach for analyzing the coupling of vibrational systems (mechanical, optical, electrical, etc.) in space or in time. Coupled mode theory allows a wide range of devices and systems to be modeled as one or more coupled resonators. In optics, such systems include laser cavities, photonic crystal slabs, metamaterials, and ring resonators. History Coupled mode theory first arose in the 1950s in the works of Miller on microwave transmission lines, Pierce on electron beams, and Gould on backward wave oscillators. This put in place the mathematical foundations for the modern formulation expressed by H. A. Haus et al. for optical waveguides. In the late 1990s and early 2000s, the field of nanophotonics revitalized interest in coupled mode theory. Coupled mode theory has been used to account for the Fano resonances in photonic crystal slabs and has also been modified to account for optical resonators with non-orthogonal modes. Since the late 2000s, researchers have capitalized on coupled mode theory to explain the concept of magnetically coupled resonators. Overview The oscillatory systems to which coupled mode theory applies are described by second-order partial differential equations. CMT allows the second-order partial differential equation to be expressed as one or more coupled first-order ordinary differential equations. The following assumptions are generally made with CMT: Linearity Time-reversal symmetry Time-invariance Weak mode coupling (small perturbation of uncoupled modes) Energy conservation Formulation The formulation of coupled mode theory is based on the expansion of the solution to an electromagnetic problem into modes. Most of the time eigenmodes are taken in order to form a complete basis. The choice of basis and the adoption of certain hypotheses, like the parabolic approximation, differ from formulation to formulation. One classification of the different formulations is as follows: The choice of starting differential equation: some of the coupled mode theories are derived directly from the Maxwell differential equations, although others use simplifications in order to obtain a Helmholtz equation. The choice of principle used to derive the equations of the CMT: either the reciprocity theorem or the variational principle has been used. The choice of orthogonality product used to establish the eigenmode basis: some references use the unconjugated form and others the complex-conjugated form. Finally, the choice of the form of the equation, either vectorial or scalar. When n modes of an electromagnetic wave propagate through a medium in the direction z without loss, the power transported by each mode at a given frequency ω is described by a modal power Pm = Nm|am|², where Nm is the norm of the mth mode and am is the modal amplitude. See also Eigenmode expansion References External links WMM mode solver manual on coupled mode theory Computational electromagnetics Numerical differential equations Photonics
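The reduction of a second-order problem to coupled first-order equations can be illustrated with the standard temporal coupled-mode equations for two lossless resonators. The sketch below is a generic illustration rather than an implementation from the references above: the frequencies, coupling rate and integrator are arbitrary choices for the example. It integrates da1/dt = −iω1·a1 − iκ·a2 and da2/dt = −iω2·a2 − iκ·a1 and checks that the total energy |a1|² + |a2|² stays constant, in line with the energy-conservation assumption listed above.

```python
import numpy as np

# Temporal coupled-mode equations for two lossless coupled resonators:
#   da1/dt = -i*w1*a1 - i*k*a2
#   da2/dt = -i*w2*a2 - i*k*a1
w1, w2 = 1.00, 1.05   # resonance frequencies (arbitrary units, assumed)
k = 0.02              # mutual coupling rate (assumed)

def derivs(a):
    a1, a2 = a
    return np.array([-1j * w1 * a1 - 1j * k * a2,
                     -1j * w2 * a2 - 1j * k * a1])

# Fourth-order Runge-Kutta integration of the two first-order ODEs.
a = np.array([1.0 + 0j, 0.0 + 0j])   # all energy initially in mode 1
dt, steps = 0.01, 20000
for _ in range(steps):
    k1 = derivs(a)
    k2 = derivs(a + 0.5 * dt * k1)
    k3 = derivs(a + 0.5 * dt * k2)
    k4 = derivs(a + dt * k3)
    a = a + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# For a lossless system the total energy should remain (numerically) constant,
# while energy beats back and forth between the two modes.
print("final mode energies:", abs(a[0])**2, abs(a[1])**2)
print("total energy:", abs(a[0])**2 + abs(a[1])**2)
```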
Coupled mode theory
[ "Physics" ]
604
[ "Computational electromagnetics", "Computational physics" ]
3,840,763
https://en.wikipedia.org/wiki/Christofilos%20effect
The Christofilos effect, sometimes known as the Argus effect, refers to the entrapment of electrons from nuclear weapons in the Earth's magnetic field. It was first predicted in 1957 by Nicholas Christofilos, who suggested the effect had defensive potential in a nuclear war, with so many beta particles becoming trapped that warheads flying through the region would experience huge electrical currents that would destroy their trigger electronics. The concept that a few friendly warheads could disrupt an enemy attack was so promising that a series of new nuclear tests was rushed into the US schedule before a testing moratorium came into effect in late 1958. These tests demonstrated that the effect was not nearly as strong as predicted, and not enough to damage a warhead. However, the effect is strong enough to be used to black out radar systems and disable satellites. Concept Electrons from nuclear explosions Among the types of energy released by a nuclear explosion are a large number of beta particles, or high energy electrons. These are primarily the result of beta decay within the debris from the fission portions of the bomb, which, in most designs, represents about 50% of the total yield. Because electrons are electrically charged, they induce electrical currents in surrounding atoms as they pass them at high speed. This causes the atoms to ionize while also causing the beta particles to slow down. In the lower atmosphere, this reaction is so powerful that the beta particles slow to thermal speeds within a few tens of meters at most. This is well within a typical nuclear explosion fireball, so the effect is too small to be seen. At high altitudes, the much less-dense atmosphere means the electrons are free to travel long distances. They have enough energy that they will not be recaptured by the proton that is created in the beta decay, so they can, in theory, last indefinitely. Mirror effect In 1951, as part of the first wave of research into fusion energy, University of California Radiation Laboratory at Livermore ("Livermore") researcher Richard F. Post introduced the magnetic mirror concept. The mirror is a deceptively simple device, consisting largely of a cylindrical vacuum chamber that holds the fusion fuel and an electromagnet wound around it to form a modified solenoid. A solenoid normally generates a linear magnetic field along the center of its axis, in this case down the middle of the vacuum chamber. When charged particles are placed in a magnetic field, they orbit around the field lines, which, in this case, stops them from moving sideways and hitting the walls of the chamber. In a normal solenoid, they would still be free to move along the lines and thus escape out the ends. Post's insight was to wind the electromagnet in such a way that the field was stronger at the ends than in the center of the chamber. As particles flow towards the ends, these stronger fields force the lines together, and the resulting curved field causes particles to "reflect" back, thus leading to the name mirror. In a perfect magnetic mirror, the particles of fuel would bounce back and forth, never reaching the ends nor touching the sides of the cylinder. However, even in theory, no mirror is perfect; there is always a population of particles with the right energy and trajectory that allow them to flow out of the ends through the "loss cone". 
This makes magnetic mirrors inherently leaky systems, although initial calculations suggested the rate of leakage was low enough that one could still use it to produce a fusion reactor. Christofilos effect The shape of the Earth's magnetic field, or geomagnetic field, is similar to that of a magnetic mirror. The field balloons outward over the equator, and then necks down as it approaches the poles. Such a field would thus reflect charged particles in the same fashion as Post's mirrors. This was not a new revelation; it was already long understood to be the underlying basis for the formation of aurora. In the case of the aurora, particles of the solar wind begin orbiting around the field lines, bouncing back and forth between the poles. With every pass, some of the particles leak past the mirror points and interact with the atmosphere, ionizing the air and causing the light. Electrons released by fission events are generally in the range of . Initially, these would be subject to mirroring high in the atmosphere, where they are unlikely to react with atmospheric atoms and might reflect back and forth for some time. When one considers a complete "orbit" from north pole to south and back again, the electrons naturally spend more time in the mirror regions because this is where they are slowing down and reversing. This leads to increased electron density at the mirror points. The magnetic field created by the moving electrons in this region interacts with the geomagnetic field in a way that causes the mirror points to be forced down into the atmosphere. Here, the electrons undergo more interactions as the density of the atmosphere increases rapidly. These interactions slow the electrons so they produce less magnetic field, resulting in an equilibrium point being reached in the upper atmosphere about in altitude. Using this as the average altitude as the basis for the air density calculation allowed the interaction rate with the atmosphere to be calculated. Running the numbers, it appeared that the average lifetime of an electron would be of the order of 2.8 days. Example As an illustration, Christofilos considered the explosion of a bomb. This would produce 10 fission events, which in turn produce four electrons per fission. For the mirror points being considered, almost any beta particle traveling roughly upward or downward would be captured, which he estimated to be about half of them, leaving 2×10 electrons trapped in the field. Because of the shape of the Earth's field, and the results of the right-hand rule, the electrons would drift eastward and eventually create a shell around the entire Earth. Assuming the electrons were evenly spread, a density of 0.2 electrons per cubic centimeter would be produced. Because the electrons are moving rapidly, any object within the field would be subjected to impacts of about 1.5×10 electrons per second per square centimeter. These impacts cause the electrons to slow down, which, through bremsstrahlung, releases radiation into the object. The rate of bremsstrahlung depends on the relative atomic mass, or Z, of the material. For an object with an average Z of 10, the resulting flux is about 100 roentgen/hour, compared to the median lethal dose of about 450. Christofilos noted that this would be a significant risk to space travelers and their electronic equipment. As reentry vehicles (RVs) from ICBMs approach their targets, they travel at about , or around . 
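The flux figure in the example above follows from the standard kinetic-theory relation for an isotropic particle population, flux = n·v/4. The short Python sketch below is an independent back-of-the-envelope check rather than Christofilos' own calculation: it assumes the trapped electrons move at essentially the speed of light and that the population is isotropic.

```python
# Back-of-the-envelope check of the trapped-electron flux quoted in the example.
# Assumptions (not from the original source): relativistic electrons, so the
# speed is taken as the speed of light, and the trapped population is isotropic.

C_CM_PER_S = 3.0e10          # speed of light in cm/s
density_per_cm3 = 0.2        # trapped-electron density assumed in the example

# For an isotropic gas, the one-sided flux through a surface is n * v / 4.
flux = density_per_cm3 * C_CM_PER_S / 4.0
print(f"electron flux ~ {flux:.1e} electrons per cm^2 per second")
# Gives about 1.5e9 electrons per cm^2 per second under these assumptions.
```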
An RV traveling through the mirror layer, where the electrons are at their densest, would thus be in the midst of the electric field for about ten seconds. Because of a warhead's high speed, the apparent voltage spike would induce an enormous current in any of its metal components. This might be so high as to melt the airframe, but more realistically, could destroy the trigger or guidance mechanisms. The density of the field is greatest at the mirror points, of which there are always two for a given explosion, the so-called magnetic conjugates. The explosion can take place at either of these two points, and the magnetic field will cause them to concentrate at the other point as well. Christofilos noted that the conjugate point for most of the continental United States is in the South Pacific, far west of Chile, where such explosions would not be noticed. Thus, if one were to explode a series of such bombs in these locations, a massive radiation belt would form over the US, which might disable the warheads of a Soviet attack. Of additional interest to military planners was the possibility of using this effect as an offensive weapon. In the case of an attack by US forces on the Soviet Union, the southern conjugate locations are generally in the Indian Ocean, where they would not be seen by Soviet early warning radar. A series of explosions would cause a massive radar blackout over Russia, degrading its anti-ballistic missile (ABM) system without warning. Since these effects were expected to endure for up to five minutes, about the amount of time that a line-of-sight radar in Russia would have to see the warheads, careful timing of the attack could render the ABM system useless. History Background Christofilos began his career in physics while reading journal articles at an elevator company during the Axis occupation of Greece when he had little else to do. In the post-war era, he started an elevator repair service, during which time he began to develop the concept today known as strong focusing, a key development in the history of particle accelerators. In 1949, he sent a letter describing the idea to the Berkeley Lab but they rejected it after finding a minor error. In 1952, the idea was developed independently at the Brookhaven National Laboratory, which published on the topic. Convinced they had stolen the idea, Christofilos traveled to the US where he managed to win a job at Brookhaven. Christofilos soon became more interested in nuclear fusion efforts than particle accelerator design. At the time there were three primary designs being actively worked on in the US program, the magnetic mirror, the stellarator, and the z-pinch. The mirror was often viewed unfavorably due to its inherent leakiness, a side effect of its open field lines. Christofilos developed a new concept to address this problem, known as the Astron. This consisted of a mirror with an associated particle accelerator that injected electrons outside the traditional mirror area. Their rapid movement formed a second magnetic field which mixed with that of the electromagnet and caused the resulting net field to "close", fixing the mirror's biggest problem. Sputnik and Explorer During the same period, plans were being made by the US to test the presence of the expected charged layer directly using the Explorer 1 satellite as part of the International Geophysical Year (IGY). Before Explorer launched, the Soviets surprised everyone by launching Sputnik 1 in October 1957. 
This event caused near-panic in US defense circles, where many concluded the Soviets had achieved an insurmountable scientific lead. Among those worried about the Soviet advances was Christofilos, who published his idea in an internal memo that same month. When Explorer launched in January 1958, it confirmed the existence of what became known as the Van Allen radiation belts. This led to new panic within the defense establishment when some concluded that the Van Allen belts were due not to the Sun's particles, but to secret Soviet high-altitude nuclear tests of the Christofilos concept. Planning begins Christofilos' idea immediately sparked intense interest; if the concept worked in practice, the US would have a "magic bullet" that might render the Soviet ICBM fleet useless. In February 1958, James Rhyne Killian, chairman of the recently formed President's Science Advisory Committee (PSAC), convened a working group at Livermore to explore the concept. The group agreed that the basic concept was sound, but many practical issues could only be solved by direct testing with explosions at high altitudes. By that time, planning for the 1958 nuclear testing series, Operation Hardtack I, was already nearing completion. This included several high-altitude explosions launched over the South Pacific testing range. As these were relatively close to the equator, the proper injection point for the magnetic field was at a relatively high altitude, far higher than that of Shot Teak. This would limit the usefulness of these explosions for testing the Christofilos effect. A new series of explosions to test the effect would be needed. Adding to the urgency of the planning process were the ongoing negotiations in Geneva between the US and the USSR to arrange what eventually became the Partial Nuclear Test Ban Treaty. At the time, it appeared that a test ban might come into place in the northern-hemisphere fall of 1958. The Soviets would react negatively if the US began high-altitude tests while negotiations were taking place. The planners were given the task of completing the tests by 1 September 1958. The launch of Sputnik also resulted in the formation of the Advanced Research Projects Agency (ARPA) in February 1958, initially with the mission of centralizing the various US missile development projects. Its charter was soon expanded to consider the topic of defense in general, especially defense against the missile attack that Sputnik made clear was a real possibility. ARPA's scientific director, Herbert York, formed a blue-ribbon committee under the name "Project 137" to "identify problems not now receiving adequate attention". The twenty-two-man committee, a who's who of the physics world, was chaired by John Archibald Wheeler, who popularized the term black hole. York briefed President Eisenhower on the Christofilos concept and, on 6 March 1958, received a go-ahead to run a separate test series. Intense planning was carried out over the next two months. Christofilos did not have Q clearance and could not be part of the planning. The Project 137 group nevertheless arranged for Christofilos to meet with them at Fort McNair on 14 July 1958 for a discussion of the plans. Testing To achieve the September deadline, weapons and equipment would need to be drawn as much as possible from existing stocks. This resulted in the only suitable launcher being the Lockheed X-17, which was under production for reentry testing and was available in some quantity. 
Unfortunately, the X-17's limited altitude capability meant it could not reach the required altitude to hit mirror points in the South Pacific over the testing grounds. The only area that had a field low enough for the X-17 to hit easily was the South Atlantic Anomaly, where the Van Allen Belt descends as low as . Planning for tests normally took a year or more, which is why tests normally occurred in closely spaced "series". In contrast, Operation Argus went from initial approval by the President on 6 March 1958 to actual tests in only five months. Among other firsts, the tests were to be kept entirely secret from start until after completion, were the first ballistic missile tests from a ship at sea, and were the only atmospheric nuclear test operation in the Atlantic Ocean. The final plans were approved by the President on 1 May 1958. To measure the effect, Explorer IV and Explorer V were launched in August, although only IV reached orbit. Operation Argus was carried out in late August and early September 1958. Three low-yield atomic bombs were detonated over the south Atlantic at a height of . The bombs released charged particles that behaved exactly as Christofilos had predicted, being trapped along the lines of force. Those that managed to get far enough into the atmosphere to the north and south set up a small magnetic storm. Outcome These tests demonstrated that using the effect as a defensive system would not work. However, exact details on the lack of effectiveness remain absent from available sources. Most references state that the effect did not last long enough to be useful, with an ARPA report concluding that it "dissipated rapidly" and would thus have little value as an anti-warhead system. However, other sources state that the effect persisted for over six days on the last test. Public release Late in June 1958, Hanson Baldwin, a Pulitzer Prize-winning military correspondent at The New York Times, received tantalizing hints of a major US military operation. It is now believed that this leaked from the University of Iowa lab run by James Van Allen, which was working with ARPA on Argus throughout this period. Baldwin asked his science reporter colleague Walter Sullivan about the matter. Sullivan spoke to Richard Porter, chair of the IGY Panel on Rockets and Satellites, who was "horrified" by how much information Baldwin had found out. An hour later, Sullivan received a call from ARPA, asking him to hold the story until the tests were complete. By the end of the year, with the tests over and the concept largely abandoned, Christofilos was able to talk about the concept openly at an October 1958 meeting of the American Physical Society, leaving out only the detail that an atomic bomb would be used to create the radiation. At the December meeting of the American Association for the Advancement of Science, Sullivan heard that a paper on the topic, titled "Artificial Modification of the Earth's Radiation Belt", was being readied for publication. Sullivan and Baldwin realized they were about to lose their "scoop", so Sullivan wrote to York asking for clearance, as it was clear other reporters were learning of the tests. York discussed the matter with James Killian, chair of the President's Science Advisory Committee (PSAC), who added that Van Allen was also pressing hard for publication rights. 
Sullivan later drove home his point about the information coming out anyway by calling the IGY monitoring stations and asking about records for aurora during August and September. He was told there was a "rather remarkable event" that did not correspond to any known solar storm. He sent another letter to York, noting that the hints about the project were already public and were simply waiting for someone to connect the dots. York called him to the Pentagon and asked him again to hold off. Sullivan concluded this was no longer due to military necessity but was political; the test ban negotiations were ongoing and the sudden release of news the US had performed new tests in space would be a serious problem. Sullivan and Baldwin once again sat on the story. In February 1959, Killian was in New York giving a speech. Sullivan attended and at the end handed him a letter. The two sat down and Killian read it. The letter outlined the fact that an increasing amount of information was leaking about the tests and that the Times had been patiently waiting on approval from the Pentagon that appeared not to be forthcoming. Meanwhile, scientists working on the project were becoming increasingly vocal about the publication of the data, and a late February meeting resulted in arguments. At a PSAC meeting, Killian finally agreed to release the data at the April meeting of the National Academy of Sciences, but still did not tell the Times. Baldwin and Sullivan had had enough; they went to the top of the Times hierarchy, publisher Arthur Hays Sulzberger, president Orvil E. Dryfoos, and managing editor Turner Catledge, who approved publication. On 18 March 1959, Sullivan tried to call Killian but reached his assistant instead, while Baldwin spoke with ARPA director Roy Johnson. The two wrote the story that night, waiting for the phone call that would again kill the story. The phone never rang and the story was published the next day. Ongoing concerns In 2008, science writer Mark Wolverton noted ongoing concerns about the use of the Christofilos effect as a way to disable satellites. See also Operation Argus Operation Fishbowl Outer Space Treaty Soviet Project K nuclear tests Starfish Prime Van Allen radiation belt List of artificial radiation belts Nicholas Christofilos Notes References Citations General references Astroparticle physics Exoatmospheric nuclear weapons testing Anti-ballistic weapons
Christofilos effect
[ "Physics" ]
3,890
[ "Astroparticle physics", "Particle physics", "Astrophysics" ]
3,840,994
https://en.wikipedia.org/wiki/Bent%27s%20rule
In chemistry, Bent's rule describes and explains the relationship between the orbital hybridization and the electronegativities of substituents. The rule was stated by Henry A. Bent as follows: Valence bond theory gives a good approximation of molecular structure. Bent's rule addresses disparities between the observed and idealized geometries. According to Bent's rule, a central atom bonded to multiple groups will rehybridize so that orbitals with more s character are directed towards electropositive groups, and orbitals with more p character will be directed towards groups that are more electronegative. By removing the assumption that all hybrid orbitals are equivalent, Bent's rule leads to improved predictions of molecular geometry and bond strengths. Bent's rule can be justified through the relative energy levels of s and p orbitals. Bent's rule represents a modification of VSEPR theory for molecules of lower than ideal symmetry. For bonds with the larger atoms from the lower periods, trends in orbital hybridization depend strongly on both electronegativity and orbital size. History In the early 1930s, shortly after much of the initial development of quantum mechanics, those theories began to be applied towards molecular structure by Pauling, Slater, Coulson, and others. In particular, Pauling introduced the concept of hybridisation, where atomic s and p orbitals are combined to give hybrid sp, sp2, and sp3 orbitals. Hybrid orbitals proved powerful in explaining the molecular geometries of simple molecules like methane, which is tetrahedral with an sp3 carbon atom and bond angles of 109.5° between the four equivalent C-H bonds. However, slight deviations from these ideal geometries became apparent in the 1940s. A particularly well known example is water, where the angle between the two O-H bonds is only 104.5°. To explain such discrepancies, it was proposed that hybridisation can result in orbitals with unequal s and p character. A. D. Walsh described in 1947 a relationship between the electronegativity of groups bonded to carbon and the hybridisation of said carbon atom. Finally, in 1961, Bent published a major review of the literature that related molecular structure, central atom hybridisation, and substituent electronegativities and it is for this work that Bent's rule takes its name. Bent's original paper considers the group electronegativity of the methyl group to be less than that of the hydrogen atom because methyl substitution reduces the acid dissociation constants of formic acid and of acetic acid. Nonbonding orbitals Bent's rule can be extended to rationalize the hybridization of nonbonding orbitals as well. On the one hand, a lone pair (an occupied nonbonding orbital) can be thought of as the limiting case of an electropositive substituent, with electron density completely polarized towards the central atom. Bent's rule predicts that, in order to stabilize the unshared, closely held nonbonding electrons, lone pair orbitals should take on high s character. On the other hand, an unoccupied (empty) nonbonding orbital can be thought of as the limiting case of an electronegative substituent, with electron density completely polarized towards the ligand and away from the central atom. Bent's rule predicts that, in order to leave as much s character as possible for the remaining occupied orbitals, unoccupied nonbonding orbitals should maximize p character. 
Experimentally, the first conclusion is in line with the reduced bond angles of molecules with lone pairs like water or ammonia compared to methane, while the second conclusion accords with the planar structure of molecules with unoccupied nonbonding orbitals, like monomeric borane and carbenium ions. Consequences Bent's rule can be used to explain trends in both molecular structure and reactivity. After determining how the hybridisation of the central atom should affect a particular property, the electronegativity of substituents can be examined to see if Bent's rule holds. Bond angles: VSEPR Theory and Bent's Rule Valence bond theory predicts that methane is tetrahedral and that ethylene is planar. In water and ammonia, the situation is more complicated because the bond angles are 104.5° and 107° respectively, which are less than the expected tetrahedral angle of 109.5°. One rationale for those deviations is VSEPR theory, where valence electrons are assumed to lie in localized regions and lone pairs are assumed to repel each other to a greater extent than bonding pairs. Bent's rule provides an alternative explanation. Valence shell electron pair repulsion (VSEPR) theory predicts molecular geometry. VSEPR predicts molecular geometry to take the configuration that allows electron pairs to be most spread out, since this maximization of electron separation achieves the most stable electron distribution. As a result, VSEPR theory can predict bond angles with reasonable accuracy. According to VSEPR theory, the geometry of a molecule can be predicted by counting how many electron pairs and atoms are connected to a central atom. Bent's rule states "[A]tomic s character concentrates in orbitals directed toward electropositive substituents". Bent's rule implies that bond angles will deviate from the bond angle predicted by VSEPR theory; the relative electronegativities of the atoms surrounding the central atom will affect the molecular geometry. VSEPR theory suggests a way to predict molecular shape using simple rules. However, VSEPR theory predicts observed molecular bond angles only approximately; Bent's rule is more accurate, and it has been shown that Bent's rule agrees with quantum mechanical computations when describing molecular geometry. Experimentally determined bond angles demonstrate the differences between the idealized VSEPR angles and the real-world angles. According to VSEPR theory, dimethyl ether, methanol, water and oxygen difluoride should all have a bond angle of 109.5°. Using VSEPR theory, all these molecules should have the same bond angle because they have the same "bent" shape. Yet the bond angles of all these molecules clearly deviate from their ideal geometries in different ways. Bent's rule can help elucidate these apparent discrepancies. Orbitals directed towards electronegative substituents have more p character. Bond angle has a proportional relationship with s character and an inverse relationship with p character. Thus, as substituents become more electronegative, the bond angle of the molecule should decrease. Dimethyl ether, methanol, water and oxygen difluoride follow this trend as expected, as their experimental bond angles show. Two methyl groups are the substituents attached to the central oxygen in dimethyl ether. Because the two methyl groups are electropositive, greater s character will be observed and the real bond angle is larger than the ideal bond angle of 109.5°. 
Methanol has one electropositive methyl substituent and one electronegative hydrogen substituent. Hence, less s character is observed than dimethyl ether. When there are two hydrogen substituent groups, the angle is decreased even further with the increase in electronegativity and p character. Finally, when both hydrogen substituents are replaced with fluorine in oxygen difluoride, there is another decrease in the bond angle. Fluorine is highly electronegative, resulting in this significant decrease in bond angle. In predicting the bond angle of water, Bent's rule suggests that hybrid orbitals with more s character should be directed towards the lone pairs, while that leaves orbitals with more p character directed towards the hydrogens, resulting in deviation from idealized O(sp3) hybrid orbitals with 25% s character and 75% p character. In the case of water, with its 104.5° HOH angle, the OH bonding orbitals are constructed from O(~sp4.0) orbitals (~20% s, ~80% p), while the lone pairs consist of O(~sp2.3) orbitals (~30% s, ~70% p). As discussed in the justification above, the lone pairs behave as very electropositive substituents and have excess s character. As a result, the bonding electrons have increased p character. This increased p character in those orbitals decreases the bond angle between them to less than the tetrahedral 109.5°. The same logic can be applied to ammonia (107.0° HNH bond angle, with three N(~sp3.4 or 23% s) bonding orbitals and one N(~sp2.1 or 32% s) lone pair), the other canonical example of this phenomenon. The same trend holds for nitrogen containing compounds. Against the expectations of VSEPR theory but consistent with Bent's rule, the bond angles of ammonia (NH3) and nitrogen trifluoride (NF3) are 107° and 102°, respectively. Unlike VSEPR theory, whose theoretical foundations now appear shaky, Bent's rule is still considered to be an important principle in modern treatments of bonding. For instance, a modification of this analysis is still viable, even if the lone pairs of H2O are considered to be inequivalent by virtue of their symmetry (i.e., only s, and in-plane px and py oxygen AOs are hybridized to form the two O-H bonding orbitals σO-H and lone pair nO(σ), while pz becomes an inequivalent pure p-character lone pair nO(π)), as in the case of lone pairs emerging from natural bond orbital methods. For a tetrahedral molecule such as difluoromethane with two types of atom bonded to the central atom, the C-F bond to the more electronegative substituent (F) will involve a carbon orbital with less s character than the C-H bond, so that the angle between the C-F bonds is less than the tetrahedral bond angle of 109.5°. Trigonal bipyramid molecules have both with axial and equatorial positions. If there are two types of substituents, the more electronegative substituent will prefer the axial position as there are smaller bond angles between axial and electronegative substituents than between two equatorial substituents. Bond lengths Similarly to bond angles, the hybridisation of an atom can be related to the lengths of the bonds it forms. As bonding orbitals increase in s character, the σ bond length decreases. By adding electronegative substituents and changing the hybridisation of the central atoms, bond lengths can be manipulated. If a molecule contains a structure X-A--Y, replacement of the substituent X by a more electronegative atom changes the hybridization of central atom A and shortens the adjacent A--Y bond. 
Bonds between elements of disparate electronegativities will be polar and the electron density in such bonds will be shifted towards the more electronegative element. Applying this idea to the molecule fluoromethane illustrates the power of Bent's rule. Because fluorine is far more electronegative than hydrogen, the carbon directs hybrid orbitals of greater s character towards the hydrogens and of greater p character towards the fluorine; as a result, the C-H bonds are shortened and the C-F bond is elongated. The same trend also holds for the chlorinated analogs of methane, although the effect is less dramatic because chlorine is less electronegative than fluorine. The above cases seem to demonstrate that the size of the chlorine is less important than its electronegativity. A prediction based on sterics alone would lead to the opposite trend, as the large chlorine substituents would be more favorably placed far apart. As the steric explanation contradicts the experimental result, Bent's rule is likely playing a primary role in structure determination. JCH Coupling constants Perhaps the most direct measurement of s character in a bonding orbital between hydrogen and carbon is via the 1H−13C coupling constants determined from NMR spectra. Theory predicts that JCH values correlate with s character. In particular, the one-bond 13C−1H coupling constant 1J13C−1H is related to the fractional s character of the carbon hybrid orbital used to form the bond through the empirical relationship 1J13C−1H = (500 Hz) × ρs, where ρs is the fractional s character. (For instance, the pure sp3 hybrid atomic orbital found in the C-H bond of methane would have 25% s character, resulting in an expected coupling constant of 500 Hz × 0.25 = 125 Hz, in excellent agreement with the experimentally determined value.) As the electronegativity of the substituent increases, the amount of p character directed towards the substituent increases as well. This leaves more s character in the bonds to the methyl protons, which leads to increased JCH coupling constants. Inductive effect The inductive effect can be explained with Bent's rule. The inductive effect is the transmission of charge through covalent bonds, and Bent's rule provides a mechanism for such results via differences in hybridisation. As the groups bonded to the central carbon become more electronegative, the central carbon becomes more electron-withdrawing, as measured by the polar substituent constant. The polar substituent constants are similar in principle to σ values from the Hammett equation, as an increasing value corresponds to a greater electron-withdrawing ability. Bent's rule suggests that as the electronegativity of the groups increases, more p character is diverted towards those groups, which leaves more s character in the bond between the central carbon and the R group. As s orbitals have greater electron density closer to the nucleus than p orbitals, the electron density in the C−R bond will shift more towards the carbon as the s character increases. This will make the central carbon more electron-withdrawing towards the R group. Thus, the electron-withdrawing ability of the substituents has been transferred to the adjacent carbon, as the inductive effect predicts. Formal theory Bent's rule provides an additional level of accuracy to valence bond theory. Valence bond theory proposes that covalent bonds consist of two electrons lying in overlapping, usually hybridised, atomic orbitals from two bonding atoms. 
The assumption that a covalent bond is a linear combination of atomic orbitals of just the two bonding atoms is an approximation (see molecular orbital theory), but valence bond theory is accurate enough that it has had and continues to have a major impact on how bonding is understood. In valence bond theory, two atoms each contribute an atomic orbital and the electrons in the orbital overlap form a covalent bond. Atoms do not usually contribute a pure hydrogen-like orbital to bonds. If atoms could only contribute hydrogen-like orbitals, then the experimentally confirmed tetrahedral structure of methane would not be possible, as the 2s and 2p orbitals of carbon do not have that geometry. That and other contradictions led to the proposal of orbital hybridisation. In that framework, atomic orbitals are allowed to mix to produce an equivalent number of orbitals of differing shapes and energies. In the aforementioned case of methane, the 2s and three 2p orbitals of carbon are hybridized to yield four equivalent sp3 orbitals, which resolves the structure discrepancy. Orbital hybridisation allowed valence bond theory to successfully explain the geometry and properties of a vast number of molecules. In traditional hybridisation theory, the hybrid orbitals are all equivalent: the atomic s and p orbitals are combined to give four sp3 orbitals, three sp2 orbitals, or two sp orbitals. These combinations are chosen to satisfy two conditions. First, the total amount of s and p orbital contributions must be equivalent before and after hybridisation. Second, the hybrid orbitals must be orthogonal to each other. If two hybrid orbitals were not orthogonal, by definition they would have nonzero orbital overlap. Electrons in those orbitals would interact, and if one of those orbitals were involved in a covalent bond, the other orbital would also have a nonzero interaction with that bond, violating the two-electron-per-bond tenet of valence bond theory. To construct hybrid s and p orbitals, let the first hybrid orbital be given by hi = s + √λi pi, where pi is directed towards a bonding group and λi determines the amount of p character this hybrid orbital has. This is a weighted sum of the wavefunctions. Now choose a second hybrid orbital hj = s + √λj pj, where pj is directed in some way and λj is the amount of p character in this second orbital. The value of λj and direction of pj must be determined so that the resulting orbital can be normalized and so that it is orthogonal to the first hybrid orbital. The hybrid can certainly be normalized, as it is the sum of two normalized wavefunctions. Orthogonality must be established so that the two hybrid orbitals can be involved in separate covalent bonds. The inner product of orthogonal orbitals must be zero, and computing the inner product of the constructed hybrids gives ⟨hi|hj⟩ = ⟨s|s⟩ + √λj ⟨s|pj⟩ + √λi ⟨pi|s⟩ + √(λiλj) ⟨pi|pj⟩. The s orbital is normalized and so the inner product ⟨s|s⟩ = 1. Also, the s orbital is orthogonal to the pi and pj orbitals, which leads to two terms in the above equaling zero. Finally, the last term is the inner product of two normalized functions that are at an angle of ωij to each other, which gives cos ωij by definition. However, the orthogonality of bonding orbitals demands that ⟨hi|hj⟩ = 0, so we get Coulson's theorem as a result: 1 + √(λiλj) cos ωij = 0. This means that the four s and p atomic orbitals can be hybridised in arbitrary directions provided that all of the coefficients λ satisfy the above condition pairwise to guarantee the resulting orbitals are orthogonal. 
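Coulson's condition and the link between interorbital angle and hybridisation can be checked numerically. The sketch below is an illustrative calculation, not taken from Bent's or Coulson's papers: it builds two hybrids in the (s, px, py) basis at the angle fixed by cos ω = −1/√(λiλj), verifies that they are orthogonal, and inverts the relation to recover the hybridisation index and s character implied by a given bond angle. The final line uses the empirical 500 Hz proportionality for one-bond C–H coupling quoted earlier, applied only to the C–H case.

```python
import math
import numpy as np

def hybrid(lam, angle):
    """Unnormalized hybrid s + sqrt(lam)*p in the basis (s, px, py),
    with the p component pointing at `angle` radians in the xy-plane."""
    return np.array([1.0,
                     math.sqrt(lam) * math.cos(angle),
                     math.sqrt(lam) * math.sin(angle)])

def check_orthogonality(lam_i, lam_j):
    # Coulson's condition: cos(omega) = -1 / sqrt(lam_i * lam_j)
    omega = math.acos(-1.0 / math.sqrt(lam_i * lam_j))
    hi = hybrid(lam_i, 0.0)
    hj = hybrid(lam_j, omega)
    return np.dot(hi, hj)   # should be ~0

print("overlap of two sp3 hybrids at 109.47 degrees:", check_orthogonality(3.0, 3.0))

def hybridisation_from_angle(theta_deg):
    """For two equivalent hybrids separated by theta, cos(theta) = -1/lam."""
    lam = -1.0 / math.cos(math.radians(theta_deg))
    s_character = 1.0 / (1.0 + lam)
    return lam, s_character

for name, angle in [("methane H-C-H", 109.47), ("water H-O-H", 104.5)]:
    lam, s = hybridisation_from_angle(angle)
    print(f"{name}: ~sp{lam:.1f} bonding hybrids, {100*s:.0f}% s character")

# Empirical estimate quoted above for a one-bond C-H coupling constant:
# roughly 500 Hz times the fractional s character of the carbon hybrid.
lam_ch4, s_ch4 = hybridisation_from_angle(109.47)
print("estimated 1J(CH) for methane:", round(500 * s_ch4), "Hz")
```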
Bent's rule, that central atoms direct orbitals of greater p character towards more electronegative substituents, is easily applicable to the above by noting that an increase in the λi coefficient increases the p character of the hybrid orbital. Thus, if a central atom A is bonded to two groups X and Y and Y is more electronegative than X, then A will hybridise so that λY > λX, directing the hybrid of greater p character towards Y. More sophisticated theoretical and computational techniques beyond Bent's rule are needed to accurately predict molecular geometries from first principles, but Bent's rule provides an excellent heuristic in explaining molecular structures. Henry Bent originally proposed his rule in 1960 on empirical grounds, but a few years later it was supported by molecular orbital calculations by Russell Drago. Applications of Bent's Rule Bent's rule is able to characterize molecular geometry with accuracy, and it provides a reliable and robust framework for predicting the bond angles of molecules. Its accuracy and precision in predicting the geometry of real-world molecules continue to demonstrate its credibility. Beyond bond angle prediction, Bent's rule has some significant applications and is of considerable interest to chemists. Bent's rule can be applied to analyzing bonding interactions and molecular syntheses, and it can be used to predict which products are favored in an organic synthesis depending on the starting materials. Wang et al. considered how substituents affected the equilibria of silabenzenes and found that Bent's rule played a significant role in the results. The study conducted by Wang et al. demonstrates how Bent's rule can be used to predict the route of a synthesis and the stability of products. Showing a similar application, Dubois et al. were able to justify some of their findings using Bent's rule when they found a reaction to be irreversible. Both these studies show how Bent's rule can be used to aid synthetic chemistry. Knowing molecular geometry accurately, thanks to Bent's rule, allows synthetic chemists to predict relative product stability. Additionally, Bent's rule can help chemists choose their starting materials to drive the reaction towards a particular product. Hence, Bent's rule allows synthetic chemists to exert more control over reactions of interest. See also Molecular orbital theory Orbital hybridisation Molecular geometry Linear combination of atomic orbitals References Molecular geometry Eponymous chemical rules Chemical bonding
Bent's rule
[ "Physics", "Chemistry", "Materials_science" ]
4,208
[ "Molecular geometry", "Molecules", "Stereochemistry", "Condensed matter physics", "nan", "Chemical bonding", "Matter" ]