Dataset columns: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
22,562,598
https://en.wikipedia.org/wiki/Purnell%20equation
The Purnell equation is an equation used in analytical chemistry to calculate the resolution Rs between two peaks in a chromatogram: Rs = (√N2 / 4) · ((α − 1) / α) · (k'2 / (1 + k'2)), where Rs is the resolution between the two peaks, N2 is the plate number of the second peak, α is the separation factor between the two peaks, and k'2 is the retention factor of the second peak. The higher the resolution, the better the separation. References Chromatography Equations
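A short worked example may make the equation above concrete; the plate number, separation factor and retention factor below are illustrative values chosen for the calculation, not figures from the article.

% Purnell equation with illustrative (hypothetical) values:
% N2 = 10000, alpha = 1.05, k'2 = 2
\[
R_s = \frac{\sqrt{N_2}}{4}\,\frac{\alpha - 1}{\alpha}\,\frac{k'_2}{1 + k'_2}
    = \frac{\sqrt{10000}}{4}\cdot\frac{0.05}{1.05}\cdot\frac{2}{3}
    \approx 0.79
\]

With these values the two peaks would not be fully separated (baseline separation is usually taken to require Rs of roughly 1.5), which is the kind of judgement the equation is used for.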
Purnell equation
[ "Chemistry", "Mathematics" ]
85
[ "Chromatography", "Separation processes", "Mathematical objects", "Equations", "Analytical chemistry stubs" ]
22,564,916
https://en.wikipedia.org/wiki/Bezoar
A bezoar stone ( ) is a mass often found trapped in the gastrointestinal system, though it can occur in other locations. A pseudobezoar is an indigestible object introduced intentionally into the digestive system. There are several varieties of bezoar, some of which have inorganic constituents and others organic. The term has both modern (medical, scientific) and traditional usage. Types By content Food boluses (or boli; singular bolus) have the archaic and positive meaning of bezoar, and are composed of loose aggregates of food items such as seeds, fruit pith, or pits, as well as other types of items such as shellac, bubble gum, soil, and concretions of some medications. Lactobezoars are a specific type of food bezoar consisting of inspissated milk. It is most commonly seen in premature infants receiving formula foods. Pharmacobezoars (or medication bezoars) are mostly tablets or semiliquid masses of drugs, normally found following an overdose of sustained-release medications. Pseudobezoars are man-made ingestible, permeable, expandable implements that can swell in the stomach or in the intestines and stay inflated for a certain period of time, during which they perform particular functions, such as reducing gastric volume. Phytobezoars are composed of indigestible plant material (e.g., cellulose), and are frequently reported in patients with impaired digestion and decreased gastric motility. Diospyrobezoar is a type of phytobezoar formed from unripe persimmons. Coca-Cola has been used to treat them. Trichobezoar is a bezoar formed from hair – an extreme form of hairball. Humans who frequently consume hair sometimes require these to be removed. In cases of Rapunzel syndrome, surgery may be required. By location A bezoar in the esophagus is common in young children and in horses; in horses, it is known as choke. A bezoar in the large intestine is known as a fecalith. A bezoar in the trachea is called a tracheobezoar. Cause Esophageal bezoars discovered in nasogastrically fed patients on mechanical ventilation and sedation are reported to be due to the precipitation of certain food types rich in casein, which are precipitated with gastric acid reflux to form esophageal bezoars. Bezoars can also be caused by gastroparesis due to the slowing of gastric emptying, which allows food to form a bolus. History The word bezoar is derived from the Persian (), literally . The myth of the bezoar as an antidote reached Europe from the Middle East in the 11th century and remained popular until it started to fall into disrepute by the 18th century. People believed that a bezoar had the power of a universal antidote and would work against any poison – a drinking glass that contained a bezoar could allegedly neutralize any poison poured into it. Ox bezoars ( () or ) are used in Chinese herbology to treat various diseases. They are gallstones or gallstone substitutes formed from ox or cattle bile. Some products allegedly remove toxins from the body. The Andalusian physician Ibn Zuhr ( 1161), known in the West as Avenzoar, is thought to have made the earliest description of bezoar stones as medicinal items. Extensive reference to bezoars also appears in the Picatrix. In 1567, French surgeon Ambroise Paré did not believe that it was possible for the bezoar to cure the effects of any poison and described an experiment to test the properties of the stone. 
A cook in the King's court was sentenced to death and chose to be poisoned rather than hanged, under the condition that he would be given a bezoar after the poison. Paré administered the bezoar stone to the cook, but it had no effect, and the cook died in agony seven hours after taking the poison, proving that – contrary to popular belief – the bezoar could not cure all poisons. Modern examinations of the properties of bezoars by Gustaf Arrhenius and Andrew A. Benson of the Scripps Institution of Oceanography show that when bezoars are immersed in an arsenic-laced solution, they can remove the poison. The toxic compounds in arsenic are arsenate and arsenite; each is acted upon differently by the bezoars: arsenate is removed by being exchanged for phosphate in brushite found in the stones, while arsenite is bound to sulfur compounds in the protein of degraded hair, which is a key component in bezoars. A famous case in the common law of England (Chandelor v Lopus, 79 Eng Rep. 3, Cro. Jac. 4, Eng. Ct. Exch. 1603) announced the rule of ("let the buyer beware") if the goods purchased are not in fact genuine and effective. The case concerned a purchaser who sued for the return of the purchase price of an allegedly fraudulent bezoar. Bezoars were important objects in cabinets of curiosity and in natural-history collections, mainly for their use in early-modern pharmacy and in the study of animal health. The Merck Manual of Diagnosis and Therapy notes that consumption of unripened persimmons has been identified as the main cause of epidemics of intestinal bezoars and that up to 90 percent of bezoars that occur from excessive consumption require surgery for removal. A 2013 review of three databases identified 24 publications presenting 46 patients treated with Coca-Cola for phytobezoars. Clinicians administered the cola in doses of to up to over 24 hours, orally or by gastric lavage. A total of 91.3% of patients had complete resolution after treatment with Coca-Cola: 50% after a single treatment, with others requiring cola plus endoscopic removal. Doctors resorted to surgical removal in four cases. See also Bezoardicum Coca-Cola treatment of phytobezoars Enterolith Fecalith Gastrolith Goa stone Gorochana Regurgitalith Snake-stones Toadstone References Bibliography Barry Levine. 1999. Principles of Forensic Toxicology. Amer. Assoc. for Clinical Chemistry. . Martín-Gil FJ, Blanco-Ávarez JI, Barrio-Arredondo MT, Ramos-Sanchez MC, Martin-Gil J. Jejunal bezoar caused by a piece of apple peel – Presse Med, 1995 Feb. 11; 24(6):326. This article incorporates text from a publication now in the public domain: Further reading Borschberg, Peter, "The Euro-Asian Trade in Bezoar Stones (approx. 1500-1700)", Artistic and Cultural Exchanges between Europe and Asia, 1400–1900: Rethinking Markets, Workshops and Collections, ed. Thomas DaCosta Kaufmann and Michael North, Aldershot: Ashgate, 2010, pp. 29–43. Borschberg, Peter, "The Trade, Forgery and Medicinal Use of Porcupine Bezoars in the Early Modern Period (c.1500–1750)", ed. Carla Alferes Pinto, Oriente, vol. 14, Lisbon: Fundação Oriente, 2006. External links Gastrointestinal tract disorders History of medicine Magic items
Bezoar
[ "Physics" ]
1,573
[ "Magic items", "Physical objects", "Matter" ]
27,304,163
https://en.wikipedia.org/wiki/Tryptic%20soy%20broth
Tryptic soy broth or Trypticase soy broth (frequently abbreviated as TSB) is used in microbiology laboratories as a culture broth to grow aerobic and facultative anaerobic bacteria. It is a general purpose medium that is routinely used to grow bacteria which tend to have high nutritional requirements (i.e., they are fastidious). Uses It is used as a sterility test medium in USP and EP, as well as for inoculum preparation for CLSI standards. TSB is frequently used in commercial diagnostics in conjunction with the additive sodium thioglycolate, which promotes growth of anaerobes. Preparation To prepare 1 liter of TSB, the following ingredients are dissolved under gentle heat. Adjustments to pH should be made using 1N HCl or 1N NaOH to reach a final target pH of 7.3 ± 0.2 at 25°C. The solution is then autoclaved for 15 minutes at 121°C. Tryptic soy broth contains per liter: 17 g pancreatic digest of casein 3 g peptic digest of soybean 5 g sodium chloride 2.5 g dipotassium phosphate (K2HPO4) 2.5 g glucose References Microbiological media
Tryptic soy broth
[ "Biology" ]
256
[ "Microbiological media", "Microbiology equipment" ]
27,309,180
https://en.wikipedia.org/wiki/CompCert
CompCert is a formally verified optimizing compiler for a large subset of the C99 programming language (known as Clight) which currently targets PowerPC, ARM, RISC-V, x86 and x86-64 architectures. This project, led by Xavier Leroy, started officially in 2005, funded by the French institutes ANR and INRIA. The compiler is specified, programmed and proven in Coq. It aims to be used for programming embedded systems requiring reliability. The performance of its generated code is often close to that of GCC (version 3) at optimization level -O1, and always better than that of GCC without optimizations. Since 2015, AbsInt offers commercial licenses, provides support and maintenance, and contributes to the advancement of the tool. CompCert is released under a noncommercial license, and is therefore not free software, although some of its source files are dual-licensed with the GNU Lesser General Public License version 2.1 or later or are available under the terms of other licenses. For the development of CompCert, the first practically useful optimizing compiler targeting multiple commercial architectures that has a complete, mechanically checked proof of its correctness, Xavier Leroy and the development team of CompCert received the 2021 ACM Software System Award. References External links Formal verification of a realistic compiler Software System Award — ACM Awards Compilers Formal methods Logic in computer science Software using the GNU Lesser General Public License
CompCert
[ "Mathematics", "Engineering" ]
299
[ "Software engineering", "Mathematical logic", "Logic in computer science", "Formal methods" ]
27,310,823
https://en.wikipedia.org/wiki/Pachner%20moves
In topology, a branch of mathematics, Pachner moves, named after Udo Pachner, are ways of replacing a triangulation of a piecewise linear manifold by a different triangulation of a homeomorphic manifold. Pachner moves are also called bistellar flips. Any two triangulations of a piecewise linear manifold are related by a finite sequence of Pachner moves. Definition Let Δ be the (n + 1)-simplex. Its boundary ∂Δ is a combinatorial n-sphere with its triangulation as the boundary of the (n + 1)-simplex. Given a triangulated piecewise linear (PL) n-manifold N, and a co-dimension 0 subcomplex C ⊂ N together with a simplicial isomorphism φ : C → C′, where C′ ⊂ ∂Δ, the Pachner move on N associated to C is the triangulated manifold N′ = (N ∖ C) ∪φ (∂Δ ∖ C′). By design, this manifold is PL-isomorphic to N but the isomorphism does not preserve the triangulation. See also Flip graph Unknotting problem References Topology Geometric topology Structures on manifolds
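As a concrete low-dimensional illustration of the definition above, here is a minimal sketch of the 2–2 Pachner move (the bistellar edge flip) on a surface triangulation stored as a set of triangles; the data layout and function name are invented for this example and are not taken from the article.

# Minimal sketch (invented names): a 2-2 Pachner move / bistellar edge flip
# on a 2-dimensional triangulation stored as a set of frozensets of vertices.
def pachner_2_2(triangulation, tri_a, tri_b):
    """Replace two triangles sharing an edge by the two triangles spanning
    the other diagonal of the quadrilateral they form."""
    shared = tri_a & tri_b                 # the common edge (two vertices)
    if len(shared) != 2:
        raise ValueError("triangles do not share an edge")
    apex_a, = tri_a - shared               # vertex opposite the edge in tri_a
    apex_b, = tri_b - shared               # vertex opposite the edge in tri_b
    u, v = shared
    new_tris = {frozenset({apex_a, apex_b, u}), frozenset({apex_a, apex_b, v})}
    return (triangulation - {tri_a, tri_b}) | new_tris

# Two triangles glued along the edge {1, 2}; the flip re-triangulates the same
# square across the diagonal {0, 3}, leaving the underlying space unchanged.
T = {frozenset({0, 1, 2}), frozenset({1, 2, 3})}
flipped = pachner_2_2(T, frozenset({0, 1, 2}), frozenset({1, 2, 3}))
print(sorted(sorted(t) for t in flipped))  # [[0, 1, 3], [0, 2, 3]]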
Pachner moves
[ "Physics", "Mathematics" ]
206
[ "Geometric topology", "Topology", "Space", "Geometry", "Spacetime" ]
27,313,510
https://en.wikipedia.org/wiki/Kinoshita%E2%80%93Lee%E2%80%93Nauenberg%20theorem
The Kinoshita–Lee–Nauenberg theorem or KLN theorem states that perturbatively the Standard Model as a whole is infrared (IR) finite. That is, the infrared divergences coming from loop integrals are canceled by IR divergences coming from phase space integrals. It was introduced independently by Kinoshita, and by Lee and Nauenberg. An analogous result for quantum electrodynamics alone is known as the Bloch–Nordsieck theorem. Ultraviolet divergences in perturbative quantum field theory are dealt with in renormalization. References Taizo Muta, Foundations of Quantum Chromodynamics: An Introduction to Perturbative Methods in Gauge Theories, World Scientific Publishing Company; 3rd edition (September 30, 2009) Standard Model Quantum field theory Theorems in quantum mechanics
Kinoshita–Lee–Nauenberg theorem
[ "Physics", "Mathematics" ]
161
[ "Quantum field theory", "Theorems in quantum mechanics", "Standard Model", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Particle physics", "Quantum physics stubs", "Physics theorems" ]
4,946,900
https://en.wikipedia.org/wiki/Voltameter
A voltameter or coulometer is a scientific instrument used for measuring electric charge (quantity of electricity) through electrolytic action. The SI unit of electric charge is the coulomb. The voltameter should not be confused with a voltmeter, which measures electric potential. The SI unit for electric potential is the volt. Etymology Michael Faraday used an apparatus that he termed a "volta-electrometer"; subsequently John Frederic Daniell called this a "voltameter". Types The voltameter is an electrolytic cell and the measurement is made by weighing the element deposited or released at the cathode in a specified time. Silver voltameter This is the most accurate type. It consists of two silver plates in a solution of silver nitrate. When current is flowing, silver dissolves at the anode and is deposited at the cathode. The cathode is initially weighed, current is passed for a measured time, and the cathode is weighed again. Copper coulometer This is similar to the silver voltameter, but the anode and cathode are copper and the solution is copper sulfate, acidified with sulfuric acid. It is cheaper than the silver voltameter, but slightly less accurate. Mercury voltameter In this device, mercury is used to determine the amount of charge transferred during the following reaction: Hg2+ + 2e− ⇌ Hg0. These oxidation/reduction processes have 100% efficiency over a wide range of current densities. Measurement of the quantity of electricity (coulombs) is based on changes in the mass of the mercury electrode: the mass of the electrode increases during cathodic deposition of mercury ions and decreases during anodic dissolution of the metal. Sulfuric acid voltameter The anode and cathode are platinum and the solution is dilute sulfuric acid. Hydrogen is released at the cathode and collected in a graduated tube so that its volume can be measured. The volume is adjusted to standard temperature and pressure and the mass of hydrogen is calculated from the volume. This kind of voltameter is sometimes called a Hofmann voltameter. Coulometer A coulometer is a device for determining electric charge. The term comes from the unit of charge, the coulomb. There can be two goals in measuring charge: Coulometers can be devices that are used to determine an amount of substance by measuring the charge passed; such devices perform a quantitative analysis. This method is called coulometry, and the related coulometers are either devices used for coulometry or instruments that perform coulometry in an automatic way. Coulometers can also be used to determine electric quantities in a direct current circuit, namely the total charge or a constant current. These devices, invented by Michael Faraday, were used frequently in the 19th century and in the first half of the 20th century. In the past, coulometers of that type were named voltameters. See also Electrochemical cell Electrochemical equivalent Electrochemistry Electrolysis Electrolytic cell Equivalent (chemistry) Equivalent weight Faraday's laws of electrolysis Stoichiometry Sources Practical Electricity by W. E. Ayrton and T. Mather, published by Cassell and Company, London, 1911, pp. 12–26 References Measuring instruments Electroanalytical chemistry devices
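Since the measurement in a silver voltameter amounts to converting a weighed mass of deposited metal into charge via Faraday's laws of electrolysis, a small numeric sketch may help; the deposited mass and run time below are invented example values, while the Faraday constant and the molar mass of silver are standard data.

# Minimal sketch (illustrative values, not from the article): convert the mass
# of silver deposited in a silver voltameter into charge and average current
# using Faraday's laws of electrolysis, Q = (m / M) * z * F.
FARADAY = 96485.3     # C per mole of electrons
M_SILVER = 107.87     # g/mol, molar mass of Ag
Z_SILVER = 1          # electrons transferred per Ag+ ion reduced

def charge_from_deposit(mass_g, molar_mass, z):
    """Charge in coulombs implied by a weighed mass of deposited metal."""
    moles_metal = mass_g / molar_mass
    return moles_metal * z * FARADAY

mass_deposited = 0.4025                  # g of Ag, hypothetical weighing
duration = 3600.0                        # s, hypothetical electrolysis time
q = charge_from_deposit(mass_deposited, M_SILVER, Z_SILVER)
print(f"charge  = {q:.1f} C")            # about 360 C
print(f"current = {q / duration * 1000:.1f} mA")  # about 100 mA on average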
Voltameter
[ "Chemistry", "Technology", "Engineering" ]
692
[ "Electroanalytical chemistry devices", "Electroanalytical chemistry", "Measuring instruments" ]
24,065,181
https://en.wikipedia.org/wiki/C4H4N2OS
{{DISPLAYTITLE:C4H4N2OS}} The molecular formula C4H4N2OS (molar mass: 128.15 g/mol, exact mass: 128.0044 u) may refer to: 2-Thiouracil 4-Thiouracil Molecular formulas
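The molar mass quoted above can be reproduced from standard atomic weights; the short sketch below does that arithmetic (the atomic-weight values are standard data rather than something stated in the article).

# Minimal sketch: recompute the molar mass of C4H4N2OS from standard atomic
# weights (g/mol) and compare with the 128.15 g/mol quoted above.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
FORMULA = {"C": 4, "H": 4, "N": 2, "O": 1, "S": 1}   # C4H4N2OS

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")   # prints 128.15 g/mol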
C4H4N2OS
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,065,513
https://en.wikipedia.org/wiki/Mixed-mating%20model
The mixed-mating model is a mathematical model that describes the mating system of a plant population in terms of degree of self-fertilisation. It is a fairly simplistic model, employing several simplifying assumptions, most notably the assumption that every fertilisation event may be classed as either self-fertilisation, or outcrossing with a completely random mate. Thus the only model parameter to be estimated is the probability of self-fertilisation. The mixed mating model originated in the 1910s, with plant breeders who were seeking evidence of outcrossing contamination of self-pollinating crops, but a formal description of the model and its parameter estimation was not published until 1951. The model is still in common use today, though a number of more complex models are also now in use. For example, a weakness of the model lies in its assumption that inbreeding occurs only as a result of self-fertilisation; in reality, inbreeding may also occur through outcrossing between closely related individuals. The effective selfing model relaxes this assumption by seeking also to estimate the degree of shared ancestry of outcrossing mates. References Plant sexuality Mating systems Mathematical modeling
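Because the mixed-mating model has a single parameter (the probability of self-fertilisation), a tiny simulation can illustrate its structure; in practice the parameter is inferred from marker genotypes rather than from directly observed mating events, and the selfing rate, sample size and names below are invented for the example.

# Minimal sketch (illustrative values): under the mixed-mating model each
# fertilisation is a selfing event with probability s, otherwise an outcross
# with a completely random mate. If mating events could be observed directly,
# the maximum-likelihood estimate of s would simply be the sample proportion.
import random

def simulate_matings(s, n_events, rng):
    """Return a list of booleans, True where the event was self-fertilisation."""
    return [rng.random() < s for _ in range(n_events)]

rng = random.Random(42)
true_s = 0.30                                  # hypothetical selfing rate
events = simulate_matings(true_s, 10_000, rng)
s_hat = sum(events) / len(events)              # estimate of the one model parameter
print(f"true s = {true_s:.2f}, estimated s = {s_hat:.3f}")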
Mixed-mating model
[ "Mathematics", "Biology" ]
246
[ "Behavior", "Mathematical modeling", "Plant sexuality", "Applied mathematics", "Mating systems", "Sexuality", "Mating" ]
24,070,987
https://en.wikipedia.org/wiki/Yaw%20%28rotation%29
A yaw rotation is a movement around the yaw axis of a rigid body that changes the direction it is pointing, to the left or right of its direction of motion. The yaw rate or yaw velocity of a car, aircraft, projectile or other rigid body is the angular velocity of this rotation, or rate of change of the heading angle when the aircraft is horizontal. It is commonly measured in degrees per second or radians per second. Another important concept is the yaw moment, or yawing moment, which is the component of a torque about the yaw axis. Measurement Yaw velocity can be measured by measuring the ground velocity at two geometrically separated points on the body, or by a gyroscope, or it can be synthesized from accelerometers and the like. It is the primary measure of how drivers sense a car's turning visually. It is important in electronic stabilized vehicles. The yaw rate is directly related to the lateral acceleration of the vehicle turning at constant speed around a constant radius, by the relationship tangential speed*yaw velocity = lateral acceleration = tangential speed^2/radius of turn, in appropriate units The sign convention can be established by rigorous attention to coordinate systems. In a more general manoeuvre where the radius is varying, and/or the speed is varying, the above relationship no longer holds. Yaw rate control The yaw rate can be measured with accelerometers in the vertical axis. Any device intended to measure the yaw rate is called a yaw rate sensor. Road vehicles Studying the stability of a road vehicle requires a reasonable approximation to the equations of motion. The diagram illustrates a four-wheel vehicle, in which the front axle is located a metres ahead of the centre of gravity and the rear axle is b metres towards the rear from the center of gravity. The body of the car is pointing in a direction (theta) while it is travelling in a direction (psi). In general, these are not the same. The tyre treads at the region of contact point in the direction of travel, but the hubs are aligned with the vehicle body, with the steering held central. The tyres distort as they rotate to accommodate this mis-alignment, and generate side forces as a consequence. From directional stability study, denoting the angular velocity , the equations of motion are: with the mass of the vehicle, the vehicle speed and the vehicle's overall angle. The coefficient of will be called the 'damping' by analogy with a mass-spring-damper which has a similar equation of motion. By the same analogy, the coefficient of will be called the 'stiffness', as its function is to return the system to zero deflection, in the same manner as a spring. The form of the solution depends only on the signs of the damping and stiffness terms. The four possible solution types are presented in the figure. The only satisfactory solution requires both stiffness and damping to be positive. If the centre of gravity is ahead of the centre of the wheelbase , this will always be positive, and the vehicle will be stable at all speeds. However, if it lies further aft, the term has the potential of becoming negative above a speed given by: Above this speed, the vehicle will be directionally (yaw) unstable. Corrections for relative effect of front and rear tyres and steering forces are available in the main article. Relationship with other rotation systems These rotations are intrinsic rotations and the calculus behind them is similar to the Frenet-Serret formulas. 
Performing a rotation in an intrinsic reference frame is equivalent to right-multiplying its characteristic matrix (the matrix that has the vectors of the reference frame as columns) by the matrix of the rotation. History The first aircraft to demonstrate active control about all three axes was the Wright brothers' 1902 glider. See also Adverse yaw Aircraft principal axes Coriolis acceleration Directional stability Flight dynamics Six degrees of freedom Vehicle dynamics Yaw rate sensor References Dynamics (mechanics) Attitude control
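The steady-state relation quoted earlier in this article (tangential speed × yaw velocity = lateral acceleration = tangential speed² / turn radius) is easy to check numerically; the speed and radius below are arbitrary example values.

# Minimal sketch (arbitrary example values): for a vehicle in a steady turn of
# constant radius, lateral acceleration = v * omega = v**2 / r, so the yaw
# rate is omega = v / r.
import math

v = 20.0                      # m/s, tangential speed (hypothetical)
r = 50.0                      # m, turn radius (hypothetical)

yaw_rate = v / r              # rad/s
lateral_accel = v * yaw_rate  # m/s^2, identical to v**2 / r

print(f"yaw rate       = {yaw_rate:.3f} rad/s ({math.degrees(yaw_rate):.1f} deg/s)")
print(f"lateral accel. = {lateral_accel:.2f} m/s^2")
assert math.isclose(lateral_accel, v**2 / r)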
Yaw (rotation)
[ "Physics", "Engineering" ]
824
[ "Physical phenomena", "Attitude control", "Classical mechanics", "Motion (physics)", "Dynamics (mechanics)", "Aerospace engineering" ]
24,073,609
https://en.wikipedia.org/wiki/Effective%20selfing%20model
The effective selfing model is a mathematical model that describes the mating system of a plant population in terms of the degree of self-fertilisation present. Overview It was developed in the 1980s by Kermit Ritland, as an alternative to the simplistic mixed mating model. The mixed mating model assumes that every fertilisation event may be classed as either self-fertilisation, or outcrossing with a completely random mate. That is, it assumes that inbreeding is caused solely by self-fertilisation. This assumption is often violated in wild plant populations, where inbreeding may be due to outcrossing between closely related plants. For example, in dense stands, mating often occurs between plants in close proximity; and in plants with short seed dispersal distances, plants are often closely related to their nearest neighbours. When both these criteria are met, plants will tend to be closely related to the near neighbours with which they mate, resulting in significant inbreeding. In such a scenario, the mixed mating model will attribute all inbreeding to self-fertilisation, and therefore overestimate the extent of self-fertilisation occurring. The effective selfing model takes into account the potential for inbreeding to occur as a result of outcrossing between closely related plants, by considering the extent of kinship between mates. Ultimately, it is not possible to tease apart the two potential causes of inbreeding and attribute the observed inbreeding to one cause or the other. Therefore, just as with the mixed mating model, in the effective selfing model there is only one parameter to be estimated. However, this parameter, termed the effective selfing rate, is often a more accurate measure of the proportion of self-fertilisation than the corresponding parameter in the mixed mating model. References Plant sexuality Mating systems Mathematical modeling
Effective selfing model
[ "Mathematics", "Biology" ]
382
[ "Behavior", "Mathematical modeling", "Plant sexuality", "Applied mathematics", "Mating systems", "Sexuality", "Mating" ]
24,073,803
https://en.wikipedia.org/wiki/C15H10O3
{{DISPLAYTITLE:C15H10O3}} The molecular formula C15H10O3 (molar mass: 238.24 g/mol, exact mass: 238.0630 u) may refer to: 3-hydroxyflavone, a flavonol 6-hydroxyflavone, a flavone Molecular formulas
C15H10O3
[ "Physics", "Chemistry" ]
78
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
25,457,745
https://en.wikipedia.org/wiki/Iron%E2%80%93nickel%20clusters
Iron–nickel (Fe–Ni) clusters are metal clusters consisting of iron and nickel, i.e. Fe–Ni structures displaying polyhedral frameworks held together by two or more metal–metal bonds per metal atom, where the metal atoms are located at the vertices of closed, triangulated polyhedra. Individually, iron (Fe) and nickel (Ni) generally form metal clusters with π-acceptor ligands. Π acceptor ligands are ligands that remove some of the electron density from the metal. Figure 1 contains pictures of representative cluster shapes. Clusters take the form of closed, triangulated polyhedral. Corresponding bulk systems of Fe and Ni atoms show a variety of composition-dependent abnormalities and unusual effects. Fe–Ni composites are studied in hopes to understand and utilize these unusual and new properties. Fe–Ni clusters are used for several main purposes. Fe–Ni clusters ranging from single to hundreds of atoms are used in catalysis, depending on the reaction mechanism. Additionally, Fe–Ni clusters, usually of one or two metal atoms, are used in biological systems. These applications are discussed below. General properties Structure and geometry Several general trends are recognized in determining the structure of Fe–Ni clusters. Larger clusters, containing both iron and nickel, are most stable with Fe atoms located in the inner parts of the cluster and Ni metals on outside. In other terms, when iron and nickel form body-centered cubic structures the preferred position of Ni atoms is at the surface, instead of at the center of the cluster, as it is energetically unfavorable for two nickel atoms to occupy nearest-neighbor positions. Metal–metal bonds, being d-orbital interactions, happen at larger distances. More stable metal–metal bonds are expected to be longer than unstable bonds. This is shown by the fact that the Fe–Ni bond length is in between Ni–Ni and Fe–Fe bond lengths. For example, in Fe–Ni four-atom clusters (FeNi)2 which are most stable in a tetrahedral structure, the bond length of metal–metal Fe–Ni bond is 2.65Å and Fe–Fe bond is 2.85 Å. When bonding in these structures is examined, it follows that lowest energy cluster structures of iron and nickel are given by geometries with a maximum number of Fe–Fe bonds, and a small number of Ni–Ni bonds. The simplest Fe–Ni clusters are of one iron atom and one nickel atom bonded together. More complex clusters can be added through the addition of another atom. Some pictures of sample geometries are shown in Fig. 2. All Fe–Ni clusters exhibit some degree of distortion from usual geometry. This distortion generally becomes more pronounced as the number of Fe atoms increases. Notice how in the above cluster diagrams, as calculated by Rollmann and colleagues, the symmetry of the cluster changes from a pure octahedron (D3h) to a square pyramid (C4v) as more iron atoms are added. Reactivity and stability As mentioned previously, the relative bonding between Ni atoms in (FeNi)n clusters is weak and the stability of these clusters could be enhanced by increasing the number of Fe–Fe and Fe–Ni bonds. One measure of stability in Fe–Ni clusters is the binding energy, or how much energy is required to break the bonds between two atoms. The larger the binding energy, the stronger the bond. Binding energies of Fen-xNix clusters are found to generally decrease by successive substitutions of Ni atoms for Fe atoms. The average magnetic moment (μav) increases in a Fe–Ni cluster through the replacement of more and more Fe atoms. 
This is due to the fact that the magnetic moments of the Fe atom and of bulk Fe are larger than the corresponding values for the Ni atom and bulk Ni. The local magnetic moment of Ni (μatom,local) decreases with a proportional increase in Fe atoms. This is due to charge transfer from nickel's 4s orbital and iron atoms to nickel's 3d orbitals. Below is a table of the bond length (Re, in Å), binding energy (Eb, in eV), and magnetic moment (M, in μa) of the small clusters Fe2, Ni2, and FeNi from two authors. Notice how both authors show that Fe2 has the smallest bond length, the lowest binding energy, and the largest magnetic moment of the cluster combinations. Below is another table of bond length (Re), binding energy (Eb), and magnetic moment (M) of Fe–Ni clusters containing five atoms. Magnetic properties The magnetic properties of metal clusters are strongly influenced by their size and surface ligands. In general, the magnetic moments in small metal clusters are larger than in the case of a macroscopic bulk metal structure. For example, the average magnetic moment per atom in Ni clusters was found to be 0.7-0.8 μB, as compared with 0.6 μB for bulk Ni. This is explained by longer metal–metal bonds in cluster structures than in bulk structures, a consequence of a larger s character of metal–metal bonds in clusters. Magnetic moments approach bulk values as cluster size increases, though this is often difficult to predict computationally. Magnetic quenching is an important phenomenon that is well documented for Ni clusters, and represents a significant effect of ligands on metal cluster magnetism. It has been shown that CO ligands cause the magnetic moments of surface Ni atoms to go to zero and the magnetic moment of inner Ni atoms to decrease to 0.5 μB. In this case, the 4s-derived Ni–Ni bonding molecular orbitals experience repulsion with the Ni-CO σ orbital, which causes its energy level to increase so that 3d-derived molecular orbitals are filled instead. Furthermore, Ni-CO π backbonding leaves Ni slightly positive, causing more transfer of electrons to 3d-derived orbitals, which are less disperse than those of 4s. Together, these effects result in a 3d10, diamagnetic character of the ligated Ni atoms, and their magnetic moment decreases to zero. Density functional theory (DFT) calculations have shown that these ligand-induced electronic effects are limited to only surface Ni atoms, and inner cluster atoms are virtually unperturbed. Experimental findings have described two electronically distinct cluster atoms, inner atoms and surface atoms. These results indicate the significant effect that a cluster's size has on its properties, magnetic and other. Fe–Ni clusters in biology Fe–Ni metal clusters are crucial for energy production in many bacteria. A primary source of energy in bacteria is the oxidation and reduction of H2, which is performed by hydrogenase enzymes. These enzymes are able to create a charge gradient across the cell membrane which serves as an energy store. In aerobic environments, the oxidation and reduction of oxygen is the primary energy source. However, many bacteria are capable of living in environments where the O2 supply is limited and use H2 as their primary energy source. The hydrogenase enzymes which provide energy to the bacteria are centered around either an Fe–Fe or an Fe–Ni active site.
H2 metabolism is not used by humans or other complex life forms, but proteins in the mitochondria of mammalian life appear to have evolved from hydrogenase enzymes, indicating that hydrogenase is a crucial step in the evolutionary development of metabolism. The active site of Fe–Ni-containing hydrogenase enzymes is often composed of one or more bridging sulfur ligands, carbonyl, cyanide and terminal sulfur ligands. The non-bridging sulfur ligands are often cysteine amino acid residues that attach the active site to the protein backbone. Metal–metal bonds between the Fe and Ni have not been observed. Several oxidation states of the Fe–Ni core have been observed in a variety of enzymes, though not all appear to be catalytically relevant. The extreme oxygen and carbon monoxide sensitivity of these enzymes presents a challenge when studying the enzymes, but many crystallographic studies have been performed. Crystal structures for enzymes isolated from D. gigas, Desulfovibrio vulgaris, Desulfovibrio fructosovorans, Desulfovibrio desulfuricans, and Desulfomicrobium baculatum have been obtained, among others. A few bacteria, such as R. eutropha, have adapted to survive under ambient oxygen levels. These enzymes have inspired the study of structural and functional model complexes in hopes of making synthetic catalysts for hydrogen production (see Fe–Ni and hydrogen production, below, for more detail). Fe–Ni and hydrogen production In the search for a clean, renewable energy source to replace fossil fuels, hydrogen has gained much attention as a possible fuel for the future. One of the challenges that must be overcome if this is to become a reality is an efficient way to produce and consume hydrogen. Currently, we have the technology to generate hydrogen from coal, natural gas, biomass and water. The majority of hydrogen currently produced comes from natural gas reformation, and hence does not help remove fossil fuel as an energy source. A variety of sustainable methods for hydrogen production are currently being researched, including solar, geothermal and catalytic hydrogen production. Platinum is currently used to catalyze hydrogen production, but as Pt is expensive, found in limited supply, and easily poisoned by carbon monoxide during H2 production, it is not practical for large-scale use. Catalysts inspired by the Fe–Ni active site of many hydrogen-producing enzymes are particularly desirable because these metals are readily available and inexpensive. The synthesis of Fe–Ni biomimetic catalytic complexes has proved difficult, primarily due to the extreme oxygen sensitivity of such complexes. To date, only one example of an Fe–Ni model complex that is stable enough to withstand the range of electronic potential required for catalysis has been published. When designing model complexes, it is crucial to preserve the key features of the active site of the Fe–Ni hydrogenases: the iron organometallic moiety with CO or CN− ligands, nickel coordinated to terminal sulfur ligands, and the thiolate bridge between the metals. By preserving these traits of the enzyme active site, it is hoped that the synthetic complexes will operate at the electrochemical potential necessary for catalysis, have a high turnover frequency and be robust. References Cluster chemistry Iron compounds Nickel compounds
Iron–nickel clusters
[ "Chemistry" ]
2,116
[ "Cluster chemistry", "Organometallic chemistry" ]
25,458,359
https://en.wikipedia.org/wiki/Cray%20MTA
The Cray MTA, formerly known as the Tera MTA, is a supercomputer architecture based on thousands of independent threads, fine-grain communication and synchronization between threads, and latency tolerance for irregular computations. Each MTA processor (CPU) has a high-performance ALU with many independent register sets, each running an independent thread. For example, the Cray MTA-2 uses 128 register sets and thus 128 threads per CPU/ALU. All MTAs to date use a barrel processor arrangement, with a thread switch on every cycle, with blocked (stalled) threads skipped to avoid wasting ALU cycles. When a thread performs a memory read, execution blocks until data returns; meanwhile, other threads continue executing. With enough threads (concurrency), there are nearly always runnable threads to "cover" for blocked threads, and the ALUs stay busy. The memory system uses full/empty bits to ensure correct ordering. For example, an array is initially written with "empty" bits, and any thread reading a value from blocks until another thread writes a value. This ensures correct ordering, but allows fine-grained interleaving and provides a simple programming model. The memory system is also "randomized", with adjacent physical addresses going to different memory banks. Thus, when two threads access memory simultaneously, they rarely conflict unless they are accessing the same location. A goal of the MTA is that porting codes from other machines is straightforward, but gives good performance. A parallelizing FORTRAN compiler can produce high performance for some codes with little manual intervention. Where manual porting is required, the simple and fine-grained synchronization model often allows programmers to write code the "obvious" way yet achieve good performance. A further goal is that programs for the MTA will be scalable that is, when run on an MTA with twice as many CPUs, the same program will have nearly twice the performance. Both of these are challenges for many other high-performance computer systems. An uncommon feature of the MTA is several workloads can be interleaved with good performance. Typically, supercomputers are dedicated to a task at a time. The MTA allows idle threads to be allocated to other tasks with very little effect on the main calculations. Implementations There have been three MTA implementations and as of 2009 a fourth is planned. The implementations are: MTA-1 The MTA-1 uses a GaAs processor and was installed at the San Diego Supercomputer Center. It used four processors (512 threads) MTA-2 The MTA-2 uses a CMOS processor and was installed at the Naval Research Laboratory. It was reportedly unstable, but being inside a secure facility was not available for debugging or repair. MTA-3 The MTA-3 uses the same CPU as the MTA-2 but a dramatically cheaper and slower network interface. About six Cray XMT systems have been sold (2009) using the MTA-3. MTA-4 The MTA-4 is a planned system (2009) that is architecturally similar but will use limited data caching and a faster network interface than the MTA-3. Performance Only a few systems have been deployed, and only MTA-2 benchmarks have been reported widely, making performance comparisons difficult. Across several benchmarks, a 2-CPU MTA-2 shows performance similar to a 2-processor Cray T90. For the specific application of ray tracing, a 4-CPU MTA-2 was about 5x faster than a 4-CPU Cray T3E, and in scaling from 1 CPU to 4 CPUs the Tera performance improved by 3.8x, while the T3E going from 1 to 4 CPUs improved by only 3.0x. 
Architectural considerations Another way to compare systems is by inherent overheads and bottlenecks of the design. The MTA uses many register sets, thus each register access is slow. Although concurrency (running other threads) typically hides latency, slow register file access limits performance when there are few runable threads. In existing MTA implementations, single-thread performance is 21 cycles per instruction, so performance suffers when there are fewer than 21 threads per CPU. The MTA-1, -2, and -3 use no data caches. This reduces CPU complexity and avoids cache coherency problems. However, no data caching introduces two performance problems. First, the memory system must support the full data access bandwidth of all threads, even for unshared and thus cacheable data. Thus, good system performance requires very high memory bandwidth. Second, memory references take 150-170 cycles, a much higher latency than even a slow cache, thus increasing the number of runable threads required to keep the ALU busy. The MTA-4 will have a non-coherent cache, which can be used for read-only and unshared data (such as non-shared stack frames), but which requires software coherency e.g., if a thread is migrated between CPUs. Data cache competition is often a performance bottleneck for highly-concurrent processors, and sometimes even for 2-core systems; however, by using the cache for data that is either highly shared or has very high locality (stack frames), competition between threads can be kept low. Full/empty status changes use polling, with a timeout for threads that poll too long. A timed-out thread may be descheduled and the hardware context used to run another thread; the OS scheduler sets a "trap on write" bit so the waited-for write will trap and put the descheduled thread back in the run queue. Where the descheduled thread is on the critical path, performance may suffer substantially. The MTA is latency-tolerant, including irregular latency, giving good performance on irregular computations if there is enough concurrency to "cover" delays. The latency-tolerance hardware may be wasted on regular calculations, including those with latency that is high but which can be scheduled easily. See also Heterogeneous Element Processor References External links – a Cray XMT overview Mta Supercomputers
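The full/empty-bit synchronization described above (a read of an "empty" word blocks until another thread writes it) can be mimicked with ordinary threading primitives; the sketch below is only a conceptual software illustration, not the MTA's hardware mechanism, and every name in it is invented.

# Conceptual sketch only (invented names, not the MTA hardware interface):
# a memory word carrying a full/empty bit. Reads block while the word is
# empty; a write fills it and wakes blocked readers, giving the fine-grained
# producer/consumer ordering described above.
import threading

class FullEmptyWord:
    def __init__(self):
        self._cond = threading.Condition()
        self._full = False          # the "full/empty" bit
        self._value = None

    def write(self, value):
        with self._cond:
            self._value = value
            self._full = True
            self._cond.notify_all() # wake any readers blocked on "empty"

    def read(self):
        with self._cond:
            while not self._full:   # block until another thread writes
                self._cond.wait()
            return self._value

word = FullEmptyWord()
reader = threading.Thread(target=lambda: print("read:", word.read()))
reader.start()                      # blocks: the word is still empty
word.write(42)                      # fills the word; the reader proceeds
reader.join()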
Cray MTA
[ "Technology" ]
1,280
[ "Supercomputers", "Supercomputing" ]
25,459,420
https://en.wikipedia.org/wiki/Tilt%20tray%20sorter
A tilt-tray sorter is a mechanical assembly similar to a conveyor belt, but instead of a continuous belt it consists of individual trays traveling in the same direction. A tilt-tray sorter can be configured in an inline (AKA over/under) formation, or in a continuous loop. Items are loaded onto the passing trays at the front end of the sorter and travel towards a series of destinations on either side of the sorter. Items are loaded onto trays individually and their sort destination is determined in advance. As the tray with an item approaches its destination, the tray is tilted to slide the object into the chute. The empty tray will then return to the load section before it is loaded again with a new item. A tilt-tray sorter is thus a continuous-loop sortation conveyor that uses the technique of tilting a tray at a chute to slide the object into the chute. References Industrial machinery
Tilt tray sorter
[ "Engineering" ]
193
[ "Industrial machinery" ]
25,463,466
https://en.wikipedia.org/wiki/Electron%20nuclear%20double%20resonance
Electron nuclear double resonance (ENDOR) is a magnetic resonance technique for elucidating the molecular and electronic structure of paramagnetic species. The technique was first introduced to resolve interactions in electron paramagnetic resonance (EPR) spectra. It is currently practiced in a variety of modalities, mainly in the areas of biophysics and heterogeneous catalysis. CW experiment In the standard continuous wave (cwENDOR) experiment, a sample is placed in a magnetic field and irradiated sequentially with a microwave followed by radio frequency. The changes are then detected by monitoring variations in the polarization of the saturated electron paramagnetic resonance (EPR) transition. Theory ENDOR is illustrated by a two spin system involving one electron (S=1/2) and one proton (I=1/2) interacting with an applied magnetic field. The Hamiltonian for the system The Hamiltonian for the two-spin system mentioned above can be described as The four terms in this equation describe the electron Zeeman interaction (EZ), the nuclear Zeeman interaction (NZ), the hyperfine interaction (HFS), and the nuclear quadrupole interaction (Q), respectively. The electron Zeeman interaction describes the interaction between an electron spin and the applied magnetic field. The nuclear Zeeman interaction is the interaction of the magnetic moment of the proton with an applied magnetic field. The hyperfine interaction is the coupling between the electron spin and the proton's nuclear spin. The nuclear quadrupole interaction is present only in nuclei with I>1/2. ENDOR spectra contain information on the type of nuclei in the vicinity of the unpaired electron (NZ and EZ), on the distances between nuclei and on the spin density distribution (HFS) and on the electric field gradient at the nuclei (Q). Principle of the ENDOR method The right figure illustrates the energy diagram of the simplest spin system where a is the isotropic hyperfine coupling constant in hertz (Hz). This diagram indicates the electron Zeeman, nuclear Zeeman and hyperfine splittings. In a steady state ENDOR experiment, an EPR transition (A, D), called the observer, is partly saturated by microwave radiation of amplitude while a driving radio frequency (rf) field of amplitude , called the pump, induces nuclear transitions. Transitions happen at frequencies and and obey the NMR selection rules and . It is these NMR transitions that are detected by ENDOR via the intensity changes to the simultaneously irradiated EPR transition. Both the hyperfine coupling constant (a) and the nuclear Larmor frequencies () are determined when using the ENDOR method. Requirement for ENDOR One requirement for ENDOR is the partial saturation of both the EPR and the NMR transitions defined by and where and are the gyromagnetic ratio of the electron and the nucleus respectively. is the magnetic field of the observer which is microwave radiation while is the magnetic field of the pump which is radio frequency radiation. and are the spin-lattice relaxation time for the electron and the nucleus respectively. and are the spin-spin relaxation time for the electron and the nucleus respectively. ENDOR spectroscopy EI-EPR ENDOR-induced EPR (EI-EPR) displays ENDOR transitions as a function of the magnetic field. While the magnetic field is swept through the EPR spectrum, the frequency follows the Zeeman frequency of the nucleus. The EI-EPR spectra can be collected in two ways: (1) difference spectra (2) frequency modulated rf field without Zeeman modulation. 
This technique was established by Hyde and is especially useful for separating overlapping EPR signals which result from different radicals, molecular conformations or magnetic sites. EI-EPR spectra monitor changes in the amplitude of an ENDOR line of the paramagnetic sample, displayed as a function of the magnetic field. Because of this, the spectra corresponds to one species only. Double ENDOR Double electron-nuclear-double resonance (Double ENDOR) requires the application of two rf (RF1 and RF2) fields to the sample. The change in signal intensity of RF1 is observed while RF2 is swept through the spectrum. The two fields are perpendicularly oriented and are controlled by two tunable resonance circuits which can be adjusted independent of each other. In spin decoupling experiments, the amplitude of the decoupling field should be as large as possible. However, in multiple quantum transition studies, both rf fields should be maximized. This technique was first introduced by Cook and Whiffen and was designed so that the relative signs of hf coupling constants in crystals as well as separating overlapping signals could be determined. CP-ENDOR and PM-ENDOR The CP-ENDOR technique makes use of circularly polarized rf fields. Two linearly polarized fields are generated by rf currents in two wires which are oriented parallel to the magnetic field. The wires are then connected into half loops which then cross at a 90 degree angle. This technique was developed by Schweiger and Gunthard so that the density of ENDOR lines in a paramagnetic spectrum could be simplified. Polarization modulated ENDOR (PM-ENDOR) uses two perpendicular rf fields with similar phase control units to CP-ENDOR. However, a linearly polarized rf field which rotates in the xy-plane at a frequency less than the modulation frequency of the rf carrier is used. Applications In polycrystalline media or frozen solution, ENDOR can provide spatial relationships between the coupled nuclei and electron spins. This is possible in solid phases where the EPR spectrum arises from the observance of all orientations of paramagnetic species; as such the EPR spectrum is dominated by large anisotropic interactions. This is not so in liquid phase samples where spatial relationships are not possible. Such spatial arrangements require that the ENDOR spectra are recorded at different magnetic field settings within the EPR powder pattern. The traditional convention of magnetic resonance envisions the paramagnets aligning with the external magnetic field; however, in practice it is simpler to treat the paramagnets as fixed and the external magnetic field as a vector. Specifying positional relationships requires three separate but related pieces of information: an origin, the distance from said origin, and a direction of that distance. The origin, for purposes of this explanation, can be thought of as the position of a molecule's localized unpaired electron. To determine the direction to the spin active nucleus from the localized unpaired electron (remember: unpaired electrons are, themselves, spin active) one employs the principle of magnetic angle selection. The exact value of θ is calculated as follows to the right: At θ = 0˚ the ENDOR spectra contain only the component of hyperfine coupling that is parallel to the axial protons and perpendicular to the equatorial protons. At θ = 90˚ ENDOR spectra contain only the component of hyperfine coupling that is perpendicular to the axial protons and parallel to the equatorial protons. 
The electron nuclear distance (R), in meters, along the direction of the interaction is determined by the point-dipole approximation. This approximation takes into account the through-space magnetic interactions of the two magnetic dipoles. Isolation of R gives the distance from the origin (localized unpaired electron) to the spin active nucleus. Point-dipole approximations are calculated using the following equation on the right: The ENDOR technique has been used to characterize the spatial and electronic structure of metal-containing sites: paramagnetic metal ions/complexes introduced for catalysis; metal clusters producing magnetic materials; trapped radicals introduced as probes for disclosing the surface acid/base properties; color centers and defects as in ultramarine blue and other gems; and catalytically formed trapped reaction intermediates that detail the mechanism. The application of pulsed ENDOR to solid samples provides for many advantages compared to CW ENDOR. Such advantages are the generation of distortion-less line shapes, manipulation of spins through a variety of pulse sequences, and the lack of dependence on a sensitive balance between electron and nuclear spin relaxation rates and applied power (given long enough relaxation rates). HF pulsed ENDOR is generally applied to biological and related model systems. Applications have been primarily to biology with a heavy focus on photosynthesis-related radicals or paramagnetic metal ion centers in metalloenzymes or metalloproteins. Additional applications have been to magnetic resonance imaging contrast agents. HF ENDOR has been used as a characterization tool for porous materials, for the electronic properties of donors/acceptors in semiconductors, and for electronic properties of endohedral fullerenes. Framework Substitution with W-band ENDOR has been used to provide experimental evidence that a metal ion is located in the tetrahedral framework and not in a cation exchange position. Incorporation of transition metal complexes into the framework of molecular sieves is of consequence as it could lead to the development of new materials with catalytic properties. ENDOR as applied to trapped radicals has been used to study NO with metal ions in coordination chemistry, catalysis and biochemistry. See also Electron paramagnetic resonance Pulsed EPR Spin echo Nuclear magnetic resonance References Electron paramagnetic resonance Quantum mechanics
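The point-dipole relation referred to above is not reproduced in this text; a standard textbook form is sketched below for orientation, with the caveat that the article's own figure may use different notation or unit conventions.

% Assumed standard point-dipole form (for illustration only): the dipolar
% hyperfine coupling between an electron and a nucleus separated by r is
\[
T(\theta) \;=\; \frac{\mu_0}{4\pi}\,
  \frac{g_e \mu_B\, g_n \mu_n}{h\, r^3}\,\bigl(3\cos^2\theta - 1\bigr),
\]
% so measuring the dipolar part of the hyperfine coupling at a known angle
% theta gives the electron-nucleus distance r along the interaction direction.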
Electron nuclear double resonance
[ "Physics", "Chemistry" ]
1,905
[ "Spectrum (physical sciences)", "Theoretical physics", "Quantum mechanics", "Spectroscopy", "Electron paramagnetic resonance" ]
13,110,036
https://en.wikipedia.org/wiki/Metaserver
A MetaServer is a central broker providing a collated view (similar to a database view) for dispersed web resources. It is used to collect data from various web services, web pages, databases, or other online resources/repositories and then present the combined results to the client using a standard web protocol (e.g. HTTP with HTML, REST, SOAP, XML-RPC, etc.). Styles of use The purpose of such a system is to provide one or several of the following: a unified view on multiple resources easy comparison of the data standardized access to different repositories calibration of the data determining the data consensus Example MetaServer projects Typical, widespread implementations of MetaServers are: Meta-Search-Engines DNS MetaServers Protein Structure and Function Prediction Gateways Computer Game MetaServers Text Mining MetaServers (e.g. BioCreative Metaserver - BCMS) Enterprise application integration Internet architecture
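As a small illustration of the broker pattern described above, the sketch below fans a query out to several back-end resources in parallel and returns one collated result; the endpoint URLs and field names are placeholders rather than real services.

# Minimal sketch of the metaserver idea (placeholder endpoints, not real
# services): query several back-end resources concurrently and return a
# single collated view to the client.
import json
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

BACKENDS = [
    "https://backend-a.example/search?q={query}",   # hypothetical resource A
    "https://backend-b.example/search?q={query}",   # hypothetical resource B
]

def query_backend(url_template, query):
    with urlopen(url_template.format(query=query), timeout=5) as resp:
        return json.load(resp)      # assume each backend answers with JSON

def metasearch(query):
    """Collate the results from all backends into one response."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda u: query_backend(u, query), BACKENDS)
    return {"query": query, "results": list(results)}

# Example use (requires live endpoints):
# print(json.dumps(metasearch("bezoar"), indent=2))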
Metaserver
[ "Technology" ]
198
[ "Internet architecture", "IT infrastructure" ]
13,111,519
https://en.wikipedia.org/wiki/Cargo%20scanning
Cargo scanning or non-intrusive inspection (NII) refers to non-destructive methods of inspecting and identifying goods in transportation systems. It is often used for scanning of intermodal freight shipping containers. In the US, it is spearheaded by the Department of Homeland Security and its Container Security Initiative (CSI) trying to achieve one hundred percent cargo scanning by 2012 as required by the US Congress and recommended by the 9/11 Commission. In the US the main purpose of scanning is to detect special nuclear materials (SNMs), with the added bonus of detecting other types of suspicious cargo. In other countries the emphasis is on manifest verification, tariff collection and the identification of contraband. In February 2009, approximately 80% of US incoming containers were scanned. To bring that number to 100% researchers are evaluating numerous technologies, described in the following sections. Radiography Gamma-ray radiography Gamma-ray radiography systems capable of scanning trucks usually use cobalt-60 or caesium-137 as a radioactive source and a vertical tower of gamma detectors. This gamma camera is able to produce one column of an image. The horizontal dimension of the image is produced by moving either the truck or the scanning hardware. The cobalt-60 units use gamma photons with a mean energy 1.25 MeV, which can penetrate up to 15–18 cm of steel. The systems provide good quality images which can be used for identifying cargo and comparing it with the manifest, in an attempt to detect anomalies. It can also identify high-density regions too thick to penetrate, which would be the most likely to hide nuclear threats. X-ray radiography X-ray radiography is similar to gamma-ray radiography but instead of using a radioactive source, it uses a high-energy bremsstrahlung spectrum with energy in the 5–10 MeV range created by a linear particle accelerator (LINAC). Such X-ray systems can penetrate up to 30–40 cm of steel in vehicles moving with velocities up to 13 km/h. They provide higher penetration but also cost more to buy and operate. They are more suitable for the detection of special nuclear materials than gamma-ray systems. They also deliver about 1000 times higher dose of radiation to potential stowaways. Dual-energy X-ray radiography Dual-energy X-ray radiography Backscatter X-ray radiography Backscatter X-ray radiography Neutron activation systems Examples of neutron activation systems include: pulsed fast neutron analysis (PFNA), fast neutron analysis (FNA), and thermal neutron analysis (TNA). All three systems are based on neutron interactions with the inspected items and examining the resultant gamma rays to determine the elements being radiated. TNA uses thermal neutron capture to generate the gamma rays. FNA and PFNA use fast neutron scattering to generate the gamma rays. Additionally, PFNA uses a pulsed collimated neutron beam. With this, PFNA generates a three-dimensional elemental image of the inspected item. Passive radiation detectors Muon tomography Muon tomography is a technique that uses cosmic ray muons to generate three-dimensional images of volumes using information contained in the Coulomb scattering of the muons. Since muons are much more deeply penetrating than X-rays, muon tomography can be used to image through much thicker material than x-ray based tomography such as CT scanning. The muon flux at the Earth's surface is such that a single muon passes through a volume the size of a human hand per second. 
Muon imaging was originally proposed and demonstrated by Alvarez. The method was re-discovered and improved upon by a research team at Los Alamos National Laboratory, muon tomography is completely passive, exploiting naturally occurring cosmic radiation. This makes the technology ideal for high throughput scanning of volume material where operators are present, such as at a marine cargo terminal. In these cases, truck drivers and customs personnel do not have to leave the vehicle or exit an exclusion zone during scanning, expediting cargo throughput. Multi-mode passive detection systems (MMPDS), based upon muon tomography, are currently in use by Decision Sciences International Corporation at Freeport, Bahamas, and the Atomic Weapons Establishment in the United Kingdom. An MMPDS system has also been contracted by Toshiba to determine the location and the condition of the nuclear fuel in the Fukushima Daiichi Nuclear Power Plant. Gamma radiation detectors Radiological materials emit gamma photons, which gamma radiation detectors, also called radiation portal monitors (RPM), are good at detecting. Systems currently used in US ports (and steel mills) use several (usually 4) large PVT panels as scintillators and can be used on vehicles moving up to 16 km/h. They provide very little information on energy of detected photons, and as a result, they were criticized for their inability to distinguish gammas originating from nuclear sources from gammas originating from a large variety of benign cargo types that naturally emit radioactivity, including bananas, cat litter, granite, porcelain, stoneware, etc. Those naturally occurring radioactive materials, called NORMs account for 99% of nuisance alarms. Some radiation, like in the case of large loads of bananas is due to potassium and its rarely occurring (0.0117%) radioactive isotope potassium-40, other is due to radium or uranium that occur naturally in earth and rock, and cargo types made out of them, like cat litter or porcelain. Radiation originating from earth is also a major contributor to background radiation. Another limitation of gamma radiation detectors is that gamma photons can be easily suppressed by high-density shields made from lead or steel, preventing detection of nuclear sources. Those types of shields do not stop fission neutrons produced by plutonium sources, however. As a result, radiation detectors usually combine gamma and neutron detectors, making shielding only effective for certain uranium sources. Neutron radiation detectors Fissile materials emit neutrons. Some nuclear materials, such as the weapons usable plutonium-239, emit large quantities of neutrons, making neutron detection a useful tool to search for such contraband. Radiation Portal Monitors often use Helium-3 based detectors to search for neutron signatures. However, a global supply shortage of He-3 has led to the search for other technologies for neutron detection. See also Industrial radiography Gamma spectroscopy References Special nuclear materials Freight transport Electromagnetic spectrum Radioactivity Radiography United States Department of Homeland Security X-rays
Cargo scanning
[ "Physics", "Chemistry" ]
1,324
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Nuclear physics", "Radioactivity" ]
13,113,563
https://en.wikipedia.org/wiki/Plutonium%28III%29%20chloride
Plutonium(III) chloride is a chemical compound with the formula PuCl3. This ionic plutonium salt can be prepared by reacting the metal with hydrochloric acid. Structure Plutonium atoms in crystalline PuCl3 are 9 coordinate, and the structure is tricapped trigonal prismatic. It crystallizes as the trihydrate, and forms lavender-blue solutions in water. Safety As with all plutonium compounds, it is subject to control under the Nuclear Non-Proliferation Treaty. Due to the radioactivity of plutonium, all of its compounds, PuCl3 included, are warm to the touch. Such contact is not recommended, since touching the material may result in serious injury. References Plutonium(III) compounds Nuclear materials Chlorides Actinide halides
Plutonium(III) chloride
[ "Physics", "Chemistry" ]
162
[ "Chlorides", "Inorganic compounds", "Inorganic compound stubs", "Salts", "Materials", "Nuclear materials", "Matter" ]
19,968,510
https://en.wikipedia.org/wiki/Bohr%E2%80%93Van%20Leeuwen%20theorem
The Bohr–Van Leeuwen theorem states that when statistical mechanics and classical mechanics are applied consistently, the thermal average of the magnetization is always zero. This makes magnetism in solids solely a quantum mechanical effect and means that classical physics cannot account for paramagnetism, diamagnetism and ferromagnetism. The inability of classical physics to explain triboelectricity also stems from the Bohr–Van Leeuwen theorem. History What is today known as the Bohr–Van Leeuwen theorem was discovered by Niels Bohr in 1911 in his doctoral dissertation and was later rediscovered by Hendrika Johanna van Leeuwen in her doctoral thesis in 1919. In 1932, J. H. Van Vleck formalized and expanded upon Bohr's initial theorem in a book he wrote on electric and magnetic susceptibilities. The significance of this discovery is that classical physics does not allow for such things as paramagnetism, diamagnetism and ferromagnetism, and thus quantum physics is needed to explain these magnetic phenomena. This result, "perhaps the most deflationary publication of all time," may have contributed to Bohr's development of a quasi-classical theory of the hydrogen atom in 1913. Case of classical paramagnetism The Langevin function is often seen as the classical theory of paramagnetism, while the Brillouin function is the quantum theory of paramagnetism. When Langevin published his theory of paramagnetism in 1905, it was before the adoption of quantum physics, meaning that Langevin used only concepts of classical physics. Still, Niels Bohr showed in his thesis that classical statistical mechanics cannot be used to explain paramagnetism and that quantum theory has to be used (in what would become the Bohr–Van Leeuwen theorem). This would later lead to explanations of magnetization based on quantum theory, such as the Brillouin function, which uses the Bohr magneton (μB) and takes into account that the energy of a system is not continuously variable. There is, however, a difference in the approaches of Langevin and Bohr: Langevin assumes a magnetic polarization as the basis for the derivation, while Bohr starts the derivation from the motions of electrons and a model of the atom, meaning that Langevin is still assuming a quantized, fixed magnetic dipole. This was expressed by J. H. Van Vleck as follows: "When Langevin assumed that the magnetic moment of the atom or molecule had a fixed value μ, he was quantizing the system without realizing it". This places the Langevin function in the borderland between classical statistical mechanics and quantum theory (as either semi-classical or semi-quantum). Proof An intuitive proof The Bohr–Van Leeuwen theorem applies to an isolated system that cannot rotate. If the isolated system is allowed to rotate in response to an externally applied magnetic field, then this theorem does not apply. If, in addition, there is only one state of thermal equilibrium in a given temperature and field, and the system is allowed time to return to equilibrium after a field is applied, then there will be no magnetization. The probability that the system will be in a given state of motion is predicted by Maxwell–Boltzmann statistics to be proportional to exp(−E/(kBT)), where E is the energy of the system, kB is the Boltzmann constant, and T is the absolute temperature. This energy is equal to the sum of the kinetic energy (mv²/2 for a particle with mass m and speed v) and the potential energy. The magnetic field does not contribute to the potential energy.
The Lorentz force on a particle with charge and velocity is where is the electric field and is the magnetic flux density. The rate of work done is and does not depend on . Therefore, the energy does not depend on the magnetic field, so the distribution of motions does not depend on the magnetic field. In zero field, there will be no net motion of charged particles because the system is not able to rotate. There will therefore be an average magnetic moment of zero. Since the distribution of motions does not depend on the magnetic field, the moment in thermal equilibrium remains zero in any magnetic field. A more formal proof So as to lower the complexity of the proof, a system with electrons will be used. This is appropriate, since most of the magnetism in a solid is carried by electrons, and the proof is easily generalized to more than one type of charged particle. Each electron has a negative charge and mass . If its position is and velocity is , it produces a current and a magnetic moment The above equation shows that the magnetic moment is a linear function of the velocity coordinates, so the total magnetic moment in a given direction must be a linear function of the form where the dot represents a time derivative and are vector coefficients depending on the position coordinates . Maxwell–Boltzmann statistics gives the probability that the nth particle has momentum and coordinate as where is the Hamiltonian, the total energy of the system. The thermal average of any function of these generalized coordinates is then In the presence of a magnetic field, where is the magnetic vector potential and is the electric scalar potential. For each particle the components of the momentum and position are related by the equations of Hamiltonian mechanics: Therefore, so the moment is a linear function of the momenta . The thermally averaged moment, is the sum of terms proportional to integrals of the form where represents one of the momentum coordinates. The integrand is an odd function of , so it vanishes. Therefore, . Applications The Bohr–Van Leeuwen theorem is useful in several applications including plasma physics: "All these references base their discussion of the Bohr–Van Leeuwen theorem on Niels Bohr's physical model, in which perfectly reflecting walls are necessary to provide the currents that cancel the net contribution from the interior of an element of plasma, and result in zero net diamagnetism for the plasma element." Diamagnetism of a purely classical nature occurs in plasmas but is a consequence of thermal disequilibrium, such as a gradient in plasma density. Electromechanics and electrical engineering also see practical benefit from the Bohr–Van Leeuwen theorem. References External links The early 20th century: Relativity and quantum mechanics bring understanding at last Classical mechanics Electric and magnetic fields in matter Eponymous theorems of physics Statistical mechanics Articles containing proofs Statistical mechanics theorems
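The plain-text rendering above has lost the article's equations, so the decisive step of the formal proof is worth restating explicitly. The following is a compact sketch of the standard textbook argument (written here in Gaussian units for electrons of charge −e and mass m); it follows the same logic as the article's proof but is not a verbatim reconstruction of its notation:

$$ Z(\mathbf{B}) = \int \prod_{n=1}^{N} d^3r_n\, d^3p_n\; \exp\!\left[-\beta\left(\sum_{n}\frac{\left(\mathbf{p}_n + \tfrac{e}{c}\mathbf{A}(\mathbf{r}_n)\right)^{2}}{2m} + V(\mathbf{r}_1,\dots,\mathbf{r}_N)\right)\right] $$

Shifting the momentum integration variables to \(\boldsymbol{\pi}_n = \mathbf{p}_n + \tfrac{e}{c}\mathbf{A}(\mathbf{r}_n)\) at fixed positions (a translation with unit Jacobian) removes the vector potential entirely, so \(Z(\mathbf{B}) = Z(0)\). The free energy \(F = -k_B T \ln Z\) is therefore independent of the applied field, and the equilibrium magnetization vanishes:

$$ \mathbf{M} = -\frac{\partial F}{\partial \mathbf{B}} = 0 $$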
Bohr–Van Leeuwen theorem
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,322
[ "Theorems in dynamical systems", "Equations of physics", "Classical mechanics", "Statistical mechanics theorems", "Electric and magnetic fields in matter", "Eponymous theorems of physics", "Theorems in mathematical physics", "Materials science", "Mechanics", "Condensed matter physics", "Articles...
19,973,412
https://en.wikipedia.org/wiki/Isotope-coded%20affinity%20tag
An isotope-coded affinity tag (ICAT) is an in vitro isotopic labeling method used for quantitative proteomics by mass spectrometry that uses chemical labeling reagents. These chemical probes consist of three elements: a reactive group for labeling an amino acid side chain (e.g., iodoacetamide to modify cysteine residues), an isotopically coded linker, and a tag (e.g., biotin) for the affinity isolation of labeled proteins/peptides. The samples are combined, separated by chromatography, and then sent through a mass spectrometer to determine the mass-to-charge ratios of the labeled peptides. Only cysteine-containing peptides can be analysed; as a consequence, information about post-translational modifications is often lost. Development The original tags were developed using deuterium, but the same group later redesigned the tags using 13C instead to circumvent issues of peak separation during liquid chromatography caused by the deuterium interacting with the stationary phase of the column. Quantitative proteomics For the quantitative comparison of two proteomes, one sample is labeled with the isotopically light (d0) probe and the other with the isotopically heavy (d8) version. To minimize error, both samples are then combined, digested with a protease (e.g., trypsin), and subjected to avidin affinity chromatography to isolate peptides labeled with isotope-coded tagging reagents. These peptides are then analyzed by liquid chromatography-mass spectrometry (LC-MS). The ratios of signal intensities of differentially mass-tagged peptide pairs are quantified to determine the relative levels of proteins in the two samples. References Biochemistry detection methods Proteomics
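The final quantification step described above, taking ratios of signal intensities for each light/heavy peptide pair, is simple enough to illustrate in a few lines of code. This is a toy sketch with invented intensity values, not part of any ICAT or vendor software; peak detection, normalization and the assignment of peptides to proteins are assumed to have been done upstream.

```python
# Toy illustration of ICAT-style relative quantification.
# Each entry pairs the integrated MS signal of the light (d0) and heavy
# (d8 or 13C) form of one cysteine-containing peptide.  All numbers are
# invented for illustration only.
import statistics

peptide_pairs = {
    "YICDNQDTISSK": {"light": 8.0e5, "heavy": 7.6e5},
    "QNCDQFEK":     {"light": 3.1e5, "heavy": 6.4e5},
    "LQQCPFDEHVK":  {"light": 1.2e6, "heavy": 2.5e6},
}

def heavy_to_light_ratios(pairs):
    """Return the heavy/light intensity ratio for every peptide pair."""
    return {seq: v["heavy"] / v["light"] for seq, v in pairs.items()}

ratios = heavy_to_light_ratios(peptide_pairs)
for seq, r in ratios.items():
    print(f"{seq}: sample2/sample1 = {r:.2f}")

# A protein-level estimate is often taken as the median of its peptide ratios.
print("protein-level ratio (median):", round(statistics.median(ratios.values()), 2))
```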
Isotope-coded affinity tag
[ "Chemistry", "Biology" ]
393
[ "Biochemistry methods", "Chemical tests", "Biochemistry detection methods" ]
19,974,016
https://en.wikipedia.org/wiki/Diphenidol
Diphenidol is a muscarinic antagonist employed as an antiemetic and as an antivertigo agent. It is not marketed in the United States or Canada. Although the mechanism of action of diphenidol on the vestibular system has not yet been elucidated, it exerts an anticholinergic effect due to interactions with mACh receptors, particularly M1, M2, M3 and M4. Hence, its actions may take place at the vestibular nuclei, where a significant excitatory input is mediated by ACh receptors, and also at the vestibular periphery where mACh receptors are expressed at efferent synapses. A series of selective mACh-receptor antagonists based on the diphenidol molecule has been synthesized, but they have not yet been the subject of clinical trials. Synthesis Alkylation of 1-Bromo-3-chloropropane [109-70-6] (1) with piperidine (2) gives 3-Piperidinopropyl chloride [1458-63-5] (3). The Grignard reaction of this intermediate with benzophenone [119-61-9] gives the benzhydrol and hence, Diphenidol (4). References Antiemetics
Diphenidol
[ "Chemistry" ]
269
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
18,806,728
https://en.wikipedia.org/wiki/Effects%20of%20nuclear%20explosions%20on%20human%20health
The medical effects of the atomic bomb upon humans can be put into the four categories below. Larger thermonuclear weapons produce blast and thermal effects so large that there would be a negligible number of survivors close enough to the center of the blast to experience the prompt/acute radiation effects that were observed after the 16-kiloton Hiroshima bomb, owing to its relatively low yield: Initial stage—the first 1–9 weeks, in which the greatest number of deaths occur, with 90% due to thermal injury and/or blast effects and 10% due to super-lethal radiation exposure. Intermediate stage—from 10 to 12 weeks. The deaths in this period are from ionizing radiation in the median lethal range - LD50 Late period—lasting from 13 to 20 weeks. This period has some improvement in survivors' condition. Delayed period—from 20+ weeks. Characterized by numerous complications, mostly related to the healing of thermal and mechanical injuries; if the individual was exposed to a few hundred to a thousand millisieverts of radiation, these are coupled with infertility, sub-fertility and blood disorders. Furthermore, ionizing radiation at doses above around 50-100 millisieverts has been shown to statistically increase a person's chance of dying of cancer sometime in their lifetime over the normal unexposed rate of c. 25%. In the long term, a heightened rate of cancer, proportional to the dose received, would begin to be observed after c. 5+ years, with lesser problems such as eye cataracts and other more minor effects in other organs and tissue also being observed over the long term. The exposure to fallout, and therefore the total dose, of individuals further afield depends on whether they shelter in place or evacuate perpendicular to the direction of the wind (thereby avoiding contact with the fallout plume) and remain there for the days and weeks after the nuclear explosion. Those who do shelter in place and/or evacuate would experience a total dose that is negligible in comparison to someone who simply went about their life as normal. Staying indoors until the most hazardous fallout isotope, I-131, has decayed away to 0.1% of its initial quantity after ten half-lives (about 80 days in the case of I-131) could make the difference between likely contracting thyroid cancer and escaping exposure to this substance completely, depending on the actions of the individual. Some scientists estimate that a nuclear war resulting in 100 Hiroshima-size nuclear explosions on cities could cause significant loss of life, in the tens of millions, from long-term climatic effects alone. The climatology hypothesis is that if each city firestorms, a great deal of soot could be thrown up into the atmosphere, blanketing the earth and cutting out sunlight for years on end, causing the disruption of food chains, in what is termed a nuclear winter scenario. Blast effects — the initial stage Immediate post-attack period The main causes of death and disablement in this stage are thermal burns and the failure of structures resulting from the blast effect. Injury from the pressure wave is minimal in contrast, because the human body can survive up to 2 bar (30 psi) while most buildings can withstand only a 0.8 bar (12 psi) blast. Therefore, the fate of humans is closely related to the survival of the buildings around them.
Fate within certain peak overpressures: over 0.8 bar (12 psi) - 98% dead, 2% injured; 0.3-0.8 bar (5-12 psi) - 50% dead, 40% injured, 10% safe; 0.14-0.3 bar (2-5 psi) - 5% dead, 45% injured, 50% safe. Types of radioactive exposure after a nuclear attack In a nuclear explosion, the human body can experience varying types of radiation. This radiation can be classified into two groups: initial radiation and residual radiation. Initial radiation is emitted during the initial explosion, which releases short-term radionuclides. Residual radiation is emitted after the initial attack by materials that were affected by the detonation. In the event of a nuclear attack, a human body can be irradiated by at least three processes. The first, and most significant, cause of burns is thermal radiation, not ionizing radiation. Thermal burns from infrared heat radiation would be the most common burn type experienced by people. If people come in direct contact with fallout, beta burns from shallow-penetrating ionizing beta radiation will be experienced. The largest particles (visible to the naked eye) in local fallout would be likely to have very high radioactivity because they would be deposited so soon after detonation; this fraction of the total fallout is called the prompt or local fallout fraction. It is likely that one such particle upon the skin would be able to cause a localized beta burn. This local fallout, termed Bikini snow after the Pacific island weapon tests, was experienced by the crew on the deck of the Lucky Dragon fishing ship following the explosion of the 15-megaton Shrimp device in the Castle Bravo event. However, these particular decay particles (beta particles) are very weakly penetrating and have a short range, requiring almost direct contact between fallout and personnel to be harmful. Rarer still would be personnel who experience radiation burns from highly penetrating gamma radiation. This would likely cause deep gamma penetration within the body, resulting in uniform whole-body irradiation rather than only a surface burn. In cases of whole-body gamma irradiation (c. 10 Gy) due to accidents involving medical product irradiators, some of the human subjects have developed injuries to their skin between the time of irradiation and death. In a well-known photograph of a Hiroshima survivor, the normal clothing (a kimono) that the woman was wearing attenuated the far-reaching thermal radiation; the kimono, however, would naturally have been unable to attenuate any gamma radiation, had she been close enough to the weapon to experience any, and any such penetrating radiation effect would likely have been applied evenly to her entire body. Beta burns would likely occur all over the body if there was contact with fallout after the explosion, unlike thermal burns, which only ever occur on one side of the body, as infrared heat radiation does not penetrate the human body. In addition, the pattern of her clothing was burnt into the skin by the thermal radiation. This is because white fabric reflects more visible and infrared light than dark fabric; as a result, the skin underneath dark fabric is burned more than the skin covered by white clothing. There is also the risk of internal radiation poisoning by ingestion of fallout particles, if one is in a fallout zone.
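As a rough quantitative illustration of the "ten half-lives" figure quoted earlier in this article for I-131, the remaining fraction of a decaying isotope is simply 0.5 raised to the number of elapsed half-lives. The snippet below is illustrative only; the 8.02-day half-life is a standard literature value rather than something stated in this text.

```python
# Fraction of iodine-131 activity remaining after a given number of days.
# The half-life (8.02 days) is a standard literature figure.
HALF_LIFE_DAYS = 8.02

def remaining_fraction(days: float) -> float:
    """Fraction of the initial I-131 activity left after `days` days."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

print(f"after 80 days: {remaining_fraction(80):.4%}")  # ~0.1% (ten half-lives)
print(f"after 14 days: {remaining_fraction(14):.1%}")  # ~30% still remaining
```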
Radiation poisoning Radiation poisoning, also called "radiation sickness" or a "creeping dose", is a form of damage to organ tissue due to excessive exposure to ionizing radiation. The term is generally used to refer to acute problems caused by a large dosage of radiation in a short period, though this also has occurred with long-term exposure to low-level radiation. Many of the symptoms of radiation poisoning occur as ionizing radiation interferes with cell division. There are numerous lethal radiation syndromes, including prodromal syndrome, bone marrow death, central nervous system death and gastrointestinal death. Prodromal syndrome The "prodromal syndrome" is not a diagnosis, but the technical term used by health professionals to describe a specific group of symptoms that may precede the onset of an illness. For example, a fever is "prodromal" to measles, which means that a fever may be a risk factor for developing this illness. The prodromal symptoms for radiation poisoning can include symptoms such as feelings of nausea, increased thirst, loss of appetite, discomfort, fever, and diarrhea. Bone marrow death Bone marrow death is caused by a dose of radiation between 2 and 10 Gray and is characterized by the part of the bone marrow that makes the blood being broken down. Therefore, production of red and white blood cells and platelets is stopped due to loss of the blood-making stem cells (4.5 Gray kills 95% of stem cells). The loss of platelets greatly increases the chance of fatal hemorrhage, while the lack of white blood cells causes infections; the fall in red blood cells is minimal, and only causes mild anemia. The exposure to 4.5 Gray of penetrating gamma rays has many effects that occur at different times: In 24 hours: vomiting diarrhea These will usually abate after 6–7 days. Within 3–4 weeks there is a period of extreme illness. severe bloody diarrhea, indicating intestinal disorders causing fluid imbalance extensive internal bleeding sepsis infections The peak incidence of acute BM death corresponds to the 30-day nadir in blood cell numbers. The number of deaths then falls progressively until it reaches 0 at 60 days after irradiation. The amount of radiation greatly affects the probability of death. For example, over the range of 2 to 6 Gray the probability of death in untreated adults goes from about 1% to 99%, but these figures are for healthy adults. Therefore, results may differ, because of the thermal and mechanical injuries and infectious conditions. Gastrointestinal death Gastrointestinal death is caused by a dose of radiation between 10 and 50 Gray. Whole body doses cause damage to epithelial cells lining the gastrointestinal tract and this combined with the bone marrow damage is fatal. All symptoms become increasingly severe, causing exhaustion and emaciation in a few days and death within 7–14 days from loss of water and electrolytes. The symptoms of gastrointestinal death are: gastrointestinal pain anorexia nausea vomiting diarrhea Central nervous system death Central nervous system death is the main cause of death in 24–48 hours among those exposed to 50 Gray. The symptoms are: vomiting nausea diarrhea drowsiness lethargy tremors delirium frequent seizures convulsions heat prostration coma respiratory failure death Short-term effects (6–8 weeks) Skin The skin is susceptible to beta-emitting radioactive fallout. The principal site of damage is the germinal layer, and often the initial response is erythema (reddening) due to blood vessels congestion and edema. 
Erythema lasting more than 10 days occurs in 50% of people exposed to 5-6 Gray. Other effects with exposure include: 2–3 Gray—temporary hair loss 7 Gray—permanent epilation occurs 10 Gray—itching and flaking occurs 10–20 Gray—weeping blistering and ulceration will occur Lungs The lungs are the most radiosensitive organ, and radiation pneumonitis can occur leading to pulmonary insufficiency and death (100% after exposure to 50 Gray of radiation), in a few months. Radiation pneumonitis is characterized by: Loss of epithelial cells Edema Inflammation Occlusions of airways, air sacs and blood vessels Fibrosis Ovaries A single dose of 1–2 Gray will cause temporary damage and suppress menstruation for periods up to 3 years; a dose of 4 Gray will cause permanent sterility. Testicles A dose of 0.1 Gray will cause low sperm counts for up to a year; 2.5 Gray will cause sterility for 2 to 3 years or more. 4 Gray will cause permanent sterility. Long-term effects Cataract induction The timespan for developing this symptom ranges from 6 months to 30 years to develop but the median time for developing them is 2–3 years. 2 Gray of gamma rays cause opacities in a few percent 6-7 Gray can seriously impair vision and cause cataracts Cancer induction Cancer induction is the most significant long-term risk of exposure to a nuclear bomb. Approximately 1 out of every 80 people exposed to 1 Gray will die from cancer, in addition to the normal rate of 20 out of 80. About 1 in 40 people will get cancer, in addition to the typical rates of 16-20 out of 40. Different types of cancer take different times for them to appear: 2 years for leukemia to appear 20 or more years for skin cancer or lung cancer In utero effects on human development A 1 Gy dose of radiation will cause between 0 and 20 extra cases of perinatal mortality, per 1,000 births and 0-20 cases per 1000 births of severe mental sub-normality. A 0.05 Gy dose will increase death due to cancer 10 fold, from the normal 0.5 per 1000 birth rate to a rate of 5 per 1,000. An antenatal dose of 1 Gy in the first trimester causes the lifetime risk of fatal cancer sometime in the child's life to increase from c. 25% in non-exposed humans to 100% in the first trimester after exposure. Transgenerational genetic damage Exposure to even relatively low doses of radiation generates genetic damage in the progeny of irradiated rodents. This damage can accumulate over several generations. No statistically demonstrable increase of congenital malformations was found among the later conceived children born to survivors of the Nuclear weapons at Hiroshima and Nagasaki. The surviving women of Hiroshima and Nagasaki, that could conceive, who were exposed to substantial amounts of radiation, went on and had children with no higher incidence of abnormalities than the Japanese average. 
Infectious diseases resulting from nuclear attack It was assumed in the 1983 book Medical Consequences of Radiation Following a Global Nuclear War that, although not caused by radiation, one of the long-term effects of a nuclear war would be a massive increase in infectious diseases caused by fecal matter contaminated water from untreated sewage, crowded living conditions, poor standard of living, and lack of vaccines in the aftermath of a nuclear war, with the following list of diseases being cited: Dysentery Typhoid Infectious hepatitis Salmonellosis Cholera Meningococcal meningitis Tuberculosis Diphtheria Whooping cough Polio Pneumonia There would be billions of disease carrying vectors, in the form of city residents, lying deceased in cities caused by the direct nuclear weapons effects alone, with the surviving few billion people spread out in rural communities living agrarian lifestyles, with the survivors therefore posing a way of living far less prone to creating the crowded slum living conditions required for infectious diseases to spread. Moreover, as reported in a paper published in the journal Public Health Reports, it is also one of a number of prevalent myths that infectious diseases always occur after a disaster in cities. See also Blast shelter Fallout shelter Textbook of Military Medicine Notes Nuclear weapons Radiobiology Radiation health effects
Effects of nuclear explosions on human health
[ "Chemistry", "Materials_science", "Biology" ]
3,017
[ "Radiobiology", "Radiation effects", "Radiation health effects", "Radioactivity" ]
18,812,368
https://en.wikipedia.org/wiki/Phalanx%20Biotech%20Group
Phalanx Biotech Group was founded in 2002 as a result of collaboration between Taiwan's Industrial Technology Research Institute (ITRI) and several private companies and research institutes. It is a manufacturer of DNA microarrays and a provider of gene expression profiling and microRNA profiling services based in Hsinchu, Taiwan, San Diego, California, Shanghai, China, and in Beijing, China. The company sells its DNA microarrays and service platform under the registered trademark name OneArray. Phalanx Biotech Group is a member of the FDA-led Microarray Quality Control Project. Description of Products and Services Phalanx Biotech Group is a manufacturer and provider of DNA microarray products and services used for gene expression profiling and miRNA profiling. Human, Mouse, Rat and Yeast whole genome OneArray DNA microarrays are manufactured and used for gene expression profiling products and services. The miRNA profiling products and services include miRNA OneArray microarrays and related services for Human, Rodent, and many Model organism and Plant species. Other than the OneArray services, Phalanx also offers Agilent microarray services, qPCR services, PCR array profiling services, and NGS services. Each one of these services can be accompanied by an extensive, customizable bioinformatics package. Manufacturing The DNA microarrays are produced using a patented non-contact inkjet deposition of intact oligonucleotides. This is performed using a patented inkjet dispensing apparatus. The oligonucleotides are deposited on a standard size 25mm X 75mm glass slide. Milestones See also List of companies of Taiwan References External links Phalanx Biotech Group Company Website 2002 establishments in Taiwan Companies based in Hsinchu Biotechnology companies established in 2002 Biotechnology companies of Taiwan Taiwanese brands Microarrays
Phalanx Biotech Group
[ "Chemistry", "Materials_science", "Biology" ]
392
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
18,814,533
https://en.wikipedia.org/wiki/Pulse-swallowing%20counter
A pulse-swallowing counter is a component in an all-digital feedback system. The divider produces one output pulse for every N counts (N is usually a power of 2) when not swallowing, and per N+1 pulses when the 'swallow' signal is active. The overall pulse-swallowing system is used as part of a fractional-N frequency divider. The overall pulse-swallowing system cancels beatnotes created when switching between N, N+1, or N−1 in a fractional-N synthesizer. References Control theory
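To make the divide-by-N versus divide-by-(N+1) behaviour concrete, the sketch below simulates a dual-modulus divider in which one input pulse is "swallowed" on every k-th output cycle, yielding an average division ratio of N + 1/k. It is an illustrative model of the general technique only, not of any particular device; the values N = 8 and k = 4 are arbitrary.

```python
# Toy model of a pulse-swallowing (dual-modulus) divider.
# Normally one output pulse is produced per N input pulses; on every
# `swallow_every`-th output cycle one extra input pulse is swallowed,
# so that cycle takes N + 1 input pulses.
def pulse_swallow_divider(num_input_pulses: int, n: int, swallow_every: int) -> int:
    """Return the number of output pulses produced for the given input pulses."""
    outputs = 0
    count = 0
    limit = n            # input pulses needed for the next output pulse
    for _ in range(num_input_pulses):
        count += 1
        if count == limit:
            outputs += 1
            count = 0
            # swallow one extra pulse on every `swallow_every`-th output cycle
            limit = n + 1 if outputs % swallow_every == 0 else n
    return outputs

pulses_in = 8_250
pulses_out = pulse_swallow_divider(pulses_in, n=8, swallow_every=4)
print(pulses_out, pulses_in / pulses_out)  # 1000 outputs -> average ratio 8.25
```

Averaged over many cycles the divider therefore realizes a fractional ratio, which is exactly the role the text describes for it inside a fractional-N frequency synthesizer.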
Pulse-swallowing counter
[ "Mathematics" ]
111
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
26,951,189
https://en.wikipedia.org/wiki/Jen%C5%91%20Ruffinyi
Jenő Ruffinyi (; 1 March 1846 in Dobsina, Hungary – 13 January 1924 in Dobšiná, Czechoslovakia) was a Hungarian mining engineer and amateur speleologist who, together with Gustav Lang and Andreas Méga, was the first to explore Dobšiná Ice Cave (). Background and education The Ruffinyi family had moved to Dobsina (today Dobšiná, Slovakia) from Italy because Jenő's father accepted a position as a mining engineer in this city. The original name of his family was Ruffini. Jenő attended primary school in Dobsina, and high school in Késmárk (today Kežmarok, Slovakia). He then took up mining studies in Selmecbánya (today Banská Štiavnica, Slovakia), where he earned his degree in 1869. On his return to Dobsina, he became a mining overseer and metallurgical inspector. Exploration of the Ice Cave While touring Ducsa Hill near Dobsina with his friend, Lt. Gustav Lang, in 1869, Ruffinyi threw a stone into an opening in the rock that was known as Cold Hole () in local lore. From the multiple and delayed echoes, the men got the impression that a substantial cave must extend beyond the entrance and decided to return for an exploration. On June 15, 1870, Ruffinyi was the first to enter the cave, sliding down on a hemp rope. Honors The elementary school in Dobšiná, Jenő Ruffinyi Elementary School (), is named after Ruffinyi. References Sources Krenner J.S., Die Eishöhle von Dobschau. K. Ungar. Naturwissenschaftlichen Gesellschaft, 1874 https://web.archive.org/web/20160307081514/http://www.macse.org/gravestones/byname.aspx?i=R&i2=Ru&l=Ruffinyi&pn=0&pg=0&id=2838 1846 births 1924 deaths People from Rožňava District 19th-century Hungarian people 20th-century Hungarian people Speleologists Mining engineers Hungarian people of Italian descent Engineers from Austria-Hungary
Jenő Ruffinyi
[ "Engineering" ]
465
[ "Mining engineering", "Mining engineers" ]
26,952,327
https://en.wikipedia.org/wiki/Release%20management
Release management is the process of managing, planning, scheduling and controlling a software build through different stages and environments; it includes testing and deploying software releases. Relationship with processes Organizations that have adopted agile software development are seeing much higher quantities of releases. With the increasing popularity of agile development a new approach to software releases known as continuous delivery is starting to influence how software transitions from development to a release. One goal of continuous delivery and DevOps is to release more reliable applications faster and more frequently. The movement of the application from a "build" through different environments to production as a "release" is part of the continuous delivery pipeline. Release managers are beginning to utilize tools such as application release automation and continuous integration tools to help advance the process of continuous delivery and incorporate a culture of DevOps by automating a task so that it can be done more quickly, reliably, and is repeatable. More software releases have led to increased reliance on release management and automation tools to execute these complex application release processes. Relationship with ITIL/ITSM In organizations that manage IT operations using the IT service management paradigm, specifically the ITIL framework, release management will be guided by ITIL concepts and principles. There are several formal ITIL processes that are related to release management, primarily the release and deployment management process, which "aims to plan, schedule and control the movement of releases to test and live environments", and the change enablement process. In ITIL organizations, releases tend to be less frequent than in an agile development environment. Release processes are managed by IT operations teams using IT service management ticketing systems, with less focus on automation of release processes. References External links "Current Trends in Release Engineering 2016" - Academic Course by Software Construction Research Group, RWTH Aachen, Germany# Release and Deployment Management in the ITIL Framework Software project management Version control Software release
Release management
[ "Engineering" ]
378
[ "Software engineering", "Version control" ]
26,954,391
https://en.wikipedia.org/wiki/FoldX
FoldX is a protein design algorithm that uses an empirical force field. It can determine the energetic effect of point mutations as well as the interaction energy of protein complexes (including Protein-DNA). FoldX can mutate protein and DNA side chains using a probability-based rotamer library, while exploring alternative conformations of the surrounding side chains. Applications Prediction of the effect of point mutations or human SNPs on protein stability or protein complexes Protein design to improve stability or modify affinity or specificity Homology modeling The FoldX force field The energy function includes terms that have been found to be important for protein stability, where the energy of unfolding (∆G) of a target protein is calculated using the equation: ∆G = ∆Gvdw + ∆GsolvH + ∆GsolvP + ∆Ghbond + ∆Gwb + ∆Gel + ∆Smc + ∆Ssc Where ∆Gvdw is the sum of the Van der Waals contributions of all atoms with respect to the same interactions with the solvent. ∆GsolvH and ∆GsolvP is the difference in solvation energy for apolar and polar groups, respectively, when going from the unfolded to the folded state. ∆Ghbond is the free energy difference between the formation of an intra-molecular hydrogen-bond compared to inter-molecular hydrogen-bond formation (with solvent). ∆Gwb is the extra stabilizing free energy provided by a water molecule making more than one hydrogen-bond to the protein (water bridges) that cannot be taken into account with non-explicit solvent approximations. ∆Gel is the electrostatic contribution of charged groups, including the helix dipole. ∆Smc is the entropy cost for fixing the backbone in the folded state. This term is dependent on the intrinsic tendency of a particular amino acid to adopt certain dihedral angles. ∆Ssc is the entropic cost of fixing a side chain in a particular conformation. The energy values of ∆Gvdw, ∆GsolvH, ∆GsolvP and ∆Ghbond attributed to each atom type have been derived from a set of experimental data, and ∆Smc and ∆Ssc have been taken from theoretical estimates. The Van der Waals contributions are derived from vapor to water energy transfer, while in the protein we are going from solvent to protein. For protein-protein interactions, or protein-DNA interactions FoldX calculates ∆∆G of interaction : ∆∆Gab = ∆Gab- (∆Ga + ∆Gb) + ∆Gkon + ∆Ssc ∆Gkon reflects the effect of electrostatic interactions on the kon. ∆Ssc is the loss of translational and rotational entropy upon making the complex. Key features RepairPDB: energy minimization of a protein structure BuildModel: in silico mutagenesis or homology modeling with predicted energy changes AnalyseComplex: interaction energy calculation Stability: prediction of free energy changes between alternative structures AlaScan: in silico alanine scan of a protein structure with predicted energy changes SequenceDetail: per residue free energy decomposition into separate energy terms (hydrogen bonding, Van der Waals energy, electrostatics, ...) Graphical interface Native FoldX is run from the command line. A FoldX plugin for the YASARA molecular graphics program has been developed to access various FoldX tools inside a graphical environment. The results of e.g. in silico mutations or homology modeling with FoldX can be directly analyzed on screen. Molecule Parametrization In version 5.0, the possibility to parametrize previously not recognized molecules in JSON format was added into the software. 
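The interaction-energy bookkeeping described above is straightforward to express in code. The sketch below merely evaluates the ∆∆G expression given in the text with placeholder numbers; it is not part of FoldX and does not attempt to reproduce how the program computes or reports its energy terms.

```python
# Illustration of the interaction-energy expression quoted in the text:
#   ddG_AB = dG_AB - (dG_A + dG_B) + dG_kon + dS_sc
# All values are placeholders in kcal/mol, not real FoldX output.
def interaction_energy(dG_complex: float, dG_a: float, dG_b: float,
                       dG_kon: float = 0.0, dS_sc: float = 0.0) -> float:
    """Free energy of interaction of complex AB relative to free A and B."""
    return dG_complex - (dG_a + dG_b) + dG_kon + dS_sc

ddG = interaction_energy(dG_complex=-25.4, dG_a=-12.1, dG_b=-6.8,
                         dG_kon=-0.3, dS_sc=1.9)
print(f"ddG of interaction: {ddG:.2f} kcal/mol")  # -4.90 with these placeholders
```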
Further reading External links http://foldx.crg.es FoldX website http://foldxyasara.switchlab.org FoldX plugin for YASARA Molecular modelling software
FoldX
[ "Chemistry" ]
800
[ "Molecular modelling", "Molecular modelling software", "Computational chemistry software" ]
26,955,958
https://en.wikipedia.org/wiki/Counterfeit%20electronic%20component
Counterfeit electronic components are electronic parts whose origin or quality is deliberately misrepresented. Counterfeiting of electronic components can infringe on the legitimate producer's trademark rights. The marketing of electronic components has been commoditized, making it easier for counterfeiters to make it out into the supply chain. Trends According to a January 2010 study by the US Department of Commerce Bureau of Industry and Security, the number of counterfeit incidents reported grew from 3,868 in 2005 to 9,356 in 2008. 387 respondents to the survey cited the two most common types of counterfeit components: 'blatant' fakes and used products re-marked as higher grade. The World Semiconductor Trade Statistics estimate that the global total addressable market (TAM) for semiconductors is in excess of $200 billion. This increase in instances of counterfeit products entering the supply chain is characterized by globalization and the industries in China. On December 11, 2001, China was admitted to the WTO, which lifted the ban on exports by non-government owned and controlled business entities. In late 1989, the Basel Convention was adopted in Basel, Switzerland. Most developed countries have adopted this convention, with the major exception of the US. During this period, the United States has primarily exported its e-waste to China, where e-waste is recycled. Counterfeiting techniques The alteration of existing units is done through sanding and re-marking, blacktopping and re-marking, or similar methods of concealing the original manufacturer. Other strategies involve device substitution and die salvaging, where cheaper or used components are passed off as new or more expensive ones. Manufacturing rejects may also be repurposed and sold as new, and component leads may be re-attached to give the illusion of a new, unused product. Packaging can also be relabeled. Avoidance strategies Some known counterfeiting-detecting strategies include: DNA marking – Botanical DNA as developed by Applied DNA Sciences and required by the DoD's Defense Logistics Agency for certain 'high-risk' microcircuits. X-Ray inspection X-RF Inspection X-ray fluorescence spectroscopy can be used to confirm RoHS status. Decapsulation – By removing the external packaging on a semiconductor and exposing the semiconductor wafer, microscopic inspection of brand marks and trademarks, and laser die etching. SAM (scanning acoustic microscope) Parametric testing, a.k.a., curve tracing Leak testing (gross leaks and fine leaks) of hermetically sealed components Stereo microscope, metallurgical microscope Solderability testing For military products: QPL – Qualified Product List QML – Qualified Manufacturers List QSLD – Qualified Suppliers List of Distributors QTSL – Qualified Testing Suppliers List Policies The formation of the G-19 Counterfeit Electronic Components Committee was introduced. In April 2009, SAE International released AS5553 Counterfeit Electronic Parts; Avoidance, Detection, Mitigation, and Disposition. AS6081 was issued in November 2012 and adopted by the DoD. AS6081 requires the purchased products to go through external visual inspections and radiological examinations. Originally implemented in January 2013, AS5553A was expanded. See also Capacitor plague Counterfeit consumer goods Supply-chain security References Electrical components Forgery
Counterfeit electronic component
[ "Technology", "Engineering" ]
671
[ "Electrical engineering", "Electrical components", "Components" ]
26,956,225
https://en.wikipedia.org/wiki/Riparian-zone%20restoration
Riparian-zone restoration is the ecological restoration of riparian-zone habitats of streams, rivers, springs, lakes, floodplains, and other hydrologic ecologies. A riparian zone or riparian area is the interface between land and a river or stream. Riparian is also the proper nomenclature for one of the fifteen terrestrial biomes of the earth; the habitats of plant and animal communities along the margins and river banks are called riparian vegetation, characterized by aquatic plants and animals that favor them. Riparian zones are significant in ecology, environmental management, and civil engineering because of their role in soil conservation, their habitat biodiversity, and the influence they have on fauna and aquatic ecosystems, including grassland, woodland, wetland or sub-surface features such as water tables. In some regions the terms riparian woodland, riparian forest, riparian buffer zone, or riparian strip are used to characterize a riparian zone. The perceived need for riparian-zone restoration has come about because riparian zones have been altered and/or degraded throughout much of the world by the activities of mankind affecting natural geologic forces. The unique biodiversity of riparian ecosystems and the potential benefits that natural, vegetated riparian have to offer in preventing erosion, maintaining water quality that ranges from being decent to completely healthy, providing habitat and wildlife corridors, and maintaining the health of in-stream biota (aquatic organisms) has led to a surge of restoration activities aimed at riparian ecosystems in the last few decades. Restoration efforts are typically guided by an ecological understanding of riparian-zone processes and knowledge of the causes of degradation. They are often interdependent with stream restoration projects. Causes of riparian-zone degradation Riparian-zone disturbance falls into two main categories: hydrologic modifications that indirectly impact riparian communities through changes in stream morphology and hydrologic processes, and habitat alterations that result in direct modification of riparian communities through land clearing or disturbance. Hydrologic modifications Dams and diversions Dams are built on rivers primarily to store water for human use, generate hydroelectric power, and/or control flooding. Natural riparian ecosystems upstream of dams can be destroyed when newly created reservoirs inundate riparian habitat. Dams can also cause substantial changes in downstream riparian communities by altering the magnitude, frequency, and timing of flood events and reducing the amount of sediment and nutrients delivered from upstream. Diverting water from stream channels for agricultural, industrial, and human use reduces the volume of water flowing downstream, and can have similar effects. In a natural riparian system, periodic flooding can remove sections of riparian vegetation. This leaves portions of the floodplain available for regeneration and effectively “resets” the successional timeline. Frequent disturbance naturally favors many early-successional (pioneer) riparian species. Many studies show that a reduction in flooding due to dams and diversions can allow community succession to progress beyond a typical stage, causing changes in community structure. Changing flood regimes can be especially problematic when exotic species are favored by altered conditions. For example, dam regulation changes floodplain hydrology in the southwest US by impeding annual flooding cycles. 
This modification has been implicated in the dominance of saltcedar (Tamarix chinensis) over the native cottonwood (Populus deltoides). Cottonwoods were found to be competitively superior to saltcedar when flooding allowed seeds of both species to cogerminate. However, the lack of flooding caused by altered hydrology creates more favorable conditions for the germination of saltcedar over cottonwoods. Groundwater withdrawals Riparian zones are characterized by a distinct community of plant species that are physiologically adapted to a greater amount of freshwater than upland species. In addition to having frequent direct contact with surface water through periodic rises in stream water levels and flooding, riparian zones are also characterized by their proximity to groundwater. Particularly in arid regions, shallow groundwater, seeps, and springs provide a more constant source of water to riparian vegetation than occasional flooding. By reducing the availability of water, groundwater withdrawals can impact the health of riparian vegetation. For example, Fremont cottonwood (Populus fremontii), and San Joaquin willow (Salix gooddingii), common riparian species in Arizona, were found to have more dead branches and experienced greater mortality with decreasing groundwater levels. Plant community composition can change dramatically over a gradient of groundwater depth: plants that can only survive in wetland conditions can be replaced by plants that are tolerant of drier conditions as groundwater levels are reduced, causing habitat community shifts and in some cases complete loss of riparian species. Studies have also shown that decreases in groundwater levels may favor the invasion and persistence of certain exotic invasive species such as Saltcedar (Tamarix chinensis), which do not appear to show the same degree of physiologic water stress as native species when subjected to lower groundwater levels. Stream channelization and levee construction Stream channelization is the process of engineering straighter, wider, and deeper stream channels, usually for improved navigation, wetland drainage, and/or faster transport of flood waters downstream. Levees are often constructed in conjunction with channelization to protect human development and agricultural fields from flooding. Riparian vegetation can be directly removed or damaged during and after the channelization process. In addition, channelization and levee construction modify the natural hydrology of a stream system. As water flows through a natural stream, meanders are created when faster flowing water erodes outer banks and slower flowing water deposits sediment on inner banks. Many riparian plant species depend on these areas of new sediment deposition for germination and establishment of seedlings. Channel straightening and levee construction eliminate these areas of deposition, creating unfavorable conditions for riparian vegetation recruitment. By preventing overbank flooding, levees reduce the amount of water available to riparian vegetation in the floodplain, which alters the types of vegetation that can persist in these conditions. A lack of flooding has been shown to decrease the amount of habitat heterogeneity in riparian ecosystems as wetland depressions in the floodplain no longer fill and hold water. Because habitat heterogeneity is correlated with species diversity, levees can cause reductions in the overall biodiversity of riparian ecosystems. 
Habitat alteration Land clearing In many places around the world, vegetation within riparian zones has been completely removed as humans have cleared land for raising crops, growing timber, and developing land for commercial or residential purposes. Removing riparian vegetation increases the erodibility of stream banks, and can also speed the rate of channel migration (unless the newly cleared banks are lined with riprap, retaining walls, or concrete). In addition, removal of riparian vegetation fragments the remaining riparian ecosystem, which can prevent or hinder dispersal of species between habitat patches. This can diminish riparian plant diversity, as well as decrease abundances and diversity of migratory birds or other species that depend on large, undisturbed areas of habitat. Fragmentation can also prevent gene flow between isolated riparian patches, reducing genetic diversity. Livestock grazing Cattle have a propensity to aggregate around water, which can be detrimental to riparian ecosystems. While native ungulates such as deer are commonly found in riparian zones, livestock may trample or graze down native plants, creating an unnatural amount and type of disturbance that riparian species have not evolved to tolerate. Livestock grazing has been shown to reduce areal cover of native plant species, create disturbance frequencies that favor exotic annual weeds, and alter plant community composition. For example, in an arid South African ecosystem, grazing was found to cause a reduction of grasses, sedges, and tree species and an increase in non-succulent shrubs. On agricultural land, fencing off waterways and riparian restoration has been shown to improve water quality, though this is more effective at reducing pollution from surface runoff (such as from phosphorus) rather than contaminants such as nitrogen which reach the waterway by seeping through the soil. Fencing prevent stock from depositing feces directly into waterways and trampling the banks; planting reduces surface runoff. Trampling can increase erosion and decrease the filtration capacity of the soil, especially where animals create tracks, and fences can encourage the creation of tracks and wallows, creating a conduit for pollution that can overwhelm the effects of riparian restoration. One study of fencing a waterway on a deer farm reduced contaminants, including the indicator bacterium E. coli, by 55–84%, but nitrate concentrations doubled, and suspended sediment was increased from animals creating tracks along the fences. Mining Mining stream channels for sand and gravel can impact riparian zones by destroying habitat directly, removing groundwater through pumping, altering stream channel morphology, and changing sediment flow regimes. Conversely, mining activities in the floodplain can create favorable areas for the establishment of riparian vegetation (e.g., cottonwoods) along streams where natural recruitment processes have been impacted through other forms of human activity. Mining for metals can impact riparian zones when toxic materials accumulate in sediments. Invasive exotics The number and diversity of invasive exotic species in riparian ecosystems is increasing worldwide. Riparian zones may be particularly vulnerable to invasion due to frequent habitat disturbance (both natural and anthropogenic) and the efficiency of rivers and streams in dispersing propagules. Invasive species can greatly impact the ecosystem structure and function of riparian zones. 
For example, the higher biomass of dense stands of the invasive Acacia mearnsii and Eucalyptus species causes greater water consumption and thus lower water levels in streams in South Africa. Invasive plants can also cause changes in the amount of sediment that is trapped by vegetation, altering channel morphology, and can increase the flammability of the vegetation, increasing fire frequency. Exotic animals can also impact riparian zones. For example, feral burros along the Santa Maria river strip bark and cambium off native cottonwoods, causing tree mortality. Methods Methods for restoring riparian zones are often determined by the cause of degradation. Two main approaches are used in riparian-zone restoration: restoring hydrologic processes and geomorphic features, and reestablishing native riparian vegetation. Restoring hydrologic processes and geomorphic features When altered flow regimes have impacted riparian zone health, re-establishing natural streamflow may be the best solution to effectively restore riparian ecosystems. The complete removal of dams and flow-altering structures may be required to fully restore historic conditions, but this is not always realistic or feasible. An alternative to dam removal is for periodic flood pulses consistent with historical magnitude and timing to be simulated by releasing large amounts of water at once instead of maintaining more consistent flows throughout the year. This would allow overbank flooding, which is vital for maintaining the health of many riparian ecosystems. However, simply restoring a more natural flow regime also has logistical constraints, as legally appropriated water rights may not include the maintenance of such ecologically important factors. Reductions in groundwater pumping may also help restore riparian ecosystems by reestablishing groundwater levels that favor riparian vegetation; however, this too can be hampered by the fact that groundwater withdrawal regulations do not usually incorporate provisions for riparian protection. The negative effects of channelization on stream and riparian health can be lessened through physical restoration of the stream channel. This can be accomplished by restoring flow to historic channels, or through the creation of new channels. In order for restoration to be successful, particularly for the creation of entirely new channels, restoration plans must take into account the geomorphic potential of the individual stream and tailor restoration methods accordingly. This is typically done through examination of reference streams (physically and ecologically similar streams in stable, natural condition) and by methods of stream classification based on morphological features. Stream channels are typically designed to be narrow enough to overflow into the floodplain on a 1.5 to 2 year timescale. The goal of geomorphic restoration is to eventually restore hydrologic processes important to riparian and instream ecosystems. However, this type of restoration can be logistically difficult: in many cases, the initial straightening or modification of the channel has resulted in humans encroaching into the former floodplain through development, agriculture, etc. In addition, stream channel modification can be extremely costly. One well-known example of a large-scale stream restoration project is the Kissimmee River Restoration Project in central Florida. The Kissimmee River was channelized between 1962 and 1971 for flood control, turning a meandering of river into a drainage canal. 
This effectively eliminated seasonal inundation of the floodplain, causing a conversion from wetland to upland communities. A restoration plan began in 1999 with the goal of reestablishing the ecological integrity of the river-floodplain system. The project involves dechannelizing major sections of the river, directing water into reconstructed channels, removing water control structures, and changing flow regimes to restore seasonal flooding to the floodplain. Since the completion of the first phase of restoration, a number of improvements in vegetation and wildlife communities have been documented as the conversion from uplands back to wetlands has begun to take place. Breaching levees to reconnect streams to their floodplains can be an effective form of restoration as well. On the Cosumnes River in central California, for example, the return of seasonal flooding to the floodplain as a result of levee breaching was found to result in the reestablishment of primarily native riparian plant communities. Dechannelisation of shorter reaches combined with lowered levees has also proved to be an effective restoration approach when paired with a natural (or near-natural) flooding regime, improving the spatial and temporal heterogeneity of soil processes that is typical of natural floodplains. Stream channels will often recover from channelization without human intervention, provided that humans do not continue to maintain or modify the channel. Gradually, channel beds and stream banks will begin to accumulate sediment, meanders will form, and woody vegetation will take hold, stabilizing the banks. However, this process may take decades: a study found stream channel regeneration took approximately 65 years in channelized streams in West Tennessee. More active methods of restoration may speed the process along. Restoration of riparian vegetation The revegetation of degraded riparian zones is a common practice in riparian restoration. Revegetation can be accomplished through active or passive means, or a combination of the two. Active vegetation restoration A lack of naturally available propagules can be a major limiting factor in restoration success. Therefore, actively planting native vegetation is often crucial for the successful establishment of riparian species. Common methods for actively restoring vegetation include broadcast sowing seed and directly planting seeds, plugs, or seedlings. Reestablishing clonal species such as willows can often be accomplished by simply putting cuttings directly into the ground. To increase survival rates, young plants may need to be protected from herbivory with fencing or tree shelters. Preliminary research suggests that direct-seeding woody species may be more cost-effective than planting container stock. Reference sites are often used to determine appropriate species to plant and may be used as sources for seeds or cuttings. Reference communities serve as models for what restoration sites should ideally look like after restoration is complete. Concerns about using reference sites have been raised, however, as conditions at the restored and reference sites may not be similar enough to support the same species. Also, restored riparian zones may be able to support a variety of possible species combinations; therefore, the Society for Ecological Restoration recommends using multiple reference sites to formulate restoration goals.
A practical question in active vegetation restoration is whether certain plants facilitate the recruitment and persistence of other plants (as predicted by theories of succession), or whether initial community composition determines long-term community composition (priority effects). If the former applies, it may be more effective to plant facilitative species first, and wait to plant dependent species as conditions become appropriate (e.g., when enough shade is provided by overstory species). If the latter applies, it is probably best to plant all desired species at the outset. As a critical component of restoring native riparian communities, restoration practitioners often have to remove invasive species and prevent them from reestablishing. This can be accomplished through herbicide application, mechanical removal, etc. When restoration is to be done on long stretches of rivers and streams, it is often useful to begin the project upstream and work downstream so that propagules from exotic species upstream will not hamper restoration attempts. Ensuring the establishment of native species is considered vital in preventing future colonizations of exotic plants. Passive vegetation restoration Active planting of riparian vegetation may be the fastest way to reestablish riparian ecosystems, but methods may be prohibitively resource-intensive. Riparian vegetation may come back on its own if human-induced disturbances are stopped and/or hydrologic processes are restored. For example, many studies show that preventing cattle grazing in riparian zones through exclusion fencing can allow riparian vegetation to rapidly increase in robustness and cover, and also shift to a more natural community composition. By simply restoring hydrologic processes such as periodic flooding that favor riparian vegetation, native communities may regenerate on their own (e.g., the Cosumnes River floodplain). The successful recruitment of native species will depend on whether local or upstream seed sources can successfully disperse propagules to the restoration site, or whether a native seed bank is present. One potential hindrance to passive vegetation restoration is that exotic species may preferentially colonize the riparian zone. Active weeding may improve the chances that the desired native plant community will reestablish. Restoring animal life Restoration often focuses on reestablishing plant communities, probably because plants form the foundation for other organisms within the community. Restoration of faunal communities often follows the “Field of Dreams” hypothesis: “if you build it, they will come”. Many animal species have been found to naturally recolonize areas where habitat has been restored. For example, abundances of several bird species showed marked increases after riparian vegetation had been reestablished in a riparian corridor in Iowa. Some riparian restoration efforts may be aimed at conserving particular animal species of concern, such as the Valley elderberry longhorn beetle in central California, which is dependent on a riparian tree species (blue elderberry, Sambucus mexicana) as its sole host plant. When restoration efforts target key species, consideration for individual species’ needs (e.g., minimum width or extent of riparian vegetation) are important for ensuring restoration success. 
Ecosystem perspectives Restoration failures may occur when appropriate ecosystem conditions are not reestablished, such as soil characteristics (e.g., salinity, pH, beneficial soil biota, etc.), surface water and groundwater levels, and flow regimes. Therefore, successful restoration may be dependent on taking a number of both biotic and abiotic factors into account. For example, restoration of soil biota, including symbiotic mycorrhizae, invertebrates, and microorganisms, may improve nutrient cycling dynamics. Restoration of physical processes may be a prerequisite to the reestablishment of healthy riparian communities. Ultimately, a combination of approaches taking into account causes for degradation and targeting both hydrology and the reestablishment of vegetation and other life forms may be most effective in riparian zone restoration. See also Buffer strip Canebrake Constructed wetland Drainage system (agriculture) Environmental restoration Infiltration (hydrology) Land rehabilitation Limnology Restoration ecology Revetment Riprap Watertable control Notes References Ecological connectivity Ecological restoration Habitat Hydrology and urban planning Riparian zone Water and the environment Water streams
Riparian-zone restoration
[ "Chemistry", "Engineering", "Environmental_science" ]
4,002
[ "Hydrology", "Ecological restoration", "Hydrology and urban planning", "Environmental engineering", "Riparian zone" ]
26,957,755
https://en.wikipedia.org/wiki/Space%20travel%20under%20constant%20acceleration
Space travel under constant acceleration is a hypothetical method of space travel that involves the use of a propulsion system that generates a constant acceleration rather than the short, impulsive thrusts produced by traditional chemical rockets. For the first half of the journey the propulsion system would constantly accelerate the spacecraft toward its destination, and for the second half of the journey it would constantly decelerate the spaceship. Constant acceleration could be used to achieve relativistic speeds, making it a potential means of achieving human interstellar travel. This mode of travel has yet to be used in practice. Constant-acceleration drives Constant acceleration has two main advantages: It is the fastest form of interplanetary and interstellar travel. It creates its own artificial gravity, potentially sparing passengers from the effects of microgravity. Constant thrust versus constant acceleration Constant-thrust and constant-acceleration trajectories both involve a spacecraft firing its engine continuously. In a constant-thrust trajectory, the vehicle's acceleration increases during the thrusting period, since the use of fuel decreases the vehicle mass. If, instead of constant thrust, the vehicle has constant acceleration, the engine thrust decreases during the journey. The spacecraft must flip its orientation halfway through the journey and decelerate the rest of the way, if it is required to rendezvous with its destination (as opposed to a flyby). Interstellar travel A spaceship using significant constant acceleration will approach the speed of light over interstellar distances, so special relativity effects including time dilation (the difference in time flow between ship time and local time) become important. Expressions for covered distance and elapsed time The distance traveled, under constant proper acceleration, from the point of view of Earth as a function of the traveler's time is expressed by the coordinate distance x as a function of proper time τ at constant proper acceleration a. It is given by: x(τ) = (c²/a)(cosh(aτ/c) − 1), where c is the speed of light. Under the same circumstances, the time elapsed on Earth (the coordinate time) as a function of the traveler's time is given by: t(τ) = (c/a) sinh(aτ/c). Feasibility A limitation of constant acceleration is adequate fuel. Constant acceleration is only feasible with the development of fuels with a much higher specific impulse than presently available. There are two broad approaches to higher specific impulse propulsion: Higher efficiency fuel (the motor ship approach). Two possibilities for the motor ship approach are nuclear and matter–antimatter based fuels. Drawing propulsion energy from the environment as the ship passes through it (the sailing ship approach). One hypothetical sailing ship approach is discovering something equivalent to the parallelogram of force between wind and water which allows sails to propel a sailing ship. Picking up fuel along the way — the ramjet approach — will lose efficiency as the spacecraft's speed increases relative to the planetary reference frame. This happens because the fuel must be accelerated to the spaceship's velocity before its energy can be extracted, and that will cut the fuel efficiency dramatically. A related issue is drag. If the near-light-speed spacecraft is interacting with matter that is moving slowly in the planetary reference frame, this will cause drag which will bleed off a portion of the engine's acceleration. 
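The closed-form expressions above make it straightforward to tabulate journey times for a flip-and-burn trip (accelerate for the first half of the distance, decelerate for the second half). The Python sketch below is illustrative only: the destinations and distances are round example values, and 1 g is approximated as 1.032 light-years per year squared.

```python
import math

C = 1.0      # speed of light in light-years per year
G = 1.032    # approximately 1 g (9.81 m/s^2) expressed in ly/yr^2

def flip_and_burn(distance_ly, accel=G):
    """Return (ship proper time, Earth coordinate time) in years for a trip that
    accelerates at `accel` for the first half of the distance and decelerates
    for the second half, using x(tau) = (c^2/a)(cosh(a*tau/c) - 1) and
    t(tau) = (c/a) sinh(a*tau/c)."""
    half = distance_ly / 2.0
    k = 1.0 + accel * half / C**2          # equals cosh(a*tau_half/c)
    ship_time = 2.0 * (C / accel) * math.acosh(k)
    earth_time = 2.0 * (C / accel) * math.sqrt(k**2 - 1.0)
    return ship_time, earth_time

for name, d in [("Proxima Centauri (~4.2 ly)", 4.24),
                ("Galactic Centre (~26,000 ly)", 26000.0)]:
    tau, t = flip_and_burn(d)
    print(f"{name}: {tau:6.1f} yr ship time, {t:10.1f} yr Earth time")
```

For the nearest-star case this reproduces the roughly 3.6-year ship time quoted in the section on interstellar traveling speeds, and the Earth-frame result is consistent with the "distance in light-years plus about a year" rule of thumb given there.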
A second big issue facing ships using constant acceleration for interstellar travel is colliding with matter and radiation while en route. In mid-journey any such impact will be at near light speed, so the result will be dramatic. Interstellar traveling speeds If a space ship is using constant acceleration over interstellar distances, it will approach the speed of light for the middle part of its journey when viewed from the planetary frame of reference. This means that the effects of relativity will become important. The most important effect is that time will appear to pass at different rates in the ship frame and the planetary frame, and this means that the ship's speed and journey time will appear different in the two frames. Planetary reference frame From the planetary frame of reference, the ship's speed will appear to be limited by the speed of light — it can approach the speed of light, but never reach it. If a ship is using 1 g constant acceleration, it will appear to get near the speed of light in about a year, and have traveled about half a light year in distance. For the middle of the journey the ship's speed will be roughly the speed of light, and it will slow down again to zero over a year at the end of the journey. As a rule of thumb, for a constant acceleration at 1 g (Earth gravity), the journey time, as measured on Earth, will be the distance in light years to the destination, plus 1 year. This rule of thumb will give answers that are slightly shorter than the exact calculated answer, but reasonably accurate. Ship reference frame From the frame of reference of those on the ship the acceleration will not change as the journey goes on. Instead the planetary reference frame will look more and more relativistic. This means that for voyagers on the ship the journey will appear to be much shorter than what planetary observers see. At a constant acceleration of 1 g, a rocket could travel the diameter of our galaxy in about 12 years ship time, and about 113,000 years planetary time. If the last half of the trip involves deceleration at 1 g, the trip would take about 24 years. If the trip is merely to the nearest star, with deceleration the last half of the way, it would take 3.6 years. In fiction The spacecraft of George O. Smith's Venus Equilateral stories are all constant acceleration ships. Normal acceleration is 1 g, but in "The External Triangle" it is mentioned that accelerations of up to 5 g are possible if the crew is drugged with gravanol to counteract the effects of the g-load. "Sky Lift" is a science fiction short story by Robert A. Heinlein, first published 1953. In the story, a torchship pilot lights out from Earth orbit to Pluto on a mission to deliver a cure to a plague ravaging a research station. Tau Zero, a hard science fiction novel by Poul Anderson, has a spaceship using a constant acceleration drive. Spacecraft in Joe Haldeman's 1974 novel The Forever War make extensive use of constant acceleration; they require elaborate safety equipment to keep their occupants alive at high acceleration (up to 25 g), and accelerate at 1 g even when "at rest" to provide humans with a comfortable level of gravity. In the Known Space universe, constructed by Larry Niven, Earth uses constant acceleration drives in the form of Bussard ramjets to help colonize the nearest planetary systems. 
In the non-known space novel A World Out of Time, Jerome Branch Corbell (for himself), "takes" a ramjet to the Galactic Center and back in 150 years ships time (most of it in cold sleep), but 3 million years passes on Earth. In The Sparrow, by Mary Doria Russell, interstellar travel is achieved by converting a small asteroid into a constant acceleration spacecraft. Force is applied by ion engines fed with material mined from the asteroid itself. In the Revelation Space series by Alastair Reynolds, interstellar commerce depends upon "lighthugger" starships which can accelerate indefinitely at 1 g, with superseded antimatter powered constant acceleration drives. The effects of relativistic travel are an important plot point in several stories, informing the psychologies and politics of the lighthuggers' "ultranaut" crews for example. In the novel 2061: Odyssey Three by Arthur C. Clarke, the spaceship Universe, using a muon-catalyzed fusion rocket, is capable of constant acceleration at 0.2 g under full thrust. Clarke's novel "Imperial Earth" features an "asymptotic drive", which utilises a microscopic black hole and hydrogen propellant, to achieve a similar acceleration travelling from Titan to Earth. The UET and Hidden Worlds spaceships of F.M. Busby's Rissa Kerguelen saga utilize a constant acceleration drive that can accelerate at 1 g or even a little more. Ships in the Expanse series by James S. A. Corey make use of constant acceleration drives, which also provide artificial gravity for the occupants. In The Martian, by Andy Weir, the spaceship Hermes uses a constant thrust ion engine to transport astronauts between Earth and Mars. In Project Hail Mary, also by Weir, the protagonist's spaceship uses a constant 1.5 g acceleration spin drive to travel between the Solar System, Tau Ceti and 40 Eridani. Explorers on the Moon, one of the Adventures of Tintin series of comic albums by Hergé, features a crewed Moon rocket with an unspecified 'atomic rocket motor'. The ship constantly accelerates from takeoff to provide occupants with consistent gravity, until a mid-way point is reached where the ship is turned around to constantly decelerate towards the Moon. The Lost Fleet, written by John G. Hemry under the pen name Jack Campbell, is a military science fiction series which various ships of all sizes utilize constant acceleration propulsion to travel distances within star systems. Taking into account relativistic effects on space combat, communication, and timing, the ships work in various formations to maximize firepower while minimizing damage taken. The series also features the use of Jump Drives for travel between stars using gravitational jump points as well as the use of Hypernets, which utilizes quantum entanglement and probability wave principles for long distance travel between massively constructed gates. References Interstellar travel Space colonization Special relativity Acceleration
Space travel under constant acceleration
[ "Physics", "Astronomy", "Mathematics" ]
1,936
[ "Astronomical hypotheses", "Physical quantities", "Acceleration", "Quantity", "Special relativity", "Interstellar travel", "Theory of relativity", "Wikipedia categories named after physical quantities" ]
26,960,936
https://en.wikipedia.org/wiki/Quantitative%20precipitation%20estimation
Quantitative precipitation estimation or QPE is a method of approximating the amount of precipitation that has fallen at a location or across a region. Maps of the estimated amount of precipitation to have fallen over a certain area and time span are compiled using several different data sources including manual and automatic field observations and radar and satellite data. This process is undertaken every day across the United States at Weather Forecast Offices (WFOs) run by the National Weather Service (NWS). A number of different algorithms can be used to estimate precipitation amounts from data collected by radar, satellites, or other remote sensing platforms. Research in the fields of QPE and quantitative precipitation forecasting (QPF) is ongoing. Recent research in the field suggests using commercial microwave links for environmental monitoring in general and precipitation measurements in particular. References Precipitation Hydrology
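As a concrete illustration of the radar-based estimation mentioned above, the sketch below converts radar reflectivity to a rain rate using a Z–R power law. The article does not name a specific algorithm, so the Marshall–Palmer coefficients (a = 200, b = 1.6) used here are a common textbook assumption rather than the values used operationally by the NWS.

```python
def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Estimate rain rate R (mm/h) from radar reflectivity (dBZ) via Z = a * R**b."""
    z = 10.0 ** (dbz / 10.0)      # convert dBZ to linear reflectivity Z (mm^6/m^3)
    return (z / a) ** (1.0 / b)

for dbz in (20, 35, 50):          # light, moderate, heavy precipitation
    print(f"{dbz} dBZ -> {rain_rate_mm_per_h(dbz):.1f} mm/h")
```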
Quantitative precipitation estimation
[ "Chemistry", "Engineering", "Environmental_science" ]
165
[ "Hydrology", "Hydrology stubs", "Environmental engineering" ]
21,070,925
https://en.wikipedia.org/wiki/Biostrophin
Biostrophin is a drug which may serve as a vehicle for gene therapy in the treatment of Duchenne and Becker muscular dystrophy. As mutations in the gene which codes for the protein dystrophin are the underlying defect responsible for both disorders, biostrophin will deliver a genetically engineered, functional copy of the gene at the molecular level to affected muscle cells. Dosage, as well as a viable means for systemic release of the drug in patients, is currently being investigated with the use of both canine and primate animal models. Biostrophin is being manufactured by Asklepios BioPharmaceuticals, Inc., with funding provided by the Muscular Dystrophy Association. See also Other drugs for Duchenne muscular dystrophy Ataluren Rimeporide (experimental) References External links Parent Project MD Muscular dystrophy Genetic engineering
Biostrophin
[ "Chemistry", "Engineering", "Biology" ]
186
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Genetic engineering", "Molecular biology" ]
21,072,589
https://en.wikipedia.org/wiki/Carbon%20dioxide%20removal
Carbon dioxide removal (CDR) is a process in which carbon dioxide (CO2) is removed from the atmosphere by deliberate human activities and durably stored in geological, terrestrial, or ocean reservoirs, or in products. This process is also known as carbon removal, greenhouse gas removal or negative emissions. CDR is increasingly integrated into climate policy, as an element of climate change mitigation strategies. Achieving net zero emissions will require first and foremost deep and sustained cuts in emissions, and then—in addition—the use of CDR ("CDR is what puts the net into net zero emissions"). In the future, CDR may be able to counterbalance emissions that are technically difficult to eliminate, such as some agricultural and industrial emissions. CDR includes methods that are implemented on land or in aquatic systems. Land-based methods include afforestation, reforestation, agricultural practices that sequester carbon in soils (carbon farming), bioenergy with carbon capture and storage (BECCS), and direct air capture combined with storage. There are also CDR methods that use oceans and other water bodies. Those are called ocean fertilization, ocean alkalinity enhancement, wetland restoration and blue carbon approaches. A detailed analysis needs to be performed to assess how much negative emissions a particular process achieves. This analysis includes life cycle analysis and "monitoring, reporting, and verification" (MRV) of the entire process. Carbon capture and storage (CCS) is not regarded as CDR because CCS does not reduce the amount of carbon dioxide already in the atmosphere. As of 2023, CDR is estimated to remove around 2 gigatons of CO2 per year. This is equivalent to about 4% of the greenhouse gases emitted per year by human activities. There is potential to remove and sequester up to 10 gigatons of carbon dioxide per year by using those CDR methods which can be safely and economically deployed now. However, quantifying the exact amount of carbon dioxide removed from the atmosphere by CDR is difficult. Definition Carbon dioxide removal (CDR) is defined by the IPCC as: "Anthropogenic activities removing CO2 from the atmosphere and durably storing it in geological, terrestrial, or ocean reservoirs, or in products. It includes existing and potential anthropogenic enhancement of biological or geochemical sinks and direct air capture and storage, but excludes natural uptake not directly caused by human activities." Synonyms for CDR include greenhouse gas removal (GGR), negative emissions technology, and carbon removal. Technologies have been proposed for removing non-CO2 greenhouse gases such as methane from the atmosphere, but only carbon dioxide is currently feasible to remove at scale. Therefore, in most contexts, greenhouse gas removal means carbon dioxide removal. The term geoengineering (or climate engineering) is sometimes used in the scientific literature for both CDR and SRM (solar radiation management), if the techniques are used at a global scale. The terms geoengineering or climate engineering are no longer used in IPCC reports. 
Categories CDR methods can be placed in different categories that are based on different criteria: Role in the carbon cycle (land-based biological; ocean-based biological; geochemical; chemical); or Timescale of storage (decades to centuries; centuries to millennia; thousand years or longer) Concepts using similar terminology CDR can be confused with carbon capture and storage (CCS), a process in which carbon dioxide is collected from point-sources such as gas-fired power plants, whose smokestacks emit CO2 in a concentrated stream. The CO2 is then compressed and sequestered or utilized. When used to sequester the carbon from a gas-fired power plant, CCS reduces emissions from continued use of the point source, but does not reduce the amount of carbon dioxide already in the atmosphere. Role in climate change mitigation Use of CDR reduces the overall rate at which humans are adding carbon dioxide to the atmosphere. The Earth's surface temperature will stabilize only after global emissions have been reduced to net zero, which will require both aggressive efforts to reduce emissions and deployment of CDR. It is not feasible to bring net emissions to zero without CDR, as certain types of emissions are technically difficult to eliminate. Emissions that are difficult to eliminate include nitrous oxide emissions from agriculture, aviation emissions, and some industrial emissions. In climate change mitigation strategies, the use of CDR counterbalances those emissions. After net zero emissions have been achieved, CDR could be used to reduce atmospheric CO2 concentrations, which could partially reverse the warming that has already occurred by that date. All emission pathways that limit global warming to 1.5 °C or 2 °C by the year 2100 assume the use of CDR in combination with emission reductions. Critique and risks Critics point out that CDR must not be regarded as a substitute for the required cuts in greenhouse gas emissions. Oceanographer David Ho put it like this in 2023: "We must stop talking about deploying CDR as a solution today, when emissions remain high—as if it somehow replaces radical, immediate emission cuts." Reliance on large-scale deployment of CDR was regarded in 2018 as a "major risk" to achieving the goal of less than 1.5 °C of warming, given the uncertainties in how quickly CDR can be deployed at scale. Strategies for mitigating climate change that rely less on CDR and more on sustainable use of energy carry less of this risk. The possibility of large-scale future CDR deployment has been described as a moral hazard, as it could lead to a reduction in near-term efforts to mitigate climate change. However, the 2019 NASEM report concludes: "Any argument to delay mitigation efforts because NETs will provide a backstop drastically misrepresents their current capacities and the likely pace of research progress." CDR is meant to complement efforts in hard-to-abate sectors rather than replace mitigation. Limiting climate change to 1.5 °C and achieving net-zero emissions would entail substantial carbon dioxide removal (CDR) from the atmosphere by mid-century, but how much CDR is needed at the country level over time is unclear. Equitable allocations of CDR, in many cases, exceed implied land and carbon storage capacities. Many countries have either insufficient land to contribute an equitable share of global CDR or insufficient geological storage capacity. Experts also highlight social and ecological limits for carbon dioxide removal, such as the land area required. 
For example, the combined land requirements of removal plans as per the global Nationally Determined Contributions in 2023 amounted to 1.2 billion hectares, which is equal to the combined size of global croplands. Permanence Forests, kelp beds, and other forms of plant life absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores are considered volatile carbon sinks as the long-term sequestration cannot be guaranteed. For example, natural events, such as wildfires or disease, economic pressures and changing political priorities can result in the sequestered carbon being released back into the atmosphere. Biomass, such as trees, can be directly stored into the Earth's subsurface. Furthermore carbon dioxide that has been removed from the atmosphere can be stored in the Earth's crust by injecting it into the subsurface, or in the form of insoluble carbonate salts. This is because they are removing carbon from the atmosphere and sequestering it indefinitely and presumably for a considerable duration (thousands to millions of years). Current and potential scale As of 2023, CDR is estimated to remove about 2 gigatons of per year, almost entirely by low-tech methods like reforestation and the creation of new forests. This is equivalent to 4% of the greenhouse gases emitted per year by human activities. A 2019 consensus study report by NASEM assessed the potential of all forms of CDR other than ocean fertilization that could be deployed safely and economically using current technologies, and estimated that they could remove up to 10 gigatons of per year if fully deployed worldwide. In 2018, all analyzed mitigation pathways that would prevent more than 1.5 °C of warming included CDR measures. Some mitigation pathways propose achieving higher rates of CDR through massive deployment of one technology, however these pathways assume that hundreds of millions of hectares of cropland are converted to growing biofuel crops. Further research in the areas of direct air capture, geologic sequestration of carbon dioxide, and carbon mineralization could potentially yield technological advancements that make higher rates of CDR economically feasible. Methods Overview listing based on technology readiness level The following is a list of known CDR methods in the order of their technology readiness level (TRL). The ones at the top have a high TRL of 8 to 9 (9 being the maximum possible value, meaning the technology is proven), the ones at the bottom have a low TRL of 1 to 2, meaning the technology is not proven or only validated at laboratory scale. Afforestation/ reforestation Soil carbon sequestration in croplands and grasslands Peatland and coastal wetland restoration Agroforestry, improved forest management Biochar carbon removal (BCR) Direct air carbon capture and storage (DACCS) Bioenergy with carbon capture and storage (BECCS) Enhanced weathering (alkalinity enhancement) Blue carbon management in coastal wetlands (restoration of vegetated coastal ecosystems; an ocean-based biological CDR method which encompasses mangroves, salt marshes and seagrass beds) Ocean fertilization, ocean alkalinity enhancement that amplifies the oceanic carbon cycle The CDR methods with the greatest potential to contribute to climate change mitigation efforts as per illustrative mitigation pathways are the land-based biological CDR methods (primarily afforestation/reforestation (A/R)) and/or bioenergy with carbon capture and storage (BECCS). 
Some of the pathways also include direct air capture and storage (DACCS). Afforestation, reforestation, and forestry management Trees use photosynthesis to absorb carbon dioxide and store the carbon in wood and soils. Afforestation is the establishment of a forest in an area where there was previously no forest. Reforestation is the re-establishment of a forest that has been previously cleared. Forests are vital for human society, animals and plant species. This is because trees keep air clean, regulate the local climate and provide a habitat for numerous species. As trees grow they absorb from the atmosphere and store it in living biomass, dead organic matter and soils. Afforestation and reforestation – sometimes referred to collectively as 'forestation' – facilitate this process of carbon removal by establishing or re-establishing forest areas. It takes forests approximately 10 years to ramp- up to the maximum sequestration rate. Depending on the species, the trees will reach maturity after around 20 to 100 years, after which they store carbon but do not actively remove it from the atmosphere. Carbon can be stored in forests indefinitely, but the storage can also be much more short-lived as trees are vulnerable to being cut, burned, or killed by disease or drought. Once mature, forest products can be harvested and the biomass stored in long-lived wood products, or used for bioenergy or biochar. Consequent forest regrowth then allows continuing removal. Risks to deployment of new forest include the availability of land, competition with other land uses, and the comparatively long time from planting to maturity. Agricultural practices (carbon farming) Carbon farming is a set of agricultural methods that aim to store carbon in the soil, crop roots, wood and leaves. The overall goal of carbon farming is to create a net loss of carbon from the atmosphere. This is done by increasing the rate at which carbon is sequestered into soil and plant material. One option is to increase the soil's organic matter content. This can also aid plant growth, improve soil water retention capacity and reduce fertilizer use. Sustainable forest management is another tool that is used in carbon farming. Agricultural methods for carbon farming include adjusting how tillage and livestock grazing is done, using organic mulch or compost, working with biochar and terra preta, and changing the crop types. Methods used in forestry include for example reforestation and bamboo farming. Carbon farming is not without its challenges or disadvantages. This is because some of its methods can affect ecosystem services. For example, carbon farming could cause an increase of land clearing, monocultures and biodiversity loss. Bioenergy with carbon capture & storage (BECCS) Biochar carbon removal (BCR) Biochar is created by the pyrolysis of biomass, and is under investigation as a method of carbon sequestration. Biochar is a charcoal that is used for agricultural purposes which also aids in carbon sequestration, the capture or hold of carbon. It is created using a process called pyrolysis, which is basically the act of high temperature heating biomass in an environment with low oxygen levels. What remains is a material known as char, similar to charcoal but is made through a sustainable process, thus the use of biomass. Biomass is organic matter produced by living organisms or recently living organisms, most commonly plants or plant based material. 
A study done by the UK Biochar Research Center has stated that, on a conservative level, biochar can store 1 gigaton of carbon per year. With greater effort in marketing and acceptance of biochar, the benefit of Biochar Carbon Removal could be the storage of 5–9 gigatons per year in soils. However, at the moment, biochar is restricted by the terrestrial carbon storage capacity, when the system reaches the state of equilibrium, and requires regulation because of threats of leakage. Direct air capture with carbon sequestration (DACCS) Marine carbon dioxide removal (mCDR) There are several methods of sequestering carbon from the ocean, where dissolved carbonate in the form of carbonic acid is in equilibrium with atmospheric carbon dioxide. These include ocean fertilization, the purposeful introduction of plant nutrients to the upper ocean. While one of the more well-researched carbon dioxide removal approaches, ocean fertilization would only sequester carbon on a timescale of 10-100 years. While surface ocean acidity may decrease as a result of nutrient fertilization, sinking organic matter will remineralize, increasing deep ocean acidity. A 2021 report on CDR indicates that there is medium-high confidence that the technique could be efficient and scalable at low cost, with medium environmental risks. Ocean fertilization is estimated to be able to sequester 0.1 to 1 gigatonnes of carbon dioxide per year at a cost of USD $8 to $80 per tonne. Ocean alkalinity enhancement involves grinding, dispersing, and dissolving minerals such as olivine, limestone, silicates, or calcium hydroxide to precipitate carbonate sequestered as deposits on the ocean floor. The removal potential of alkalinity enhancement is uncertain, and estimated at between 0.1 to 1 gigatonnes of carbon dioxide per year at a cost of USD $100 to $150 per tonne. Electrochemical techniques such as electrodialysis can remove carbonate from seawater using electricity. While such techniques used in isolation are estimated to be able to remove 0.1 to 1 gigatonnes of carbon dioxide per year at a cost of USD $150 to $2,500 per tonne, these methods are much less expensive when performed in conjunction with seawater processing such as desalination, where salt and carbonate are simultaneously removed. Preliminary estimates suggest that the cost of such carbon removal can be paid for in large part if not entirely from the sale of the desalinated water produced as a byproduct. Costs and economics The cost of CDR differs substantially depending on the maturity of the technology employed as well as the economics of both voluntary carbon removal markets and the physical output; for example, the pyrolysis of biomass produces biochar that has various commercial applications, including soil regeneration and wastewater treatment. In 2021 DAC cost from $250 to $600 per ton, compared to $100 for biochar and less than $50 for nature-based solutions, such as reforestation and afforestation. The fact that biochar commands a higher price in the carbon removal market than nature-based solutions reflects the fact that it is a more durable sink with carbon being sequestered for hundreds or even thousands of years while nature-based solutions represent a more volatile form of storage, which risks related to forest fires, pests, economic pressures and changing political priorities. 
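To put the per-tonne cost figures quoted above into perspective, the sketch below estimates the annual bill for removing one gigatonne of CO2 per year with each method. The per-tonne costs are the rough 2021 figures from this article (the direct air capture value is the midpoint of the quoted range), and the one-gigatonne removal volume is a hypothetical scenario input, not a projection.

```python
# Rough 2021 cost figures quoted in this article (USD per tonne of CO2 removed);
# the DAC value is the midpoint of the $250-600 range.
COST_PER_TONNE = {
    "direct air capture": 425,
    "biochar": 100,
    "nature-based (forestation)": 50,
}

def annual_cost_billion_usd(gigatonnes_per_year, usd_per_tonne):
    """Annual cost, in billions of USD, of removing a given mass of CO2 per year."""
    return gigatonnes_per_year * 1e9 * usd_per_tonne / 1e9

for method, price in COST_PER_TONNE.items():
    print(f"1 GtCO2/yr via {method}: ~${annual_cost_billion_usd(1.0, price):,.0f} billion per year")
```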
The Oxford Principles for Net Zero Aligned Carbon Offsetting state that, to be compatible with the Paris Agreement: "...organizations must commit to gradually increase the percentage of carbon removal offsets they procure with the view of exclusively sourcing carbon removals by mid-century." These initiatives, along with the development of new industry standards for engineered carbon removal, such as the Puro Standard, will help to support the growth of the carbon removal market. Although CDR is not covered by the EU Allowance as of 2021, the European Commission is preparing for carbon removal certification and considering carbon contracts for difference. CDR might also in future be added to the UK Emissions Trading Scheme. As of the end of 2021, carbon prices for both these cap-and-trade schemes, which are currently based on carbon reductions as opposed to carbon removals, remained below $100. After the diffusion of net-zero targets, CDR plays a more important role in key emerging economies (e.g. Brazil, China, and India). As of early 2023, financing had fallen short of the sums required for high-tech CDR methods to contribute significantly to climate change mitigation, though available funds have recently increased substantially. Most of this increase has come from voluntary private sector initiatives, such as a private sector alliance led by Stripe, with prominent members including Meta, Google and Shopify, which in April 2022 revealed a nearly $1 billion fund to reward companies able to permanently capture & store carbon. According to senior Stripe employee Nan Ransohoff, the fund was "roughly 30 times the carbon-removal market that existed in 2021. But it's still 1,000 times short of the market we need by 2050." The predominance of private sector funding has raised concerns because, historically, voluntary markets have proved "orders of magnitude" smaller than those brought about by government policy. As of 2023, however, various governments have increased their support for CDR; these include Sweden, Switzerland, and the US. Recent activity from the US government includes the June 2022 Notice of Intent to fund the Bipartisan Infrastructure Law's $3.5 billion CDR program, and the signing into law of the Inflation Reduction Act of 2022, which contains the 45Q tax credit to enhance the CDR market. Removal of other greenhouse gases Although some researchers have suggested methods for removing methane, others say that nitrous oxide would be a better subject for research due to its longer lifetime in the atmosphere. See also References External links Factsheet about CDR by IPCC Sixth Assessment Report WG III Deep Dives by Carbon180. Info about carbon removal solutions. The Road to Ten Gigatons - Carbon Removal Scale Up Challenge Game. The State of Carbon Dioxide Removal report. 2023. Land - the planet's carbon sink, United Nations. Climate engineering Climate change policy Carbon dioxide removal Free content from The Royal Society
Carbon dioxide removal
[ "Engineering" ]
4,024
[ "Planetary engineering", "Geoengineering" ]
21,076,467
https://en.wikipedia.org/wiki/Thermal%20vacuum%20chamber
A thermal vacuum chamber (TVAC) is a vacuum chamber in which the radiative thermal environment is controlled. Typically the thermal environment is achieved by passing liquids or fluids through thermal shrouds for cold temperatures or through the application of thermal lamps for high temperatures. Thermal vacuum chambers are frequently used for testing spacecraft or parts thereof under a simulated space environment. Examples Thermal vacuum chambers can be found at: NASA's Space Environment Simulation Laboratory at the Johnson Space Center NASA's Space Power Facility, Spacecraft Propulsion Research Facility and Cryogenic Propellant Tank Facility (K-Site) at the Glenn Research Center NASA's Space Environment Simulator at Goddard Space Flight Center NASA's DynaVac 36" T/V Chamber The ESA Large Space Simulator See also Vacuum engineering References Laboratory equipment Vacuum systems
Thermal vacuum chamber
[ "Physics", "Engineering" ]
159
[ "Vacuum systems", "Vacuum", "Matter" ]
21,078,746
https://en.wikipedia.org/wiki/Ice%20circle
Ice discs, ice circles, ice pans, ice pancakes or ice crepes are a very rare natural phenomenon that occurs in slow moving water in cold climates. They are thin circular slabs of ice that rotate slowly on a body of water's surface. Types Ice discs Ice discs form on the outer bends in a river where the accelerating water creates a force called 'rotational shear', which breaks off a chunk of ice and twists it around. As the disc rotates, it grinds against surrounding ice — smoothing into a circle. A relatively uncommon phenomenon, one of the earliest recordings is of a slowly revolving disc spotted on the Mianus River and reported in an 1895 edition of Scientific American. Ice pans River specialist and geography professor Joe Desloges states that ice pans are "surface slabs of ice that form in the center of a lake or creek, instead of along the water’s edge". As water cools, ice crystals form into 'frazil ice' and can cluster together into a pan-shaped formation. If an ice pan accumulates enough frazil ice and the current remains slow, the pan may transform into a 'hanging dam', a heavy block of ice with high ridges and a low centre. Formation Conditions It is believed that ice circles form in eddy currents. It has been shown that existing ice discs can maintain their rotation due to melting. Physics Ice circles tend to rotate even when they form in water that is not moving. The ice circle lowers the temperature of the water around it, which causes the water to become denser than the slightly warmer water around it. The dense water then sinks and creates its own circular motion, causing the ice circle to rotate. Size An unusual natural phenomenon, ice disks occur in slowly moving water in cold climates and can vary in size, with circles more than in diameter observed. Ice Circle of Vana-Vigala in Estonia is reported to have had a diameter of over 20 meters, whilst one approximately in diameter appeared in Westbrook, Maine in January 2019. Notable examples Ice discs have most frequently been observed in Scandinavia and North America. An ice disc was observed in Wales in December 2008 and another one in England in January 2009. An ice disc was observed on the Sheyenne River in North Dakota in December 2013. An ice circle of approximately in diameter was observed and photographed in Lake Katrine, New York on the Esopus Creek around 23 January 2014. In Idaho, extreme weather led to a rare sighting of an ice disc on the Snake River on 22 January 2014. On 14 January 2019, an ice disc approximately wide on the Presumpscot River in Westbrook, Maine, United States drew wide media attention. A smaller disc was reported by park rangers in Baxter State Park, in northern Maine, the same month. In January 2020, an ice disc appeared on the Kennebec River in Skowhegan, Maine, United States In January 2021 a large ice circle was discovered via satellite imagery and on 23 February 2021, an ice disc estimated to be wide was confirmed on the Taltson River, Northwest Territories (just below Tsu Lake). It was estimated to be rotating at approximately 20–25 minutes per rotation. Artificial ice circles Artificial ice circles have also been created by cutting a large circle in a sheet of ice. These artificial creations are called "ice carousels". Record setting ice carousels are recorded by the World Ice Carousel Association. 
See also El Ojo - Rotating floating circular island in Argentina's Paraná Delta, consisting of vegetation and soil References Further reading Theories abound on how the river got those patterns – MIT News External links Video of ice disc in Germany: Video of ice disc in Ontario, Canada: Bodies of ice Water ice River morphology Vortices Circles
Ice circle
[ "Chemistry", "Mathematics" ]
764
[ "Vortices", "Fluid dynamics", "Circles", "Pi", "Dynamical systems" ]
21,079,741
https://en.wikipedia.org/wiki/Electric%20vehicle%20charging%20network
An electric vehicle charging network is an infrastructure system of charging stations to recharge electric vehicles. The term electric vehicle infrastructure (EVI) may refer to charging stations in general or the network of charging stations across a nation or region. The proliferation of charging stations can be driven by charging station providers or government investment, and is a key influence on consumer behaviour in the transition from internal combustion engine vehicles to electric vehicles. While charging network vendors have in the past offered proprietary solutions limited to specific manufacturers (ex. Tesla), vendors now usually supply energy to electric vehicles regardless of manufacturer. Maps Charging station mapping services typically give the location, power, network, and connector type of publicly available charging stations, while more advanced services give the price and live availability of stations. Large charging networks provide maps of their own stations for customers. PlugShare is a crowdsourced map of public, private and residential charging locations. The site uses Google Maps to provide a map of charging locations and their own database to filter by charging type. Public chargers, private chargers, and residential charging locations are listed. The service provides an app for iOS and Android which allows users to locate chargers near their current location. An account is needed to view private persons' charging locations, as these locations are at the homes or businesses of Plugshare members. Plugshare was acquired by EVgo in 2021. Plugshare is one of many leading sources of where charge stations exist, however plenty public and private stations are not updated in the app. Open Charge Map is a non-commercial EV charging data service. They state the aim of providing a single point of reference, in a field of independent, conflicting charging data services. Zapmap is an electric vehicle charging mapping and payment service in the UK. They share their statistics with the Department for Transport, and many local councils direct their residents to the service for locating charging stations. OpenStreetMap is an open source map of the world which includes rich support of descriptions on charging stations. The map can be used by anyone under the ODbL license. Infrastructure providers Blink Charging operates a network with over 50,000 publicly available charging connectors in the US, Europe, and the UK. Also produces chargers for use in private settings. ChargePoint includes public charging stations, a consumer subscription plan and utility grid management technology to help electric utility companies to smooth electrical demands on the grid. As of 2023, the ChargePoint network consisted of over 27,000 locations in the United States, plus additional chargers in other countries. Connected Kerb is a UK-based provider of electric vehicle charging infrastructure, founded in 2017, which aims to make EV charging affordable, sustainable and accessible for all regardless of social status, location or physical ability. In November 2021, the company announced plans to install 190,000 on-street residential charging points across the UK by 2030 to help EV users without access to home charging facilities. The CEO is Chris Pateman-Jones. ChargeFinder is a mobile and web application designed to assist electric vehicle (EV) drivers in locating and accessing public charging stations for electric automobiles. 
Elektromotive was a UK-based company that manufactured and installed charging infrastructure for electric cars and other electric vehicles using their patented Elektrobay stations. The company has partnerships with major corporations including EDF Energy and Mercedes-Benz to supply charging posts and data services. They have since been absorbed into Chargemaster, now BP Pulse. Electrify America is a DC fast charging station network based only in the United States with approximately 730 charging locations as of March 2022, the company was created by the Volkswagen Group after the United States EPA accused them of using defeat devices in its diesel-fueled vehicles. EVgo is one of America's largest EV charging networks. EV Trail is an American charging infrastructure company. Their aim is to cut rural charging gaps in Colorado. FLO is a North American electric vehicle charging network operator and a smart charging solutions provider. FLO operates in the United States and Canada. Their charging stations are assembled in Michigan and Quebec. Francis Energy is a Tulsa, Oklahoma-based EV charge point operator with plans to expand into 40 states in 2023, with plans to install 50,000 EV charging ports by 2030 in partnership with municipalities, auto dealers, Tribal Nations, and private businesses. Gridserve is a network of rapid chargers at service stations in UK. They acquired many of these chargepoints from Ecotricity's Electric Highway brand. Hypercharge is a North American smart EV charging network and solutions provider serving single-family homes, multi-family residential & commercial buildings, and fleet applications. Hypercharge was the first Canada-founded EV charging network to IPO. Park & Charge is a European Charging infrastructure for electric vehicles. Park & Charge was founded in 1992 by members of the electric car club in Switzerland (ECS). Today there is a Park & Charge at nearly 500 locations in Switzerland, Germany, Austria, the Netherlands and Italy, offering a safe and easy way for drivers of electric vehicles to charge their vehicle batteries. The locations of the charging stations in Europe will be published in LEMnet internet database which is operated by Park & Charge. Automobile manufacturers The Renault–Nissan Alliance made agreements by 2010 to promote emission-free mobility in France, Israel, Portugal, Denmark and the U.S. state of Tennessee. As of 2010, Nissan planned to install 200-volt level 2 charging stations at 2,200 Nissan dealers in Japan, and level 3 fast charging stations at 200 dealers. Tesla Motors, in March 2009, announced that they were "working with a government-affiliated partner to set up battery changing stations at various locations" to service their Model S platform cars. The first Tesla Supercharger stations were unveiled 24 September 2012. As of Q4 2021, Tesla reported 3,476 supercharging stations and 31,498 supercharging connectors (about 9 connectors per station on average) in 44 countries worldwide. Initiatives by region Europe The AVERE / European Association for Battery, Hybrid and Fuel Cell Electric Vehicles was founded in 1978 and is a member of the World Electric Vehicle Association. AVERE is also the parent organization of CITELEC / Association of European Cities interested in Electric Vehicles and Eurelectric. The European Commission has funded the "Green Cars Initiative" since November 2008. 
In March 2011, the European Commission along with forty two partners from the industries, utilities, electric car manufacturers, municipalities, universities and technology and research institutions founded the "Green eMotion" initiative funded with €41.8 million under the Seventh Research and Development Framework Programme. The defined goal is to provide an interoperable electromobility framework to align the ongoing regional and national electromobility initiatives. At the same time the partners unveiled the "Transport 2050" plan which includes the aim to half the number of conventionally fuelled cars in cities by 2030 and phase them out by 2050. In the second position paper (March 2011) of the European Automobile Manufacturers Association it is recommended to equip public charging stations with IEC 62196 Type 2 Mode 3 connectors with transitional solutions to be allowed up to 2017. Nevertheless, multiple socket types (IEC 60309-2 Mode 2 types, IEC 62196 Mode 3 types, Chademo and standard home socket outlets Mode 2) have been deployed already. Politics have called for single European-wide standard and in case of a market failure the EU will define the infrastructure side requirements by law in 2013. As expected from lobbying the European Commission has proposed in January 2013 to only use the Type 2 connector type as the single standard to end prior uncertainty about the charging station equipment in Europe. Common standards for electric charging points across Europe must be designed and implemented by December 2015. Czech Republic Power supplier ČEZ has announced to have 50 recharging stations to be ready by the end of 2011. By June 2012 the company had 14 public and 6 private charging stations installed with more to come in Mlada Boleslav at the Skoda facilities. These charging stations use a combination of 230 V mains connector (Type E) at 16 A and a 400 V three-phase Mennekes connector (Type 2) at 16 A or 32 A. Denmark / Norway There are 2 major charge point operators in Denmark, E.on are operating mostly fast chargers, only installing rapids at freeways, while Clever is installing both fast and rapids in city centers. Both E.on and Clever is taking part in installing rapid chargers at freeway lay-bys, with Clever installing 4 of them and E.on a total of 20 at 10 different locations. Besides E.on and Clever, local energy companies are installing free-to-use charge points, often only consisting of a CEE plug, so the users have to bring their own EVSE box. Infrastructure was planned by Better Place and has been installed by Coulomb Technologies for Copenhagen. Denmark has enacted policies that create a tax differential between zero-emission vehicles and traditional cars to accelerate the transition to electric cars. Better Place had announced the network to be complete in December 2012, however the stations and chargers have been switched off due to the bankruptcy of Better Place Danmark A/S in June 2013. By April 2013 the network had consisted of 700 public charging spots, 18 battery switch stations and 8 fast charger stations. In 2013 E.on bought the charge points from Better Place and restarted the network, without the battery swap system. Norway has a tradition in building electric vehicles based on the Think Car. It is popular in Southern Norway (Oslo), Southern Sweden (Gothenburg) and Eastern Denmark (Copenhagen). The concept of the "Move About" project will provide 60 new Think cars in a test including charging stations in 50 towns in the area until 2013. 
The MoveAbout concept is actually derived from a car sharing system where cars are not offered for purchase but for leasing. Estonia Estonia became the first country to complete the deployment of a nationwide electric car charging network, and , is the only country with such geographical coverage. The Estonian network has the highest concentration of DC chargers in Europe. The Estonian government and Kredex launched the charging station network project in 2011 in cooperation with ABB, funded partially by the Mitsubishi Corporation. The nationwide electric car charging network officially opened with 165 fast chargers on 20 February 2013. These chargers were installed in all urban settlements with more than 5,000 inhabitants. In addition, chargers are installed on the all major roads at intervals of no more than . That makes it possible to reach every point within the country without a supply interruption. All of the Terra 51 CHAdeMO-type DC chargers are fast-charging, only needing between 15 and 30 minutes to fully charge a car's battery. France In France, Électricité de France (EDF) and Toyota are installing recharging points for PHEVs on roads, streets and parking lots. The Renault–Nissan Alliance and the largest French electric utility, Electricite de France (EDF) have signed an agreement to promote emission-free mobility in France. The move aims at offering all-electric volume vehicles from 2011 — including a countrywide network of battery charging stations. The partner Vinci Autoroutes has announced to rebuild 738 car parks along motorways with at least 5 parking lots for charging electric vehicles – construction will start at the end of 2011 and the full extent will be reached in 2013. The Environment Ministry of France, led by Jean-Louis Borloo has announced the goal to install 400,000 charging points in France by 2015. Jean-Louis Borloo has assigned 1.5 billion Euros in 2009 to support research and preparations for the first part of the electric vehicle network with 75,000 charging stations. Meanwhile, the pilot project in Paris has started with the introduction of 100 Z.E. cars. The map of charging stations can be downloaded from the city website. There are 101 locations with 178 charging points across the town and its suburbs (May 2010). The charging points have either Schuko-like sockets (Type E / 2P+T) or a Marechal plug on spiral cord where both variants are rated at 230 V/16 A (mains). Schneider Electric supports test drives in France with its charging stations that include a Type 3 (EV Plug Alliance) connector. In Strasbourg 100 Toyota Prius were tested with 135 recharging spots beginning Q1 2010 (Type 3 single-phase). In the suburbs of Paris there will be 300 recharging spots to be installed in Q1 2011. In the "Projet Klébér" the Strasbourg vehicle fleet may use the charging stations of EnBW in Mannheim, Karlsruhe, Stuttgart and vice versa. In Yvelines near Paris the fleet test SAVE (Seine Aval Véhicule Électrique) was started in April 2011 – until September 2011 a number of 200 charging stations will be built. The Monaco government has sketched a plan to run a fleet test in 2011 including 300 charging stations and 3 fast-charge stations. Italy The Renault–Nissan group – including EDF – has enlarged its scope with partnering to the Italian utility Enel and Spanish utility Endesa in March 2010. Renault–Nissan offers a broader range by providing 60 all-electric vehicles – the Kangoo Express Z.E. and the Renault Fluence Z.E – to the new pilot project "E-Moving" in Italy. 
The project will start to install 270 charge points in the Lombardy region (including the cities of Milan and Brescia) up to June 2010. This "E-Moving" network will contain 150 public charging stations to be put up until the end of 2010. The Italian Enel company had an early agreement with Daimler to run a test with their Smart line of cars. Enel has started the "emobility Italy" program in cooperation with Daimler in 2008 – this program will put up 400 public and private charging stations in Rome, Milan and Pisa with charging stations to be built since September 2010. The project was supposed to start in 2011 with a test run going for 48 months – since 5. April 2012 the Smart drive E-Mobility program is ready. The 100 public charging stations in Rome are built with a three-phase Type 2 Mennekes connector while the additional 50 home charging stations are built with a single-phase Type 3 Scame connector. The E-Move charging stations around Bolzano allow for a test drive of connecting solar panels directly to light vehicles for charging. The Zero Emission City Parma is a regional project with a 9 million Euro funding to create 300 charging stations along with 900 electric vehicles until 2015. The project is expected to go fully operational by the end of 2012, with 300 points of charge installed and 400/450 electric vehicles circulating. Germany Germany has four major transmission system operators (50Hertz, Amprion, TenneT, TransnetBW). They try to set themselves into the position to sell electricity power to electric vehicle owners by becoming also the operators of the upcoming electric vehicle networks. To that avail, they offered partnerships to the German car makers, where they provided charging stations for field tests. Carmaker Daimler AG and utility RWE AG are running a joint electric car and charging station test project in the German capital, Berlin, called "E-Mobility Berlin.". They have set up 60 charging stations in Berlin (September 2009) and are in the process of extending the system to include 500 charging stations. Daimler has provided for 100 Smart electric drive cars to the project. The second phase started in November 2010. The RWE subsidiary "RWE Mobility" has created cooperations with the automobilist club ADAC, car rental service Sixt and car park provider APCOA to equip all locations with charging stations. since mid of 2009. Renault joined the RWE Mobility program in September 2009 whereby the project goals of erecting charging stations were enlarged to mid of 2011 Renault's partner Nissan has joined the RWE-mobility program on 21. June 2010 announcing that RWE will create a network of 1000 charging stations until the end of the year 2010 focusing on the Berlin and Rhein-Ruhr region. On 28. August a cooperation with fuel retailer PKN Orlen (owning 2700 gasoline stations in Poland, Czech Republic and Germany) was announced – they are starting to equip 30 gasoline stations in Hamburg with charging points for electric vehicles. The current list of RWE-mobility charging stations contains 500 locations in Germany, 50 locations in the Netherlands, 11 in Poland and Austria plus a few stations in other neighbouring countries – also RWE has switched all of its charging stations to Type 2 sockets. Carmaker BMW and utility Vattenfall run a joint electric car and charging test project called "MINI E" in the German capital, Berlin. They are in the process of erecting 50 charging stations and the project lends 50 BMW Mini cars to citizens. 
The project started in June 2009 and a second phase started in December 2009. Up to June 2011 there were 42 public charge points by Vattenfall in Berlin, and the company is in the process of building 50 public charge points in Hamburg. While the earliest charging stations used CEEplus sockets, the newer charging stations are built with Type 2 Mode 3 sockets. Carmaker VW and utility E.ON run a joint electric car and charging station test project in the German capital, Berlin, and in Wolfsburg. The "Electric Mobility Fleet Test" was started as a research project with partners mostly at German universities using VW hybrid cars (to be tested in 2010). E.ON later also joined the MINI E project, providing the infrastructure in Munich starting in July 2009, erecting an initial series of 11 charging stations (May 2010) and enlarging it continuously (21 locations in December 2010). The regional test in Munich was extended with BMW i3 and BMW i8 prototypes (project i) as well as Audi e-tron models (project eflott) in 2011. E.ON has announced that it will provide the eflott project with 200 public charging stations in the Munich region. Carmaker Daimler, the utility EnBW and the government of Baden-Württemberg announced on 18 June 2010 that they would enlarge the "Landesinitiative Elektromobilität" program with the "e-mobility Baden-Württemberg" project, which includes erecting 700 charging stations in the state by the end of 2011. Additionally, 200 electric vehicles will be added to the test, including some electric trucks. The government of Baden-Württemberg has assigned 28.5 million euros to support EV research up to 2014. Meanwhile, in 2010 EnBW sponsored 500 e-bikes in the Elektronauten project, which can use 13 charging stations in the Stuttgart region. EnBW claimed to offer 250 charging stations for the Elektronauten 500 project in May 2011, although the map has not been updated. Bosch has developed a new charging station type for EnBW that is capable of 63 A; the station was certified on 11 April 2011 by DEKRA, and EnBW has announced that it will install 260 charge stations in the following weeks for the MeRegioMobil project in Stuttgart and Karlsruhe. In November 2011 the car2go project announced that it would come to Stuttgart in 2012; EnBW gave assurances that 500 charging spots would be ready in time for the roll-out of the car2go vehicles in the second half of 2012. The German government has announced support for a fleet of 1 million electric cars in Germany by 2020. 500 million euros have been assigned to the Federal Ministry of Economics and Technology (Germany) to support research and pilot projects in Germany. The ministry has created a dedicated coordination office, the "Gemeinsame Geschäftsstelle Elektromobilität der Bundesregierung (GGEMO)" (Joint Agency for Electric Mobility of the Federal Government), which was opened in February 2010. The GGEMO has coordinated a partnership program with the German car industry named "Nationale Plattform Elektromobilität (NPE)", inaugurated on 3 May 2010 in the German Chancellery. The NPE partnership is supposed to detail the plans for network evolution. The technical standardization work is mostly concentrated in the Deutsche Kommission Elektrotechnik (DKE) of the Association for Electrical, Electronic and Information Technologies (VDE); the "Standardization Overview on E-Mobility" shows a wide range of efforts from electric grid management to the charging station infrastructure to the car charger electronics. 
The NPE partnership published an interim report on 30 November 2010, showing a test fleet of 2800 electric vehicles and 2500 charging stations in 8 test regions. The German government has announced that it will not install a rebate system for the introduction of electric cars but that it will reshape the legal provisions to quickly create a charging station network in Germany. Bernd Pischetsrieder (formerly Volkswagen) points to studies showing that most current buyers of electric cars already own multiple cars, so that a rebate plan would merely amount to a subsidy for a consumer class that can afford the expense anyway. The VDE e-mobility congress on the subject was held in Leipzig on 8–9 November 2010. During the congress a large consumer study was presented showing that some 64 percent of respondents want to buy an electric car. The study also looked at the requirements for the charging process: 51 percent of consumers in Germany expect a car to be charged in less than 2 hours, and up to 4 hours is acceptable to 60 percent of consumers. 64 percent of consumers expect to charge in their own garage, 21 percent want to use a central charging station, while casual charging in the parking lots of shops and company grounds is expected by a mere 6 and 4 percent respectively. The maximum travel distance shows mixed results: while 53 percent say that 300 km is enough, 31 percent would like to travel 450 to 1000 km before being required to recharge. The interim report of the NPE partnership classifies electric vehicles in 3 categories: all-electric city cars, family cars, and light trucks with an electric range suited to city transport. Development is sketched in phases 2010–2013, 2014–2017, 2018–2020 and post-2020, with the government goal of 1 million electric cars by 2020 and 6 million electric cars by 2030 (for comparison, there were 44 million cars in Germany in 2010). Batteries are not expected to show great advancements in terms of capacity, but safety will increase and prices will fall to €250–300/kWh in the 2018–2020 time frame. In the post-2020 time frame new battery types are expected: instead of lithium-ion, fourth-generation batteries will be introduced to the mobility market, including lithium-air, lithium-sulfur and zinc-air batteries. As for charging stations, a wide network of fast-charging points is considered possible, with 22 kW (400 V, 32 A) stations to be introduced in 2010–2013 and 44 kW (400 V, 63 A) stations to be introduced in 2014–2017. For the time beyond 2020 there is an expectation of charging stations at 60 kW (400 V DC, 150 A) allowing the standard 20 kWh battery pack to be charged to 80% in less than 10 minutes, although this station type requires integration with smart grid technology and a strict worldwide standard (including SAE procedures). The "early adopters" of electric vehicles are identified as middle-class customers owning multiple cars as well as a garage; the existence of a public network of charging stations is considered not to be a prerequisite for market introduction in the first phases. Instead, government funds should back investments in privately owned charging stations, for example with faster tax write-offs and cheap credits from the government KfW bank. A preliminary review of the Mercedes/RWE test drive in the smart ed project shows the importance of vehicle-to-grid communications in charging stations as an incentive to charge at night. 
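The alternating-current power levels quoted in the NPE report (22 kW at 400 V/32 A and 44 kW at 400 V/63 A) follow from the standard three-phase power relation P = √3 × U × I. The short sketch below is only an illustrative check of those figures and is not part of the report.

```python
import math

def three_phase_power_kw(line_voltage_v: float, current_a: float) -> float:
    """Three-phase AC power in kW, assuming unity power factor."""
    return math.sqrt(3) * line_voltage_v * current_a / 1000.0

# 400 V / 32 A -> about 22 kW; 400 V / 63 A -> about 44 kW
for amps in (32, 63):
    print(f"400 V, {amps} A: {three_phase_power_kw(400, amps):.1f} kW")

# For DC fast charging the power is simply P = U * I: 400 V * 150 A = 60 kW
print(f"400 V DC, 150 A: {400 * 150 / 1000:.0f} kW")
```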
While most US households have a garage even for small cars, the situation is different in Central Europe, where public charging stations are needed. Switzerland The charging station network in Switzerland is derived from research on solar cars. In 1992 the government decided to support a charging station network. The network has since extended to neighbouring countries: in 2010 the Park & Charge network in Switzerland, Germany and Austria encompassed 500 charging locations, and additionally there are a few charging locations in the Netherlands and Italy. Plug'n Roll is a smart EV charging network in Switzerland. Iceland Iceland has two major operators of public electric charging stations, Ísorka and Orka náttúrunnar. At launch, neither operator charged for electricity at their stations, but Orka náttúrunnar started charging on 1 February 2018 and Ísorka started charging on 18 August 2017. A payment card is needed from these vendors to use their stations. There are 31 stations from Orka náttúrunnar, 408 stations from Ísorka, and 23 towns and hotels with independent public charging stations. Ireland In 2009–2010 the Irish Government and the electric utility Electricity Supply Board (ESB) entered into tripartite agreements with major electric vehicle manufacturers (Renault–Nissan, Mitsubishi Motors, Toyota, and PSA Peugeot Citroën) to promote the uptake of electric vehicles in Ireland. The Irish government has instituted a package of measures, including a €5,000 grant (US$7,158) to assist with purchasing the vehicle, exemption from vehicle registration tax, and accelerated capital allowances to promote electric vehicle purchase. In 2013, the Irish government withdrew the EV vehicle registration tax exemption and replaced it with a €5,000 discount on the tax. New conditions were also added to the SEAI EV Grant which reclassified private EV purchases via hire purchase or car loan as commercial purchases, effectively reducing the EV grant to €3,500 for all non-cash buyers. As a result of these changes, EV sales fell in 2013 to only 58 units. ESB is providing the charging network, which will be made up of 46 fast-charging (50 kW DC) stations located at intervals on inter-urban national primary routes, 1,500 medium-speed (22 kW AC) public charging points distributed across all towns with a population over 1,500, and home chargers (3.6 kW single-phase, 16 A) at no cost to the first 2,000 grant-qualifying electric car owners. The first station of the charger network was commissioned in August 2010. At the end of 2011 the charging station map showed 50 AC charging places plus 10 DC stations; the AC chargers will be built with Type 2 sockets, although some older charging spots still need to be rebuilt. As of 2014 all 46 CHAdeMO fast chargers are operational and are slowly being replaced by tri-standard units capable of CHAdeMO, CCS and 44 kW AC power. Analysis of the charge station network as of 2016 showed that the installed network coverage was extensive but weak with respect to fault resilience. Luxembourg There is a national network of accelerated charging stations under the brand name of Chargy. A network of fast chargers is being constructed under the name of SuperChargy. The Netherlands In 2009 the City of Amsterdam announced that it would set up 200 charging stations by 2012. In the first step the city will put up 100 stations from Coulomb Technologies in cooperation with Dutch utility Nuon and grid company Alliander. 
The "Amsterdam Elektrisch" project includes 100 street-side charging stations plus 100 charging stations at car parks. The first one was put up on 6 November 2009, and the 100th street-side charging station became operational on 4 March 2011, alongside over 100 charging stations at car parks. In April 2011, the City of Amsterdam announced the expansion of the street-side charging network with another 1000 charging stations, to be installed by Essent and a joint venture of Nuon and Heijmans. The Dutch government created the "Formula E Team", a working group collaborating with local governments, private companies and research institutes to create national and regional electric vehicle initiatives. The Foundation E-Laad.nl has an ambitious plan to put up 10,000 charging points by 2012. The Dutch government and the regional grid companies are helping Foundation E-laad.nl to put up a charging station network, adding 65 million euros of investment support in the timeframe 2009 to 2011. The mark of 500 charging stations (distributed over 125 communities) was reached on 24 June. The mark of 1000 charging stations was reached on 8 December 2011, 1500 on 2 May 2012, and 2500 on 22 August 2013. According to the roadmap of the Formula E-Team, the office has been created and the first RFI was started in August 2010; the results will be published in early November for comments and proposals, with a definitive guide for the infrastructure to be published in March 2011. The integration tests will run in mid-2011 and the back office system for the networked charging stations is to go live in late 2011, along with the "Charge Authority Board" for further development. On 19 July 2010 the Formula E-Team resolved that charge points in the Netherlands will be equipped with Type 2 Mode 3 sockets, based on a decision by providers from 9 April 2010, which will replace the earlier 5-pin CEE red sockets. The Netherlands is one of the first European markets for the Nissan Leaf; it is also the first European country to adopt stations for the "level 3" fast charging supported by the Leaf. Epyon has unveiled the first charging station at a gasoline station in Leeuwarden, in the northern province of Friesland. Poland RWE and the "Green Stream Cluster" started in June 2010 to put up a network of 130 charging stations in Warsaw. The Green Stream Cluster project will run until mid-2011. The Green Stream Cluster will put up 330 charging stations overall in five cities: Warsaw, Gdansk, Katowice, Kraków and Mielec. "Ekoenergetyka-Zachod" works on an electric vehicle network in the western cities of Zielona Gora (Grünberg), Sulechow, Pila (Schneidemühl) and Sieradz. Portugal Renault–Nissan have signed a contract with MOBIE.Tech that was started back in 2008. There will be 1300 new charging stations and 50 fast-charge stations within the 2011 timeframe. The government wants to enlarge the renewable energy sector to up to 60%, and the usage of electric vehicles is considered an important strategy to cut dependency on imports. The MOBI.E network has installed 100 charging stations and is deploying 1300 charging stations as well as 50 fast-charge stations in 25 cities up to June 2011. The MOBI.E stations work with a magnetic stripe card and bills are sent to the cell phone; the government hopes to export the concept to other countries. 
Slovenia An overview of available charging stations is provided by polni.si; the biggest providers are Dravske Elektrarne Maribor, Elektro Celje, Elektro Gorenjska, Elektro Ljubljana, Elektro Maribor, Elektro Primorska and Petrol. The municipal utility Elektro Ljubljana provides a number of public charging stations in the elektro-crpalke network, based on the 400 V/32 A type or the domestic socket type (Schuko). Spain In Madrid, Spain, a trial project will convert 30 former telephone boxes into charging points for electric cars. They are considered suitable, since telephone boxes are generally located at the roadside and are already connected to the electricity supply network. They would form part of a planned network of 546 charging points in Madrid, Barcelona and Seville, subsidised by the Spanish Government. The charging grid is created for the MOVELE pilot project of the Institute for Diversification and Saving of Energy (Instituto para la Diversificación y Ahorro de la Energía, IDAE), which is also providing 2,000 electric vehicles for the field test. The Spanish government has committed itself to having 1 million electric vehicles (fully electric and hybrid cars) in Spain by 2014. The Chairman of Endesa, Borja Prado, together with a former mayor of Madrid, Alberto Ruiz Gallardón, and the Chairman of Telefónica, César Alierta, presented a phone booth in Madrid which can also be used for recharging electric vehicles. Reserved parking spaces will be located next to this and all other booths set up in metropolitan areas, where users will be able to park their EVs and recharge at no cost once they have obtained their free "zero emissions" pre-paid card from the Madrid city council. The "Live Barcelona" map (sponsored by the Barcelona city council, the energy state department of Catalonia, utility Endesa, and car maker Seat) lists 138 charging spots in Barcelona, with 55 of them functional (February 2011). In September 2011 Endesa signed agreements with Mitsubishi, Renault–Nissan and the Japanese Chademo Foundation on the promotion of fast-charge stations. Endesa will hold the Chademo Europe chair. As a consequence, Endesa will deploy two types of public charging spots: conventional charging (16 A, 230 V AC, Schuko type) and rapid charging (125 A, 400 V DC, Chademo type). In October 2011 Endesa ordered 53 rapid charging stations to be built by GE in strategic places in Spain. The Galicia region is creating a research cluster (Clúster de Empresas de Automoción de Galicia / Ceaga). The infrastructure side (Plan Mobega – Plan de movilidad eléctrica de Galicia) includes the implementation of a network of multifunctional electromobility stations located at rent-a-car stations. The current installation includes 7 multifunctional electromobility stations, which are located in the main metropolitan areas of Galicia, and a fleet of 28 electric vehicles. The project was started in September 2011 and will continue until January 2013. United Kingdom Électricité de France is partnering with Elektromotive, Ltd. to install 250 new charging points over six months from October 2007 in London and elsewhere in the UK. By November 2011 there were 687 Electrobay charging stations (200 in London), and the company plans to build 4000 charging points throughout 2012. Elektromotive provided 400 public-access charge points to the "Charge your Car" network of One NorthEast in 2010 and has installed more than 120 charge points across Scotland. 
The Scottish Government-funded network, ChargePlace Scotland, has been operated by SWARCO (a Swarovski company brand) since 2021, following a competitive tender. The Renault–Nissan Alliance and UK company Elektromotive, a provider of electric vehicle recharging stations, are collaborating in the Partnership for Zero-Emission Mobility, with the aim of accelerating the installation of charging networks for plug-in vehicles in cities. The Alliance and Elektromotive have signed a Memorandum of Understanding. A fleet of electric cars and charge points will be rolled out across Coventry (England) as part of a multimillion-pound pilot project. The Department for Transport (DfT) announced in April 2009 that £230 million would be allocated to incentivise the market uptake of EVs in the UK. The scheme will become operational in 2011 and each EV purchaser could receive a rebate of between £2,000 and £5,000. Electric vehicles are exempt from purchase and annual vehicle tax. From April 2010, purchasers of an average new car (Band G) will pay a one-off £155 showroom tax and an annual vehicle tax of £155; EVs are tax-free. On 25 February 2010, London, the North East region and Milton Keynes were selected to be the lead places for electric vehicle infrastructure. In total, their plans will result in over 2,500 charge points in the first year and over 11,000 in the next three years, at a variety of publicly accessible car parks, transport hubs and workplaces. The London mayor called for an E-revolution in March 2009 and he presented the "Electric Delivery Plan for London" in May 2009. The plan projects 25,000 charging points in London by 2015, including 500 on-street, 2000 off-street in car parks and 22,000 privately owned locations. London itself will buy 1000 electric vehicles up to 2015. Owners of an electric car will not need to pay the Congestion Charge for the city of London, a saving worth up to £1,700 a year. At that point (May 2009) London already had 100 charge points in public places, which will be increased to 250 by 2012. Beginning in 2011, 20% of new spaces in car parks must have access to a charging outlet. Additionally, parking in the Westminster boroughs will be free for electric vehicles, saving the user up to £6,000 a year, and a flat rate of £200 in electricity cost is charged for the usage of public outlets in Westminster. As of February 2011 the "Source London" project has contracted Siemens to build a network of public charging stations in London. At least 1,300 charging points will be installed by the end of 2013 in public locations and streets across the capital. Transport for London (TfL) has also finalised a contract that will see Siemens manage the operation of the network and registration of drivers. The deal between TfL and Siemens will see Siemens run the Source London back office to March 2014 at no cost. Up to July 2011 there were 180 charging stations, and in November 2011 more than 200. The number of charging stations reached 790 in October 2012, with plans to increase that to 1300 in 2013. The goal of 1300 publicly accessible charging stations was met on 16 May 2013. In July 2011 a charity called Zero Carbon World announced its Charge Points Everywhere network; its aim is to help businesses install free-to-use charge points on their premises, with the implicit understanding that a person using the charge point will use the services of that business. 
The connections themselves are standard 32 A and 13 A connectors, and the inclusion of the 32 A connector means that cars with powerful on-board chargers, such as Teslas, can charge much faster than with the 13 A connectors found on the majority of chargers. On 15 February 2012, the alliance announced that it would donate 1000 charging stations for free, adding to the existing 76 charging stations that are already deployed. The "Plugged-In Places" program of the Department for Transport offers grants for charging station networks in the United Kingdom. The development plan identifies eight regions for strategic focus, including Central Scotland, the East of England, Greater Manchester, Milton Keynes, the North East of England and Northern Ireland, with a target of 8500 chargepoints. Following the ACEA position paper, the government program favours moving to a dedicated recharging connector of Type 2 Mode 3. Referring to the PIP program, an open tender in Newcastle upon Tyne identifies the goal of having 75% of the charging stations offer Type 2 Mode 3 sockets, including switching over existing charging stations to that type. The UK has a limited number of electric vehicle charging network roaming providers; Paua and AllStar provide this service for businesses. North America United States Infrastructure has been installed by Coulomb Technologies in Arizona; California – San Francisco, San Jose, Walnut Creek, and Sonoma; Colorado; Washington, D.C.; Florida; Chicago, Illinois; Massachusetts; Detroit, Michigan; Minneapolis, Minnesota; New York City; Cary, North Carolina; Ohio; Portland, Oregon; Nashville, Tennessee; Texas; Seattle, Washington; and Wisconsin. Gilbarco Veeder-Root is partnering with Coulomb to advance public charging facilities. Gilbarco exhibited Coulomb Technologies' Smartlet Charging Station at the National Association of Convenience Stores (NACS) show in October 2008. At the end of 2008, Coulomb Technologies planned to roll out five curbside charging stations in downtown San Jose that drivers can access through a prepaid plan. The company was working with entities in Las Vegas, Nevada, New York and Florida to do something similar there. Coulomb Technologies has announced that it will provide 1000 free public charging stations by December 2010. It also plans to expand its "ChargePoint America" network to 4600 free home and public Level 2 charging stations by October 2011 in nine regions: Austin, Texas; Detroit, Michigan; Los Angeles, California; New York, New York; Orlando, Florida; Sacramento, California; the San Jose/San Francisco Bay Area, California; Redmond, Washington; and Washington, D.C. The $37 million ChargePoint America program is made possible by a $15M grant funded by the American Recovery and Reinvestment Act through the Transportation Electrification Initiative administered by the Department of Energy. So far 149 stations are operational according to the ChargePoint map, of which 51 stations are in California. New York joined the ChargePoint network, building more than 100 charging stations in public places by October 2011. In April 2012 the first milestone of the ChargePoint America program was reached, with Coulomb Technologies having delivered 2400 public and commercial charging stations; the actual installation of its Level 2 (240 V, 30 A) stations in the 10 participating regions will continue. Infrastructure is planned by Better Place for Hawaii, Oregon, and California – the San Francisco Bay Area, Sacramento, San Jose, Los Angeles, San Diego, and the highway and freeway corridors between them. 
Other companies that are building charging stations throughout the U.S. are ECOtality and SolarCity. In the initial phase of ECOtality's "The EV Project" there are 11 participating cities: Phoenix (AZ), Tucson (AZ), San Diego (CA), Portland (OR), Eugene (OR), Salem (OR), Corvallis (OR), Seattle (WA), Nashville (TN), Knoxville (TN) and Chattanooga (TN). The contract for the "EV Project" was signed on 1 October 2009 with the US Department of Energy, and it includes 8,300 Level 2 chargers installed in owners' homes; 6,350 Level 2 chargers installed in commercial and public locations; and 310 Level 3 DC fast chargers. The EV Project will run for 36 months. The public charging stations will be put up beginning in summer 2010. Texas joined the EV Project in July 2010. San Diego will take a share of 1,500 public charging stations and 1,000 home-base charging points. The first milestone of The EV Project was reached in April 2012. Portland General Electric installed 12 electric vehicle charging stations in Portland and Salem, Oregon by September 2008, and had installed 20 charging stations by 2010 as part of a demonstration project to develop the transportation infrastructure needed to support electric vehicles and plug-in cars. NRG Energy has announced that it will create a network of 50 charging stations in northern Texas under the "EVgo" brand. In March 2012 the company announced that it would build a network of 200 fast-charging stations in California over the next four years. By 30 December 2015 EVgo had installed over 1,000 chargers in over 25 markets. NRG EVgo has developed partnerships with Nissan, BMW and Ford to build infrastructure and offer complimentary charging. In Virginia, with the participation of the Town of Wytheville and several businesses, Plugless Power inductive charging stations began field testing in March 2010. South Carolina unveiled its "Plug in Carolina" program, including 100 public charging stations, in December 2010. In San Antonio, TX, a downtown church (Travis Park United Methodist Church) made Level 1 charging available in its parking lot in 2009. The DBEDT department of Hawaii had a state rebate program, the "EV Ready Grant", that was funded by the American Recovery and Reinvestment Act; the program offered $4500 for a full-speed commercially available electric vehicle and $500 for electric vehicle chargers. The "EV Ready Grant" program was followed by the "EV Ready Rebate" program offering 20% of the purchase price up to a maximum of $4500 for a full-speed commercially available electric vehicle and 30% of the purchase price up to a maximum of $500 for electric vehicle chargers. Charging equipment is expected to follow the standards, including SAE J1772. The designated Transportation Working Group expects 200 charging stations to be available in 2010. In February 2012 it was announced that Better Place would activate its multi-island network of 130 charging stations (Oahu, Maui, Kauai and the Big Island). The Hawaii rebate program is being continued, having reached a total of 372 funded vehicles and 246 chargers, and by April 2012 approximately 220 charging stations had been installed as part of the EV Ready Grant Program. The Hawaii station database lists the 200 public charging stations in 80 locations that were available up to March 2012, about 140 of which have been installed by Better Place. In California the car maker Tesla has put up 18 public charging stations. 
Within the SF Bay Area, a coalition has identified 109 locations to put up public charging stations beginning in 2009, based on funding from ARRA. The last California "ZEV Program Review symposium" was held on 23 September 2009; the next one is scheduled for late summer 2010. In the past there had been a charging station network to support the General Motors EV1, which had installed 500 public charging stations. The U.S. Department of Energy offers a list of locations of the available alternative fuel infrastructure. The historic trend summary (1992–2010) shows a total of 541 electric charging locations by 2010, which was still lower than the peak count of 873 charging locations in 2002. Since then, the total count of public electric charge stations in the United States has increased to 27,458. Electrify America operates one of the largest public electric vehicle DC fast charging networks in the United States, with more than 500 charging locations and over 2,200 individual charging units as of 2020. The company expects to install or have under development approximately 800 stations with about 3,500 DC fast chargers by December 2021. Canada In 2012, a series of free public electric vehicle charging stations were installed along the main route of the Trans-Canada Highway by a private company, Sun Country Highway, permitting electric vehicle travel across the entire length, as demonstrated by the company's president in a publicity trip in a Tesla Roadster. This made it the longest electric-vehicle-ready highway in the world. The same company also partnered with Canadian rural hardware retailer Peavey Mart to add free public charging stations to its 29 stores across Western Canada, and it includes chargers located at Best Western hotels in Canada and the US on its online map of EV charging stations. The company's total network was over 700 chargers, with plans to reach 1000 chargers by year end. From 2011 to 2014, the City of Vancouver installed publicly accessible Level 2 charging stations in a variety of locations, including community centres, shopping malls, curbside, and other locations throughout the city. In 2008, the city changed the Building Bylaw to require 20% of parking stalls in apartments and condos, and all stalls in houses, to be electric vehicle ready. In 2013, the bylaw was updated so that 10% of stalls in mixed-use and commercial buildings are also ready for electric vehicles. In a March 2016 news release, the Government of British Columbia stated that the CEV Program investments have supported over 550 public Level 2 charging stations and 30 DC fast charging stations. South America Brazil As of 2022, Brazil had around 1,250 stations, with 47% of the stations concentrated in the state of São Paulo. Uruguay In January 2016, UTE opened the first charging station in Montevideo, exclusively for taxis. In December 2017, UTE and Ancap opened a charging station network that connects Colonia del Sacramento, Rosario, Puntas de Valdez, Montevideo, San Luis and Punta del Este, with stations every 65 km. The stations at Carrasco Airport and Colonia have 43 kW, whereas the other stations have 22 kW. Asia China China's first large electric charging station for electric vehicles, the Tangshan Nanhu EV Charging Station, was put into service on 31 March 2010. Five cities in northern Hebei province – Tangshan, Zhangjiakou, Qinhuangdao, Langfang and Chengde – plan to build three charging stations and 100 charging poles in 2010. Shandong is the province with the most car manufacturers in China. 
The province planned to start in May 2010 with a charging station for 45 cars. According to China's State Grid Corporation, 75 electric vehicle charging stations are planned in 27 cities across China by the end of 2010. Additionally, 6,209 charging posts and some battery replacement stations had been planned for 2010. The State Grid Corporation of China announced success in distributing 7,031 charge poles in Hangzhou-Jinhua and wants to add 211 additional charging poles in 2011 along with 173 charging stations. China is planning on installing 10 million electric vehicle charging stations by 2020. In the 12th Five Year Plan (2011–2015) China wants to deploy 2,351 charge and replacement power stations and 220,000 charge spots. The reason is to reduce crude oil imports, which make up 54 percent of oil usage, with cars accounting for 40 percent of national oil consumption (2010). According to the National Energy Administration, China had 10.2 million EV chargers nationwide as of June 2024, up 54% from the previous year. Charging infrastructure in China had exceeded a total of 11.88 million units, including 3.39 million public chargers and 8.49 million private chargers, according to a report by the China Electric Vehicle Charging Infrastructure Promotion Alliance. India India has a burgeoning EV charging ecosystem. Primary providers are Tata Power, Joulepoint, and Fortum. Japan Infrastructure is planned by Better Place and Nissan for Yokohama. Singapore Infrastructure is planned by Robert Bosch and Keppel Energy for Singapore. Middle East Israel Israel has enacted policies that create a tax differential between zero-emission vehicles and traditional cars, to accelerate the transition to electric cars. Better Place began to build its first electric vehicle network in Israel in conjunction with the French car maker Renault. The company conducted its first market tests in Israel, Denmark and Hawaii because their small size also made them suitable as test markets. Better Place opened its first functional charging station in Israel in the first week of December 2008 at Cinema City in Pi-Glilot, and additional stations were planned in Tel Aviv, Haifa, Kfar Saba, Holon, and Jerusalem. In March 2011 Better Place presented a detailed plan for network construction, including 40 battery swap stations and 400 charging stations across Israel. 200 locations were said to be under construction or planned at the end of 2011, but that goal was not reached. On 26 May 2013, Better Place filed for bankruptcy in Israel, having terminated its projects in most markets. Gnrgy, originally a producer of mobile charging solutions, entered the market as an alternative to Better Place. On 29 February 2012 it partnered with Pango, a provider of parking billing solutions, to set up a series of charging stations throughout Israel. Oceania Australia Australia currently has thirteen electric vehicle charging stations across Sydney, Melbourne and Canberra from Coulomb Technologies. They opened in 2010 and 2011. One charge point from ECOtality has been installed in the car park at 140 William Street in Melbourne CBD, with Exigency providing project management and metering. ChargePoint has expanded its service to eight cities by 2012 (Perth 3, Adelaide 5, Melbourne 10, Canberra 2, Sydney 8, Brisbane 6, Townsville 3, Hobart 1). Construction of infrastructure (charging spots and battery switching stations) had been proposed by Better Place for the major cities of Melbourne, Sydney and Brisbane. 
Australia would have become the third country in the world to have an electric car network, in a bid to run the country's 15 million cars on batteries powered by green energy under a plan announced in October 2008. Better Place filed for bankruptcy in Israel on 25 May, shortly after pulling out of Australia. The original plan to deploy as many as 200,000 charging stations was stopped in January 2013, after just 20 public charge spots had been installed. In May 2011 a fast-recharge network was completed in the test city of Perth. Elektromotive provided 11 dual-headed IEC-compatible fast-charge stations at 32 A to be used with the test fleet. In the test drive the European connectors have been preferred over the American connectors, since Australia (like Europe) has three-phase power (at 415 V) in most home locations. The fast-charge outlets connect with a special 8-pin IEC-compatible round connector integrating single-phase and three-phase power. The project of the University of Western Australia was continued, with 23 public charging stations available by September 2012 featuring Type 2 connectors at 32 A. EV manufacturer Tesla Motors formally launched in Australia in December 2014, announcing its intention to build its Supercharger network along the highway between Melbourne, Canberra and Sydney by the end of 2015, and to extend it to Brisbane by the end of 2016. See also Better Place Charging station Coulomb Technologies SemaConnect Tesla Supercharger Tritium Charging DCFC Limited References Electric vehicles Transport in Australia Transport in Denmark Transport in Japan Transport infrastructure Transport in Canada Transport in Spain Road transport in Portugal
Electric vehicle charging network
[ "Physics" ]
10,824
[ "Physical systems", "Transport", "Transport infrastructure" ]
21,080,132
https://en.wikipedia.org/wiki/C8H6O
{{DISPLAYTITLE:C8H6O}} The molecular formula C8H6O (molar mass: 118.13 g/mol, exact mass: 118.0419 u) may refer to: Benzofuran Isobenzofuran, or 2-Benzofuran Molecular formulas
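The molar mass quoted above can be checked from standard atomic weights; the snippet below is an illustrative calculation only and is not part of the article.

```python
# Illustrative check of the molar mass of C8H6O from standard atomic weights
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}
composition = {"C": 8, "H": 6, "O": 1}
molar_mass = sum(n * atomic_weight[el] for el, n in composition.items())
print(f"{molar_mass:.2f} g/mol")  # about 118.13 g/mol; small differences come from the atomic weights used
```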
C8H6O
[ "Physics", "Chemistry" ]
68
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
3,654,070
https://en.wikipedia.org/wiki/Hazen%E2%80%93Williams%20equation
The Hazen–Williams equation is an empirical relationship that relates the flow of water in a pipe with the physical properties of the pipe and the pressure drop caused by friction. It is used in the design of water pipe systems such as fire sprinkler systems, water supply networks, and irrigation systems. It is named after Allen Hazen and Gardner Stewart Williams. The Hazen–Williams equation has the advantage that the coefficient C is not a function of the Reynolds number, but it has the disadvantage that it is only valid for water. Also, it does not account for the temperature or viscosity of the water, and therefore is only valid at room temperature and conventional velocities. General form Henri Pitot discovered that the velocity of a fluid was proportional to the square root of its head in the early 18th century. It takes energy to push a fluid through a pipe, and Antoine de Chézy discovered that the hydraulic head loss was proportional to the velocity squared. Consequently, the Chézy formula relates hydraulic slope S (head loss per unit length) to the fluid velocity V and hydraulic radius R: V = C √(RS). The variable C expresses the proportionality, but the value of C is not a constant. In 1838 and 1839, Gotthilf Hagen and Jean Léonard Marie Poiseuille independently determined a head loss equation for laminar flow, the Hagen–Poiseuille equation. Around 1845, Julius Weisbach and Henry Darcy developed the Darcy–Weisbach equation. The Darcy–Weisbach equation was difficult to use because the friction factor was difficult to estimate. In 1906, Hazen and Williams provided an empirical formula that was easy to use. The general form of the equation relates the mean velocity of water in a pipe with the geometric properties of the pipe and the slope of the energy line: V = k C R^0.63 S^0.54, where: V is velocity (in ft/s for US customary units, in m/s for SI units) k is a conversion factor for the unit system (k = 1.318 for US customary units, k = 0.849 for SI units) C is a roughness coefficient R is the hydraulic radius (in ft for US customary units, in m for SI units) S is the slope of the energy line (head loss per length of pipe or hf/L) The equation is similar to the Chézy formula but the exponents have been adjusted to better fit data from typical engineering situations. A result of adjusting the exponents is that the value of C appears more like a constant over a wide range of the other parameters. The conversion factor k was chosen so that the values for C were the same as in the Chézy formula for the typical hydraulic slope of S = 0.001. The value of k is 0.001^(−0.04) ≈ 1.318. Typical C factors used in design, which take into account some increase in roughness as the pipe ages, are tabulated by pipe material. Pipe equation The general form can be specialized for full pipe flows. Taking the general form and exponentiating each side by 1/0.54 gives (rounding exponents to 3–4 decimals) V^1.852 = k^1.852 C^1.852 R^1.167 S. Rearranging gives S = V^1.852 / (k^1.852 C^1.852 R^1.167). The flow rate Q = V A, so S = Q^1.852 / (k^1.852 C^1.852 A^1.852 R^1.167). The hydraulic radius R (which is different from the geometric radius r) for a full pipe of geometric diameter d is R = d/4; the pipe's cross sectional area A is π d^2 / 4, so the slope can be written in terms of Q, C and d alone, as in the unit-specific forms below. U.S. 
customary units (Imperial) When used to calculate the pressure drop using the US customary units system, the equation is: S (psi per foot) = Pd / L = 4.52 Q^1.85 / (C^1.85 d^4.87), where: S = frictional resistance (pressure drop per foot of pipe) in psig/ft (pounds per square inch gauge pressure per foot) Pd = pressure drop over the length of pipe in psig (pounds per square inch gauge pressure) L = length of pipe in feet Q = flow, gpm (gallons per minute) C = pipe roughness coefficient d = inside pipe diameter, in (inches) Note: Caution with US customary units is advised. The equation for head loss in pipes, also referred to as slope, S, expressed in feet of water per foot of length rather than in psi per foot of length as described above, with the inside pipe diameter, d, entered in feet rather than inches, and the flow rate, Q, entered in cubic feet per second (cfs) rather than gallons per minute (gpm), appears very similar. However, the constant is 4.73 rather than the 4.52 constant shown above in the formula as arranged by NFPA for sprinkler system design. The exponents and the Hazen–Williams "C" values are unchanged. SI units When used to calculate the head loss with the International System of Units, the equation becomes: S = hf / L = 10.67 Q^1.852 / (C^1.852 d^4.87), where: S = hydraulic slope hf = head loss in meters (water) over the length of pipe L = length of pipe in meters Q = volumetric flow rate, m3/s (cubic meters per second) C = pipe roughness coefficient d = inside pipe diameter, m (meters) Note: pressure drop can be computed from head loss as hf × the unit weight of water (e.g., 9810 N/m3 at 4 deg C). See also Darcy–Weisbach equation and Prony equation for alternatives Fluid dynamics Friction Minor losses in pipe flow Plumbing Pressure Volumetric flow rate References Further reading Williams and Hazen, Second edition, 1909 External links Engineering Toolbox reference Engineering toolbox Hazen–Williams coefficients Online Hazen–Williams calculator for gravity-fed pipes. Online Hazen–Williams calculator for pressurized pipes. https://books.google.com/books?id=DxoMAQAAIAAJ&pg=PA736 https://books.google.com/books?id=RAMX5xuXSrUC&pg=PA145 States pocket calculators and computers make calculations easier. H-W is good for smooth pipes, but Manning better for rough pipes (compared to D-W model). Eponymous equations of physics Equations of fluid dynamics Piping Plumbing Hydraulics Hydrodynamics Irrigation
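As an illustration of the SI form of the equation, the head-loss slope and the head loss over a length of pipe can be computed directly; the example below is a sketch using assumed values for the flow rate, pipe diameter, pipe length and roughness coefficient.

```python
def hazen_williams_slope_si(q_m3_s: float, c: float, d_m: float) -> float:
    """Head-loss slope S (metres of head per metre of pipe) for a full pipe, SI units.

    S = 10.67 * Q^1.852 / (C^1.852 * d^4.87)
    """
    return 10.67 * q_m3_s ** 1.852 / (c ** 1.852 * d_m ** 4.87)

# Assumed example: 10 L/s through a 100 mm pipe with C = 140 (a smooth plastic pipe)
q = 0.010          # flow rate in m^3/s
diameter = 0.100   # inside diameter in m
slope = hazen_williams_slope_si(q, 140, diameter)
head_loss = slope * 200.0  # head loss over 200 m of pipe, in metres of water
print(f"S = {slope:.4f} m/m, head loss over 200 m = {head_loss:.2f} m")
```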
Hazen–Williams equation
[ "Physics", "Chemistry", "Engineering" ]
1,243
[ "Equations of fluid dynamics", "Equations of physics", "Building engineering", "Chemical engineering", "Hydrodynamics", "Plumbing", "Eponymous equations of physics", "Physical systems", "Construction", "Hydraulics", "Mechanical engineering", "Piping", "Fluid dynamics" ]
3,654,507
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectroscopy%20of%20proteins
Nuclear magnetic resonance spectroscopy of proteins (usually abbreviated protein NMR) is a field of structural biology in which NMR spectroscopy is used to obtain information about the structure and dynamics of proteins, and also nucleic acids, and their complexes. The field was pioneered by Richard R. Ernst and Kurt Wüthrich at the ETH, and by Ad Bax, Marius Clore, Angela Gronenborn at the NIH, and Gerhard Wagner at Harvard University, among others. Structure determination by NMR spectroscopy usually consists of several phases, each using a separate set of highly specialized techniques. The sample is prepared, measurements are made, interpretive approaches are applied, and a structure is calculated and validated. NMR involves the quantum-mechanical properties of the central core ("nucleus") of the atom. These properties depend on the local molecular environment, and their measurement provides a map of how the atoms are linked chemically, how close they are in space, and how rapidly they move with respect to each other. These properties are fundamentally the same as those used in the more familiar magnetic resonance imaging (MRI), but the molecular applications use a somewhat different approach, appropriate to the change of scale from millimeters (of interest to radiologists) to nanometers (bonded atoms are typically a fraction of a nanometer apart), a factor of a million. This change of scale requires much higher sensitivity of detection and stability for long term measurement. In contrast to MRI, structural biology studies do not directly generate an image, but rely on complex computer calculations to generate three-dimensional molecular models. Currently most samples are examined in a solution in water, but methods are being developed to also work with solid samples. Data collection relies on placing the sample inside a powerful magnet, sending radio frequency signals through the sample, and measuring the absorption of those signals. Depending on the environment of atoms within the protein, the nuclei of individual atoms will absorb different frequencies of radio signals. Furthermore, the absorption signals of different nuclei may be perturbed by adjacent nuclei. This information can be used to determine the distance between nuclei. These distances in turn can be used to determine the overall structure of the protein. A typical study might involve how two proteins interact with each other, possibly with a view to developing small molecules that can be used to probe the normal biology of the interaction ("chemical biology") or to provide possible leads for pharmaceutical use (drug development). Frequently, the interacting pair of proteins may have been identified by studies of human genetics, indicating the interaction can be disrupted by unfavorable mutations, or they may play a key role in the normal biology of a "model" organism like the fruit fly, yeast, the worm C. elegans, or mice. To prepare a sample, methods of molecular biology are typically used to make quantities by bacterial fermentation. This also permits changing the isotopic composition of the molecule, which is desirable because the isotopes behave differently and provide methods for identifying overlapping NMR signals. Sample preparation Protein nuclear magnetic resonance is performed on aqueous samples of highly purified protein. Usually, the sample consists of between 300 and 600 microlitres with a protein concentration in the range 0.1 – 3 millimolar. 
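For a rough sense of what these concentrations and volumes mean in terms of material, the required protein mass follows from concentration × volume × molecular weight. The example below is an illustrative calculation with an assumed 20 kDa protein, not a prescribed protocol.

```python
def protein_mass_mg(concentration_mm: float, volume_ul: float, mw_da: float) -> float:
    """Mass of protein (mg) needed for an NMR sample of given concentration and volume."""
    moles = (concentration_mm * 1e-3) * (volume_ul * 1e-6)  # mol = (mol/L) * L
    return moles * mw_da * 1e3                              # g -> mg

# e.g. 500 uL of a 1 mM sample of a hypothetical 20 kDa protein needs about 10 mg
print(f"{protein_mass_mg(1.0, 500, 20000):.1f} mg")
```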
The source of the protein can be either natural or produced in a production system using recombinant DNA techniques through genetic engineering. Recombinantly expressed proteins are usually easier to produce in sufficient quantity, and this method makes isotopic labeling possible. The purified protein is usually dissolved in a buffer solution and adjusted to the desired solvent conditions. The NMR sample is prepared in a thin-walled glass tube. Data collection Protein NMR utilizes multidimensional nuclear magnetic resonance experiments to obtain information about the protein. Ideally, each distinct nucleus in the molecule experiences a distinct electronic environment and thus has a distinct chemical shift by which it can be recognized. However, in large molecules such as proteins the number of resonances can typically be several thousand and a one-dimensional spectrum inevitably has incidental overlaps. Therefore, multidimensional experiments that correlate the frequencies of distinct nuclei are performed. The additional dimensions decrease the chance of overlap and have a larger information content, since they correlate signals from nuclei within a specific part of the molecule. Magnetization is transferred into the sample using pulses of electromagnetic (radiofrequency) energy and between nuclei using delays; the process is described with so-called pulse sequences. Pulse sequences allow the experimenter to investigate and select specific types of connections between nuclei. The array of nuclear magnetic resonance experiments used on proteins fall in two main categories — one where magnetization is transferred through the chemical bonds, and one where the transfer is through space, irrespective of the bonding structure. The first category is used to assign the different chemical shifts to a specific nucleus, and the second is primarily used to generate the distance restraints used in the structure calculation, and in the assignment with unlabelled protein. Depending on the concentration of the sample, the magnetic field of the spectrometer, and the type of experiment, a single multidimensional nuclear magnetic resonance experiment on a protein sample may take hours or even several days to obtain suitable signal-to-noise ratio through signal averaging, and to allow for sufficient evolution of magnetization transfer through the various dimensions of the experiment. Other things being equal, higher-dimensional experiments will take longer than lower-dimensional experiments. Typically, the first experiment to be measured with an isotope-labelled protein is a 2D heteronuclear single quantum correlation (HSQC) spectrum, where "heteronuclear" refers to nuclei other than 1H. In theory, the heteronuclear single quantum correlation has one peak for each H bound to a heteronucleus. Thus, in the 15N-HSQC, with a 15N labelled protein, one signal is expected for each nitrogen atom in the back bone, with the exception of proline, which has no amide-hydrogen due to the cyclic nature of its backbone. Additional 15N-HSQC signals are contributed by each residue with a nitrogen-hydrogen bond in its side chain (W, N, Q, R, H, K). The 15N-HSQC is often referred to as the fingerprint of a protein because each protein has a unique pattern of signal positions. Analysis of the 15N-HSQC allows researchers to evaluate whether the expected number of peaks is present and thus to identify possible problems due to multiple conformations or sample heterogeneity. 
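A simple way to make the "expected number of peaks" check concrete is to count the backbone amides from the sequence. The sketch below is illustrative (the example sequence is arbitrary); it assumes the common situation where prolines and the N-terminal residue give no observable backbone amide signal, and it does not count side-chain NH signals.

```python
def expected_backbone_hsqc_peaks(sequence: str) -> int:
    """Rough count of backbone amide peaks expected in a 15N-HSQC.

    One peak per residue, minus prolines (no amide proton) and minus the
    N-terminal residue, whose amine protons usually exchange too fast to observe.
    Side-chain NH/NH2 signals (Trp, Asn, Gln, Arg, ...) are not counted here.
    """
    sequence = sequence.upper()
    return len(sequence) - sequence.count("P") - 1

print(expected_backbone_hsqc_peaks("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSG"))
```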
The relatively quick heteronuclear single quantum correlation experiment helps determine the feasibility of doing subsequent longer, more expensive, and more elaborate experiments. It is not possible to assign peaks to specific atoms from the heteronuclear single quantum correlation alone. Resonance assignment In order to analyze the nuclear magnetic resonance data, it is important to get a resonance assignment for the protein, that is, to find out which chemical shift corresponds to which atom. This is typically achieved by sequential walking using information derived from several different types of NMR experiment. The exact procedure depends on whether the protein is isotopically labelled or not, since a lot of the assignment experiments depend on carbon-13 and nitrogen-15. Homonuclear nuclear magnetic resonance With unlabelled protein the usual procedure is to record a set of two-dimensional homonuclear nuclear magnetic resonance experiments through correlation spectroscopy (COSY), of which several types exist, including conventional correlation spectroscopy, total correlation spectroscopy (TOCSY) and nuclear Overhauser effect spectroscopy (NOESY). A two-dimensional nuclear magnetic resonance experiment produces a two-dimensional spectrum. The units of both axes are chemical shifts. The COSY and TOCSY transfer magnetization through the chemical bonds between adjacent protons. The conventional correlation spectroscopy experiment is only able to transfer magnetization between protons on adjacent atoms, whereas in the total correlation spectroscopy experiment the protons are able to relay the magnetization, so it is transferred among all the protons that are connected by adjacent atoms. Thus in conventional correlation spectroscopy, an alpha proton transfers magnetization to the beta protons, the beta protons transfer to the alpha and gamma protons, if any are present, then the gamma proton transfers to the beta and the delta protons, and the process continues. In total correlation spectroscopy, the alpha and all the other protons are able to transfer magnetization to the beta, gamma, delta and epsilon protons if they are connected by a continuous chain of protons. The continuous chain of protons is the sidechain of the individual amino acid. Thus these two experiments are used to build so-called spin systems, that is, to build a list of resonances of the chemical shifts of the peptide proton, the alpha protons and all the protons from each residue's sidechain. Which chemical shifts correspond to which nuclei in the spin system is determined by the conventional correlation spectroscopy connectivities and the fact that different types of protons have characteristic chemical shifts. To connect the different spin systems in a sequential order, the nuclear Overhauser effect spectroscopy experiment has to be used. Because this experiment transfers magnetization through space, it will show crosspeaks for all protons that are close in space, regardless of whether they are in the same spin system or not. Neighbouring residues are inherently close in space, so the assignments can be made from the NOESY peaks that link one spin system to a neighbouring one. One important problem using homonuclear nuclear magnetic resonance is overlap between peaks. This occurs when different protons have the same or very similar chemical shifts. This problem becomes greater as the protein becomes larger, so homonuclear nuclear magnetic resonance is usually restricted to small proteins or peptides. 
Nitrogen-15 nuclear magnetic resonance The most commonly performed 15N experiment is the 1H-15N HSQC. The experiment is highly sensitive and therefore can be performed relatively quickly. It is often used to check the suitability of a protein for structure determination using NMR, as well as for the optimization of the sample conditions. It is one of the standard suite of experiments used for the determination of the solution structure of proteins. The HSQC can be further expanded into three- and four-dimensional NMR experiments, such as 15N-TOCSY-HSQC and 15N-NOESY-HSQC. Carbon-13 and nitrogen-15 nuclear magnetic resonance When the protein is labelled with carbon-13 and nitrogen-15 it is possible to record triple resonance experiments that transfer magnetisation over the peptide bond, and thus connect different spin systems through bonds. This is usually done using some of the following experiments: HNCO, HN(CA)CO, HNCA, HN(CO)CA, HNCACB and CBCA(CO)NH. All six experiments consist of a 1H-15N plane (similar to a HSQC spectrum) expanded with a carbon dimension. In the HN(CA)CO, each HN plane contains the peaks from the carbonyl carbon from its own residue as well as the preceding one in the sequence, while the HNCO contains the carbonyl carbon chemical shift from only the preceding residue, but is much more sensitive than the HN(CA)CO. These experiments allow each 1H-15N peak to be linked to the preceding carbonyl carbon, and sequential assignment can then be undertaken by matching the shifts of each spin system's own and previous carbons. The HNCA and HN(CO)CA work similarly, just with the alpha carbons (Cα) rather than the carbonyls, and the HNCACB and the CBCA(CO)NH contain both the alpha carbon and the beta carbon (Cβ). Usually several of these experiments are required to resolve overlap in the carbon dimension. This procedure is usually less ambiguous than the NOESY-based method since it is based on through-bond transfer. In the NOESY-based methods, additional peaks corresponding to atoms that are close in space but that do not belong to sequential residues will appear, confusing the assignment process. Following the initial sequential resonance assignment, it is usually possible to extend the assignment from the Cα and Cβ to the rest of the sidechain using experiments such as HCCH-TOCSY, which is basically a TOCSY experiment resolved in an additional carbon dimension. Restraint generation In order to make structure calculations, a number of experimentally determined restraints have to be generated. These fall into different categories; the most widely used are distance restraints and angle restraints. Distance restraints A crosspeak in a NOESY experiment signifies spatial proximity between the two nuclei in question. Thus each peak can be converted into a maximum distance between the nuclei, usually between 1.8 and 6 angstroms. The intensity of a NOESY peak is proportional to the distance to the minus 6th power, so the distance is determined according to the intensity of the peak. The intensity–distance relationship is not exact, so usually a distance range is used. It is of great importance to assign the NOESY peaks to the correct nuclei based on the chemical shifts. If this task is performed manually it is usually very labor-intensive, since proteins usually have thousands of NOESY peaks. Some computer programs such as PASD/XPLOR-NIH, UNIO, CYANA, ARIA/CNS, and AUDANA/PONDEROSA-C/S in the Integrative NMR platform perform this task automatically on manually pre-processed listings of peak positions and peak volumes, coupled to a structure calculation. 
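The r^-6 intensity–distance relationship described above can be illustrated with a short calibration sketch; the reference distance, tolerance and intensity values below are arbitrary assumptions, and real assignment programs use more elaborate calibration schemes.

```python
def noe_distance(intensity: float, ref_intensity: float, ref_distance: float = 2.2) -> float:
    """Estimate an internuclear distance (angstrom) from a NOESY cross-peak intensity.

    Uses the isolated two-spin approximation I ~ r**-6, calibrated against a
    reference peak of known distance (for example a fixed geminal proton pair).
    """
    return ref_distance * (ref_intensity / intensity) ** (1.0 / 6.0)

def distance_bounds(intensity, ref_intensity, ref_distance=2.2, tolerance=0.25):
    """Loose lower/upper bounds, reflecting the inexact intensity-distance relationship."""
    r = noe_distance(intensity, ref_intensity, ref_distance)
    lower = max(1.8, r * (1 - tolerance))   # ~1.8 A is a van der Waals contact
    upper = min(6.0, r * (1 + tolerance))   # NOEs are rarely observed beyond ~6 A
    return lower, upper

# A cross-peak four times weaker than the reference corresponds to a ~26% longer distance
print(distance_bounds(intensity=0.25, ref_intensity=1.0))
```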
Direct access to the raw NOESY data without the cumbersome need of iteratively refined peak lists is so far only granted by the PASD algorithm implemented in XPLOR-NIH, the ATNOS/CANDID approach implemented in the UNIO software package, and the PONDEROSA-C/S and thus indeed guarantees objective and efficient NOESY spectral analysis. To obtain as accurate assignments as possible, it is a great advantage to have access to carbon-13 and nitrogen-15 NOESY experiments, since they help to resolve overlap in the proton dimension. This leads to faster and more reliable assignments, and in turn to better structures. Angle restraints In addition to distance restraints, restraints on the torsion angles of the chemical bonds, typically the psi and phi angles, can be generated. One approach is to use the Karplus equation, to generate angle restraints from coupling constants. Another approach uses the chemical shifts to generate angle restraints. Both methods use the fact that the geometry around the alpha carbon affects the coupling constants and chemical shifts, so given the coupling constants or the chemical shifts, a qualified guess can be made about the torsion angles. Orientation restraints The analyte molecules in a sample can be partially ordered with respect to the external magnetic field of the spectrometer by manipulating the sample conditions. Common techniques include addition of bacteriophages or bicelles to the sample, or preparation of the sample in a stretched polyacrylamide gel. This creates a local environment that favours certain orientations of nonspherical molecules. Normally in solution NMR the dipolar couplings between nuclei are averaged out because of the fast tumbling of the molecule. The slight overpopulation of one orientation means that a residual dipolar coupling remains to be observed. The dipolar coupling is commonly used in solid state NMR and provides information about the relative orientation of the bond vectors relative to a single global reference frame. Typically the orientation of the N-H vector is probed in an HSQC-like experiment. Initially, residual dipolar couplings were used for refinement of previously determined structures, but attempts at de novo structure determination have also been made. Hydrogen–deuterium exchange NMR spectroscopy is nucleus specific. Thus, it can distinguish between hydrogen and deuterium. The amide protons in the protein exchange readily with the solvent, and, if the solvent contains a different isotope, typically deuterium, the reaction can be monitored by NMR spectroscopy. How rapidly a given amide exchanges reflects its solvent accessibility. Thus amide exchange rates can give information on which parts of the protein are buried, hydrogen-bonded, etc. A common application is to compare the exchange of a free form versus a complex. The amides that become protected in the complex, are assumed to be in the interaction interface. Structure calculation The experimentally determined restraints can be used as input for the structure calculation process. Researchers, using computer programs such as XPLOR-NIH, CYANA, GeNMR, or RosettaNMR attempt to satisfy as many of the restraints as possible, in addition to general properties of proteins such as bond lengths and angles. The algorithms convert the restraints and the general protein properties into energy terms, and then try to minimize this energy. The process results in an ensemble of structures that, if the data were sufficient to dictate a certain fold, will converge. 
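To make the idea of converting restraints into energy terms concrete, the sketch below implements a simple flat-bottom penalty for distance restraints; the functional form, force constant and atom labels are illustrative assumptions, not the specific potentials used by XPLOR-NIH, CYANA or the other programs named above.

```python
def distance_restraint_energy(r: float, lower: float, upper: float, k: float = 1.0) -> float:
    """Flat-bottom harmonic penalty: zero inside [lower, upper], quadratic outside."""
    if r < lower:
        return k * (lower - r) ** 2
    if r > upper:
        return k * (r - upper) ** 2
    return 0.0

def total_restraint_energy(distances, restraints, k=1.0):
    """Sum the penalty over all restraints; a minimizer would drive this toward zero.

    `distances` maps an atom-pair key to its distance in a trial structure, and
    `restraints` maps the same key to (lower, upper) bounds derived from NOESY data.
    """
    return sum(distance_restraint_energy(distances[pair], lo, hi, k)
               for pair, (lo, hi) in restraints.items())

restraints = {("12.HA", "45.HN"): (2.0, 3.5), ("30.HB2", "33.HN"): (1.8, 5.0)}
trial = {("12.HA", "45.HN"): 4.1, ("30.HB2", "33.HN"): 3.0}
print(total_restraint_energy(trial, restraints))  # only the violated restraint contributes
```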
Structure validation The ensemble of structures obtained is an "experimental model", i.e., a representation of a certain kind of experimental data. It is important to acknowledge this fact, because it means that the model could be a good or a bad representation of that experimental data. In general, the quality of a model will depend on both the quantity and quality of the experimental data used to generate it and the correct interpretation of such data. Every experiment has associated errors. Random errors will affect the reproducibility and precision of the resulting structures. If the errors are systematic, the accuracy of the model will be affected. The precision indicates the degree of reproducibility of the measurement and is often expressed as the variance of the measured data set under the same conditions. The accuracy, however, indicates the degree to which a measurement approaches its "true" value. Ideally, a model of a protein will be more accurate the better it fits the actual molecule it represents, and will be more precise when there is less uncertainty about the positions of its atoms. In practice there is no "standard molecule" against which to compare models of proteins, so the accuracy of a model is given by the degree of agreement between the model and a set of experimental data. Historically, the structures determined by NMR have been, in general, of lower quality than those determined by X-ray diffraction. This is due, in part, to the lower amount of information contained in data obtained by NMR. Because of this fact, it has become common practice to establish the quality of NMR ensembles by comparing them against the unique conformation determined by X-ray diffraction for the same protein. However, the X-ray diffraction structure may not exist, and, since proteins in solution are flexible molecules, representing a protein by a single structure may lead to an underestimate of the intrinsic variation in its atomic positions. A set of conformations, determined by NMR or X-ray crystallography, may be a better representation of the experimental data for a protein than a unique conformation. The utility of a model will be given, at least in part, by the degree of accuracy and precision of the model. An accurate model with relatively poor precision could be useful to study the evolutionary relationships between the structures of a set of proteins, whereas rational drug design requires models that are both precise and accurate. A model that is not accurate will not be very useful, regardless of the degree of precision with which it was obtained. Since protein structures are experimental models that can contain errors, it is very important to be able to detect these errors. The process aimed at the detection of errors is known as validation. There are several methods to validate structures: some are statistical, such as PROCHECK and WHAT IF; others are based on physical principles, such as CheShift; and some, such as PSVS, combine statistical and physical principles. Dynamics In addition to structures, nuclear magnetic resonance can yield information on the dynamics of various parts of the protein. This usually involves measuring relaxation times such as T1 and T2 to determine order parameters, correlation times, and chemical exchange rates. NMR relaxation is a consequence of local fluctuating magnetic fields within a molecule. Local fluctuating magnetic fields are generated by molecular motions.
In this way, measurements of relaxation times can provide information about motions within a molecule at the atomic level. In NMR studies of protein dynamics, the nitrogen-15 isotope is the preferred nucleus to study because its relaxation times are relatively simple to relate to molecular motions. This, however, requires isotope labeling of the protein. The T1 and T2 relaxation times can be measured using various types of HSQC-based experiments. The motions that can be detected are those occurring on a time-scale ranging from about 10 picoseconds to about 10 nanoseconds. In addition, slower motions, which take place on a time-scale ranging from about 10 microseconds to 100 milliseconds, can also be studied. However, since nitrogen atoms are found mainly in the backbone of a protein, the results mainly reflect the motions of the backbone, which is the most rigid part of a protein molecule. Thus, the results obtained from nitrogen-15 relaxation measurements may not be representative of the whole protein. Therefore, techniques utilising relaxation measurements of carbon-13 and deuterium have recently been developed, which enable systematic studies of the motions of the amino acid side-chains in proteins. A challenging and special case in the study of dynamics and flexibility of peptides and full-length proteins is represented by disordered structures. It is now accepted that proteins can exhibit more flexible behaviour, known as disorder or lack of structure; such proteins are better described by an ensemble of structures, which can still represent a fully functional state, than by a single static picture. Many advances have been made in this field, in particular in terms of new pulse sequences, technological improvements, and the rigorous training of researchers. NMR spectroscopy on large proteins Traditionally, nuclear magnetic resonance spectroscopy has been limited to relatively small proteins or protein domains. This is in part caused by problems resolving overlapping peaks in larger proteins, but this has been alleviated by the introduction of isotope labelling and multidimensional experiments. Another, more serious, problem is the fact that in large proteins the magnetization relaxes faster, which means there is less time to detect the signal. This in turn causes the peaks to become broader and weaker, and eventually disappear. Two techniques have been introduced to attenuate the relaxation: transverse relaxation optimized spectroscopy (TROSY) and deuteration of proteins. By using these techniques it has been possible to study proteins in complex with the 900 kDa chaperone GroES-GroEL. Automation of the process Structure determination by NMR has traditionally been a time-consuming process, requiring interactive analysis of the data by a highly trained scientist. There has been considerable interest in automating the process to increase the throughput of structure determination and to make protein NMR accessible to non-experts (see structural genomics). The two most time-consuming processes involved are the sequence-specific resonance assignment (backbone and side-chain assignment) and the NOE assignment tasks. Several different computer programs have been published that target individual parts of the overall NMR structure determination process in an automated fashion. Most progress has been achieved for the task of automated NOE assignment.
So far, only the FLYA and the UNIO approach were proposed to perform the entire protein NMR structure determination process in an automated manner without any human intervention. Modules in the NMRFAM-SPARKY such as APES (two-letter-code: ae), I-PINE/PINE-SPARKY (two-letter-code: ep; I-PINE web server) and PONDEROSA (two-letter-code: c3, up; PONDEROSA web server) are integrated so that it offers full automation with visual verification capability in each step. Efforts have also been made to standardize the structure calculation protocol to make it quicker and more amenable to automation. Recently, the POKY suite, the successor of programs mentioned above, has been released to provide modern GUI tools and AI/ML features. See also NMR spectroscopy Nuclear magnetic resonance Nuclear magnetic resonance spectroscopy of carbohydrates Nuclear magnetic resonance spectroscopy of nucleic acids Protein crystallization Protein dynamics Relaxation (NMR) X-ray crystallography References Further reading External links NOESY-Based Strategy for Assignments of Backbone and Side Chain Resonances of Large Proteins without Deuteration (a protocol) relax Software for the analysis of NMR dynamics ProSA-web Web service for the recognition of errors in experimentally or theoretically determined protein structures Protein structure determination from sparse experimental data - an introductory presentation Protein NMR Protein NMR experiments Protein methods Biophysics Protein structure Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance spectroscopy of proteins
[ "Physics", "Chemistry", "Biology" ]
5,017
[ "Biochemistry methods", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Nuclear magnetic resonance", "Nuclear magnetic resonance spectroscopy", "Protein methods", "Protein biochemistry", "Biophysics", "Structural biology", "Protein structure", "Spectroscopy" ]
3,656,002
https://en.wikipedia.org/wiki/Solid-state%20electronics
Solid-state electronics are semiconductor electronics: electronic equipment that use semiconductor devices such as transistors, diodes and integrated circuits (ICs). The term is also used as an adjective for devices in which semiconductor electronics that have no moving parts replace devices with moving parts, such as the solid-state relay, in which transistor switches are used in place of a moving-arm electromechanical relay, or the solid-state drive (SSD), a type of semiconductor memory used in computers to replace hard disk drives, which store data on a rotating disk. History The term solid-state became popular at the beginning of the semiconductor era in the 1960s to distinguish this new technology. A semiconductor device works by controlling an electric current consisting of electrons or holes moving within a solid crystalline piece of semiconducting material such as silicon, while the thermionic vacuum tubes it replaced worked by controlling a current of electrons or ions in a vacuum within a sealed tube. Although the first solid-state electronic device was the cat's whisker detector, a crude semiconductor diode invented around 1904, solid-state electronics started with the invention of the transistor in 1947. Before that, all electronic equipment used vacuum tubes, because vacuum tubes were the only electronic components that could amplify—an essential capability in all electronics. The transistor, which was invented by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Laboratories in 1947, could also amplify, and replaced vacuum tubes. The first transistor hi-fi system was developed by engineers at GE and demonstrated at the University of Philadelphia in 1955. In terms of commercial production, The Fisher TR-1 was the first "all transistor" preamplifier, which became available mid-1956. In 1961, a company named Transis-tronics released a solid-state amplifier, the TEC S-15. The replacement of bulky, fragile, energy-hungry vacuum tubes by transistors in the 1960s and 1970s created a revolution not just in technology but in people's habits, making possible the first truly portable consumer electronics such as the transistor radio, cassette tape player, walkie-talkie and quartz watch, as well as the first practical computers and mobile phones. Other examples of solid state electronic devices are the microprocessor chip, LED lamp, solar cell, charge coupled device (CCD) image sensor used in cameras, and semiconductor laser. Also during the 1960s and 1970s, television set manufacturers switched from vacuum tubes to semiconductors, and advertised sets as "100% solid state" even though the cathode-ray tube (CRT) was still a vacuum tube. It meant only the chassis was 100% solid-state, not including the CRT. Early advertisements spelled out this distinction, but later advertisements assumed the audience had already been educated about it and shortened it to just "100% solid state". LED displays can be said to be truly 100% solid-state. See also Condensed matter physics Laser diode Materials science Semiconductor device Solar cell Solid-state physics Power management integrated circuit References Electronics Semiconductors Solid state engineering
Solid-state electronics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
650
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
3,656,192
https://en.wikipedia.org/wiki/Screw%20axis
A screw axis (helical axis or twist axis) is a line that is simultaneously the axis of rotation and the line along which translation of a body occurs. Chasles' theorem shows that each Euclidean displacement in three-dimensional space has a screw axis, and the displacement can be decomposed into a rotation about and a slide along this screw axis. Plücker coordinates are used to locate a screw axis in space, and consist of a pair of three-dimensional vectors. The first vector identifies the direction of the axis, and the second locates its position. The special case when the first vector is zero is interpreted as a pure translation in the direction of the second vector. A screw axis is associated with each pair of vectors in the algebra of screws, also known as screw theory. The spatial movement of a body can be represented by a continuous set of displacements. Because each of these displacements has a screw axis, the movement has an associated ruled surface known as a screw surface. This surface is not the same as the axode, which is traced by the instantaneous screw axes of the movement of a body. The instantaneous screw axis, or 'instantaneous helical axis' (IHA), is the axis of the helicoidal field generated by the velocities of every point in a moving body. When a spatial displacement specializes to a planar displacement, the screw axis becomes the displacement pole, and the instantaneous screw axis becomes the velocity pole, or instantaneous center of rotation, also called an instant center. The term centro is also used for a velocity pole, and the locus of these points for a planar movement is called a centrode. History The proof that a spatial displacement can be decomposed into a rotation around, and translation along, a line in space is attributed to Michel Chasles in 1830. Recently the work of Giulio Mozzi has been identified as presenting a similar result in 1763. Screw axis symmetry A screw displacement (also screw operation or rotary translation) is the composition of a rotation by an angle φ about an axis (called the screw axis) with a translation by a distance d along this axis. A positive rotation direction usually means one that corresponds to the translation direction by the right-hand rule. This means that if the rotation is clockwise, the displacement is away from the viewer. Except for φ = 180°, we have to distinguish a screw displacement from its mirror image. Unlike for rotations, a righthand and lefthand screw operation generate different groups. The combination of a rotation about an axis and a translation in a direction perpendicular to that axis is a rotation about a parallel axis. However, a screw operation with a nonzero translation vector along the axis cannot be reduced like that. Thus the effect of a rotation combined with any translation is a screw operation in the general sense, with as special cases a pure translation, a pure rotation and the identity. Together these are all the direct isometries in 3D. In crystallography, a screw axis symmetry is a combination of rotation about an axis and a translation parallel to that axis which leaves a crystal unchanged. If φ = for some positive integer n, then screw axis symmetry implies translational symmetry with a translation vector which is n times that of the screw displacement. Applicable for space groups is a rotation by about an axis, combined with a translation along the axis by a multiple of the distance of the translational symmetry, divided by n. This multiple is indicated by a subscript. 
So, 63 is a rotation of 60° combined with a translation of one half of the lattice vector, implying that there is also 3-fold rotational symmetry about this axis. The possibilities are 21, 31, 41, 42, 61, 62, and 63, and the enantiomorphous 32, 43, 64, and 65. Considering a screw axis nm (an n-fold axis with a translation of m/n of the lattice vector), if g is the greatest common divisor of m and n, then there is also a g-fold rotation axis. When n/g screw operations have been performed, the displacement will be m/g lattice vectors, which, since it is a whole number, means one has moved to an equivalent point in the lattice while carrying out a rotation of 360°/g. So 42, 62 and 64 create two-fold rotation axes, while 63 creates a three-fold axis. A non-discrete screw axis isometry group contains all combinations of a rotation about some axis and a proportional translation along the axis (in rifling, the constant of proportionality is called the twist rate); in general this is combined with k-fold rotational isometries about the same axis (k ≥ 1); the set of images of a point under the isometries is a k-fold helix; in addition there may be a 2-fold rotation about a perpendicularly intersecting axis, and hence a k-fold helix of such axes. Screw axis of a spatial displacement Geometric argument Let D be an orientation-preserving rigid motion of R3. The set of these transformations is a subgroup of Euclidean motions known as the special Euclidean group SE(3). These rigid motions are defined by transformations of x in R3 given by consisting of a three-dimensional rotation A followed by a translation by the vector d. A three-dimensional rotation A has a unique axis that defines a line L. Let the unit vector along this line be S so that the translation vector d can be resolved into a sum of two vectors, one parallel and one perpendicular to the axis L, that is, In this case, the rigid motion takes the form Now, the orientation-preserving rigid motion D* = A(x) + d⊥ transforms all the points of R3 so that they remain in planes perpendicular to L. For a rigid motion of this type there is a unique point c in the plane P perpendicular to L through 0, such that The point C can be calculated as because d⊥ does not have a component in the direction of the axis of A. A rigid motion D* with a fixed point must be a rotation of around the axis Lc through the point c. Therefore, the rigid motion consists of a rotation about the line Lc followed by a translation by the vector dL in the direction of the line Lc. Conclusion: every rigid motion of R3 is the result of a rotation of R3 about a line Lc followed by a translation in the direction of the line. The combination of a rotation about a line and translation along the line is called a screw motion. Computing a point on the screw axis A point C on the screw axis satisfies the equation: Solve this equation for C using Cayley's formula for a rotation matrix where [B] is the skew-symmetric matrix constructed from Rodrigues' vector such that Use this form of the rotation A to obtain which becomes This equation can be solved for C on the screw axis P(t) to obtain, The screw axis of this spatial displacement has the Plücker coordinates .
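The construction just described, finding the axis direction, the slide along the axis, and a point C on the screw axis from a rigid motion x -> Ax + d, can be sketched numerically as below. This is an illustrative implementation under the assumption that A is a proper rotation other than the identity; it uses a pseudo-inverse to solve the singular linear system for C and is not code drawn from any particular source.

```python
import numpy as np

def screw_axis(A, d):
    """Decompose the rigid motion x -> A @ x + d into screw parameters.
    Returns (S, angle, slide, C): unit axis direction, rotation angle,
    translation along the axis, and a point C on the screw axis.
    Assumes A is a proper rotation matrix other than the identity."""
    # Axis direction: the eigenvector of A for eigenvalue 1.
    w, v = np.linalg.eig(A)
    S = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    S /= np.linalg.norm(S)
    # Rotation angle from the trace; orient S so the rotation is right-handed.
    angle = np.arccos(np.clip((np.trace(A) - 1.0) / 2.0, -1.0, 1.0))
    axis_from_skew = np.array([A[2, 1] - A[1, 2], A[0, 2] - A[2, 0], A[1, 0] - A[0, 1]])
    if np.dot(axis_from_skew, S) < 0:
        S = -S
    # Split the translation into components along and perpendicular to the axis.
    slide = np.dot(d, S)
    d_perp = d - slide * S
    # Point C in the plane through the origin perpendicular to the axis satisfies
    # (I - A) @ C = d_perp. The matrix is singular along S, so use a pseudo-inverse.
    C = np.linalg.pinv(np.eye(3) - A) @ d_perp
    return S, angle, slide, C

# Example: a 90-degree rotation about z combined with a general translation.
A = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
d = np.array([1.0, 2.0, 3.0])
print(screw_axis(A, d))
```

For this example the axis is the z direction, the slide is 3, and C comes out at (-0.5, 1.5, 0); applying the motion to C indeed returns C shifted by the slide along the axis.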
Dual quaternion The screw axis appears in the dual quaternion formulation of a spatial displacement. The dual quaternion is constructed from the dual vector defining the screw axis and the dual angle , where φ is the rotation about and d the slide along this axis, which defines the displacement D to obtain, A spatial displacement of points q represented as a vector quaternion can be defined using quaternions as the mapping where d is the translation vector quaternion and S is a unit quaternion, also called a versor, given by that defines a rotation by 2θ around an axis S. In the proper Euclidean group E+(3) a rotation may be conjugated with a translation to move it to a parallel rotation axis. Such a conjugation, using quaternion homographies, produces the appropriate screw axis to express the given spatial displacement as a screw displacement, in accord with Chasles' theorem. Mechanics The instantaneous motion of a rigid body may be the combination of rotation about an axis (the screw axis) and a translation along that axis. This screw motion is characterized by the velocity vector for the translation and the angular velocity vector in the same or opposite direction. If these two vectors are constant and along one of the principal axes of the body, no external forces are needed for this motion (moving and spinning). As an example, if gravity and drag are ignored, this is the motion of a bullet fired from a rifled gun. Biomechanics This parameter is often used in biomechanics, when describing the motion of joints of the body. For any period of time, joint motion can be seen as the movement of a single point on one articulating surface with respect to the adjacent surface (usually distal with respect to proximal). The total translation and rotations along the path of motion can be defined as the time integrals of the instantaneous translation and rotation velocities at the IHA for a given reference time. In any single plane, the path formed by the locations of the moving instantaneous axis of rotation (IAR) is known as the 'centroid', and is used in the description of joint motion. See also Corkscrew (roller coaster element) Euler's rotation theorem – rotations without translation Glide reflection Helical symmetry Line group Screw theory Space group References Crystallography Euclidean geometry Kinematics Machines Rigid bodies Symmetry
Screw axis
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Technology", "Engineering" ]
1,915
[ "Machines", "Kinematics", "Physical phenomena", "Classical mechanics", "Materials science", "Physical systems", "Crystallography", "Motion (physics)", "Mechanics", "Condensed matter physics", "Geometry", "Mechanical engineering", "Symmetry" ]
2,692,613
https://en.wikipedia.org/wiki/Pleione%20%28star%29
Pleione is a binary star and the seventh-brightest star in the Pleiades star cluster (Messier 45). It has the variable star designation BU Tauri (BU Tau) and the Flamsteed designation 28 Tauri (28 Tau). The star is located approximately from the Sun, appearing in the constellation of Taurus. Pleione is located close on the sky to the brighter star Atlas, so is difficult for stargazers to distinguish with the naked eye despite being a fifth magnitude star. The brighter star of the Pleione binary pair, component A, is a hot type B star 184 times more luminous than the Sun. It is classified as Be star with certain distinguishing traits: periodic phase changes and a complex circumstellar environment composed of two gaseous disks at different angles to each other. The primary star rotates rapidly, close to its breakup velocity, even faster than Achernar. Although some research on the companion star has been performed, stellar characteristics of the orbiting B component are not well known. Nomenclature 28 Tauri is the star's Flamsteed designation and BU Tauri its variable star designation. The name Pleione originates with Greek mythology; she is the mother of seven daughters known as the Pleiades. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Pleione for this star. It is now so entered in the IAU Catalog of Star Names. Visibility With an apparent magnitude of +5.05 in V, the star is rather difficult to make out with the naked eye, especially since its close neighbour Atlas is 3.7 times brighter and located less than 5 arcminutes away. Beginning in October of each year, Pleione along with the rest of the cluster can be seen rising in the east in the early morning before dawn. To see it after sunset, one will need to wait until December. By mid-February, the star is visible to virtually every inhabited region of the globe, with only those south of 66° unable to see it. Even in cities like Cape Town, South Africa, at the tip of the African continent, the star rises almost 32° above the horizon. Due to its declination of roughly +24°, Pleione is circumpolar in the northern hemisphere at latitudes greater than 66° North. Once late April arrives, the cluster can be spotted briefly in the deepening twilight of the western horizon, soon to disappear with the other setting stars. Pleione is classified as a Gamma Cassiopeiae type variable star, with brightness fluctuations that range between a 4.8 and 5.5 visual magnitude. It has a spectral classification of B8Vne, a hot main sequence star with "nebulous" absorption lines due to its rapid rotation and emission lines from the surrounding circumstellar disks formed of material being ejected from the star. There has been significant debate as to the star's actual distance from Earth. The debate revolves around the different methodologies to measure distance—parallax being the most central, but photometric and spectroscopic observations yielding valuable insights as well. Before the Hipparcos mission, the estimated distance for the Pleiades star cluster was around 135 parsecs or 440 light years. When the Hipparcos Catalogue was published in 1997, the new parallax measurement indicated a much closer distance of about (), triggering substantial controversy among astronomers. 
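The figures in this distance debate all follow from the same simple relation: the distance in parsecs is the reciprocal of the annual parallax in arcseconds. The short sketch below illustrates the arithmetic with made-up parallax values; they are not the published Hipparcos or Gaia measurements.

```python
# Illustrative sketch of the parallax-to-distance arithmetic behind the Pleiades
# distance controversy. The parallax values below are invented round numbers for
# demonstration only, not the published Hipparcos or Gaia measurements.

PARSEC_IN_LY = 3.2616  # light-years per parsec (approximate)

def distance_from_parallax(parallax_mas):
    """Distance in parsecs from an annual parallax given in milliarcseconds."""
    parallax_arcsec = parallax_mas / 1000.0
    return 1.0 / parallax_arcsec

for label, p_mas in [("hypothetical smaller parallax", 7.4),
                     ("hypothetical larger parallax", 8.5)]:
    d_pc = distance_from_parallax(p_mas)
    print(f"{label}: {p_mas} mas -> {d_pc:.0f} pc ({d_pc * PARSEC_IN_LY:.0f} ly)")
```

A shift of only about one milliarcsecond in the measured parallax moves the inferred distance by roughly 15 to 20 parsecs, which is why small systematic errors were enough to sustain the controversy.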
The Hipparcos new reduction produced a broadly similar distance of . If the Hipparcos estimate were accurate, some astronomers contend, then stars in the cluster would have to be fainter than Sun-like stars—a notion that would challenge some of the fundamental precepts of stellar structure. Interferometric measurements taken in 2004 by the Hubble Telescope's Fine Guidance Sensors and corroborated by studies from Caltech and NASA's Jet Propulsion Laboratory showed the original estimate of 135 pc or 440 ly to be the correct figure. The Gaia EDR3 parallax is , indicating a distance around . This is relatively imprecise for a Gaia result due to the brightness of the star, but still with a statistical margin of error similar to the Hipparcos results. Properties In 1942 Otto Struve, one of the early researchers of Be Stars, stated that Pleione is "the most interesting member of the Pleiades cluster". Like many of the stars in the cluster, Pleione is a blue-white B-type main sequence dwarf star with a temperature of about . It has a bolometric luminosity of assuming a distance of roughly 130 pc. With a radius of and mass that is , Pleione is considerably smaller than the brightest stars in the Pleiades. Alcyone for instance has a radius that is with a luminosity , making it roughly 30 times more voluminous than Pleione and about 13 times brighter. Be star Pleione is a classical Be star, often referred to as an "active hot star". Classical Be stars are B-type stars close to the main sequence with the "e" in the spectral type signifying that Pleione exhibits emission lines in its spectrum, rather than the absorption lines typical of B-type stars. Emission lines usually indicate that a star is surrounded by gas. In the case of a Be star, the gas is typically in the form of an equatorial disk, resulting in electromagnetic radiation that emanates not just from the photosphere, but from the disk as well. The geometry and kinematics of this gaseous circumstellar environment are best explained by a Keplerian disk – one that is supported against gravity by rotation, rather than gas or radiation pressure. Circumstellar disks like this are sometimes referred to as "decretion disks", because they consist of material being thrown off the star (as opposed to accretion disks which comprise material falling toward the star). Be Stars are fast rotators (>200 km/s), causing them to be highly oblate, with a substantial stellar wind and high mass loss rate. Pleione's rotational velocity of is considerably faster than the of Achernar, a prototypical Be star. Pleione revolves on its axis once every 11.8 hours, compared to 48.4 hours for Achernar. For comparison, the Sun takes 25.3 days to rotate. Pleione is spinning so fast that it is close to the estimated breakup velocity for a B8V star of about 370–390 km/s, which is why it is losing so much mass. Pleione is unusual because it alternates between three different phases: 1) normal B star, 2) Be star and 3) Be shell star. The cause is changes in the decretion disc, which appears, disappears, and reforms. Material in the disc is pulled back towards the star by gravity, but if it has enough energy it can escape into space, contributing to the stellar wind. Sometimes, Be stars form multiple decretion discs simultaneously, producing complex circumstellar dynamics. As a result of such dynamics, Pleione exhibits prominent long-term photometric and spectroscopic variations encompassing a period of about 35 years. 
During the 20th century, Pleione went through several phase changes: it was in a Be phase until 1903, a B phase (1905–1936), a B-shell phase (1938–1954), followed by another Be phase (1955–1972). It then returned to the Be-shell phase in 1972, developing numerous shell absorption lines in its spectrum. At the same time, the star showed a decrease in brightness, beginning at the end of 1971. After reaching a minimum brightness in late 1973, the star gradually re-brightened. In 1989, Pleione entered a Be phase which lasted until the summer of 2005. These phase changes are ascribed to the evolution of a decretion disc that formed in 1972. Polarimetric observations show the intrinsic polarization angle has changed, indicating a change in orientation of the disc axis. Because Pleione has a stellar companion with a close orbit, the shift in the polarization angle has been attributed to the companion causing a precession (wobble) of the disc, with a precession period of roughly 81 years. Photometric and spectroscopic observations from 2005 to 2007 indicated that a new disc had formed around the equator – producing two discs at different inclination angles (60° and 30°). Such a misaligned double-disc structure had not been observed around other Be stars. Star system Pleione is known to be a speckle binary, although its orbital parameters have yet to be fully established. In 1996 a group of Japanese and French astronomers discovered that Pleione is a single-lined spectroscopic binary with an orbital period of 218.0 days and a large eccentricity of 0.6. The Washington Double Star Catalogue lists an angular separation between the two components of 0.2 arcseconds—an angle which equates to a distance of about 24 AU, assuming a distance of 120 parsecs. Ethnological influences Mythology Pleione was an Oceanid nymph of Mount Kyllene in Arkadia (southern Greece), one of the three thousand daughters of the Titans Oceanus and Tethys. The nymphs in Greek mythology were the spirits of nature; oceanids, spirits of the sea. Though considered lesser divinities, they were still very much venerated as the protectors of the natural world. Each oceanid was thence a patroness of a particular body of water — be it ocean, river, lake, spring or even cloud — and by extension activities related thereto. The sea-nymph, Pleione, was the consort of Atlas, the Titan, and mother of the Hyas, Hyades and Pleiades. Etymology When names were assigned to the stars in the Pleiades cluster, the bright pair of stars in the east of the cluster were named Atlas and Pleione, while the seven other bright stars were named after the mythological Pleiades (the 'Seven Sisters'). The term "Pleiades" was used by Valerius Flaccus to apply to the cluster as a whole, and Riccioli called the star Mater Pleione. There is some diversity of opinion as to the origin of the names Pleione and Pleiades. There are three possible derivations of note. Foremost is that both names come from the Greek word πλεῖν (pr. ple'-ō), meaning "to sail". This is particularly plausible given that ancient Greece was a seafaring culture and because of Pleione's mythical status as an Oceanid nymph. Pleione, as a result, is sometimes referred to as the "sailing queen", while her daughters are the "sailing ones". Also, the appearance of these stars coincided with the sailing season in antiquity; sailors were well advised to set sail only when the Pleiades were visible at night, lest they meet with misfortune. Another derivation of the name is the Greek word Πλειόνη (pr.
plêionê), meaning "more", "plenty", or "full"—a lexeme with many English derivatives like pleiotropy, pleomorphism, pleonasm, pleonexia, plethora and Pliocene. This meaning also coincides with the biblical Kīmāh and the Arabic word for the Pleiades — Al Thurayya. In fact, Pleione may have been numbered amongst the Epimelides (nymphs of meadows and pastures) and presided over the multiplication of the animals, as her name means "to increase in number". Finally, the last comes from Peleiades (), a reference to the sisters' mythical transformation by Zeus into a flock of doves following their pursuit by Orion, the giant huntsman, across the heavens. Modern legacy In the best-selling 1955 nature book published by Time-Life called The World We Live In, there is an artist's impression of Pleione entitled Purple Pleione. The illustration is from the famed space artist Chesley Bonestell and carries the caption: "Purple Pleione, a star of the familiar Pleiades cluster, rotates so rapidly that it has flattened into a flying saucer and hurled forth a dark red ring of hydrogen. Where the excited gas crosses Pleione's equator, it obscures her violet light." Given its mythical connection with sailing and orchids, the name Pleione is often associated with grace, speed and elegance. Some of the finest designs in racing yachts have the name Pleione, and the recent Shanghai Oriental Art Center draws its inspiration from an orchid. Fat Jon in his new album Hundred Eight Stars has a prismatic track dedicated to 28 Tauri. See also Lists of stars in the constellation Taurus Class B Stars Be stars Shell star Circumstellar disk Notes References External links Jim Kaler's Stars, University of Illinois: PLEIONE (28 Tauri) Philippe Stee's in-depth information on: Hot and Active Stars Research Olivier Thizy's in-depth information on: Be Stars High-resolution LRGB image based on 4 hrs total exposure: M45 – Pleiades Open Cluster APOD Pictures: Orion, the giant huntsman, in pursuit of the Pleiades Himalayan Skyscape Pleiades and the Milky Way Pleiades and the Interstellar Medium Taurus (constellation) Pleiades B-type main-sequence stars Be stars Gamma Cassiopeiae variable stars Binary stars Tauri, 028 Tauri, BU 023862 1180 017851 Durchmusterung objects
Pleione (star)
[ "Astronomy" ]
2,906
[ "Taurus (constellation)", "Constellations" ]
2,692,841
https://en.wikipedia.org/wiki/Magnetic%20reluctance
Magnetic reluctance, or magnetic resistance, is a concept used in the analysis of magnetic circuits. It is defined as the ratio of magnetomotive force (mmf) to magnetic flux. It represents the opposition to magnetic flux, and depends on the geometry and composition of an object. Magnetic reluctance in a magnetic circuit is analogous to electrical resistance in an electrical circuit in that resistance is a measure of the opposition to the electric current. The definition of magnetic reluctance is analogous to Ohm's law in this respect. However, magnetic flux passing through a reluctance does not give rise to dissipation of heat as it does for current through a resistance. Thus, the analogy cannot be used for modelling energy flow in systems where energy crosses between the magnetic and electrical domains. An alternative analogy to the reluctance model which does correctly represent energy flows is the gyrator–capacitor model. Magnetic reluctance is a scalar extensive quantity. The unit for magnetic reluctance is inverse henry, H−1. History The term reluctance was coined in May 1888 by Oliver Heaviside. The notion of "magnetic resistance" was first mentioned by James Joule in 1840. The idea for a magnetic flux law, similar to Ohm's law for closed electric circuits, is attributed to Henry Augustus Rowland in an 1873 paper. Rowland is also responsible for coining the term magnetomotive force in 1880; the term was also coined, apparently independently, a bit later in 1883 by Bosanquet. Reluctance is usually represented by a cursive capital R, written ℛ. Definitions In both AC and DC fields, the reluctance is the ratio of the magnetomotive force (MMF) in a magnetic circuit to the magnetic flux in this circuit. In a pulsating DC or AC field, the reluctance also pulsates (see phasors). The definition can be expressed as follows: ℛ = F / Φ, where ℛ ("R") is the reluctance in ampere-turns per weber (a unit that is equivalent to turns per henry; "turns" refers to the winding number of an electrical conductor comprising an inductor), F ("F") is the magnetomotive force (MMF) in ampere-turns, and Φ ("Phi") is the magnetic flux in webers. This relation is sometimes known as Hopkinson's law and is analogous to Ohm's law with resistance replaced by reluctance, voltage by MMF and current by magnetic flux. Permeance is the inverse of reluctance: P = 1 / ℛ. Its SI derived unit is the henry (the same as the unit of inductance, although the two concepts are distinct). Magnetic flux always forms a closed loop, as described by Maxwell's equations, but the path of the loop depends on the reluctance of the surrounding materials. It is concentrated around the path of least reluctance. Air and vacuum have high reluctance, while easily magnetized materials such as soft iron have low reluctance. The concentration of flux in low-reluctance materials forms strong temporary poles and causes mechanical forces that tend to move the materials towards regions of higher flux, so it is always an attractive force (pull). The reluctance of a uniform magnetic circuit can be calculated as: ℛ = l / (μ A) = l / (μ0 μr A), where l is the length of the circuit in metres, μ0 is the permeability of vacuum, equal to approximately 4π × 10−7 henries per metre, μr is the relative magnetic permeability of the material (dimensionless), μ = μ0 μr is the permeability of the material in henries per metre, and A is the cross-sectional area of the circuit in square metres. Applications Constant air gaps can be created in the core of certain transformers to reduce the effects of saturation. This increases the reluctance of the magnetic circuit, and enables it to store more energy before core saturation.
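The air-gap effect described above can be made concrete with a short sketch that applies Hopkinson's law and the uniform-circuit formula from the Definitions section to a toy gapped core; the dimensions, winding, and relative permeability are invented example numbers.

```python
import math

# Illustrative sketch: reluctance of a gapped core using R = l / (mu0 * mur * A),
# and the flux from Hopkinson's law, flux = MMF / R_total.
# All dimensions and material values below are invented example numbers.

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a uniform section of a magnetic circuit, in 1/H."""
    return length_m / (MU0 * mu_r * area_m2)

core = reluctance(length_m=0.20, area_m2=1e-4, mu_r=2000)   # iron core path
gap = reluctance(length_m=0.001, area_m2=1e-4, mu_r=1.0)    # 1 mm air gap
total = core + gap                                          # series magnetic circuit

mmf = 400 * 0.5   # 400 turns carrying 0.5 A gives 200 ampere-turns
flux = mmf / total

print(f"core reluctance {core:.3e} 1/H, gap reluctance {gap:.3e} 1/H")
print(f"flux with gap: {flux:.3e} Wb, without gap: {mmf / core:.3e} Wb")
```

In this example the 1 mm gap contributes about ten times the reluctance of the 20 cm iron path, illustrating why even a small gap dominates the behaviour of the circuit.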
This effect is also used in the flyback transformer. Variable air gaps can be created in the cores by a movable keeper to create a flux switch that alters the amount of magnetic flux in a magnetic circuit without varying the constant magnetomotive force in that circuit. Variation of reluctance is the principle behind the reluctance motor (or the variable reluctance generator) and the Alexanderson alternator. Another way of saying this is that the reluctance forces strive for a maximally aligned magnetic circuit and a minimal air gap distance. Loudspeakers used in conjunction with computer monitors or other screens are typically shielded magnetically, in order to reduce magnetic interference caused to the screens such as in televisions or CRTs. The speaker magnet is covered with a material such as soft iron to minimize the stray magnetic field. Reluctance can also be applied to: Reluctance motors Variable reluctance (magnetic) pickups Magnetic capacitance Magnetic circuit Magnetic complex reluctance References Electric and magnetic fields in matter Magnetic circuits
Magnetic reluctance
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
935
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
2,694,525
https://en.wikipedia.org/wiki/Point%20groups%20in%20three%20dimensions
In geometry, a point group in three dimensions is an isometry group in three dimensions that leaves the origin fixed, or correspondingly, an isometry group of a sphere. It is a subgroup of the orthogonal group O(3), the group of all isometries that leave the origin fixed, or correspondingly, the group of orthogonal matrices. O(3) itself is a subgroup of the Euclidean group E(3) of all isometries. Symmetry groups of geometric objects are isometry groups. Accordingly, analysis of isometry groups is analysis of possible symmetries. All isometries of a bounded (finite) 3D object have one or more common fixed points. We follow the usual convention by choosing the origin as one of them. The symmetry group of an object is sometimes also called its full symmetry group, as opposed to its proper symmetry group, the intersection of its full symmetry group with E+(3), which consists of all direct isometries, i.e., isometries preserving orientation. For a bounded object, the proper symmetry group is called its rotation group. It is the intersection of its full symmetry group with SO(3), the full rotation group of the 3D space. The rotation group of a bounded object is equal to its full symmetry group if and only if the object is chiral. The point groups that are generated purely by a finite set of reflection mirror planes passing through the same point are the finite Coxeter groups, represented by Coxeter notation. The point groups in three dimensions are widely used in chemistry, especially to describe the symmetries of a molecule and of molecular orbitals forming covalent bonds, and in this context they are also called molecular point groups. 3D isometries that leave the origin fixed The symmetry group operations (symmetry operations) are the isometries of three-dimensional space R3 that leave the origin fixed, forming the group O(3). These operations can be categorized as: The direct (orientation-preserving) symmetry operations, which form the group SO(3): The identity operation, denoted by E or the identity matrix I. Rotation about an axis through the origin by an angle θ. Rotation by θ = 360°/n for any positive integer n is denoted Cn (from the Schoenflies notation for the group Cn that it generates). The identity operation, also written C1, is a special case of the rotation operator. The indirect (orientation-reversing) operations: Inversion, denoted i or Ci, that is, rotation by 180° about a coordinate axis followed by a reflection in the orthogonal coordinate plane. The matrix notation is −I. Reflection in a plane through the origin, denoted σ. Improper rotation, also called rotation-reflection: rotation about an axis by an angle θ, combined with reflection in the plane through the origin perpendicular to the axis. Rotation-reflection by θ = 360°/n for any positive integer n is denoted Sn (from the Schoenflies notation for the group Sn that it generates if n is even). Inversion is a special case of rotation-reflection (i = S2), as is reflection (σ = S1), so these operations are often classified as improper rotations. A circumflex is sometimes added to the symbol to indicate an operator, as in Ĉn and Ŝn. Conjugacy When comparing the symmetry type of two objects, the origin is chosen for each separately, i.e., they need not have the same center. Moreover, two objects are considered to be of the same symmetry type if their symmetry groups are conjugate subgroups of O(3) (two subgroups H1, H2 of a group G are conjugate, if there exists g ∈ G such that H1 = g−1H2g ). 
For example, two 3D objects have the same symmetry type: if both have mirror symmetry, but with respect to a different mirror plane if both have 3-fold rotational symmetry, but with respect to a different axis. In the case of multiple mirror planes and/or axes of rotation, two symmetry groups are of the same symmetry type if and only if there is a rotation mapping the whole structure of the first symmetry group to that of the second. (In fact there will be more than one such rotation, but not an infinite number as when there is only one mirror or axis.) The conjugacy definition would also allow a mirror image of the structure, but this is not needed, the structure itself is achiral. For example, if a symmetry group contains a 3-fold axis of rotation, it contains rotations in two opposite directions. (The structure is chiral for 11 pairs of space groups with a screw axis.) Infinite isometry groups There are many infinite isometry groups; for example, the "cyclic group" (meaning that it is generated by one element – not to be confused with a torsion group) generated by a rotation by an irrational number of turns about an axis. We may create non-cyclical abelian groups by adding more rotations around the same axis. The set of points on a circle at rational numbers of degrees around the circle illustrates a point group requiring an infinite number of generators. There are also non-abelian groups generated by rotations around different axes. These are usually (generically) free groups. They will be infinite unless the rotations are specially chosen. All the infinite groups mentioned so far are not closed as topological subgroups of O(3). We now discuss topologically closed subgroups of O(3). The whole O(3) is the symmetry group of spherical symmetry; SO(3) is the corresponding rotation group. The other infinite isometry groups consist of all rotations about an axis through the origin, and those with additionally reflection in the planes through the axis, and/or reflection in the plane through the origin, perpendicular to the axis. Those with reflection in the planes through the axis, with or without reflection in the plane through the origin perpendicular to the axis, are the symmetry groups for the two types of cylindrical symmetry. Any 3D shape (subset of R3) having infinite rotational symmetry must also have mirror symmetry for every plane through the axis. Physical objects having infinite rotational symmetry will also have the symmetry of mirror planes through the axis, but vector fields may not, for instance the velocity vectors of a cone rotating about its axis, or the magnetic field surrounding a wire. There are seven continuous groups which are all in a sense limits of the finite isometry groups. These so called limiting point groups or Curie limiting groups are named after Pierre Curie who was the first to investigate them. The seven infinite series of axial groups lead to five limiting groups (two of them are duplicates), and the seven remaining point groups produce two more continuous groups. In international notation, the list is ∞, ∞2, ∞/m, ∞mm, ∞/mm, ∞∞, and ∞∞m. Not all of these are possible for physical objects, for example objects with ∞∞ symmetry also have ∞∞m symmetry. See below for other designations and more details. Finite isometry groups Symmetries in 3D that leave the origin fixed are fully characterized by symmetries on a sphere centered at the origin. For finite 3D point groups, see also spherical symmetry groups. 
Up to conjugacy, the set of finite 3D point groups consists of: , which have at most one more-than-2-fold rotation axis; they are the finite symmetry groups on an infinite cylinder, or equivalently, those on a finite cylinder. They are sometimes called the axial or prismatic point groups. , which have multiple 3-or-more-fold rotation axes; these groups can also be characterized as point groups having multiple 3-fold rotation axes. The possible combinations are: Four 3-fold axes (the three tetrahedral symmetries T, Th, and Td) Four 3-fold axes and three 4-fold axes (octahedral symmetries O and Oh) Ten 3-fold axes and six 5-fold axes (icosahedral symmetries I and Ih) According to the crystallographic restriction theorem, only a limited number of point groups are compatible with discrete translational symmetry: 27 from the 7 infinite series, and 5 of the 7 others. Together, these make up the 32 so-called crystallographic point groups. The seven infinite series of axial groups The infinite series of axial or prismatic groups have an index n, which can be any integer; in each series, the nth symmetry group contains n-fold rotational symmetry about an axis, i.e., symmetry with respect to a rotation by an angle 360°/n. n=1 covers the cases of no rotational symmetry at all. There are four series with no other axes of rotational symmetry (see cyclic symmetries) and three with additional axes of 2-fold symmetry (see dihedral symmetry). They can be understood as point groups in two dimensions extended with an axial coordinate and reflections in it. They are related to the frieze groups; they can be interpreted as frieze-group patterns repeated n times around a cylinder. The following table lists several notations for point groups: Hermann–Mauguin notation (used in crystallography), Schönflies notation (used to describe molecular symmetry), orbifold notation, and Coxeter notation. The latter three are not only conveniently related to its properties, but also to the order of the group. The orbifold notation is a unified notation, also applicable for wallpaper groups and frieze groups. The crystallographic groups have n restricted to 1, 2, 3, 4, and 6; removing crystallographic restriction allows any positive integer. The series are: For odd n we have Z2n = Zn × Z2 and Dih2n = Dihn × Z2. The groups Cn (including the trivial C1) and Dn are chiral, the others are achiral. The terms horizontal (h) and vertical (v), and the corresponding subscripts, refer to the additional mirror plane, that can be parallel to the rotation axis (vertical) or perpendicular to the rotation axis (horizontal). The simplest nontrivial axial groups are equivalent to the abstract group Z2: Ci (equivalent to S2) – inversion symmetry C2 – 2-fold rotational symmetry Cs (equivalent to C1hand C1v) – reflection symmetry, also called bilateral symmetry. The second of these is the first of the uniaxial groups (cyclic groups) Cn of order n (also applicable in 2D), which are generated by a single rotation of angle 360°/n. In addition to this, one may add a mirror plane perpendicular to the axis, giving the group Cnh of order 2n, or a set of n mirror planes containing the axis, giving the group Cnv, also of order 2n. The latter is the symmetry group for a regular n-sided pyramid. A typical object with symmetry group Cn or Dn is a propeller. If both horizontal and vertical reflection planes are added, their intersections give n axes of rotation through 180°, so the group is no longer uniaxial. This new group of order 4n is called Dnh. 
Its subgroup of rotations is the dihedral group Dn of order 2n, which still has the 2-fold rotation axes perpendicular to the primary rotation axis, but no mirror planes. Note: in 2D, Dn includes reflections, which can also be viewed as flipping over flat objects without distinction of frontside and backside; but in 3D, the two operations are distinguished: Dn contains "flipping over", not reflections. There is one more group in this family, called Dnd (or Dnv), which has vertical mirror planes containing the main rotation axis, but instead of having a horizontal mirror plane, it has an isometry that combines a reflection in the horizontal plane and a rotation by an angle 180°/n. Dnh is the symmetry group for a "regular" n-gonal prism and also for a "regular" n-gonal bipyramid. Dnd is the symmetry group for a "regular" n-gonal antiprism, and also for a "regular" n-gonal trapezohedron. Dn is the symmetry group of a partially rotated ("twisted") prism. The groups D2 and D2h are noteworthy in that there is no special rotation axis. Rather, there are three perpendicular 2-fold axes. D2 is a subgroup of all the polyhedral symmetries (see below), and D2h is a subgroup of the polyhedral groups Th and Oh. D2 occurs in molecules such as twistane and in homotetramers such as Concanavalin A. The elements of D2 are in 1-to-2 correspondence with the rotations given by the unit Lipschitz quaternions. The group Sn is generated by the combination of a reflection in the horizontal plane and a rotation by an angle 360°/n. For n odd this is equal to the group generated by the two separately, Cnh of order 2n, and therefore the notation Sn is not needed; however, for n even it is distinct, and of order n. Like Dnd it contains a number of improper rotations without containing the corresponding rotations. All symmetry groups in the 7 infinite series are different, except for the following four pairs of mutually equal ones: C1h and C1v: group of order 2 with a single reflection (Cs ) D1 and C2: group of order 2 with a single 180° rotation D1h and C2v: group of order 4 with a reflection in a plane and a 180° rotation through a line in that plane D1d and C2h: group of order 4 with a reflection in a plane and a 180° rotation through a line perpendicular to that plane. S2 is the group of order 2 with a single inversion (Ci ). "Equal" is meant here as the same up to conjugacy in space. This is stronger than "up to algebraic isomorphism". For example, there are three different groups of order two in the first sense, but there is only one in the second sense. Similarly, e.g. S2n is algebraically isomorphic with Z2n. The groups may be constructed as follows: Cn. Generated by an element also called Cn, which corresponds to a rotation by angle 2π/n around the axis. Its elements are E (the identity), Cn, Cn2, ..., Cnn−1, corresponding to rotation angles 0, 2π/n, 4π/n, ..., 2(n − 1)π/n. S2n. Generated by element C2nσh, where σh is a reflection in the direction of the axis. Its elements are the elements of Cn with C2nσh, C2n3σh, ..., C2n2n−1σh added. Cnh. Generated by element Cn and reflection σh. Its elements are the elements of group Cn, with elements σh, Cnσh, Cn2σh, ..., Cnn−1σh added. Cnv. Generated by element Cn and reflection σv in a direction in the plane perpendicular to the axis. Its elements are the elements of group Cn, with elements σv, Cnσv, Cn2σv, ..., Cnn−1σv added. Dn. Generated by element Cn and 180° rotation U = σhσv around a direction in the plane perpendicular to the axis. 
Its elements are the elements of group Cn, with elements U, CnU, Cn2U, ..., Cnn − 1U added. Dnd. Generated by elements C2nσh and σv. Its elements are the elements of group Cn and the additional elements of S2n and Cnv, with elements C2nσhσv, C2n3σhσv, ..., C2n2n − 1σhσv added. Dnh. Generated by elements Cn, σh, and σv. Its elements are the elements of group Cn and the additional elements of Cnh, Cnv, and Dn. Groups with continuous axial rotations are designated by putting ∞ in place of n. Note however that C here is not the same as the infinite cyclic group (also sometimes designated C), which is isomorphic to the integers. The following table gives the five continuous axial rotation groups. They are limits of the finite groups only in the sense that they arise when the main rotation is replaced by rotation by an arbitrary angle, so not necessarily a rational number of degrees as with the finite groups. Physical objects can only have C or D symmetry, but vector fields can have the others. The seven remaining point groups The remaining point groups are said to be of very high or polyhedral symmetry because they have more than one rotation axis of order greater than 2. Here, Cn denotes an axis of rotation through 360°/n and Sn denotes an axis of improper rotation through the same. On successive lines are the orbifold notation, the Coxeter notation and Coxeter diagram, and the Hermann–Mauguin notation (full, and abbreviated if different) and the order (number of elements) of the symmetry group. The groups are: The continuous groups related to these groups are: ∞∞, K, or SO(3), all possible rotations. ∞∞m, Kh, or O(3), all possible rotations and reflections. As noted above for the infinite isometry groups, any physical object having K symmetry will also have Kh symmetry. Reflective Coxeter groups The reflective point groups in three dimensions are also called Coxeter groups and can be given by a Coxeter-Dynkin diagram and represent a set of mirrors that intersect at one central point. Coxeter notation offers a bracketed notation equivalent to the Coxeter diagram, with markup symbols for rotational and other subsymmetry point groups. In Schoenflies notation, the reflective point groups in 3D are Cnv, Dnh, and the full polyhedral groups T, O, and I. The mirror planes bound a set of spherical triangle domains on the surface of a sphere. A rank n Coxeter group has n mirror planes. Coxeter groups having fewer than 3 generators have degenerate spherical triangle domains, as lunes or a hemisphere. In Coxeter notation these groups are tetrahedral symmetry [3,3], octahedral symmetry [4,3], icosahedral symmetry [5,3], and dihedral symmetry [p,2]. The number of mirrors for an irreducible group is nh/2, where h is the Coxeter group's Coxeter number, n is the dimension (3). Rotation groups The rotation groups, i.e., the finite subgroups of SO(3), are: the cyclic groups Cn (the rotation group of a canonical pyramid), the dihedral groups Dn (the rotation group of a uniform prism, or canonical bipyramid), and the rotation groups T, O and I of a regular tetrahedron, octahedron/cube and icosahedron/dodecahedron. In particular, the dihedral groups D3, D4 etc. are the rotation groups of plane regular polygons embedded in three-dimensional space, and such a figure may be considered as a degenerate regular prism. Therefore, it is also called a dihedron (Greek: solid with two faces), which explains the name dihedral group. An object having symmetry group Cn, Cnh, Cnv or S2n has rotation group Cn. 
An object having symmetry group Dn, Dnh, or Dnd has rotation group Dn. An object having a polyhedral symmetry (T, Td, Th, O, Oh, I or Ih) has as its rotation group the corresponding one without a subscript: T, O or I. The rotation group of an object is equal to its full symmetry group if and only if the object is chiral. In other words, the chiral objects are those with their symmetry group in the list of rotation groups. Given in Schönflies notation, Coxeter notation, (orbifold notation), the rotation subgroups are: Correspondence between rotation groups and other groups Groups containing inversion The rotation group SO(3) is a subgroup of O(3), the full point rotation group of the 3D Euclidean space. Correspondingly, O(3) is the direct product of SO(3) and the inversion group Ci (where inversion is denoted by its matrix −I): O(3) = SO(3) × { I , −I } Thus there is a 1-to-1 correspondence between all direct isometries and all indirect isometries, through inversion. Also there is a 1-to-1 correspondence between all groups H of direct isometries in SO(3) and all groups K of isometries in O(3) that contain inversion: K = H × { I , −I } H = K ∩ SO(3) where the isometry ( A, I ) is identified with A. For finite groups, the correspondence is: Groups containing indirect isometries but no inversion If a group of direct isometries H has a subgroup L of index 2, then there is a corresponding group that contains indirect isometries but no inversion: For example, H = C4 corresponds to M = S4. Thus M is obtained from H by inverting the isometries in . This group M is, when considered as an abstract group, isomorphic to H. Conversely, for all point groups M that contain indirect isometries but no inversion we can obtain a rotation group H by inverting the indirect isometries. For finite groups, the correspondence is: Normal subgroups In 2D, the cyclic group of k-fold rotations Ck is for every positive integer k a normal subgroup of O(2) and SO(2). Accordingly, in 3D, for every axis the cyclic group of k-fold rotations about that axis is a normal subgroup of the group of all rotations about that axis. Since any subgroup of index two is normal, the group of rotations (Cn) is normal both in the group (Cnv) obtained by adding to (Cn) reflection planes through its axis and in the group (Cnh) obtained by adding to (Cn) a reflection plane perpendicular to its axis. Maximal symmetries There are two discrete point groups with the property that no discrete point group has it as proper subgroup: Oh and Ih. Their largest common subgroup is Th. The two groups are obtained from it by changing 2-fold rotational symmetry to 4-fold, and adding 5-fold symmetry, respectively. There are two crystallographic point groups with the property that no crystallographic point group has it as proper subgroup: Oh and D6h. Their maximal common subgroups, depending on orientation, are D3d and D2h. The groups arranged by abstract group type Below the groups explained above are arranged by abstract group type. The smallest abstract groups that are not any symmetry group in 3D, are the quaternion group (of order 8), Z3 × Z3 (of order 9), the dicyclic group Dic3 (of order 12), and 10 of the 14 groups of order 16. The column "# of order 2 elements" in the following tables shows the total number of isometry subgroups of types C2, Ci, Cs. 
This total number is one of the characteristics helping to distinguish the various abstract group types, while their isometry type helps to distinguish the various isometry groups of the same abstract group. Within the possibilities of isometry groups in 3D, there are infinitely many abstract group types with 0, 1 and 3 elements of order 2, there are two with 4n + 1 elements of order 2, and there are three with 4n + 3 elements of order 2 (for each n ≥ 8 ). There is never a positive even number of elements of order 2. Symmetry groups in 3D that are cyclic as abstract group The symmetry group for n-fold rotational symmetry is Cn; its abstract group type is cyclic group Zn, which is also denoted by Cn. However, there are two more infinite series of symmetry groups with this abstract group type: For even order 2n there is the group S2n (Schoenflies notation) generated by a rotation by an angle 180°/n about an axis, combined with a reflection in the plane perpendicular to the axis. For S2 the notation Ci is used; it is generated by inversion. For any order 2n where n is odd, we have Cnh; it has an n-fold rotation axis, and a perpendicular plane of reflection. It is generated by a rotation by an angle 360°/n about the axis, combined with the reflection. For C1h the notation Cs is used; it is generated by reflection in a plane. Thus we have, with bolding of the 10 cyclic crystallographic point groups, for which the crystallographic restriction applies: etc. Symmetry groups in 3D that are dihedral as abstract group In 2D dihedral group Dn includes reflections, which can also be viewed as flipping over flat objects without distinction of front- and backside. However, in 3D the two operations are distinguished: the symmetry group denoted by Dn contains n 2-fold axes perpendicular to the n-fold axis, not reflections. Dn is the rotation group of the n-sided prism with regular base, and n-sided bipyramid with regular base, and also of a regular, n-sided antiprism and of a regular, n-sided trapezohedron. The group is also the full symmetry group of such objects after making them chiral by an identical chiral marking on every face, for example, or some modification in the shape. The abstract group type is dihedral group Dihn, which is also denoted by Dn. However, there are three more infinite series of symmetry groups with this abstract group type: Cnv of order 2n, the symmetry group of a regular n-sided pyramid Dnd of order 4n, the symmetry group of a regular n-sided antiprism Dnh of order 4n for odd n. For n = 1 we get D2, already covered above, so n ≥ 3. Note the following property: Dih4n+2 Dih2n+1 × Z2 Thus we have, with bolding of the 12 crystallographic point groups, and writing D1d as the equivalent C2h: etc. Other C2n,h of order 4n is of abstract group type Z2n × Z2. For n = 1 we get Dih2, already covered above, so n ≥ 2. Thus we have, with bolding of the 2 cyclic crystallographic point groups: etc. Dnh of order 4n is of abstract group type Dihn × Z2. For odd n this is already covered above, so we have here D2nh of order 8n, which is of abstract group type Dih2n × Z2 (n≥1). Thus we have, with bolding of the 3 dihedral crystallographic point groups: etc. The remaining seven are, with bolding of the 5 crystallographic point groups (see also above): Fundamental domain The fundamental domain of a point group is a conic solid. An object with a given symmetry in a given orientation is characterized by the fundamental domain. 
If the object is a surface it is characterized by a surface in the fundamental domain continuing to its radial bordal faces or surface. If the copies of the surface do not fit, radial faces or surfaces can be added. They fit anyway if the fundamental domain is bounded by reflection planes. For a polyhedron this surface in the fundamental domain can be part of an arbitrary plane. For example, in the disdyakis triacontahedron one full face is a fundamental domain of icosahedral symmetry. Adjusting the orientation of the plane gives various possibilities of combining two or more adjacent faces to one, giving various other polyhedra with the same symmetry. The polyhedron is convex if the surface fits to its copies and the radial line perpendicular to the plane is in the fundamental domain. Also the surface in the fundamental domain may be composed of multiple faces. Binary polyhedral groups The map Spin(3) → SO(3) is the double cover of the rotation group by the spin group in 3 dimensions. (This is the only connected cover of SO(3), since Spin(3) is simply connected.) By the lattice theorem, there is a Galois connection between subgroups of Spin(3) and subgroups of SO(3) (rotational point groups): the image of a subgroup of Spin(3) is a rotational point group, and the preimage of a point group is a subgroup of Spin(3). (Note that Spin(3) has alternative descriptions as the special unitary group SU(2) and as the group of unit quaternions. Topologically, this Lie group is the 3-dimensional sphere S3.) The preimage of a finite point group is called a binary polyhedral group, represented as ⟨l,n,m⟩, and is called by the same name as its point group, with the prefix binary, with double the order of the related polyhedral group (l,m,n). For instance, the preimage of the icosahedral group (2,3,5) is the binary icosahedral group, ⟨2,3,5⟩. The binary polyhedral groups are: : binary cyclic group of an (n + 1)-gon, order 2n : binary dihedral group of an n-gon, ⟨2,2,n⟩, order 4n : binary tetrahedral group, ⟨2,3,3⟩, order 24 : binary octahedral group, ⟨2,3,4⟩, order 48 : binary icosahedral group, ⟨2,3,5⟩, order 120 These are classified by the ADE classification, and the quotient of C2 by the action of a binary polyhedral group is a Du Val singularity. For point groups that reverse orientation, the situation is more complicated, as there are two pin groups, so there are two possible binary groups corresponding to a given point group. Note that this is a covering of groups, not a covering of spaces – the sphere is simply connected, and thus has no covering spaces. There is thus no notion of a "binary polyhedron" that covers a 3-dimensional polyhedron. Binary polyhedral groups are discrete subgroups of a Spin group, and under a representation of the spin group act on a vector space, and may stabilize a polyhedron in this representation – under the map Spin(3) → SO(3) they act on the same polyhedron that the underlying (non-binary) group acts on, while under spin representations or other representations they may stabilize other polyhedra. This is in contrast to projective polyhedra – the sphere does cover projective space (and also lens spaces), and thus a tessellation of projective space or lens space yields a distinct notion of polyhedron. 
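To make the 2-to-1 relationship between unit quaternions and rotations concrete, the following sketch (an illustrative addition, not part of the article; it assumes NumPy and a standard quaternion-to-matrix convention) maps the eight unit Lipschitz quaternions {±1, ±i, ±j, ±k} — the binary dihedral group ⟨2,2,2⟩ of order 8 — to rotation matrices and checks that exactly the four rotations of D2 are obtained, with q and −q always giving the same rotation.

```python
import numpy as np

def quat_to_rotation(q):
    """Standard unit-quaternion -> SO(3) matrix map (the 2-to-1 covering Spin(3) -> SO(3))."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# The eight unit Lipschitz quaternions {+-1, +-i, +-j, +-k}: the binary dihedral group <2,2,2>.
units = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
Q8 = [tuple(s * c for c in q) for q in units for s in (+1, -1)]

# Collect the distinct images in SO(3); rounding makes the matrices usable as set keys.
images = {quat_to_rotation(q).round(6).tobytes() for q in Q8}
print(len(Q8), "quaternions cover", len(images), "rotations")  # 8 quaternions cover 4 rotations (D2)

# q and -q always map to the same rotation, so the covering is exactly 2-to-1.
assert all(np.allclose(quat_to_rotation(q),
                       quat_to_rotation(tuple(-c for c in q))) for q in Q8)
```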
See also List of spherical symmetry groups List of character tables for chemically important 3D point groups Point groups in two dimensions Point groups in four dimensions Symmetry Euclidean plane isometry Group action Point group Crystal system Space group List of small groups Molecular symmetry Footnotes References . 6.5 The binary polyhedral groups, p. 68 External links Graphic overview of the 32 crystallographic point groups – form the first parts (apart from skipping n=5) of the 7 infinite series and 5 of the 7 separate 3D point groups Overview of properties of point groups Simplest Canonical Polyhedra of Each Symmetry Type (uses Java) Point Groups and Crystal Systems, by Yi-Shu Wei, pp. 4–6 The Geometry Center: 10.1 Formulas for Symmetries in Cartesian Coordinates (three dimensions) Euclidean symmetries Group theory
Point groups in three dimensions
[ "Physics", "Mathematics" ]
6,674
[ "Functions and mappings", "Euclidean symmetries", "Mathematical objects", "Group theory", "Fields of abstract algebra", "Mathematical relations", "Symmetry" ]
2,695,433
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%20model
In theoretical physics, the Wess–Zumino model has become the first known example of an interacting four-dimensional quantum field theory with linearly realised supersymmetry. In 1974, Julius Wess and Bruno Zumino studied, using modern terminology, dynamics of a single chiral superfield (composed of a complex scalar and a spinor fermion) whose cubic superpotential leads to a renormalizable theory. It is a special case of 4D N = 1 global supersymmetry. The treatment in this article largely follows that of Figueroa-O'Farrill's lectures on supersymmetry, and to some extent of Tong. The model is an important model in supersymmetric quantum field theory. It is arguably the simplest supersymmetric field theory in four dimensions, and is ungauged. The Wess–Zumino action Preliminary treatment Spacetime and matter content In a preliminary treatment, the theory is defined on flat spacetime (Minkowski space). For this article, the metric has mostly plus signature. The matter content is a real scalar field , a real pseudoscalar field , and a real (Majorana) spinor field . This is a preliminary treatment in the sense that the theory is written in terms of familiar scalar and spinor fields which are functions of spacetime, without developing a theory of superspace or superfields, which appear later in the article. Free, massless theory The Lagrangian of the free, massless Wess–Zumino model is where The corresponding action is . Massive theory Supersymmetry is preserved when adding a mass term of the form Interacting theory Supersymmetry is preserved when adding an interaction term with coupling constant : The full Wess–Zumino action is then given by putting these Lagrangians together: Alternative expression There is an alternative way of organizing the fields. The real fields and are combined into a single complex scalar field while the Majorana spinor is written in terms of two Weyl spinors: . Defining the superpotential the Wess–Zumino action can also be written (possibly after relabelling some constant factors) Upon substituting in , one finds that this is a theory with a massive complex scalar and a massive Majorana spinor of the same mass. The interactions are a cubic and quartic interaction, and a Yukawa interaction between and , which are all familiar interactions from courses in non-supersymmetric quantum field theory. Using superspace and superfields Superspace and superfield content Superspace consists of the direct sum of Minkowski space with 'spin space', a four dimensional space with coordinates , where are indices taking values in More formally, superspace is constructed as the space of right cosets of the Lorentz group in the super-Poincaré group. The fact there is only 4 'spin coordinates' means that this is a theory with what is known as supersymmetry, corresponding to an algebra with a single supercharge. The dimensional superspace is sometimes written , and called super Minkowski space. The 'spin coordinates' are so called not due to any relation to angular momentum, but because they are treated as anti-commuting numbers, a property typical of spinors in quantum field theory due to the spin statistics theorem. A superfield is then a function on superspace, . Defining the supercovariant derivative a chiral superfield satisfies The field content is then simply a single chiral superfield. However, the chiral superfield contains fields, in the sense that it admits the expansion with Then can be identified as a complex scalar, is a Weyl spinor and is an auxiliary complex scalar. 
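For orientation, the free, massless Lagrangian of the preliminary treatment above and the cubic superpotential are commonly written as follows. This is a sketch in one choice of conventions (mostly-plus metric, Majorana spinor, and the labels S for the real scalar and P for the real pseudoscalar are introduced here for illustration), so the signs and normalisations should be read as an assumption rather than as the article's own formulas:

```latex
% One common convention (mostly-plus metric, Majorana spinor \psi); factors vary by source.
\mathcal{L}_{\text{free}}
  = -\tfrac{1}{2}\,\partial_\mu S\,\partial^\mu S
    -\tfrac{1}{2}\,\partial_\mu P\,\partial^\mu P
    -\tfrac{1}{2}\,\bar\psi\,\gamma^{\mu}\partial_{\mu}\psi ,
\qquad
W(\Phi) = \tfrac{m}{2}\,\Phi^{2} + \tfrac{\lambda}{3}\,\Phi^{3}.
```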
These fields admit a further relabelling, with and This allows recovery of the preliminary forms, after eliminating the non-dynamical using its equation of motion. Free, massless action When written in terms of the chiral superfield , the action (for the free, massless Wess–Zumino model) takes on the simple form where are integrals over spinor dimensions of superspace. Superpotential Masses and interactions are added through a superpotential. The Wess–Zumino superpotential is Since is complex, to ensure the action is real its conjugate must also be added. The full Wess–Zumino action is written Supersymmetry of the action Preliminary treatment The action is invariant under the supersymmetry transformations, given in infinitesimal form by where is a Majorana spinor-valued transformation parameter and is the chirality operator. The alternative form is invariant under the transformation . Without developing a theory of superspace transformations, these symmetries appear ad-hoc. Superfield treatment If the action can be written as where is a real superfield, that is, , then the action is invariant under supersymmetry. Then the reality of means it is invariant under supersymmetry. Extra classical symmetries Superconformal symmetry The massless Wess–Zumino model admits a larger set of symmetries, described at the algebra level by the superconformal algebra. As well as the Poincaré symmetry generators and the supersymmetry translation generators, this contains the conformal algebra as well as a conformal supersymmetry generator . The conformal symmetry is broken at the quantum level by trace and conformal anomalies, which break invariance under the conformal generators for dilatations and for special conformal transformations respectively. R-symmetry The R-symmetry of supersymmetry holds when the superpotential is a monomial. This means either , so that the superfield is massive but free (non-interacting), or so the theory is massless but (possibly) interacting. This is broken at the quantum level by anomalies. Action for multiple chiral superfields The action generalizes straightforwardly to multiple chiral superfields with . The most general renormalizable theory is where the superpotential is , where implicit summation is used. By a change of coordinates, under which transforms under , one can set without loss of generality. With this choice, the expression is known as the canonical Kähler potential. There is residual freedom to make a unitary transformation in order to diagonalise the mass matrix . When , if the multiplet is massive then the Weyl fermion has a Majorana mass. But for the two Weyl fermions can have a Dirac mass, when the superpotential is taken to be This theory has a symmetry, where rotate with opposite charges Super QCD For general , a superpotential of the form has a symmetry when rotate with opposite charges, that is under . This symmetry can be gauged and coupled to supersymmetric Yang–Mills to form a supersymmetric analogue to quantum chromodynamics, known as super QCD. Supersymmetric sigma models If renormalizability is not insisted upon, then there are two possible generalizations. The first of these is to consider more general superpotentials. The second is to consider in the kinetic term to be a real function of and . The action is invariant under transformations : these are known as Kähler transformations. Considering this theory gives an intersection of Kähler geometry with supersymmetric field theory. 
By expanding the Kähler potential in terms of derivatives of and the constituent superfields of , and then eliminating the auxiliary fields using the equations of motion, the following expression is obtained: where is the Kähler metric. It is invariant under Kähler transformations. If the kinetic term is positive definite, then is invertible, allowing the inverse metric to be defined. The Christoffel symbols (adapted for a Kähler metric) are and The covariant derivatives and are defined and The Riemann curvature tensor (adapted for a Kähler metric) is defined . Adding a superpotential A superpotential can be added to form the more general action where the Hessians of are defined . See also N = 4 supersymmetric Yang–Mills theory Supermultiplet References Supersymmetric quantum field theory
Wess–Zumino model
[ "Physics" ]
1,693
[ "Supersymmetric quantum field theory", "Supersymmetry", "Symmetry" ]
2,695,448
https://en.wikipedia.org/wiki/Superconformal%20algebra
In theoretical physics, the superconformal algebra is a graded Lie algebra or superalgebra that combines the conformal algebra and supersymmetry. In two dimensions, the superconformal algebra is infinite-dimensional. In higher dimensions, superconformal algebras are finite-dimensional and generate the superconformal group (in two Euclidean dimensions, the Lie superalgebra does not generate any Lie supergroup). Superconformal algebra in dimension greater than 2 The conformal group of the -dimensional space is and its Lie algebra is . The superconformal algebra is a Lie superalgebra containing the bosonic factor and whose odd generators transform in spinor representations of . Given Kac's classification of finite-dimensional simple Lie superalgebras, this can only happen for small values of and . A (possibly incomplete) list is in 3+0D thanks to ; in 2+1D thanks to ; in 4+0D thanks to ; in 3+1D thanks to ; in 2+2D thanks to ; real forms of in five dimensions in 5+1D, thanks to the fact that spinor and fundamental representations of are mapped to each other by outer automorphisms. Superconformal algebra in 3+1D According to the superconformal algebra with supersymmetries in 3+1 dimensions is given by the bosonic generators , , , , the U(1) R-symmetry , the SU(N) R-symmetry and the fermionic generators , , and . Here, denote spacetime indices; left-handed Weyl spinor indices; right-handed Weyl spinor indices; and the internal R-symmetry indices. The Lie superbrackets of the bosonic conformal algebra are given by where η is the Minkowski metric; while the ones for the fermionic generators are: The bosonic conformal generators do not carry any R-charges, as they commute with the R-symmetry generators: But the fermionic generators do carry R-charge: Under bosonic conformal transformations, the fermionic generators transform as: Superconformal algebra in 2D There are two possible algebras with minimal supersymmetry in two dimensions; a Neveu–Schwarz algebra and a Ramond algebra. Additional supersymmetry is possible, for instance the N = 2 superconformal algebra. See also Conformal symmetry Super Virasoro algebra Supersymmetry algebra References Conformal field theory Supersymmetry Lie algebras
Superconformal algebra
[ "Physics" ]
524
[ "Unsolved problems in physics", "Quantum mechanics", "Quantum physics stubs", "Physics beyond the Standard Model", "Supersymmetry", "Symmetry" ]
2,695,487
https://en.wikipedia.org/wiki/Hagedorn%20temperature
The Hagedorn temperature, TH, is the temperature in theoretical physics where hadronic matter (i.e. ordinary matter) is no longer stable, and must either "evaporate" or convert into quark matter; as such, it can be thought of as the "boiling point" of hadronic matter. It was discovered by Rolf Hagedorn. The Hagedorn temperature exists because the amount of energy available is high enough that matter particle (quark–antiquark) pairs can be spontaneously pulled from vacuum. Thus, naively considered, a system at Hagedorn temperature can accommodate as much energy as one can put in, because the formed quarks provide new degrees of freedom, and thus the Hagedorn temperature would be an impassable absolute hot. However, if this phase is viewed as quarks instead, it becomes apparent that the matter has transformed into quark matter, which can be further heated. The Hagedorn temperature, TH, is about  or about , little above the mass–energy of the lightest hadrons, the pion. Matter at Hagedorn temperature or above will spew out fireballs of new particles, which can again produce new fireballs, and the ejected particles can then be detected by particle detectors. This quark matter may have been detected in heavy-ion collisions at SPS and LHC in CERN (France and Switzerland) and at RHIC in Brookhaven National Laboratory (USA). In string theory, a separate Hagedorn temperature can be defined for strings rather than hadrons. This temperature is extremely high (1030 K) and thus of mainly theoretical interest. History The Hagedorn temperature was discovered by German physicist Rolf Hagedorn in the 1960s while working at CERN. His work on the statistical bootstrap model of hadron production showed that because increases in energy in a system will cause new particles to be produced, an increase of collision energy will increase the entropy of the system rather than the temperature, and "the temperature becomes stuck at a limiting value". Technical explanation Hagedorn temperature is the temperature TH above which the partition sum diverges in a system with exponential growth in the density of states. where , being the Boltzmann constant. Because of the divergence, people may come to the incorrect conclusion that it is impossible to have temperatures above the Hagedorn temperature, which would make it the absolute hot temperature, because it would require an infinite amount of energy. In equations: This line of reasoning was well known to be false even to Hagedorn. The partition function for creation of hydrogen–antihydrogen pairs diverges even more rapidly, because it gets a finite contribution from energy levels that accumulate at the ionization energy. The states that cause the divergence are spatially big, since the electrons are very far from the protons. The divergence indicates that at a low temperature hydrogen–antihydrogen will not be produced, rather proton/antiproton and electron/antielectron. The Hagedorn temperature is only a maximum temperature in the physically unrealistic case of exponentially many species with energy E and finite size. The concept of exponential growth in the number of states was originally proposed in the context of condensed matter physics. It was incorporated into high-energy physics in the early 1970s by Steven Frautschi and Hagedorn. In hadronic physics, the Hagedorn temperature is the deconfinement temperature. In string theory In string theory, it indicates a phase transition: the transition at which very long strings are copiously produced. 
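The partition-sum divergence described in the technical explanation above can be made explicit with a short schematic computation (power-law prefactors in the density of states are suppressed, so this is an illustrative simplification rather than the full statistical-bootstrap result):

```latex
% Exponential density of states => the Boltzmann factor no longer wins above T_H.
\rho(E) \sim e^{E/(k_B T_H)}
\;\Longrightarrow\;
Z(T) \sim \int^{\infty}\! dE\; e^{E/(k_B T_H)}\, e^{-E/(k_B T)}
       = \int^{\infty}\! dE\; \exp\!\left[\frac{E}{k_B}\left(\frac{1}{T_H}-\frac{1}{T}\right)\right],
```

so the integral converges only for T < T_H and diverges as T approaches T_H from below, which is the formal origin of the apparent limiting temperature.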
This string-theoretic transition temperature is controlled by the size of the string tension, which is smaller than the Planck scale by some power of the coupling constant. By adjusting the tension to be small compared to the Planck scale, the Hagedorn transition can be made much lower than the Planck temperature. Traditional grand unified string models place this in the magnitude of , two orders of magnitude smaller than the Planck temperature. Such temperatures have not been reached in any experiment and are far beyond the reach of current, or even foreseeable, technology. See also Heat Thermodynamic temperature Non-extensive self-consistent thermodynamical theory References Nuclear physics Statistical mechanics String theory Quantum chromodynamics Quark matter Threshold temperatures
Hagedorn temperature
[ "Physics", "Chemistry", "Astronomy" ]
876
[ "Physical phenomena", "Phase transitions", "Astronomical hypotheses", "Quark matter", "String theory", "Threshold temperatures", "Astrophysics", "Nuclear physics", "Statistical mechanics" ]
2,695,491
https://en.wikipedia.org/wiki/Weinberg%E2%80%93Witten%20theorem
In theoretical physics, the Weinberg–Witten (WW) theorem, proved by Steven Weinberg and Edward Witten, states that massless particles (either composite or elementary) with spin j > 1/2 cannot carry a Lorentz-covariant current, while massless particles with spin j > 1 cannot carry a Lorentz-covariant stress-energy. The theorem is usually interpreted to mean that the graviton (j = 2) cannot be a composite particle in a relativistic quantum field theory. Background During the 1980s, preon theories, technicolor and the like were very popular and some people speculated that gravity might be an emergent phenomenon or that gluons might be composite. Weinberg and Witten, on the other hand, developed a no-go theorem that excludes, under very general assumptions, the hypothetical composite and emergent theories. Decades later new theories of emergent gravity are proposed and some high-energy physicists are still using this theorem to try and refute such theories. Because most of these emergent theories aren't Lorentz covariant, the WW theorem doesn't apply. The violation of Lorentz covariance, however, usually leads to other problems. Theorem Weinberg and Witten proved two separate results. According to them, the first is due to Sidney Coleman, who did not publish it: A 3 + 1D QFT (quantum field theory) with a conserved 4-vector current (see four-current) which is Poincaré covariant (and gauge invariant if there happens to be any gauge symmetry which hasn't been gauge-fixed) does not admit massless particles with helicity |h| > 1/2 that also have nonzero charges associated with the conserved current in question. A 3 + 1D QFT with a non-zero conserved stress–energy tensor which is Poincaré covariant (and gauge invariant if there happens to be any gauge symmetry which hasn't been gauge-fixed) does not admit massless particles with helicity |h| > 1. A sketch of the proof The conserved charge Q is given by . We shall consider the matrix elements of the charge and of the current for one-particle asymptotic states, of equal helicity, and , labeled by their lightlike 4-momenta. We shall consider the case in which isn't null, which means that the momentum transfer is spacelike. Let q be the eigenvalue of those states for the charge operator Q, so that: where we have now made used of translational covariance, which is part of the Poincaré covariance. Thus: with . Let's transform to a reference frame where p moves along the positive z-axis and p′ moves along the negative z-axis. This is always possible for any spacelike momentum transfer. In this reference frame, and change by the phase factor under rotations by θ counterclockwise about the z-axis whereas and change by the phase factors and respectively. If h is nonzero, we need to specify the phases of states. In general, this can't be done in a Lorentz-invariant way (see Thomas precession), but the one particle Hilbert space is Lorentz-covariant. So, if we make any arbitrary but fixed choice for the phases, then each of the matrix components in the previous paragraph has to be invariant under the rotations about the z-axis. So, unless |h| = 0 or 1/2, all of the components have to be zero. Weinberg and Witten did not assume the continuity . Rather, the authors argue that the physical (i.e., the measurable) quantum numbers of a massless particle are always defined by the matrix elements in the limit of zero momentum, defined for a sequence of spacelike momentum transfers. 
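The rotation step of the sketch above can be compressed into a single relation (a schematic restatement; the overall sign of the phase depends on the helicity-state conventions chosen):

```latex
% Rotation by \theta about the z-axis, with p along +z and p' along -z:
\langle p',h\,|\,J^{\mu}(0)\,|\,p,h\rangle
  \;\longmapsto\;
  e^{-2ih\theta}\,e^{\,im\theta}\,\langle p',h\,|\,J^{\mu}(0)\,|\,p,h\rangle ,
  \qquad m\in\{0,\pm 1\},
```

so invariance for every θ forces 2h = m and hence |h| ≤ 1/2; for the stress–energy tensor the components carry m ∈ {0, ±1, ±2}, and the same counting gives |h| ≤ 1.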
Also, in the first equation can be replaced by "smeared out" Dirac delta function, which corresponds to performing the volume integral over a finite box. The proof of the second part of theorem is completely analogous, replacing the matrix elements of the current with the matrix elements of the stress–energy tensor : and with . For spacelike momentum transfers, we can go to the reference frame where p′ + p is along the t-axis and p′ − p is along the z-axis. In this reference frame, the components of transforms as , ,, or under a rotation by θ about the z-axis. Similarly, we can conclude that Note that this theorem also applies to free field theories. If they contain massless particles with the "wrong" helicity/charge, they have to be gauge theories. Ruling out emergent theories What does this theorem have to do with emergence/composite theories? If let's say gravity is an emergent theory of a fundamentally flat theory over a flat Minkowski spacetime, then by Noether's theorem, we have a conserved stress–energy tensor which is Poincaré covariant. If the theory has an internal gauge symmetry (of the Yang–Mills kind), we may pick the Belinfante–Rosenfeld stress–energy tensor which is gauge-invariant. As there is no fundamental diffeomorphism symmetry, we don't have to worry about that this tensor isn't BRST-closed under diffeomorphisms. So, the Weinberg–Witten theorem applies and we can't get a massless spin-2 (i.e. helicity ±2) composite/emergent graviton. If we have a theory with a fundamental conserved 4-current associated with a global symmetry, then we can't have emergent/composite massless spin-1 particles which are charged under that global symmetry. Theories where the theorem is inapplicable Nonabelian gauge theories There are a number of ways to see why nonabelian Yang–Mills theories in the Coulomb phase don't violate this theorem. Yang–Mills theories don't have any conserved 4-current associated with the Yang–Mills charges that are both Poincaré covariant and gauge invariant. Noether's theorem gives a current which is conserved and Poincaré covariant, but not gauge invariant. As |p> is really an element of the BRST cohomology, i.e. a quotient space, it is really an equivalence class of states. As such, is only well defined if J is BRST-closed. But if J isn't gauge-invariant, then J isn't BRST-closed in general. The current defined as is not conserved because it satisfies instead of where D is the covariant derivative. The current defined after a gauge-fixing like the Coulomb gauge is conserved but isn't Lorentz covariant. Spontaneously broken gauge theories The gauge bosons associated with spontaneously broken symmetries are massive. For example, in QCD, we have electrically charged rho mesons which can be described by an emergent hidden gauge symmetry which is spontaneously broken. Therefore, there is nothing in principle stopping us from having composite preon models of W and Z bosons. On a similar note, even though the photon is charged under the SU(2) weak symmetry (because it is the gauge boson associated with a linear combination of weak isospin and hypercharge), it is also moving through a condensate of such charges, and so, isn't an exact eigenstate of the weak charges and this theorem doesn't apply either. Massive gravity On a similar note, it is possible to have a composite/emergent theory of massive gravity. General relativity In GR, we have diffeomorphisms and A|ψ> (over an element |ψ> of the BRST cohomology) only makes sense if A is BRST-closed. 
There are no local BRST-closed operators and this includes any stress–energy tensor that we can think of. As an alternate explanation, note that the stress tensor for pure GR vanishes (this statement is equivalent to the vacuum Einstein equation) and the stress tensor for GR coupled to matter is just the matter stress tensor. The latter is not conserved, , but rather where is the covariant derivative. Induced gravity In induced gravity, the fundamental theory is also diffeomorphism invariant and the same comment applies. Seiberg duality If we take N=1 chiral super QCD with Nc colors and Nf flavors with , then by the Seiberg duality, this theory is dual to a nonabelian gauge theory which is trivial (i.e. free) in the infrared limit. As such, the dual theory doesn't suffer from any infraparticle problem or a continuous mass spectrum. Despite this, the dual theory is still a nonabelian Yang–Mills theory. Because of this, the dual magnetic current still suffers from all the same problems even though it is an "emergent current". Free theories aren't exempt from the Weinberg–Witten theorem. Conformal field theory In a conformal field theory, the only truly massless particles are noninteracting singletons (see singleton field). The other "particles"/bound states have a continuous mass spectrum which can take on any arbitrarily small nonzero mass. So, we can have spin-3/2 and spin-2 bound states with arbitrarily small masses but still not violate the theorem. In other words, they are infraparticles. Infraparticles Two otherwise identical charged infraparticles moving with different velocities belong to different superselection sectors. Let's say they have momenta p′ and p respectively. Then as Jμ(0) is a local neutral operator, it does not map between different superselection sectors. So, is zero. The only way |p′'> and |p> can belong in the same sector is if they have the same velocity, which means that they are proportional to each other, i.e. a null or zero momentum transfer, which isn't covered in the proof. So, infraparticles violate the continuity assumption This doesn't mean of course that the momentum of a charge particle can't change by some spacelike momentum. It only means that if the incoming state is a one infraparticle state, then the outgoing state contains an infraparticle together with a number of soft quanta. This is nothing other than the inevitable bremsstrahlung. But this also means that the outgoing state isn't a one particle state. Theories with nonlocal charges Obviously, a nonlocal charge does not have a local 4-current and a theory with a nonlocal 4-momentum does not have a local stress–energy tensor. Acoustic metric theories and analog model of gravity These theories are not Lorentz covariant. However, some of these theories can give rise to an approximate emergent Lorentz symmetry at low energies. Superstring theory Superstring theory defined over a background metric (possibly with some fluxes) over a 10D space which is the product of a flat 4D Minkowski space and a compact 6D space has a massless graviton in its spectrum. This is an emergent particle coming from the vibrations of a superstring. Let's look at how we would go about defining the stress–energy tensor. The background is given by g (the metric) and a couple of other fields. The effective action is a functional of the background. 
The VEV of the stress–energy tensor is then defined as the functional derivative The stress-energy operator is defined as a vertex operator corresponding to this infinitesimal change in the background metric. Not all backgrounds are permissible. Superstrings have to have superconformal symmetry, which is a super generalization of Weyl symmetry, in order to be consistent but they are only superconformal when propagating over some special backgrounds (which satisfy the Einstein field equations plus some higher order corrections). Because of this, the effective action is only defined over these special backgrounds and the functional derivative is not well-defined. The vertex operator for the stress–energy tensor at a point also doesn't exist. References (see Ch. 2 for a detailed review) Quantum field theory Quantum gravity Theorems in quantum mechanics No-go theorems
Weinberg–Witten theorem
[ "Physics", "Mathematics" ]
2,599
[ "Theorems in quantum mechanics", "Quantum field theory", "No-go theorems", "Equations of physics", "Unsolved problems in physics", "Quantum mechanics", "Theorems in mathematical physics", "Quantum gravity", "Physics beyond the Standard Model", "Physics theorems" ]
2,695,510
https://en.wikipedia.org/wiki/Rational%20conformal%20field%20theory
In theoretical physics, a rational conformal field theory is a special type of two-dimensional conformal field theory with a finite number of conformal primaries. In these theories, all dimensions (and the central charge) are rational numbers that can be computed from the consistency conditions of conformal field theory. The most famous examples are the so-called minimal models. More generally, rational conformal field theory can refer to any CFT with a finite number of primary operators with respect to the action of its chiral algebra. Chiral algebras can be much larger than the Virasoro algebra. Well-known examples include (the enveloping algebra of) affine Lie algebras, relevant to the Wess–Zumino–Witten model, and W-algebras. References Conformal field theory
Rational conformal field theory
[ "Physics" ]
165
[ "Quantum mechanics", "Quantum physics stubs" ]
2,695,828
https://en.wikipedia.org/wiki/Alpha%20Trianguli
Alpha Trianguli (α Trianguli, abbreviated Alpha Tri, α Tri) is a spectroscopic binary star in the constellation of Triangulum. Based on parallax measurements obtained during the Hipparcos mission, it is approximately distant from the Sun. The brighter or primary component is named Mothallah . Nomenclature α Trianguli (Latinised to Alpha Trianguli) is the system's Bayer designation. The system bore the traditional names Ras al Muthallah or Mothallah and Caput Trianguli derived from the Arabic رأس المثلث' raʼs al-muthallath "the head of the triangle" and its Latin translation. The International Astronomical Union Working Group on Star Names (WGSN) has approved the name Mothallah for this star. For members of multiple star systems, and where a component letter (e.g. from the Washington Double Star Catalog) is not explicitly shown in the namelist, the WGSN says that the name should be understood to be attributed to the visually brightest component by visual brightness. In combination with Beta Trianguli, these stars were called Al Mīzān, which is Arabic for "The Scale Beam". In Babylonian astronomy, Alpha Trianguli is listed as UR.BAR.RA "The Wolf", bearing the epithet "the seeder of the plough" in the MUL.APIN, listed after "The Plough", the name for a constellation formed of Triangulum plus Gamma Andromedae. Properties Estimates of the combined stellar classification for this system range from F5III to F6IV, with the luminosity class of 'IV' or 'III' indicating the primary component is a subgiant or giant star, respectively. It is a member of a close binary system—a spectroscopic binary—whose components complete an orbit about their center of mass once every 1.736 days. Because the primary star is rotating rapidly, it has assumed the shape of an oblate spheroid. The ellipsoidal profile of the star, as viewed from Earth, varies over the course of an orbit causing the luminosity to vary in magnitude during the same period. Such stars are termed ellipsoidal variables. Within a few million years, as the primary continues to evolve into a red giant star, the system may become a semi-detached binary with the Roche lobe becoming filled to overflowing. The mean apparent magnitude of +3.42 for this pair is bright enough to be readily seen with the naked eye. It forms the second brightest star or star system in this generally faint constellation, following Beta Trianguli. The effective temperature of the primary's outer envelope is 6,288 K, giving it a yellow-white hue typical of F-type stars. It has a mean radius about three times the radius of the Sun. The system is an estimated 1.6 billion years old. References External links Trianguli, Alpha Triangulum Spectroscopic binaries F-type subgiants Mothallah 008796 Trianguli, 02 Rotating ellipsoidal variables 0544 011443 BD+28 312
Alpha Trianguli
[ "Astronomy" ]
658
[ "Triangulum", "Constellations" ]
2,696,018
https://en.wikipedia.org/wiki/Beta%20Trianguli
Beta Trianguli (Beta Tri, β Trianguli, β Tri) is the Bayer designation for a binary star system in the constellation Triangulum, located about 127 light years from Earth. Although it is only a third-magnitude star, it is the brightest star in the constellation Triangulum. This is a double-lined spectroscopic binary star system with an orbital period of 31.39 days and an eccentricity of 0.53. The members are separated by a distance of less than 5 AU. The primary component has a stellar classification of A5IV, indicating that it has evolved away from the main sequence and is now a subgiant star. However, the classification is uncertain and not consistent with the mass derived from the orbit. It is among the least variable of the stars that were observed by the Hipparcos spacecraft, with a magnitude varying by only 0.0005. Based on observations using the Spitzer Space Telescope, as reported in 2005, this system is emitting an excess of infrared radiation. This emission can be explained by a circumbinary ring of dust. The dust is emitting infrared radiation at a blackbody temperature of 100 K. It is thought to extend from 50 to 400 AU away from the stars. Naming In combination with Alpha Trianguli, these stars were called Al Mīzān, which is Arabic for "The Scale Beam". In Chinese, (), meaning Heaven's Great General, refers to an asterism consisting of β Trianguli, γ Andromedae, φ Persei, 51 Andromedae, 49 Andromedae, χ Andromedae, υ Andromedae, τ Andromedae, 56 Andromedae, γ Trianguli and δ Trianguli. Consequently, the Chinese name for β Trianguli itself is (, .). References Trianguli, Beta Trianguli, 04 Triangulum 0622 013161 010064 BD+34 381 A-type subgiants Spectroscopic binaries
Beta Trianguli
[ "Astronomy" ]
423
[ "Triangulum", "Constellations" ]
2,697,320
https://en.wikipedia.org/wiki/Brucella%20suis
Brucella suis is a bacterium that causes swine brucellosis, a zoonosis that affects pigs. The disease typically causes chronic inflammatory lesions in the reproductive organs of susceptible animals or orchitis, and may even affect joints and other organs. The most common symptom is abortion in pregnant susceptible sows at any stage of gestation. Other manifestations are temporary or permanent sterility, lameness, posterior paralysis, spondylitis, and abscess formation. It is transmitted mainly by ingestion of infected tissues or fluids, semen during breeding, and suckling infected animals. Since brucellosis threatens the food supply and causes undulant fever, Brucella suis and other Brucella species (B. melitensis, B. abortus, B. ovis, B. canis) are recognized as potential agricultural, civilian, and military bioterrorism agents. Symptoms and signs The most frequent clinical sign following B. suis infection is abortion in pregnant females, reduced milk production, and infertility. Cattle can also be transiently infected when they share pasture or facilities with infected pigs, and B. suis can be transmitted by cow's milk. Swine also develop orchitis (swelling of the testicles), lameness (movement disability), hind limb paralysis, or spondylitis (inflammation in joints). Cause Brucella suis is a Gram-negative, facultative, intracellular coccobacillus, capable of growing and reproducing inside of host cells, specifically phagocytic cells. They are also not spore-forming, capsulated, or motile. Flagellar genes, however, are present in the B. suis genome, but are thought to be cryptic remnants because some were truncated and others were missing crucial components of the flagellar apparatus. In mouse models, the flagellum is essential for a normal infectious cycle, where the inability to assemble a complete flagellum leads to severe attenuation of the bacteria. Brucella suis is differentiated into five biovars (strains), where biovars 1–3 infect wild boar and domestic pigs, and biovars 1 and 3 may cause severe diseases in humans. In contrast, biovar 2 found in wild boars in Europe shows mild or no clinical signs and cannot infect healthy humans, but does infect pigs and hares. Pathogenesis Phagocytes are an essential component of the host's innate immune system with various antimicrobial defense mechanisms to clear pathogens by oxidative burst, acidification of phagosomes, and fusion of the phagosome and lysosome. B. suis, in return, has developed ways to counteract the host cell defense to survive in the macrophage and to deter host immune responses. B. suis possesses smooth lipopolysaccharide (LPS), which has a full-length O-chain, as opposed to rough LPS, which has a truncated or no O-chain. This structural characteristic allows for B. suis to interact with lipid rafts on the surface of macrophages to be internalized, and the formed lipid-rich phagosome is able to avoid fusion with lysosomes through this endocytic pathway. In addition, this furtive entry into macrophages does not affect the cell's normal trafficking. The smooth LPS also inhibits host cell apoptosis by O-polysaccharides through a TNF-alpha-independent mechanism, which allows for B. suis to avoid the activation of the host immune system. Once inside macrophages, B. suis is able to endure the rapid acidification in the phagosome to pH 4.0–4.5 by expressing metabolism genes mainly for amino acid synthesis. 
The acidic pH is actually essential for replication of the bacteria by inducing major virulence genes of the virB operon and the synthesis of DnaK chaperones. DnaK is part of the heat shock protein 70 family, and aids in the correct synthesis and activation of certain virulence factors. In addition, the B. suis gene for nickel transport, nikA, is activated by metal ion deficiency and is expressed once in the phagosome. Nickel is essential for many enzymatic reactions, including ureolysis to produce ammonia which in turn may neutralize acidic pH. Since B. suis is unable to grow in a strongly acidic medium, it could be protected from acidification by the ammonia. Summary: Brucella suis encounters a macrophage, but no oxidative burst occurs. Lipid rafts are necessary for macrophage penetration. The phagosome rapidly acidifies, creating a stressful environment for bacteria, which triggers activation of virulence genes. Lipid rafts on phagosomes prevent lysosomal fusion, and normal cell trafficking is unaffected. Diagnosis Treatment Because B. suis is facultative and intracellular, and is able to adapt to environmental conditions in macrophages, treatment failure and relapse rates are high. The only effective way to control and eradicate zoonosis is by vaccination of all susceptible hosts and elimination of infected animals. The Brucella abortus (rough LPS Brucella) vaccine, developed for bovine brucellosis and licensed by the USDA Animal Plant Health Inspection Service, has shown protection for some swine and is also effective against B. suis infection, but there is currently no approved vaccine for swine brucellosis. Biological warfare In the United States, B. suis was the first biological agent weaponized in 1952, and was field-tested with B. suis-filled bombs called M33 cluster bombs. It is, however, considered to be one of the agents of lesser threat because many infections are asymptomatic and the mortality is low, but it is used more as an incapacitating agent. References Swine diseases Bacterial diseases Biological agents Theriogenology Hyphomicrobiales
Brucella suis
[ "Biology", "Environmental_science" ]
1,255
[ "Biological agents", "Toxicology", "Biological warfare" ]
6,494,433
https://en.wikipedia.org/wiki/Curing%20%28chemistry%29
Curing is a chemical process employed in polymer chemistry and process engineering that produces the toughening or hardening of a polymer material by cross-linking of polymer chains. Although it is strongly associated with the production of thermosetting polymers, the term "curing" can be used for all the processes where a solid product is obtained from a liquid solution, such as with PVC plastisols. Curing process During the curing process, single monomers and oligomers, mixed with or without a curing agent, react to form a tridimensional polymeric network. In the very first part of the reaction, branches of molecules with various architectures are formed, and their molecular weight increases over time with the extent of the reaction until the network size is equal to the size of the system. The system then loses its solubility and its viscosity tends to infinity. The remaining molecules start to coexist with the macroscopic network until they react with the network, creating further crosslinks. The crosslink density increases until the system reaches the end of the chemical reaction. Curing can be induced by heat, radiation, electron beams, or chemical additives. To quote from IUPAC: curing "might or might not require mixing with a chemical curing agent". Thus, two broad classes are curing induced by chemical additives (also called curing agents, hardeners) and curing in the absence of additives. An intermediate case involves a mixture of resin and additives that requires an external stimulus (light, heat, radiation) to induce curing. The curing methodology depends on the resin and the application. Particular attention is paid to the shrinkage induced by the curing; usually small values of shrinkage (2–3%) are desirable. Curing induced by additives Epoxy resins are typically cured by the use of additives, often called hardeners. Polyamines are often used. The amine groups ring-open the epoxide rings. In rubber, the curing is also induced by the addition of a crosslinker. The resulting process is called sulfur vulcanization. Sulfur breaks down to form polysulfide cross-links (bridges) between sections of the polymer chains. The degree of crosslinking determines the rigidity and durability, as well as other properties of the material. Paints and varnishes commonly contain oil drying agents, usually metallic soaps that catalyze cross-linking of the unsaturated drying oils that largely comprise them. When paint is described as "drying" it is in fact hardening by crosslinking. Oxygen atoms serve as the crosslinks, analogous to the role played by sulfur in the vulcanization of rubber. Curing without additives In the case of concrete, curing entails the formation of silicate crosslinks; the process is not induced by additives. In many cases, the resin is provided as a solution or mixture with a thermally activated catalyst, which induces crosslinking but only upon heating. For example, some acrylate-based resins are formulated with dibenzoyl peroxide. Upon heating the mixture, the peroxide converts to a free radical, which adds to an acrylate, initiating crosslinking. Some organic resins are cured with heat. As heat is applied, the viscosity of the resin drops before the onset of crosslinking, whereupon it increases as the constituent oligomers interconnect. This process continues until a tridimensional network of oligomer chains is created – this stage is termed gelation.
In terms of processability of the resin this marks an important stage: before gelation the system is relatively mobile; after it the mobility is very limited, the micro-structure of the resin and the composite material is fixed, and severe diffusion limitations to further cure are created. Thus, in order to achieve vitrification in the resin, it is usually necessary to increase the process temperature after gelation. When catalysts are activated by ultraviolet radiation, the process is called UV cure. Monitoring methods Cure monitoring is, for example, an essential component for the control of the manufacturing process of composite materials. The material, initially liquid, will be solid at the end of the process: viscosity is the most important property that changes during the process. Cure monitoring relies on monitoring various physical or chemical properties. Rheological analysis A simple way to monitor the change in viscosity, and thus the extent of the reaction, in a curing process is to measure the variation of the elastic modulus. To measure the elastic modulus of a system during curing, a rheometer can be used. With dynamic mechanical analysis, the storage modulus (G') and the loss modulus (G'') can be measured. The variation of G' and G" in time can indicate the extent of the curing reaction. After an "induction time", G' and G" start to increase, with an abrupt change in slope. At a certain point they cross each other; afterwards, the rates of G' and G" decrease, and the moduli tend to a plateau. When they reach the plateau, the reaction is concluded. When the system is liquid, the storage modulus is very low: the system behaves like a liquid. Then the reaction continues and the system starts to behave more like a solid: the storage modulus increases. The degree of curing, , can be defined as follows: The degree of curing starts from zero (at the beginning of the reaction) and grows until one (the end of the reaction). The slope of the curve changes with time and has its maximum at about half of the reaction. Thermal analysis If the reactions occurring during crosslinking are exothermic, the crosslinking rate can be related to the heat released during the process. The higher the number of bonds created, the higher the heat released in the reaction. At the end of the reaction, no more heat will be released. To measure the heat flow, differential scanning calorimetry can be used. Assuming that each bond formed during the crosslinking releases the same amount of energy, the degree of curing, , can be defined as follows: where is the heat released up to a certain time , is the instantaneous rate of heat and is the total amount of heat released in , when the reaction finishes. Also in this case the degree of curing goes from zero (no bonds created) to one (no more reactions occur), with a slope that changes in time and has its maximum at about half of the reaction (a short numerical sketch of this calculation is given below). Dielectrometric analysis Conventional dielectrometry is carried out typically in a parallel plate configuration of the dielectric sensor (capacitance probe) and has the capability of monitoring the resin cure throughout the entire cycle, from the liquid to the rubber to the solid state. It is capable of monitoring phase separation in complex resin blends curing also within a fibrous preform. The same attributes belong to the more recent development of the dielectric technique, namely microdielectrometry. Several versions of dielectric sensors are available commercially.
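Returning to the calorimetric definition in the thermal-analysis paragraph above: a minimal numerical sketch of the degree-of-cure calculation is given below. The heat-flow trace is invented purely for illustration and NumPy is assumed; this is not a prescribed procedure from the article. It integrates the DSC heat flow up to each time and divides by the total heat of reaction.

```python
import numpy as np

# Hypothetical DSC trace: time in seconds, exothermic heat flow dH/dt in W/g.
t = np.linspace(0.0, 1800.0, 361)                        # 30 min scan, 5 s sampling
heat_flow = 0.8 * np.exp(-((t - 900.0) / 300.0) ** 2)    # bell-shaped exotherm (illustrative only)

# Running heat released H(t) via the trapezoidal rule, then the total heat of reaction.
H_t = np.concatenate(([0.0], np.cumsum(0.5 * (heat_flow[1:] + heat_flow[:-1]) * np.diff(t))))
H_total = H_t[-1]

# Degree of cure: fraction of the total reaction heat released up to time t (runs from 0 to 1).
alpha = H_t / H_total
print(f"alpha at the exotherm peak: {alpha[180]:.2f}")   # ~0.5, i.e. about half-way through the cure
```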
The most suitable format for use in cure monitoring applications are the flat interdigital capacitive structures bearing a sensing grid on their surface. Depending on their design (specifically those on durable substrates) they have some reusability, while flexible substrate sensors can be used also in the bulk of the resin systems as embedded sensors. Spectroscopic analysis The curing process can be monitored by measuring changes in various parameters: the concentration of specific reactive resin species using spectroscopic methods such as FTIR & Raman; the refractive index or fluorescence of the resin (optical property); the internal resin strain (mechanical property) with the use of Fiber Bragg grating (FBG) sensors. Ultrasonic analysis Ultrasonic cure monitoring methods are based on the relationships between changes in the characteristics of propagating ultrasound and the real-time mechanical properties of a component, by measuring: ultrasonic time of flight, both in through-transmission and pulse-echo modes; natural frequency using impact excitation and laser-induced surface acoustic wave velocity measurement. See also Vulcanization Cross-link References I.Partridge and G.Maistros, 'Dielectric Cure Monitoring for Process Control', Chapter 17, Vol. 5, Encyclopaedia of Composite Materials (2001), Elsevier Science, London, page 413 P.Ciriscioli and G.Springer, 'Smart Autoclave cure in Composites', (1991), Technomic Publishing, Lancaster, PA. Polymer chemistry Chemical processes
Curing (chemistry)
[ "Chemistry", "Materials_science", "Engineering" ]
1,796
[ "Materials science", "Chemical processes", "nan", "Polymer chemistry", "Chemical process engineering" ]
6,495,737
https://en.wikipedia.org/wiki/Borel%E2%80%93Moore%20homology
In topology, Borel−Moore homology or homology with closed support is a homology theory for locally compact spaces, introduced by Armand Borel and John Moore in 1960. For reasonable compact spaces, Borel−Moore homology coincides with the usual singular homology. For non-compact spaces, each theory has its own advantages. In particular, a closed oriented submanifold defines a class in Borel–Moore homology, but not in ordinary homology unless the submanifold is compact. Note: Borel equivariant cohomology is an invariant of spaces with an action of a group G; it is defined as That is not related to the subject of this article. Definition There are several ways to define Borel−Moore homology. They all coincide for reasonable spaces such as manifolds and locally finite CW complexes. Definition via sheaf cohomology For any locally compact space X, Borel–Moore homology with integral coefficients is defined as the cohomology of the dual of the chain complex which computes sheaf cohomology with compact support. As a result, there is a short exact sequence analogous to the universal coefficient theorem: In what follows, the coefficients are not written. Definition via locally finite chains The singular homology of a topological space X is defined as the homology of the chain complex of singular chains, that is, finite linear combinations of continuous maps from the simplex to X. The Borel−Moore homology of a reasonable locally compact space X, on the other hand, is isomorphic to the homology of the chain complex of locally finite singular chains. Here "reasonable" means X is locally contractible, σ-compact, and of finite dimension. In more detail, let be the abelian group of formal (infinite) sums where σ runs over the set of all continuous maps from the standard i-simplex Δi to X and each aσ is an integer, such that for each compact subset K of X, we have for only finitely many σ whose image meets K. Then the usual definition of the boundary ∂ of a singular chain makes these abelian groups into a chain complex: The Borel−Moore homology groups are the homology groups of this chain complex. That is, If X is compact, then every locally finite chain is in fact finite. So, given that X is "reasonable" in the sense above, Borel−Moore homology coincides with the usual singular homology for X compact. Definition via compactifications Suppose that X is homeomorphic to the complement of a closed subcomplex S in a finite CW complex Y. Then Borel–Moore homology is isomorphic to the relative homology Hi(Y, S). Under the same assumption on X, the one-point compactification of X is homeomorphic to a finite CW complex. As a result, Borel–Moore homology can be viewed as the relative homology of the one-point compactification with respect to the added point. Definition via Poincaré duality Let X be any locally compact space with a closed embedding into an oriented manifold M of dimension m. Then where in the right hand side, relative cohomology is meant. Definition via the dualizing complex For any locally compact space X of finite dimension, let be the dualizing complex of . Then where in the right hand side, hypercohomology is meant. Properties Borel−Moore homology is a covariant functor with respect to proper maps. That is, a proper map f: X → Y induces a pushforward homomorphism for all integers i. In contrast to ordinary homology, there is no pushforward on Borel−Moore homology for an arbitrary continuous map f. 
As a counterexample, one can consider the non-proper inclusion Borel−Moore homology is a contravariant functor with respect to inclusions of open subsets. That is, for U open in X, there is a natural pullback or restriction homomorphism For any locally compact space X and any closed subset F, with the complement, there is a long exact localization sequence: Borel−Moore homology is homotopy invariant in the sense that for any space X, there is an isomorphism The shift in dimension means that Borel−Moore homology is not homotopy invariant in the naive sense. For example, the Borel−Moore homology of Euclidean space is isomorphic to in degree n and is otherwise zero. Poincaré duality extends to non-compact manifolds using Borel–Moore homology. Namely, for an oriented n-manifold X, Poincaré duality is an isomorphism from singular cohomology to Borel−Moore homology, for all integers i. A different version of Poincaré duality for non-compact manifolds is the isomorphism from cohomology with compact support to usual homology: A key advantage of Borel−Moore homology is that every oriented manifold M of dimension n (in particular, every smooth complex algebraic variety), not necessarily compact, has a fundamental class If the manifold M has a triangulation, then its fundamental class is represented by the sum of all the top dimensional simplices. In fact, in Borel−Moore homology, one can define a fundamental class for arbitrary (possibly singular) complex varieties. In this case the complement of the set of smooth points has (real) codimension at least 2, and by the long exact sequence above the top dimensional homologies of and are canonically isomorphic. The fundamental class of is then defined to be the fundamental class of . Examples Compact Spaces Given a compact topological space its Borel-Moore homology agrees with its standard homology; that is, Real line The first non-trivial calculation of Borel-Moore homology is of the real line. First observe that any -chain is cohomologous to . Since this reduces to the case of a point , notice that we can take the Borel-Moore chain since the boundary of this chain is and the non-existent point at infinity, the point is cohomologous to zero. Now, we can take the Borel-Moore chain which has no boundary, hence is a homology class. This shows that Real n-space The previous computation can be generalized to the case We get Infinite Cylinder Using the Kunneth decomposition, we can see that the infinite cylinder has homology Real n-space minus a point Using the long exact sequence in Borel-Moore homology, we get (for ) the non-zero exact sequences and From the first sequence we get that and from the second we get that and We can interpret these non-zero homology classes using the following observations: There is the homotopy equivalence A topological isomorphism hence we can use the computation for the infinite cylinder to interpret as the homology class represented by and as Plane with Points Removed Let have -distinct points removed. Notice the previous computation with the fact that Borel-Moore homology is an isomorphism invariant gives this computation for the case . In general, we will find a -class corresponding to a loop around a point, and the fundamental class in . Double Cone Consider the double cone . 
If we take then the long exact sequence shows Genus Two Curve with Three Points Removed Given a genus two curve (Riemann surface) and three points , we can use the long exact sequence to compute the Borel-Moore homology of This gives Since is only three points we have This gives us that Using Poincare-duality we can compute since deformation retracts to a one-dimensional CW-complex. Finally, using the computation for the homology of a compact genus 2 curve we are left with the exact sequence showing since we have the short exact sequence of free abelian groups from the previous sequence. Notes References Survey articles Books Homology theory Sheaf theory
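For reference, the standard statements used in the computations above can be written out explicitly in the usual notation (this is a summary of well-known formulations with integer coefficients, not material taken verbatim from the text):

H_i^{BM}(\mathbb{R}^n) \cong \begin{cases} \mathbb{Z}, & i = n, \\ 0, & i \neq n, \end{cases}
\qquad
H^i(X) \cong H_{n-i}^{BM}(X) \quad \text{for an oriented } n\text{-manifold } X,
\qquad
H^i_c(X) \cong H_{n-i}(X),

and, for a closed subset F \subseteq X with open complement U = X \setminus F, the localization sequence

\cdots \to H_i^{BM}(F) \to H_i^{BM}(X) \to H_i^{BM}(U) \to H_{i-1}^{BM}(F) \to \cdots .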
Borel–Moore homology
[ "Mathematics" ]
1,643
[ "Topology", "Sheaf theory", "Mathematical structures", "Category theory" ]
6,497,220
https://en.wikipedia.org/wiki/Computational%20complexity%20of%20mathematical%20operations
The following tables list the computational complexity of various algorithms for common mathematical operations. Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used. Note: Due to the variety of multiplication algorithms, below stands in for the complexity of the chosen multiplication algorithm. Arithmetic functions This table lists the complexity of mathematical operations on integers. On stronger computational models, specifically a pointer machine and consequently also a unit-cost random-access machine it is possible to multiply two -bit numbers in time O(n). Algebraic functions Here we consider operations over polynomials and denotes their degree; for the coefficients we use a unit-cost model, ignoring the number of bits in a number. In practice this means that we assume them to be machine integers. Special functions Many of the methods in this section are given in Borwein & Borwein. Elementary functions The elementary functions are constructed by composing arithmetic operations, the exponential function (), the natural logarithm (), trigonometric functions (), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either or in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions. Below, the size refers to the number of digits of precision at which the function is to be evaluated. It is not known whether is the optimal complexity for elementary functions. The best known lower bound is the trivial bound . Non-elementary functions Mathematical constants This table gives the complexity of computing approximations to the given constants to correct digits. Number theory Algorithms for number theoretical calculations are studied in computational number theory. Matrix algebra The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic or operations on a finite field. In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2. Transforms Algorithms for computing transforms of functions (particularly integral transforms) are widely used in all areas of mathematics, particularly analysis and signal processing. Notes References Further reading Computer arithmetic algorithms Computational complexity theory Mathematics-related lists Number theoretic algorithms Unsolved problems in computer science
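To make the Newton-inversion argument above concrete at fixed (double) precision, the sketch below recovers the natural logarithm from an exponential routine; the starting guess uses only the binary exponent of the argument, and at n-digit precision the same iteration roughly doubles the number of correct digits per step, which is why exp and log have equivalent asymptotic complexity.

import math

def log_via_newton(y, iterations=6):
    """Solve exp(x) = y for y > 0 with Newton's method: x <- x - 1 + y * exp(-x)."""
    if y <= 0.0:
        raise ValueError("y must be positive")
    _, exponent = math.frexp(y)             # y = m * 2**exponent with 0.5 <= m < 1
    x = exponent * 0.6931471805599453       # initial guess, within ln 2 of the answer
    for _ in range(iterations):
        x = x - 1.0 + y * math.exp(-x)      # Newton step, using only the exp routine
    return x

print(log_via_newton(10.0), math.log(10.0))   # both approximately 2.302585092994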
Computational complexity of mathematical operations
[ "Mathematics" ]
507
[ "Unsolved problems in computer science", "Unsolved problems in mathematics", "Mathematical problems" ]
6,500,531
https://en.wikipedia.org/wiki/Surrogate%20model
A surrogate model is an engineering method used when an outcome of interest cannot be easily measured or computed, so an approximate mathematical model of the outcome is used instead. Most engineering design problems require experiments and/or simulations to evaluate design objective and constraint functions as a function of design variables. For example, in order to find the optimal airfoil shape for an aircraft wing, an engineer simulates the airflow around the wing for different shape variables (e.g., length, curvature, material, etc.). For many real-world problems, however, a single simulation can take many minutes, hours, or even days to complete. As a result, routine tasks such as design optimization, design space exploration, sensitivity analysis and "what-if" analysis become impossible since they require thousands or even millions of simulation evaluations. One way of alleviating this burden is by constructing approximation models, known as surrogate models, metamodels or emulators, that mimic the behavior of the simulation model as closely as possible while being computationally cheaper to evaluate. Surrogate models are constructed using a data-driven, bottom-up approach. The exact, inner working of the simulation code is not assumed to be known (or even understood), relying solely on the input-output behavior. A model is constructed based on modeling the response of the simulator to a limited number of intelligently chosen data points. This approach is also known as behavioral modeling or black-box modeling, though the terminology is not always consistent. When only a single design variable is involved, the process is known as curve fitting. Though using surrogate models in lieu of experiments and simulations in engineering design is more common, surrogate modeling may be used in many other areas of science where there are expensive experiments and/or function evaluations. Goals The scientific challenge of surrogate modeling is the generation of a surrogate that is as accurate as possible, using as few simulation evaluations as possible. The process comprises three major steps which may be interleaved iteratively: Sample selection (also known as sequential design, optimal experimental design (OED) or active learning) Construction of the surrogate model and optimizing the model parameters (i.e., bias-variance tradeoff) Appraisal of the accuracy of the surrogate. The accuracy of the surrogate depends on the number and location of samples (expensive experiments or simulations) in the design space. Various design of experiments (DOE) techniques cater to different sources of errors, in particular, errors due to noise in the data or errors due to an improper surrogate model. Types of surrogate models Popular surrogate modeling approaches are: polynomial response surfaces; kriging; more generalized Bayesian approaches; gradient-enhanced kriging (GEK); radial basis function; support vector machines; space mapping; artificial neural networks and Bayesian networks. Other methods recently explored include Fourier surrogate modeling and random forests. For some problems, the nature of the true function is not known a priori, and therefore it is not clear which surrogate model will be the most accurate one. In addition, there is no consensus on how to obtain the most reliable estimates of the accuracy of a given surrogate. Many other problems have known physics properties. In these cases, physics-based surrogates such as space-mapping based models are commonly used. 
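As a minimal, self-contained illustration of one of the approaches listed above, the sketch below fits a Gaussian radial basis function surrogate to a handful of "expensive" evaluations and then queries it cheaply on a dense grid; the test function, length scale and point counts are assumptions made for the example and are not tied to any particular toolbox.

import numpy as np

def expensive_simulation(x):
    # Stand-in for a costly solver; in practice each call might take hours.
    return np.sin(3.0 * x) + 0.5 * x

def fit_rbf_surrogate(x_train, y_train, length_scale=0.5):
    """Gaussian RBF interpolant: solve K w = y, then predict with the same kernel."""
    def kernel(a, b):
        return np.exp(-((a[:, None] - b[None, :]) / length_scale) ** 2)
    weights = np.linalg.solve(kernel(x_train, x_train) + 1e-10 * np.eye(x_train.size),
                              y_train)
    return lambda x_new: kernel(x_new, x_train) @ weights

x_train = np.linspace(0.0, 2.0, 8)        # the few designs we can afford to simulate
surrogate = fit_rbf_surrogate(x_train, expensive_simulation(x_train))

x_dense = np.linspace(0.0, 2.0, 201)      # the surrogate is cheap to query everywhere
worst_error = np.max(np.abs(surrogate(x_dense) - expensive_simulation(x_dense)))
print(f"worst surrogate error on [0, 2]: {worst_error:.3f}")

In a surrogate-based optimization loop, the cheap dense search over the surrogate would be followed by a new expensive evaluation at the most promising design and a refit, as described in the Applications section below.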
Invariance properties Recently proposed comparison-based surrogate models (e.g., ranking support vector machines) for evolutionary algorithms, such as CMA-ES, allow preservation of some invariance properties of surrogate-assisted optimizers: Invariance with respect to monotonic transformations of the function (scaling) Invariance with respect to orthogonal transformations of the search space (rotation) Applications An important distinction can be made between two different applications of surrogate models: design optimization and design space approximation (also known as emulation). In surrogate model-based optimization, an initial surrogate is constructed using some of the available budgets of expensive experiments and/or simulations. The remaining experiments/simulations are run for designs which the surrogate model predicts may have promising performance. The process usually takes the form of the following search/update procedure. Initial sample selection (the experiments and/or simulations to be run) Construct surrogate model Search surrogate model (the model can be searched extensively, e.g., using a genetic algorithm, as it is cheap to evaluate) Run and update experiment/simulation at new location(s) found by search and add to sample Iterate steps 2 to 4 until out of time or design is "good enough" Depending on the type of surrogate used and the complexity of the problem, the process may converge on a local or global optimum, or perhaps none at all. In design space approximation, one is not interested in finding the optimal parameter vector, but rather in the global behavior of the system. Here the surrogate is tuned to mimic the underlying model as closely as needed over the complete design space. Such surrogates are a useful, cheap way to gain insight into the global behavior of the system. Optimization can still occur as a post-processing step, although with no update procedure (see above), the optimum found cannot be validated. Surrogate modeling software Surrogate Modeling Toolbox (SMT: https://github.com/SMTorg/smt) is a Python package that contains a collection of surrogate modeling methods, sampling techniques, and benchmarking functions. This package provides a library of surrogate models that is simple to use and facilitates the implementation of additional methods. SMT is different from existing surrogate modeling libraries because of its emphasis on derivatives, including training derivatives used for gradient-enhanced modeling, prediction derivatives, and derivatives with respect to the training data. It also includes new surrogate models that are not available elsewhere: kriging by partial-least squares reduction and energy-minimizing spline interpolation. Surrogates.jl is a Julia packages which offers tools like random forests, radial basis methods and kriging. See also Linear approximation Response surface methodology Kriging Radial basis functions Gradient-enhanced kriging (GEK) OptiY Space mapping Surrogate endpoint Surrogate data Fitness approximation Computer experiment Conceptual model Bayesian regression Bayesian model selection References Further reading Queipo, N.V., Haftka, R.T., Shyy, W., Goel, T., Vaidyanathan, R., Tucker, P.K. (2005), “Surrogate-based analysis and optimization,” Progress in Aerospace Sciences, 41, 1–28. D. Gorissen, I. Couckuyt, P. Demeester, T. Dhaene, K. Crombecq, (2010), “A Surrogate Modeling and Adaptive Sampling Toolbox for Computer Based Design," Journal of Machine Learning Research, Vol. 11, pp. 2051−2055, July 2010. T-Q. Pham, A. 
Kamusella, H. Neubert, “Auto-Extraction of Modelica Code from Finite Element Analysis or Measurement Data," 8th International Modelica Conference, 20–22 March 2011 in Dresden. Forrester, Alexander, Andras Sobester, and Andy Keane, Engineering design via surrogate modelling: a practical guide, John Wiley & Sons, 2008. Bouhlel, M. A. and Bartoli, N. and Otsmane, A. and Morlier, J. (2016) "Improving kriging surrogates of high-dimensional design models by Partial Least Squares dimension reduction", Structural and Multidisciplinary Optimization 53 (5), 935-952 Bouhlel, M. A. and Bartoli, N. and Otsmane, A. and Morlier, J. (2016) "An improved approach for estimating the hyperparameters of the kriging model for high-dimensional problems through the partial least squares method", Mathematical Problems in Engineering External links Matlab code for surrogate modelling Matlab SUrrogate MOdeling Toolbox – Matlab SUMO Toolbox Surrogate Modeling Toolbox -- Python Design of experiments Numerical analysis Scientific models Mathematical modeling Machine learning
Surrogate model
[ "Mathematics", "Engineering" ]
1,701
[ "Mathematical modeling", "Machine learning", "Applied mathematics", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Artificial intelligence engineering", "Approximations" ]
6,503,128
https://en.wikipedia.org/wiki/Iodine-123
Iodine-123 (123I) is a radioactive isotope of iodine used in nuclear medicine imaging, including single-photon emission computed tomography (SPECT) or SPECT/CT exams. The isotope's half-life is 13.2232 hours; the decay by electron capture to tellurium-123 emits gamma radiation with a predominant energy of 159 keV (this is the gamma primarily used for imaging). In medical applications, the radiation is detected by a gamma camera. The isotope is typically applied as iodide-123, the anionic form. Production Iodine-123 is produced in a cyclotron by proton irradiation of xenon in a capsule. Xenon-124 absorbs a proton and immediately loses a neutron and proton to form xenon-123, or else loses two neutrons to form caesium-123, which decays to xenon-123. The xenon-123 formed by either route then decays to iodine-123, and is trapped on the inner wall of the irradiation capsule under refrigeration, then eluted with sodium hydroxide in a halogen disproportionation reaction, similar to collection of iodine-125 after it is formed from xenon by neutron irradiation (see article on 125I for more details). (p,pn) → (p,2n) → → Iodine-123 is usually supplied as []-sodium iodide in 0.1 M sodium hydroxide solution, at 99.8% isotopic purity. 123I for medical applications has also been produced at Oak Ridge National Laboratory by proton cyclotron bombardment of 80% isotopically enriched tellurium-123. (p,n) Decay The detailed decay mechanism is electron capture (EC) to form an excited state of the nearly-stable nuclide tellurium-123 (its half life is so long that it is considered stable for all practical purposes). This excited state of 123Te produced is not the metastable nuclear isomer 123mTe (the decay of 123I does not involve enough energy to produce 123mTe), but rather is a lower-energy nuclear isomer of 123Te that immediately gamma decays to ground state 123Te at the energies noted, or else (13% of the time) decays by internal conversion electron emission (127 keV), followed by an average of 11 Auger electrons emitted at very low energies (50-500 eV). The latter decay channel also produces ground-state 123Te. Especially because of the internal conversion decay channel, 123I is not an absolutely pure gamma-emitter, although it is sometimes clinically assumed to be one. The Auger electrons from the radioisotope have been found in one study to do little cellular damage, unless the radionuclide is directly incorporated chemically into cellular DNA, which is not the case for present radiopharmaceuticals which use 123I as the radioactive label nuclide. The damage from the more penetrating gamma radiation and 127 keV internal conversion electron radiation from the initial decay of 123Te is moderated by the relatively short half-life of the isotope. Medical applications 123I is the most suitable isotope of iodine for the diagnostic study of thyroid diseases. The half-life of approximately 13.2 hours is ideal for the 24-hour iodine uptake test and 123I has other advantages for diagnostic imaging thyroid tissue and thyroid cancer metastasis. The energy of the photon, 159 keV, is ideal for the NaI (sodium iodide) crystal detector of current gamma cameras and also for the pinhole collimators. It has much greater photon flux than 131I. It gives approximately 20 times the counting rate of 131I for the same administered dose, while the radiation burden to the thyroid is far less (1%) than that of 131I. 
Moreover, scanning a thyroid remnant or metastasis with 123I does not cause "stunning" of the tissue (with loss of uptake), because of the low radiation burden of this isotope. For the same reasons, 123I is never used for thyroid cancer or Graves disease treatment, and this role is reserved for 131I. 123I is supplied as sodium iodide (NaI), sometimes in basic solution in which it has been dissolved as the free element. This is administered to a patient by ingestion under capsule form, by intravenous injection, or (less commonly due to problems involved in a spill) in a drink. The iodine is taken up by the thyroid gland and a gamma camera is used to obtain functional images of the thyroid for diagnosis. Quantitative measurements of the thyroid can be performed to calculate the iodine uptake (absorption) for the diagnosis of hyperthyroidism and hypothyroidism. Dosing can vary; is recommended for thyroid imaging and for total body while an uptake test may use . There is a study that indicates a given dose can effectively result in effects of an otherwise higher dose, due to impurities in the preparation. The dose of radioiodine 123I is typically tolerated by individuals who cannot tolerate contrast mediums containing larger concentration of stable iodine such as used in CT scan, intravenous pyelogram (IVP) and similar imaging diagnostic procedures. Iodine is not an allergen. 123I is also used as a label in other imaging radiopharmaceuticals, such as metaiodobenzylguanidine (MIBG) and ioflupane. Precautions Removal of radioiodine contamination can be difficult and use of a decontaminant specially made for radioactive iodine removal is advised. Two common products designed for institutional use are Bind-It and I-Bind. General purpose radioactive decontamination products are often unusable for iodine, as these may only spread or volatilize it. See also Isotopes of iodine Iodine-125 Iodine-129 Iodine-131 Iodine in biology References Diagnostic endocrinology Isotopes of iodine Medical isotopes
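One way to see why the 13.2-hour half-life quoted above works well for a 24-hour uptake measurement is to evaluate the exponential decay law directly; the sketch below is plain radioactive-decay arithmetic (physical decay only, ignoring biological clearance) and says nothing about any particular clinical protocol.

import math

HALF_LIFE_HOURS = 13.2232          # iodine-123 half-life quoted above

def remaining_fraction(hours):
    """Fraction of the initial I-123 activity remaining after the given time."""
    return 0.5 ** (hours / HALF_LIFE_HOURS)

for t in (6.0, 13.2232, 24.0, 48.0):
    print(f"after {t:7.2f} h: {remaining_fraction(t):.3f} of the activity remains")

After 24 hours roughly 28% of the administered activity is still present, which is enough to count, while after a few days essentially none remains.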
Iodine-123
[ "Chemistry" ]
1,268
[ "Chemicals in medicine", "Isotopes of iodine", "Isotopes", "Medical isotopes" ]
6,503,797
https://en.wikipedia.org/wiki/Ground%20granulated%20blast-furnace%20slag
Ground granulated blast-furnace slag (GGBS or GGBFS) is obtained by quenching molten iron slag (a by-product of iron and steel-making) from a blast furnace in water or steam, to produce a glassy, granular product that is then dried and ground into a fine powder. Ground granulated blast furnace slag is a latent hydraulic binder forming calcium silicate hydrates (C-S-H) after contact with water. It is a strength-enhancing compound improving the durability of concrete. It is a component of metallurgic cement ( in the European norm ). Its main advantage is its slow release of hydration heat, allowing limitation of the temperature increase in massive concrete components and structures during cement setting and concrete curing, or to cast concrete during hot summer. Production and composition The chemical composition of a slag varies considerably depending on the composition of the raw materials in the iron production process. Silicate and aluminate impurities from the ore and coke are combined in the blast furnace with a flux which lowers the viscosity of the slag. In the case of pig iron production, the flux consists mostly of a mixture of limestone and forsterite or in some cases dolomite. In the blast furnace the slag floats on top of the iron and is decanted for separation. Slow cooling of slag melts results in an unreactive crystalline material consisting of an assemblage of Ca-Al-Mg silicates. To obtain a good slag reactivity or hydraulicity, the slag melt needs to be rapidly cooled or quenched below 800 °C in order to prevent the crystallization of merwinite and melilite. In order to cool and fragment the slag, a granulation process can be applied in which molten slag is subjected to jet streams of water or air under pressure. Alternatively, in the pelletization process, the liquid slag is partially cooled with water and subsequently projected into the air by a rotating drum. In order to obtain a suitable reactivity, the obtained fragments are ground to reach the same fineness as Portland cement. The main components of blast furnace slag are CaO (30-50%), SiO2 (28-38%), Al2O3 (8-24%), MnO, and MgO (1-18%). In general increasing the CaO content of the slag results in raised slag basicity and an increase in compressive strength. The MgO and Al2O3 content show the same trend up to respectively 10-12% and 14%, beyond which no further improvement can be obtained. Several compositional ratios or so-called hydraulic indices have been used to correlate slag composition with hydraulic activity; the latter being mostly expressed as the binder compressive strength. The glass content of slags suitable for blending with Portland cement typically varies between 90 and 100% and depends on the cooling method and the temperature at which cooling is initiated. The glass structure of the quenched glass largely depends on the proportions of network-forming elements such as Si and Al over network-modifiers such as Ca, Mg and to a lesser extent Al. Increased amounts of network-modifiers lead to higher degrees of network depolymerization and reactivity. Common crystalline constituents of blast-furnace slags are merwinite and melilite. Other minor components which can form during progressive crystallization are belite, monticellite, rankinite, wollastonite and forsterite. Minor amounts of reduced sulphur are commonly encountered as oldhamite. Applications GGBS is used to make durable concrete structures in combination with ordinary Portland cement and/or other pozzolanic materials. 
GGBS has been widely used in Europe, and increasingly in the United States and in Asia (particularly in Japan and Singapore) for its superiority in concrete durability, extending the lifespan of buildings. Two major uses of GGBS are in the production of quality-improved slag cement, namely Portland Blastfurnace cement (PBFC) and high-slag blast-furnace cement (HSBFC), with GGBS content ranging typically from 30 to 70%; and in the production of ready-mixed or site-batched durable concrete. Concrete made with GGBS cement sets more slowly than concrete made with ordinary Portland cement, depending on the amount of GGBS in the cementitious material, but also continues to gain strength over a longer period in production conditions. This results in lower heat of hydration and lower temperature rises, and makes avoiding cold joints easier, but may also affect construction schedules where quick setting is required. Use of GGBS significantly reduces the risk of damages caused by alkali–silica reaction (ASR), provides higher resistance to chloride ingress — reducing the risk of reinforcement corrosion — and provides higher resistance to attacks by sulfate and other chemicals. GGBS cement uses GGBS cement can be added to concrete in the concrete manufacturer's batching plant, along with Portland cement, aggregates and water. The normal ratios of aggregates and water to cementitious material in the mix remain unchanged. GGBS is used as a direct replacement for Portland cement, on a one-to-one basis by weight. Replacement levels for GGBS vary from 30% to up to 85%. Typically 40% to 50% is used in most instances. The use of GGBS in addition to Portland cement in concrete in Europe is covered in the concrete standard EN 206:2013. This standard establishes two categories of additions to concrete along with ordinary Portland cement: nearly inert additions (Type I) and pozzolanic or latent hydraulic additions (Type II). GGBS cement falls in the latter category. As GGBS cement is slightly less expensive than Portland cement, concrete made with GGBS cement will be similarly priced to that made with ordinary Portland cement. It is used partially as per mix ratio. Architectural and engineering benefits Durability GGBS cement is routinely specified in concrete to provide protection against both sulfate attack and chloride attack. GGBS has now effectively replaced sulfate-resisting Portland cement (SRPC) on the market for sulfate resistance because of its superior performance and greatly reduced cost compared to SRPC. Most projects in Dublin's docklands, including Spencer Dock, are using GGBS in subsurface concrete for sulfate resistance. Bulk Electrical Resistivity is a test method that can measure the resistivity of concrete samples. (ASTM 1876–19) The higher electrical resistivity can be an indication of higher ion transfer resistivity and thus higher durability. By replacing up to 50% GGBS in concrete, researchers have shown that some durability properties can be significantly improved. To protect against chloride attack, GGBS is used at a replacement level of 50% in concrete. Instances of chloride attack occur in reinforced concrete in marine environments and in road bridges where the concrete is exposed to splashing from road de-icing salts. In most NRA projects in Ireland GGBS is now specified in structural concrete for bridge piers and abutments for protection against chloride attack. 
The use of GGBS in such instances will increase the life of the structure by up to 50% had only Portland cement been used, and precludes the need for more expensive stainless steel reinforcing. GGBS is also routinely used to limit the temperature rise in large concrete pours. The more gradual hydration of GGBS cement generates both lower temperature peak and less total overall heat than Portland cement. This reduces thermal gradients in the concrete, which prevents the occurrence of microcracking which can weaken the concrete and reduce its durability, and was used for this purpose in the construction of the Jack Lynch Tunnel in Cork. Appearance In contrast to the stony grey of concrete made with Portland cement, the near-white color of GGBS cement permits architects to achieve a lighter color for exposed fair-faced concrete finishes, at no extra cost. To achieve a lighter color finish, GGBS is usually specified at replacement levels of between 50% and 70%, although levels as high as 85% can be used. GGBS cement also produces a smoother, more defect-free surface, due to the fineness of the GGBS particles. Dirt does not adhere to GGBS concrete as easily as concrete made with Portland cement, reducing maintenance costs. GGBS cement prevents the occurrence of efflorescence, the staining of concrete surfaces by calcium carbonate deposits. Due to its much lower lime content and lower permeability, GGBS is effective in preventing efflorescence when used at replacement levels of 50%-to-60%. Strength Concrete containing GGBS cement has a higher ultimate strength than concrete made with Portland cement. It has a higher proportion of the strength-enhancing calcium silicate hydrates (CSH) than concrete made with Portland cement only, and a reduced content of free lime, which does not contribute to concrete strength. Concrete made with GGBS continues to gain strength over time, and has been shown to double its 28-day strength over periods of 10 to 12 years. The optimum dosage of Ground granulated blast-furnace slag (GGBS) for replacement in concrete was reported to be 20-30% by mass to provide higher compressive strength compared to the concrete made with only cement. Sustainability Since GGBS is a by-product of steel manufacturing process, its use in concrete is recognized by LEED, as well as Building Environmental Assessment Method (BEAM) Plus in Hong Kong, etc. as improving the sustainability of the project and will therefore add points towards LEED and BEAM Plus certifications. In this respect, GGBS can also be used for superstructure in addition to the cases where the concrete is in contact with chlorides and sulfates — provided that the slower setting time for casting of the superstructure is justified. Notes External links The Concrete Society, Cementitious Materials: The effect of GGBS, fly ash, silica fume and limestone fines on the properties of concrete Cementitious materials References Amorphous solids Glass compositions Cement Concrete Materials
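To make the notion of a compositional "hydraulic index" from the Production and composition section concrete, the sketch below evaluates one commonly quoted basicity-type ratio from an oxide analysis; the particular ratio, the threshold and the example composition are illustrative assumptions, since several different indices appear in the literature.

def hydraulic_index(cao, mgo, al2o3, sio2):
    """One commonly quoted index: (CaO + MgO + Al2O3) / SiO2, from oxide mass percentages."""
    return (cao + mgo + al2o3) / sio2

# Example oxide analysis chosen inside the typical composition ranges given above (mass %).
index = hydraulic_index(cao=40.0, mgo=8.0, al2o3=12.0, sio2=35.0)
print(f"hydraulic index = {index:.2f} (values of at least about 1.0 are usually regarded as favourable)")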
Ground granulated blast-furnace slag
[ "Physics", "Chemistry", "Engineering" ]
2,110
[ "Structural engineering", "Glass chemistry", "Glass compositions", "Unsolved problems in physics", "Materials", "Concrete", "Amorphous solids", "Matter" ]
6,504,044
https://en.wikipedia.org/wiki/Sugar%20signal%20transduction
Sugar signal transduction is an evolutionarily conserved mechanism by which organisms sense sugar availability and adjust their physiology accordingly. Sugars have a pervasive effect on gene expression. In yeast, the response to glucose is managed largely by controlling the mRNA levels of hexose transporters, while in mammals the response to glucose is coupled more tightly to glucose metabolism and is therefore considerably more complex. Several glucose-responsive DNA motifs and DNA-binding protein complexes have been identified in liver and pancreatic β-cells. Although not proven, glucose repression appears to be conserved in plants, because in many cases both sugar induction and sugar repression are initiated by switching off transcription factors. See also Glycobiology References Xiao W, Sheen J, Jang JC. "The role of hexokinase in plant sugar signal transduction and growth and development." Plant Molecular Biology. 2000 Nov;44(4):451-61. Evolutionary biology Glycobiology
Sugar signal transduction
[ "Chemistry", "Biology" ]
180
[ "Evolutionary biology", "Glycobiology", "Biochemistry" ]
6,505,575
https://en.wikipedia.org/wiki/Milne%20model
The Milne model was a special-relativistic cosmological model of the universe proposed by Edward Arthur Milne in 1935. It is mathematically equivalent to a special case of the FLRW model in the limit of zero energy density and it obeys the cosmological principle. The Milne model is also similar to Rindler space in that both are simple re-parameterizations of flat Minkowski space. Since it features both zero energy density and maximally negative spatial curvature, the Milne model is inconsistent with cosmological observations. Cosmologists actually observe the universe's density parameter to be consistent with unity and its curvature to be consistent with flatness. Milne metric The Milne universe is a special case of a more general Friedmann–Lemaître–Robertson–Walker model (FLRW). The Milne solution can be obtained from the more generic FLRW model by demanding that the energy density, pressure and cosmological constant all equal zero and the spatial curvature is negative. From these assumptions and the Friedmann equations it follows that the scale factor must depend on time coordinate linearly. Setting the spatial curvature and speed of light to unity the metric for a Milne universe can be expressed with hyperspherical coordinates as: where is the metric for a two-sphere and is the curvature-corrected radial component for negatively curved space that varies between 0 and . The empty space that the Milne model describes can be identified with the inside of a light cone of an event in Minkowski space by a change of coordinates. Milne developed this model independent of general relativity but with awareness of special relativity. As he initially described it, the model has no expansion of space, so all of the redshift (except that caused by peculiar velocities) is explained by a recessional velocity associated with the hypothetical "explosion". However, the mathematical equivalence of the zero energy density () version of the FLRW metric to Milne's model implies that a full general relativistic treatment using Milne's assumptions would result in a linearly increasing scale factor for all time since the deceleration parameter is uniquely zero for such a model. Milne's density function Milne proposed that the universe's density changes in time because of an initial outward explosion of matter. Milne's model assumes an inhomogeneous density function which is Lorentz Invariant (around the event t=x=y=z=0). When rendered graphically Milne's density distribution shows a three-dimensional spherical Lobachevskian pattern with outer edges moving outward at the speed of light. Every inertial body perceives itself to be at the center of the explosion of matter (see observable universe), and sees the local universe as homogeneous and isotropic in the sense of the cosmological principle. In order to be consistent with general relativity, the universe's density must be negligible in comparison to the critical density at all times for which the Milne model is taken to apply. Notes References Milne Cosmology: Why I Keep Talking About It - a detailed non-technical introduction to the Milne model A thorough historical and theoretical study of the British Tradition in Cosmology, and one long celebration of Milne. Obsolete theories in physics Exact solutions in general relativity Minkowski spacetime 1935 in science
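The hyperspherical form of the metric referred to above, and the change of coordinates that identifies the model with the interior of a Minkowski light cone, are conventionally written as follows (standard notation, with c = 1 and the scale factor a(t) = t that the Friedmann equations force in this limit):

ds^2 = -dt^2 + t^2\left(d\chi^2 + \sinh^2\!\chi \, d\Omega^2\right),
\qquad d\Omega^2 = d\theta^2 + \sin^2\!\theta \, d\phi^2 ,

and the substitution

t' = t\cosh\chi, \qquad r' = t\sinh\chi
\quad\Longrightarrow\quad
ds^2 = -dt'^2 + dr'^2 + r'^2 \, d\Omega^2 ,

which is flat Minkowski space restricted to the interior of the future light cone t' > |r'|.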
Milne model
[ "Physics", "Mathematics" ]
675
[ "Exact solutions in general relativity", "Theoretical physics", "Mathematical objects", "Equations", "Obsolete theories in physics" ]
856,798
https://en.wikipedia.org/wiki/Push%E2%80%93pull%20output
A push–pull amplifier is a type of electronic circuit that uses a pair of active devices that alternately supply current to, or absorb current from, a connected load. This kind of amplifier can enhance both the load capacity and switching speed. Push–pull outputs are present in TTL and CMOS digital logic circuits and in some types of amplifiers, and are usually realized by a complementary pair of transistors, one dissipating or sinking current from the load to ground or a negative power supply, and the other supplying or sourcing current to the load from a positive power supply. A push–pull amplifier is more efficient than a single-ended "class-A" amplifier. The output power that can be achieved is higher than the continuous dissipation rating of either transistor or tube used alone and increases the power available for a given supply voltage. Symmetrical construction of the two sides of the amplifier means that even-order harmonics are cancelled, which can reduce distortion. DC current is cancelled in the output, allowing a smaller output transformer to be used than in a single-ended amplifier. However, the push–pull amplifier requires a phase-splitting component that adds complexity and cost to the system; use of center-tapped transformers for input and output is a common technique but adds weight and restricts performance. If the two parts of the amplifier do not have identical characteristics, distortion can be introduced as the two halves of the input waveform are amplified unequally. Crossover distortion can be created near the zero point of each cycle as one device is cut off and the other device enters its active region. Push–pull circuits are widely used in many amplifier output stages. A pair of audion tubes connected in push–pull is described in Edwin H. Colpitts' US patent 1137384 granted in 1915, although the patent does not specifically claim the push–pull connection. The technique was well known at that time and the principle had been claimed in an 1895 patent predating electronic amplifiers. Possibly the first commercial product using a push–pull amplifier was the RCA Balanced amplifier released in 1924 for use with their Radiola III regenerative broadcast receiver. By using a pair of low-power vacuum tubes in push–pull configuration, the amplifier allowed the use of a loudspeaker instead of headphones, while providing acceptable battery life with low standby power consumption. The technique continues to be used in audio, radio frequency, digital and power electronics systems today. Digital circuits A digital use of a push–pull configuration is the output of TTL and related families. The upper transistor is functioning as an active pull-up, in linear mode, while the lower transistor works digitally. For this reason they are not capable of sourcing as much current as they can sink (typically 20 times less). Because of the way these circuits are drawn schematically, with two transistors stacked vertically, normally with a level shifting diode in between, they are called "totem pole" outputs. A disadvantage of simple push–pull outputs is that two or more of them cannot be connected together, because if one tried to pull while another tried to push, the transistors could be damaged. To avoid this restriction, some push–pull outputs have a third state in which both transistors are switched off. In this state, the output is said to be floating (or, to use a proprietary term, tri-stated). 
An alternative to push–pull output is a single switch that disconnects or connects the load to ground (called an open collector or open drain output), or a single switch that disconnects or connects the load to the power supply (called an open-emitter or open-source output). Analog circuits A conventional amplifier stage which is not push–pull is sometimes called single-ended to distinguish it from a push–pull circuit. In analog push–pull power amplifiers the two output devices operate in antiphase (i.e. 180° apart). The two antiphase outputs are connected to the load in a way that causes the signal outputs to be added, but distortion components due to non-linearity in the output devices to be subtracted from each other; if the non-linearity of both output devices is similar, distortion is much reduced. Symmetrical push–pull circuits must cancel even order harmonics, like 2f, 4f, 6f and therefore promote odd order harmonics, like f, 3f, 5f when driven into the nonlinear range. A push–pull amplifier produces less distortion than a single-ended one. This allows a class-A or AB push–pull amplifier to have less distortion for the same power as the same devices used in single-ended configuration. Distortion can occur at the moment the outputs switch: the "hand-off" is not perfect. This is called crossover distortion. Class AB and class B dissipate less power for the same output than class A; general distortion can be kept low by negative feedback, and crossover distortion can be reduced by adding a 'bias current' to smoothen the hand-off. A class-B push–pull amplifier is more efficient than a class-A power amplifier because each output device amplifies only half the output waveform and is cut off during the opposite half. It can be shown that the theoretical full power efficiency (AC power in load compared to DC power consumed) of a push–pull stage is approximately 78.5%. This compares with a class-A amplifier which has efficiency of 25% if directly driving the load and no more than 50% for a transformer coupled output. A push–pull amplifier draws little power with zero signal, compared to a class-A amplifier that draws constant power. Power dissipation in the output devices is roughly one-fifth of the output power rating of the amplifier. A class-A amplifier, by contrast, must use a device capable of dissipating several times the output power. The output of the amplifier may be direct-coupled to the load, coupled by a transformer, or connected through a dc blocking capacitor. Where both positive and negative power supplies are used, the load can be returned to the midpoint (ground) of the power supplies. A transformer allows a single polarity power supply to be used, but limits the low-frequency response of the amplifier. Similarly, with a single power supply, a capacitor can be used to block the DC level at the output of the amplifier. Where bipolar junction transistors are used, the bias network must compensate for the negative temperature coefficient of the transistors' base to emitter voltage. This can be done by including a small value resistor between emitter and output. Also, the driving circuit can have silicon diodes mounted in thermal contact with the output transistors to provide compensation. 
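The 78.5% figure quoted above follows from a short calculation for an idealised class-B stage with supply rails ±Vs driving a resistive load R with a sinusoid of peak amplitude Vp (ideal devices, no quiescent bias current or saturation losses assumed). Each device conducts half-sine current pulses, so the average current drawn from each rail is Vp/(πR), giving:

P_{\text{load}} = \frac{V_p^2}{2R},
\qquad
P_{\text{supply}} = 2\,V_s \cdot \frac{V_p}{\pi R},
\qquad
\eta = \frac{P_{\text{load}}}{P_{\text{supply}}} = \frac{\pi}{4}\,\frac{V_p}{V_s}
\;\le\; \frac{\pi}{4} \approx 78.5\% .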
Push–pull transistor output stages Categories include: Transformer-output transistor power amplifiers It is now very rare to use output transformers with transistor amplifiers, although such amplifiers offer the best opportunity for matching the output devices (with only PNP or only NPN devices required). Totem pole push–pull output stages Two matched transistors of the same polarity can be arranged to supply opposite halves of each cycle without the need for an output transformer, although in doing so the driver circuit often is asymmetric and one transistor will be used in a common-emitter configuration while the other is used as an emitter follower. This arrangement is less used today than during the 1970s; it can be implemented with few transistors (not so important today) but is relatively difficult to balance and to keep a low distortion. Symmetrical push–pull Each half of the output pair "mirror" the other, in that an NPN (or N-Channel FET) device in one half will be matched by a PNP (or P-Channel FET) in the other. This type of arrangement tends to give lower distortion than quasi-symmetric stages because even harmonics are cancelled more effectively with greater symmetry. Quasi-symmetrical push–pull In the past when good quality PNP complements for high power NPN silicon transistors were limited, a workaround was to use identical NPN output devices, but fed from complementary PNP and NPN driver circuits in such a way that the combination was close to being symmetrical (but never as good as having symmetry throughout). Distortion due to mismatched gain on each half of the cycle could be a significant problem. Super-symmetric output stages Employing some duplication in the whole driver circuit, to allow symmetrical drive circuits can improve matching further, although driver asymmetry is a small fraction of the distortion generating process. Using a bridge-tied load arrangement allows a much greater degree of matching between positive and negative halves, compensating for the inevitable small differences between NPN and PNP devices. Square-law push–pull The output devices, usually MOSFETs or vacuum tubes, are configured so that their square-law transfer characteristics (that generate second-harmonic distortion if used in a single-ended circuit) cancel distortion to a large extent. That is, as one transistor's gate-source voltage increases, the drive to the other device is reduced by the same amount and the drain (or plate) current change in the second device approximately corrects for the non-linearity in the increase of the first. Push–pull tube (valve) output stages Vacuum tubes (valves) are not available in complementary types (as are PNP/NPN transistors), so the tube push–pull amplifier has a pair of identical output tubes or groups of tubes with the control grids driven in antiphase. These tubes drive current through the two halves of the primary winding of a center-tapped output transformer. Signal currents add, while the distortion signals due to the non-linear characteristic curves of the tubes subtract. These amplifiers were first designed long before the development of solid-state electronic devices; they are still in use by both audiophiles and musicians who consider them to sound better. Vacuum tube push–pull amplifiers usually use an output transformer, although Output-transformerless (OTL) tube stages exist (such as the SEPP/SRPP and the White Cathode Follower below). 
The phase-splitter stage is usually another vacuum tube but a transformer with a center-tapped secondary winding was occasionally used in some designs. Because these are essentially square-law devices, the comments regarding distortion cancellation mentioned above apply to most push–pull tube designs when operated in class A (i.e. neither device is driven to its non-conducting state). A Single Ended Push–Pull (SEPP, SRPP or mu-follower) output stage, originally called the Series-Balanced amplifier (US patent 2,310,342, Feb 1943). is similar to a totem-pole arrangement for transistors in that two devices are in series between the power supply rails, but the input drive goes only to one of the devices, the bottom one of the pair; hence the (seemingly contradictory) Single-Ended description. The output is taken from the cathode of the top (not directly driven) device, which acts part way between a constant current source and a cathode follower but receiving some drive from the plate (anode) circuit of the bottom device. The drive to each tube therefore might not be equal, but the circuit tends to keep the current through the bottom device somewhat constant throughout the signal, increasing the power gain and reducing distortion compared with a true single-tube single-ended output stage. The transformer-less circuit with two tetrode tubes dates back to 1933: "THE USE OF A VACUUM TUBE AS A PLATE-FEED IMPEDANCE." by J.W.Horton in the Journal of the Franklin Institute 1933 volume 216 Issue 6 The White Cathode Follower (Patent 2,358,428, Sep. 1944 by E. L. C. White) is similar to the SEPP design above, but the signal input is to the top tube, acting as a cathode follower, but one where the bottom tube (in common cathode configuration) if fed (usually via a step-up transformer) from the current in the plate (anode) of the top device. It essentially reverses the roles of the two devices in SEPP. The bottom tube acts part way between a constant current sink and an equal partner in the push–pull workload. Again, the drive to each tube therefore might not be equal. Transistor versions of the SEPP and White follower do exist, but are rare. Ultra-linear push–pull A so-called ultra-linear push–pull amplifier uses either pentodes or tetrodes with their screen grid fed from a percentage of the primary voltage on the output transformer. This gives efficiency and distortion that is a good compromise between triode (or triode-strapped) power amplifier circuits and conventional pentode or tetrode output circuits where the screen is fed from a relatively constant voltage source. See also Single-ended triode Push–pull converter for more details on implementation Open collector References Electronic circuits
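The even-harmonic cancellation claimed for symmetrical and square-law push–pull stages above can be checked numerically; the sketch below drives two idealised, perfectly matched square-law devices in antiphase and compares the harmonic content of a single-ended output with that of the push–pull (differenced) output. The device model, bias point and drive level are simplifying assumptions made for the demonstration.

import numpy as np

n = 4096
t = np.arange(n) / n
drive = 0.4 * np.sin(2 * np.pi * 5 * t)     # 5 drive cycles across the window
bias = 1.0                                   # operating point keeps both devices conducting

def square_law(v):
    """Idealised square-law device: output proportional to v**2 while conducting."""
    return np.clip(v, 0.0, None) ** 2

single_ended = square_law(bias + drive)                          # one device on its own
push_pull = square_law(bias + drive) - square_law(bias - drive)  # load sees the difference

def harmonic_amplitude(signal, k):
    spectrum = np.fft.rfft(signal - signal.mean())
    return 2.0 * np.abs(spectrum[5 * k]) / n

for k in (1, 2, 3):
    print(f"harmonic {k}: single-ended {harmonic_amplitude(single_ended, k):.4f}   "
          f"push-pull {harmonic_amplitude(push_pull, k):.4f}")

With perfect matching the second harmonic vanishes from the push–pull output while the fundamental doubles; any mismatch between the two halves reintroduces a proportionate amount of even-order distortion, which is the point made above about unequal device characteristics.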
Push–pull output
[ "Engineering" ]
2,722
[ "Electronic engineering", "Electronic circuits" ]
856,862
https://en.wikipedia.org/wiki/Buys%20Ballot%27s%20law
In meteorology, Buys Ballot's law () may be expressed as follows: In the Northern Hemisphere, if a person stands with their back to the wind, the atmospheric pressure is low to the left, high to the right. This is because wind travels counterclockwise around low pressure zones in the Northern Hemisphere. It is approximately true in the higher latitudes of the Northern Hemisphere, and is reversed in the Southern Hemisphere, but the angle between the pressure gradient force and wind is not a right angle in low latitudes. A version taught to US Naval Cadets in WW2 is: "In the Northern Hemisphere, if you turn your back to the wind, the low pressure center will be to your left and somewhat toward the front." History As early as the 16th century extensive weather observations were included as part of a ship's log. These observations as well as other log information, were turned over to national hydrographic institutes in various nations, most notably Germany and England and later the US. The information from many ships about individual voyages was compiled ashore and later became what today is still published by England, a 3 volume set complete with charts titled "Sailing Directions for the World". Additionally the US Defense Mapping Agency publishes a 47 volume set Sailing Directions which serves much the same purpose. The information is the distillate of empirical observations of thousands of ships masters over thousands of voyages spanning several hundred years. Buys Ballot's law, which was first deduced by the American meteorologists J.H. Coffin and William Ferrel, is a direct consequence of Ferrel's law. The law takes its name from C. H. D. Buys Ballot, a Dutch meteorologist, who published it in the Comptes Rendus, 9 November 1857. While William Ferrel theorized this first in 1856, Buys Ballot was the first to provide an empirical validation. While Buys Ballot hoped that his law would be validated by other meteorological services in other countries, foreign dissemination and adoption of Buys Ballot's law was slow. The French and American meteorological services of the era prioritized describing the state of storms rather than forecasting them, finding little use for the predictive value of Buys Ballot's law. However, the British Meteorological Office began to use Buys Ballot's law extensively after reintroducing a storm warning system in 1867 following the death of its former director Robert FitzRoy. The rule of thumb became more widely accepted following its effective use in the British forecasts. Buys Ballot's law first appeared in early versions (prior to 1900) of Bowditch's American Practical Navigator and other publications written to assist in passage planning and the safe conduct of ships at sea and is still included today both in Bowditch and in Sailing Directions (see following reference) as an item of practical reference and information. Uses The law outlines general rules of conduct for masters of both sail and steam vessels, to assist them in steering the vessels away from the center and right front (in the Northern Hemisphere and left front in the Southern Hemisphere) quadrants of hurricanes or any other rotating disturbances at sea. Prior to radio, satellite observation and the ability to transmit timely weather information over long distances, the only method a ship's master had to forecast the weather was observation of meteorological conditions (visible cloud formations, wind direction and atmospheric pressure) at his location. 
Included in the Sailing Directions for the World are Buys Ballot's techniques for avoiding the worst part of any rotating storm system at sea using only the locally observable phenomena of cloud formations, wind speed and barometric pressure tendencies over a number of hours. These observations and application of the principles of Buys Ballot's law help to establish the probability of the existence of a storm and the best course to steer to try to avoid the worst of it—with the best chance of survival. The underlying principles of Buys Ballot's law state that for anyone ashore in the Northern Hemisphere and in the path of a hurricane, the most dangerous place to be is in the right front quadrant of the storm. There, the observed wind speed of the storm is the sum of the speed of wind in the storm circulation plus the velocity of the storm's forward movement. Buys Ballot's law calls this the "Dangerous Quadrant". Likewise, in the left front quadrant of the storm the observed wind is the difference between the storm's wind velocity and its forward speed. This is called the "Safe Quadrant" due to the lower observed wind speeds. To look at it another way, in the Northern Hemisphere if a person is to the right of where a hurricane or tropical storm makes landfall, that is considered the dangerous quadrant. If they are to the left of the point of landfall, that is the safe quadrant. In the dangerous quadrant an observer will experience higher wind speeds and generally a much higher storm surge due to the onshore wind direction. In the safe quadrant, the observer will experience somewhat lower wind speeds and the possibility of lower than normal water levels due to the direction of the wind being offshore. These are very general rules that are subject to many other factors, including shapes of the coastline, and topography in any location. Although the principles apply to a very limited extent to a coastal observer during the approach and passage of a storm in any location, Buys Ballot's law was primarily formulated from empirical data to assist ships at sea. Notes Synoptic meteorology and weather Atmospheric dynamics
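The rule of thumb itself can be expressed as a small helper function; the compass-bearing convention used here (wind direction given as the direction the wind blows from, bearings in degrees clockwise from north) is an assumption made for this sketch, and the result is only the approximate direction of the pressure centre.

def bearing_to_low_pressure(wind_from_deg, northern_hemisphere=True):
    """Approximate bearing of the low-pressure centre according to Buys Ballot's law.

    Stand with your back to the wind; the low lies to your left in the Northern
    Hemisphere and to your right in the Southern Hemisphere (only roughly so at
    low latitudes and near the surface, as noted above).
    """
    facing = (wind_from_deg + 180.0) % 360.0            # direction the wind blows toward
    offset = -90.0 if northern_hemisphere else 90.0     # left or right of that direction
    return (facing + offset) % 360.0

# A northerly wind (blowing from 000 deg) in the Northern Hemisphere
# puts the low-pressure centre roughly to the east:
print(bearing_to_low_pressure(0.0))   # 90.0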
Buys Ballot's law
[ "Chemistry" ]
1,101
[ "Atmospheric dynamics", "Fluid dynamics" ]
857,235
https://en.wikipedia.org/wiki/Equivalence%20principle
The equivalence principle is the hypothesis that the observed equivalence of gravitational and inertial mass is a consequence of nature. The weak form, known for centuries, relates to masses of any composition in free fall taking the same trajectories and landing at identical times. The extended form by Albert Einstein requires special relativity to also hold in free fall and requires the weak equivalence to be valid everywhere. This form was a critical input for the development of the theory of general relativity. The strong form requires Einstein's form to work for stellar objects. Highly precise experimental tests of the principle limit possible deviations from equivalence to be very small. Concept In classical mechanics, Newton's equation of motion in a gravitational field, written out in full, is: inertial mass × acceleration = gravitational mass × gravitational acceleration Careful experiments have shown that the inertial mass on the left side and gravitational mass on the right side are numerically equal and independent of the material composing the masses. The equivalence principle is the hypothesis that this numerical equality of inertial and gravitational mass is a consequence of their fundamental identity. The equivalence principle can be considered an extension of the principle of relativity, the principle that the laws of physics are invariant under uniform motion. An observer in a windowless room cannot distinguish between being on the surface of the Earth and being in a spaceship in deep space accelerating at 1g and the laws of physics are unable to distinguish these cases. History By experimenting with the acceleration of different materials, Galileo determined that gravitation is independent of the amount of mass being accelerated. Newton, just 50 years after Galileo, investigated whether gravitational and inertial mass might be different concepts. He compared the periods of pendulums composed of different materials and found them to be identical. From this, he inferred that gravitational and inertial mass are the same thing. The form of this assertion, where the equivalence principle is taken to follow from empirical consistency, later became known as "weak equivalence". A version of the equivalence principle consistent with special relativity was introduced by Albert Einstein in 1907, when he observed that identical physical laws are observed in two systems, one subject to a constant gravitational field causing acceleration and the other subject to constant acceleration, like a rocket far from any gravitational field. Since the physical laws are the same, Einstein assumed the gravitational field and the acceleration were "physically equivalent". Einstein stated this hypothesis by saying he would: In 1911 Einstein demonstrated the power of the equivalence principle by using it to predict that clocks run at different rates in a gravitational potential, and light rays bend in a gravitational field. He connected the equivalence principle to his earlier principle of special relativity: Soon after completing work on his theory of gravity (known as general relativity) and then also in later years, Einstein recalled the importance of the equivalence principle to his work: Einstein's development of general relativity necessitated some means of empirically discriminating the theory from other theories of gravity compatible with special relativity. 
Accordingly, Robert Dicke developed a test program incorporating two new principles—the , and the —each of which assumes the weak equivalence principle as a starting point. Definitions Three main forms of the equivalence principle are in current use: weak (Galilean), Einsteinian, and strong. Some proposals also suggest finer divisions or minor alterations. Weak equivalence principle The weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle can be stated in many ways. The strong equivalence principle, a generalization of the weak equivalence principle, includes astronomic bodies with gravitational self-binding energy. Instead, the weak equivalence principle assumes falling bodies are self-bound by non-gravitational forces only (e.g. a stone). Either way: "All uncharged, freely falling test particles follow the same trajectories, once an initial position and velocity have been prescribed". "... in a uniform gravitational field all objects, regardless of their composition, fall with precisely the same acceleration." "The weak equivalence principle implicitly assumes that the falling objects are bound by non-gravitational forces." "... in a gravitational field the acceleration of a test particle is independent of its properties, including its rest mass." Mass (measured with a balance) and weight (measured with a scale) are locally in identical ratio for all bodies (the opening page to Newton's Philosophiæ Naturalis Principia Mathematica, 1687). Uniformity of the gravitational field eliminates measurable tidal forces originating from a radial divergent gravitational field (e.g., the Earth) upon finite sized physical bodies. Einstein equivalence principle What is now called the "Einstein equivalence principle" states that the weak equivalence principle holds, and that: Here local means that experimental setup must be small compared to variations in the gravitational field, called tidal forces. The test experiment must be small enough so that its gravitational potential does not alter the result. The two additional constraints added to the weak principle to get the Einstein form − (1) the independence of the outcome on relative velocity (local Lorentz invariance) and (2) independence of "where" known as (local positional invariance) − have far reaching consequences. With these constraints alone Einstein was able to predict the gravitational redshift. Theories of gravity that obey the Einstein equivalence principle must be "metric theories", meaning that trajectories of freely falling bodies are geodesics of symmetric metric. Around 1960 Leonard I. Schiff conjectured that any complete and consistent theory of gravity that embodies the weak equivalence principle implies the Einstein equivalence principle; the conjecture can't be proven but has several plausibility arguments in its favor. Nonetheless, the two principles are tested with very different kinds of experiments. The Einstein equivalence principle has been criticized as imprecise, because there is no universally accepted way to distinguish gravitational from non-gravitational experiments (see for instance Hadley and Durand). Strong equivalence principle The strong equivalence principle applies the same constraints as the Einstein equivalence principle, but allows the freely falling bodies to be massive gravitating objects as well as test particles. 
Thus this is a version of the equivalence principle that applies to objects that exert a gravitational force on themselves, such as stars, planets, black holes or Cavendish experiments. It requires that the gravitational constant be the same everywhere in the universe and is incompatible with a fifth force. It is much more restrictive than the Einstein equivalence principle. Like the Einstein equivalence principle, the strong equivalence principle requires gravity to be geometrical by nature, but in addition it forbids any extra fields, so the metric alone determines all of the effects of gravity. If an observer measures a patch of space to be flat, then the strong equivalence principle suggests that it is absolutely equivalent to any other patch of flat space elsewhere in the universe. Einstein's theory of general relativity (including the cosmological constant) is thought to be the only theory of gravity that satisfies the strong equivalence principle. A number of alternative theories, such as Brans–Dicke theory and the Einstein-aether theory add additional fields. Active, passive, and inertial masses Some of the tests of the equivalence principle use names for the different ways mass appears in physical formulae. In nonrelativistic physics three kinds of mass can be distinguished: Inertial mass intrinsic to an object, the sum of all of its mass–energy. Passive mass, the response to gravity, the object's weight. Active mass, the mass that determines the objects gravitational effect. By definition of active and passive gravitational mass, the force on due to the gravitational field of is: Likewise the force on a second object of arbitrary mass2 due to the gravitational field of mass0 is: By definition of inertial mass:if and are the same distance from then, by the weak equivalence principle, they fall at the same rate (i.e. their accelerations are the same). Hence: Therefore: In other words, passive gravitational mass must be proportional to inertial mass for objects, independent of their material composition if the weak equivalence principle is obeyed. The dimensionless Eötvös-parameter or Eötvös ratio is the difference of the ratios of gravitational and inertial masses divided by their average for the two sets of test masses "A" and "B". Values of this parameter are used to compare tests of the equivalence principle. A similar parameter can be used to compare passive and active mass. By Newton's third law of motion: must be equal and opposite to It follows that: In words, passive gravitational mass must be proportional to active gravitational mass for all objects. The difference, is used to quantify differences between passive and active mass. Experimental tests Tests of the weak equivalence principle Tests of the weak equivalence principle are those that verify the equivalence of gravitational mass and inertial mass. An obvious test is dropping different objects and verifying that they land at the same time. Historically this was the first approach—though probably not by Galileo's Leaning Tower of Pisa experiment but instead earlier by Simon Stevin, who dropped lead balls of different masses off the Delft churchtower and listened for the sound of them hitting a wooden plank. Isaac Newton measured the period of pendulums made with different materials as an alternative test giving the first precision measurements. Loránd Eötvös's approach in 1908 used a very sensitive torsion balance to give precision approaching 1 in a billion. 
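A minimal numerical sketch of the Eötvös ratio just defined, the difference of the gravitational-to-inertial mass ratios of two test bodies divided by their average; the function name and the illustrative one-part-in-a-billion violation are assumptions chosen for the example, not measured values:

```python
def eotvos_ratio(mg_a, mi_a, mg_b, mi_b):
    """Eötvös parameter for two test bodies A and B.

    mg_* -- (passive) gravitational mass, mi_* -- inertial mass.
    """
    r_a = mg_a / mi_a
    r_b = mg_b / mi_b
    # difference of the two ratios divided by their average
    return (r_a - r_b) / ((r_a + r_b) / 2.0)

# Perfect equivalence: both ratios are 1, so eta = 0.
print(eotvos_ratio(1.0, 1.0, 1.0, 1.0))            # 0.0

# A composition-dependent violation at the 1e-9 level (roughly the precision
# Eötvös reached) would show up directly in eta:
print(eotvos_ratio(1.0 + 1e-9, 1.0, 1.0, 1.0))     # ~1e-9
```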
Modern experiments have improved this by another factor of a million. A popular exposition of this measurement was done on the Moon by David Scott in 1971. He dropped a falcon feather and a hammer at the same time, showing on video that they landed at the same time. Experiments are still being performed at the University of Washington which have placed limits on the differential acceleration of objects towards the Earth, the Sun and towards dark matter in the Galactic Center. Future satellite experiments – Satellite Test of the Equivalence Principle and Galileo Galilei – will test the weak equivalence principle in space, to much higher accuracy. With the first successful production of antimatter, in particular anti-hydrogen, a new approach to test the weak equivalence principle has been proposed. Experiments to compare the gravitational behavior of matter and antimatter are currently being developed. Proposals that may lead to a quantum theory of gravity such as string theory and loop quantum gravity predict violations of the weak equivalence principle because they contain many light scalar fields with long Compton wavelengths, which should generate fifth forces and variation of the fundamental constants. Heuristic arguments suggest that the magnitude of these equivalence principle violations could be in the 10−13 to 10−18 range. Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that non-discovery of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification. Tests of the Einstein equivalence principle In addition to the tests of the weak equivalence principle, the Einstein equivalence principle requires testing the local Lorentz invariance and local positional invariance conditions. Testing local Lorentz invariance amounts to testing special relativity, a theory with vast number of existing tests. Nevertheless, attempts to look for quantum gravity require even more precise tests. The modern tests include looking for directional variations in the speed of light (called "clock anisotropy tests") and new forms of the Michelson-Morley experiment. The anisotropy measures less than one part in 10−20. Testing local positional invariance divides in to tests in space and in time. Space-based tests use measurements of the gravitational redshift, the classic is the Pound–Rebka experiment in the 1960s. The most precise measurement was done in 1976 by flying a hydrogen maser and comparing it to one on the ground. The Global positioning system requires compensation for this redshift to give accurate position values. Time-based tests search for variation of dimensionless constants and mass ratios. For example, Webb et al. reported detection of variation (at the 10−5 level) of the fine-structure constant from measurements of distant quasars. Other researchers dispute these findings. The present best limits on the variation of the fundamental constants have mainly been set by studying the naturally occurring Oklo natural nuclear fission reactor, where nuclear reactions similar to ones we observe today have been shown to have occurred underground approximately two billion years ago. 
These reactions are extremely sensitive to the values of the fundamental constants. Tests of the strong equivalence principle The strong equivalence principle can be tested by 1) finding orbital variations in massive bodies (Sun-Earth-Moon), 2) variations in the gravitational constant (G) depending on nearby sources of gravity or on motion, or 3) searching for a variation of Newton's gravitational constant over the life of the universe Orbital variations due to gravitational self-energy should cause a "polarization" of solar system orbits called the Nordtvedt effect. This effect has been sensitively tested by the Lunar Laser Ranging Experiment. Up to the limit of one part in 1013 there is no Nordtvedt effect. A tight bound on the effect of nearby gravitational fields on the strong equivalence principle comes from modeling the orbits of binary stars and comparing the results to pulsar timing data. In 2014, astronomers discovered a stellar triple system containing a millisecond pulsar PSR J0337+1715 and two white dwarfs orbiting it. The system provided them a chance to test the strong equivalence principle in a strong gravitational field with high accuracy. Most alternative theories of gravity predict a change in the gravity constant over time. Studies of Big Bang nucleosynthesis, analysis of pulsars, and the lunar laser ranging data have shown that G cannot have varied by more than 10% since the creation of the universe. The best data comes from studies of the ephemeris of Mars, based on three successive NASA missions, Mars Global Surveyor, Mars Odyssey, and Mars Reconnaissance Orbiter. See also Classical mechanics Eötvös experiment Einstein's thought experiments Gauge gravitation theory General covariance Mach's principle Tests of general relativity Unsolved problems in astronomy Unsolved problems in physics References Further reading Dicke, Robert H.; "New Research on Old Gravitation", Science 129, 3349 (1959). Explains the value of research on gravitation and distinguishes between the strong (later renamed "Einstein") and weak equivalence principles. Dicke, Robert H.; "Mach's Principle and Equivalence", in Evidence for gravitational theories: proceedings of course 20 of the International School of Physics "Enrico Fermi", ed. C. Møller (Academic Press, New York, 1962). This article outlines the approach to precisely testing general relativity advocated by Dicke and pursued from 1959 onwards. Misner, Charles W.; Thorne, Kip S.; and Wheeler, John A.; Gravitation, New York: W. H. Freeman and Company, 1973, Chapter 16 discusses the equivalence principle. Ohanian, Hans; and Ruffini, Remo; Gravitation and Spacetime 2nd edition, New York: Norton, 1994, Chapter 1 discusses the equivalence principle, but incorrectly, according to modern usage, states that the strong equivalence principle is wrong. Will, Clifford M.; Theory and experiment in gravitational physics, Cambridge, UK: Cambridge University Press, 1993. This is the standard technical reference for tests of general relativity. Will, Clifford M.; Was Einstein Right?: Putting General Relativity to the Test, Basic Books (1993). This is a popular account of tests of general relativity. Friedman, Michael; Foundations of Space-Time Theories, Princeton, New Jersey: Princeton University Press, 1983. Chapter V discusses the equivalence principle. 
External links Gravity and the principle of equivalence – The Feynman Lectures on Physics Introducing The Einstein Principle of Equivalence from Syracuse University The Equivalence Principle at MathPages The Einstein Equivalence Principle at Living Reviews on General Relativity "...Physicists in Germany have used an atomic interferometer to perform the most accurate ever test of the equivalence principle at the level of atoms..." General relativity Fictitious forces Albert Einstein Principles Acceleration Philosophy of astronomy Articles containing video clips
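As a follow-up to the gravitational redshift tests described above, here is a rough numerical sketch of the weak-field estimate Δf/f ≈ gΔh/c² for two clocks separated vertically by Δh. The 22.5 m tower height is the commonly quoted figure for the Pound–Rebka experiment; the satellite-altitude figure uses a constant-g approximation and is intended only as an order-of-magnitude indication of why GPS must correct for the effect.

```python
g = 9.81          # m/s^2, surface gravity
c = 2.998e8       # m/s, speed of light

def fractional_redshift(delta_h, g=g, c=c):
    # Weak-field gravitational redshift between two clocks separated
    # vertically by delta_h:  delta_f / f  ~  g * delta_h / c**2
    return g * delta_h / c**2

# The Harvard tower used by Pound and Rebka was about 22.5 m tall:
print(fractional_redshift(22.5))       # ~2.5e-15, the shift they resolved

# At a navigation-satellite altitude of roughly 20,000 km the constant-g
# formula is only an order-of-magnitude guide (g falls off with height),
# but even that rough figure shows why GPS clocks need relativistic corrections.
print(fractional_redshift(2.0e7))      # ~2e-9
```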
Equivalence principle
[ "Physics", "Astronomy", "Mathematics" ]
3,360
[ "Force", "Physical quantities", "Acceleration", "Philosophy of astronomy", "Quantity", "Fictitious forces", "General relativity", "Theory of relativity", "Wikipedia categories named after physical quantities" ]
857,590
https://en.wikipedia.org/wiki/Columbia%20Basin%20Project
The Columbia Basin Project (or CBP) in Central Washington, United States, is the irrigation network that the Grand Coulee Dam makes possible. It is the largest water reclamation project in the United States, supplying irrigation water to over of the large project area, all of which was originally intended to be supplied and is still classified irrigable and open for the possible enlargement of the system. Water pumped from the Columbia River is carried over of main canals, stored in a number of reservoirs, then fed into of lateral irrigation canals, and out into of drains and wasteways. The Grand Coulee Dam, powerplant, and various other parts of the CBP are operated by the Bureau of Reclamation. There are three irrigation districts (the Quincy-Columbia Basin Irrigation District, the East Columbia Basin Irrigation District, and the South Columbia Basin Irrigation District) in the project area, which operate additional local facilities. History The U.S. Bureau of Reclamation was created in 1902 to aid development of dry western states. Central Washington's Columbia Plateau was a prime candidate—a desert with fertile loess soil and the Columbia River passing through. Competing groups lobbied for different irrigation projects; a Spokane group wanted a gravity flow canal from Lake Pend Oreille while a Wenatchee group (further south) wanted a large dam on the Columbia River, which would pump water up to fill the nearby Grand Coulee, a formerly-dry canyon-like coulee. After thirteen years of debate, President Franklin D. Roosevelt authorized the dam project with National Industrial Recovery Act money. (It was later specifically authorized by the Rivers and Harbors Act of 1935, and then reauthorized by the Columbia Basin Project Act of 1943 which put it under the Reclamation Project Act of 1939.) Construction of Grand Coulee Dam began in 1933 and was completed in 1942. Its main purpose of pumping water for irrigation was postponed during World War II in favor of electrical power generation that was used for the war effort. Additional hydroelectric generating capacity was added into the 1970s. The Columbia River reservoir behind the dam was named Franklin Delano Roosevelt Lake in honor of the president. The irrigation holding reservoir in Grand Coulee was named Banks Lake. After World War II the project suffered a number of setbacks. Irrigation water began to arrive between 1948 and 1952, but the costs escalated, resulting in the original plan, in which the people receiving irrigation water would pay back the costs of the project over time, being repeatedly revised and becoming a permanent water subsidy. In addition, the original vision of a social engineering project intended to help farmers settle on small landholdings failed. Farm plots, at first restricted in size, became larger and soon became corporate agribusiness operations. The original plan was that a federal agency similar to the Tennessee Valley Authority would manage the entire system. Instead, conflicts between the Bureau of Reclamation and the Department of Agriculture thwarted the goal of both agencies of settling the project area with small family farms; larger corporate farms arose instead. The determination to finish the project's plan to irrigate the full waned during the 1960s. 
The estimated total cost for completing the project had more than doubled between 1940 and 1964; it had become clear that the government's financial investment would not be recovered, and that the benefits of the project were unevenly distributed and increasingly going to larger businesses and corporations. These issues and others dampened enthusiasm for the project, although the exact motives behind the decision to stop construction with the project about half finished are not known. Geology The Columbia Basin in Central Washington is fertile due to its loess soils, but large portions are a near desert, receiving less than ten inches (254 mm) of rain per year. The area is characterized by huge deposits of flood basalt, thousands of feet thick in places, laid down over a period of approximately 11 million years, during the Miocene epoch. These flood basalts are exposed in some places, while in others they are covered with thick layers of loess. During the last ice age glaciers shaped the landscape of the Columbia River Plateau. Ice blocked the Columbia River near the north end of Grand Coulee, creating glacial lakes Columbia and Spokane. Ice age glaciers also created Glacial Lake Missoula, in what is now Montana. Erosion allowed glacial Lake Columbia to begin to drain into what became Grand Coulee, which was fully created when glacial Lake Missoula along with glacial Lake Columbia catastrophically emptied. This flood event was one of several known as the Missoula Floods. Unique erosion features, called channeled scablands, are attributed to these floods. Component units of the project Grand Coulee Dam Complex and Lake Roosevelt Grand Coulee Dam (1950) Right (north) Powerhouse Left (south) Powerhouse Third Powerhouse (1974) was added as a north wing of the dam from the original Right powerhouse. This addition expanded power generation by 300%. Lake Roosevelt Grand Coulee Pumping-Generating Plant (1953) consists of twelve pump units with a combined power of 593 MW (795,000 hp), of which six are reversible pump-generator turbine units with a combined generating capacity of 314 MW. The pumps are used to move water from Lake Roosevelt into Banks Lake, from which it can be either sent south into the Columbia Basin Irrigation system or returned to Lake Roosevelt by the reversible generating pump-turbines to create additional electricity for the grid. Feeder Canal, North and Dry Falls Dams, Banks Lake Banks Lake (1951) is an artificial impoundment in the Upper Grand Coulee. It is long and wide. The coulee has nearly vertical rock walls up to high. North Dam, near the town of Grand Coulee, has a maximum height of and a crest length of . Dry Falls, or South Dam, near Coulee City, has a maximum height of and a crest length of . The crest elevation of both dams is . Project water enters Banks Lake through the Feeder Canal from the Pumping-Generating Plant. The outlet for Banks Lake is the Main Canal near Coulee City. It is near the east abutment of Dry Falls Dam. Banks Lake serves as an equalizing reservoir for storage of water for irrigation and can be used for power generation. Feeder Canal (1951) links North Dam at the northern end of Banks Lake with the siphon outlets for the Grand Coulee Pumping-Generating Plant's discharge lines. It is long, running in an open concrete-lined canal and a twin-barrel concrete cut-and-cover conduit. Main Canal (1951) is , including of lake sections. Bacon Tunnel and Siphon (1950) is a long sealed siphon under the eastern extension of the Dry Falls draw. 
Billy Clapp Lake (Pinto Dam – zoned earth & rockfill) (1951) aka (Long Lake Dam) is at the south end of Long Lake Coulee. The reservoir is long and wide. Potholes Reservoir Irrigation When it was built, Grand Coulee Dam was the largest dam in the world, but it was only part of the irrigation project. Additional dams were built at the north and south ends of Grand Coulee, the dry canyon south of Grand Coulee Dam, allowing the coulee to be filled with water pumped up from the Columbia River. The resulting reservoir, called Banks Lake, is about long. Banks Lake serves as the CBP's initial storage reservoir. Additional canals, siphons, and reservoirs were built south of Bank Lake, reaching over . Water is lifted from Lake Roosevelt to feed the massive network. The total amount of the Columbia flow that is diverted into the CBP at Grand Coulee varies a little from year to year, and is currently about 3.0 million acre-feet. This is about 3.8 percent of the Columbia's average flow as measured at the Grand Coulee dam. This amount is larger than the combined annual flows of the nearby Yakima, Wenatchee, and Okanogan rivers. There were plans to double the area of irrigated land, according to tour guides at the dam, over the next several decades. However, the Bureau of Reclamation website states that no further development is anticipated, with irrigated out of the original planned. Interest in completing the Columbia Basin Project's has grown in the late 20th and early 21st centuries. One reason for the renewed interest is the substantial depletion of the Odessa aquifer. Agricultural operations within the CBP's boundaries but outside the developed portion have for decades used groundwater pumped from the Odessa aquifer to irrigate crops. Hydroelectric power Hydroelectricity was not the primary goal of the project, but during World War II the demand for electricity in the region boomed. The Hanford nuclear reservation was built just south of the project and aluminum smelting plants flocked to the Columbia Basin. A new power house was built at the Grand Coulee Dam, starting in the late sixties, that tripled the generating capacity. Part of the dam had to be blown up and re-built to make way for the new generators. Electricity is now transmitted to Canada and as far south as San Diego. Environmental impact One environmental impact has been the reduction in native fish stocks above the dams. The majority of fish in the Columbia basin are migratory fish like salmon, sturgeon and steelhead. These migratory fish are often harmed or unable to pass through the narrow passages and turbines at dams. In addition to the physical barriers the dams pose, the slowing speed and altered course of the river raises temperatures, alters oxygen content, and changes river bed conditions. These altered conditions can stress and potentially kill both migratory and local non-migratory organisms in the river. The decimation of these migratory fish stocks above Grand Coulee Dam would not allow the former fishing lifestyle of Native Americans of the area, who once depended on the salmon for a way of life. The environmental impacts of the Columbia Basin Project have made it a contentious and often politicized issue. A common argument for not implementing environmental safeguards at dam sites is that post-construction modifications would likely have to be significant. 
Tour guides at the Grand Coulee dam site, for example, indicate that a "fish ladder might have to be long to get the fish up the needed, and many fish would die before reaching the upper end" thus no fish ladders were built. Advocates of remedial measures point out that such steps would still be better than the status quo, which has led to marked die-offs and the likely extinction of several types of salmon. There are a number of issues regarding the runoff of irrigation water. The project region receives about 6 to of annual rainfall, while the application of irrigation water amounts to an equivalent 40 to . The original plans did not sufficiently address the inevitable seepage and runoff. In some cases the results are beneficial. For example, numerous new lakes provide recreation opportunities and habitat for fish and game. In other cases agricultural chemicals in the runoff cause pollution. Economic benefits and costs The irrigation water provided by this project greatly benefits the agricultural production of the area. North Central Washington is one of the largest and most productive tree fruit producing areas on the planet. Without Coulee Dam and the greater Columbia Basin Project, much of North Central Washington State would be too arid for cultivation. According to the federal Bureau of Reclamation the yearly value of the Columbia Basin Project is $630 million in irrigated crops, $950 million in power production, $20 million in flood damage prevention, and $50 million in recreation. The project itself involves costs that are difficult to determine. The farms that receive irrigation water must pay for it, but due to insufficient data from the Bureau of Reclamation, it is not possible to compare the total cost paid by the Bureau to the payments received. Nevertheless, the farm payments account for only a small fraction of the total cost to the government, resulting in the project's agricultural corporations receiving a large water subsidy from the government. Critics describe the CBP as a classical example of federal money being used to subsidize a relatively small group of farmers in the American West in places where it would never be economically viable under other circumstances. See also Tributaries of the Columbia River Cities on the Columbia River Hydroelectric dams on the Columbia River Columbia Basin Initiative References External links University of Idaho Libraries Digital Collections- Columbia Basin Project Photographs of the construction of the Columbia Basin Project, with a special emphasis on the construction of Grand Coulee Dam. History of the CBP from the Northwest Power and Conservation Council Official explanation of Salmon Recovery and Salmon Death-minimizing activities required by the Endangered Species Act Historic American Engineering Record (HAER) documentation, filed under Grand Coulee, Grant County, WA: Columbia River Historic American Engineering Record in Washington (state) Irrigation projects Irrigation in the United States United States Bureau of Reclamation Water supply infrastructure in Washington (state) Moses Lake, Washington
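As a rough cross-check of the diversion figures quoted earlier (about 3.0 million acre-feet per year, stated to be about 3.8 percent of the Columbia's average flow at Grand Coulee), the sketch below back-computes the implied average flow of the river. The unit conversions are standard; the result is only as good as the two quoted numbers and is meant for scale only.

```python
diverted_af = 3.0e6        # acre-feet per year diverted into the project
fraction = 0.038           # stated share of the Columbia's average flow at Grand Coulee

implied_total_af = diverted_af / fraction
print(f"Implied average annual flow at Grand Coulee: {implied_total_af / 1e6:.0f} million acre-feet")

# Converting to an average discharge (1 acre-foot = 1233.5 m^3, ~3.156e7 s per year):
m3_per_af = 1233.5
seconds_per_year = 365.25 * 24 * 3600
mean_discharge_m3s = implied_total_af * m3_per_af / seconds_per_year
print(f"Roughly {mean_discharge_m3s:.0f} m^3/s of average discharge")
```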
Columbia Basin Project
[ "Engineering" ]
2,625
[ "Irrigation projects" ]
857,710
https://en.wikipedia.org/wiki/Time%E2%80%93frequency%20representation
A time–frequency representation (TFR) is a view of a signal (taken to be a function of time) represented over both time and frequency. Time–frequency analysis means analysis into the time–frequency domain provided by a TFR. This is achieved by using a formulation often called "Time–Frequency Distribution", abbreviated as TFD. TFRs are often complex-valued fields over time and frequency, where the modulus of the field represents either amplitude or "energy density" (the concentration of the root mean square over time and frequency), and the argument of the field represents phase. Background and motivation A signal, as a function of time, may be considered as a representation with perfect time resolution. In contrast, the magnitude of the Fourier transform (FT) of the signal may be considered as a representation with perfect spectral resolution but with no time information because the magnitude of the FT conveys frequency content but it fails to convey when, in time, different events occur in the signal. TFRs provide a bridge between these two representations in that they provide some temporal information and some spectral information simultaneously. Thus, TFRs are useful for the representation and analysis of signals containing multiple time-varying frequencies. Formulation of TFRs and TFDs One form of TFR (or TFD) can be formulated by the multiplicative comparison of a signal with itself, expanded in different directions about each point in time. Such representations and formulations are known as quadratic or "bilinear" TFRs or TFDs (QTFRs or QTFDs) because the representation is quadratic in the signal (see Bilinear time–frequency distribution). This formulation was first described by Eugene Wigner in 1932 in the context of quantum mechanics and, later, reformulated as a general TFR by Ville in 1948 to form what is now known as the Wigner–Ville distribution, as it was shown in that Wigner's formula needed to use the analytic signal defined in Ville's paper to be useful as a representation and for a practical analysis. Today, QTFRs include the spectrogram (squared magnitude of short-time Fourier transform), the scaleogram (squared magnitude of Wavelet transform) and the smoothed pseudo-Wigner distribution. Although quadratic TFRs offer perfect temporal and spectral resolutions simultaneously, the quadratic nature of the transforms creates cross-terms, also called "interferences". The cross-terms caused by the bilinear structure of TFDs and TFRs may be useful in some applications such as classification as the cross-terms provide extra detail for the recognition algorithm. However, in some other applications, these cross-terms may plague certain quadratic TFRs and they would need to be reduced. One way to do this is obtained by comparing the signal with a different function. Such resulting representations are known as linear TFRs because the representation is linear in the signal. An example of such a representation is the windowed Fourier transform (also known as the short-time Fourier transform) which localises the signal by modulating it with a window function, before performing the Fourier transform to obtain the frequency content of the signal in the region of the window. Wavelet transforms Wavelet transforms, in particular the continuous wavelet transform, expand the signal in terms of wavelet functions which are localised in both time and frequency. Thus the wavelet transform of a signal may be represented in terms of both time and frequency. 
Continuous wavelet transform analysis is very useful for identifying non-stationary signals in time series, such as those related to climate or landslides. The notions of time, frequency, and amplitude used to generate a TFR from a wavelet transform were originally developed intuitively. In 1992, a quantitative derivation of these relationships was published, based upon a stationary phase approximation. Linear canonical transformation Linear canonical transformations are the linear transforms of the time–frequency representation that preserve the symplectic form. These include and generalize the Fourier transform, fractional Fourier transform, and others, thus providing a unified view of these transforms in terms of their action on the time–frequency domain. See also Newland transform Reassignment method Time–frequency analysis for music signals References External links DiscreteTFDs — software for computing time–frequency distributions TFTB — Time–Frequency ToolBox Time stretched short time Fourier transform for time-frequency analysis of ultra wideband signals representation Signal estimation
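As a concrete illustration of the windowed (short-time) Fourier transform described above, the following self-contained NumPy sketch computes the squared magnitude of an STFT (a basic spectrogram) for a test chirp. The window length, hop size, and test signal are arbitrary choices for the example; a longer window sharpens frequency resolution at the cost of time resolution.

```python
import numpy as np

fs = 1000.0                                    # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (50 * t + 50 * t**2))   # chirp whose frequency sweeps 50 -> 250 Hz

def stft_spectrogram(signal, n_win, hop):
    """Squared magnitude of a windowed Fourier transform (a basic spectrogram)."""
    window = np.hanning(n_win)
    frames = []
    for start in range(0, len(signal) - n_win + 1, hop):
        segment = signal[start:start + n_win] * window      # localise the signal in time
        frames.append(np.abs(np.fft.rfft(segment)) ** 2)    # spectral content of that slice
    return np.array(frames)                                 # shape: (time frames, frequency bins)

n_win, hop = 256, 64
S = stft_spectrogram(x, n_win, hop)
print(S.shape)                                 # (28, 129): 28 time frames, 129 frequency bins
# Peak frequency of the first and last frames; it rises as the chirp sweeps upward.
print(S[0].argmax() * fs / n_win, S[-1].argmax() * fs / n_win)
```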
Time–frequency representation
[ "Physics" ]
907
[ "Frequency-domain analysis", "Spectrum (physical sciences)", "Time–frequency analysis" ]
857,780
https://en.wikipedia.org/wiki/Entropic%20uncertainty
In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is stronger than the usual statement of the uncertainty principle in terms of the product of standard deviations. In 1957, Hirschman considered a function f and its Fourier transform g such that where the "≈" indicates convergence in 2, and normalized so that (by Plancherel's theorem), He showed that for any such functions the sum of the Shannon entropies is non-negative, A tighter bound, was conjectured by Hirschman and Everett, proven in 1975 by W. Beckner and in the same year interpreted as a generalized quantum mechanical uncertainty principle by Białynicki-Birula and Mycielski. The equality holds in the case of Gaussian distributions. Note, however, that the above entropic uncertainty function is distinctly different from the quantum Von Neumann entropy represented in phase space. Sketch of proof The proof of this tight inequality depends on the so-called (q, p)-norm of the Fourier transformation. (Establishing this norm is the most difficult part of the proof.) From this norm, one is able to establish a lower bound on the sum of the (differential) Rényi entropies, , where , which generalize the Shannon entropies. For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited. Babenko–Beckner inequality The (q, p)-norm of the Fourier transform is defined to be where   and In 1961, Babenko found this norm for even integer values of q. Finally, in 1975, using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all q ≥ 2 is Thus we have the Babenko–Beckner inequality that Rényi entropy bound From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived. Let so that and , we have Squaring both sides and taking the logarithm, we get We can rewrite the condition on as Assume , then we multiply both sides by the negative to get Rearranging terms yields an inequality in terms of the sum of Rényi entropies, Right-hand side Shannon entropy bound Taking the limit of this last inequality as and the substitutions yields the less general Shannon entropy inequality, valid for any base of logarithm, as long as we choose an appropriate unit of information, bit, nat, etc. The constant will be different, though, for a different normalization of the Fourier transform, (such as is usually used in physics, with normalizations chosen so that ħ=1 ), i.e., In this case, the dilation of the Fourier transform absolute squared by a factor of 2 simply adds log(2) to its entropy. Entropy versus variance bounds The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy. In fact, for any probability density function on the real line, Shannon's entropy inequality specifies: where H is the Shannon entropy and V is the variance, an inequality that is saturated only in the case of a normal distribution. 
Moreover, the Fourier transform of a Gaussian probability amplitude function is also Gaussian—and the absolute squares of both of these are Gaussian, too. This can then be used to derive the usual Robertson variance uncertainty inequality from the above entropic inequality, enabling the latter to be tighter than the former. That is (for ħ=1), exponentiating the Hirschman inequality and using Shannon's expression above, Hirschman explained that entropy—his version of entropy was the negative of Shannon's—is a "measure of the concentration of [a probability distribution] in a set of small measure." Thus a low or large negative Shannon entropy means that a considerable mass of the probability distribution is confined to a set of small measure. Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are. This is not the case with the variance: variance measures the concentration of mass about the mean of the distribution, and a low variance means that a considerable mass of the probability distribution is concentrated in a contiguous interval of small measure. To formalize this distinction, we say that two probability density functions and are equimeasurable if where is the Lebesgue measure. Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy, of any order. The same is not true of variance, however. Any probability density function has a radially decreasing equimeasurable "rearrangement" whose variance is less (up to translation) than any other rearrangement of the function; and there exist rearrangements of arbitrarily high variance, (all having the same entropy.) See also Inequalities in information theory Logarithmic Schrödinger equation Uncertainty principle Riesz–Thorin theorem Fourier transform References Further reading Jizba, P.; Ma, Y.; Hayes, A.; Dunningham, J.A. (2016). "One-parameter class of uncertainty relations based on entropy power". Phys. Rev. E 93 (6): 060104(R). doi:10.1103/PhysRevE.93.060104. arXiv:math/0605510v1 Quantum mechanical entropy Information theory Physical quantities Inequalities
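A small numerical check of the statements above: for a Gaussian and its Fourier transform, taken here with the e^{-2πixy} kernel (the convention for which the tight bound is log(e/2) ≈ 0.307 in nats), the sum of the two differential Shannon entropies saturates the Hirschman bound. The width parameter and grid are arbitrary choices made for the sketch; both probability densities are written down analytically and the entropies are integrated numerically.

```python
import numpy as np

def differential_entropy(p, x):
    """Shannon differential entropy -integral of p ln p dx (in nats), by the trapezoid rule."""
    integrand = np.zeros_like(p)
    mask = p > 0
    integrand[mask] = p[mask] * np.log(p[mask])
    return -np.trapz(integrand, x)

a = 0.7                                        # any positive width parameter works
x = np.linspace(-30, 30, 200001)

# |f|^2 and |g|^2 for the Gaussian pair f(x) = (2a/pi)**(1/4) * exp(-a x^2) and its
# Fourier transform under the exp(-2*pi*i*x*y) convention used by Hirschman.
pf = np.sqrt(2 * a / np.pi) * np.exp(-2 * a * x**2)
pg = np.sqrt(2 * np.pi / a) * np.exp(-2 * np.pi**2 * x**2 / a)

total = differential_entropy(pf, x) + differential_entropy(pg, x)
print(total, np.log(np.e / 2))                 # both ~0.3069: the bound is saturated
```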
Entropic uncertainty
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,253
[ "Physical phenomena", "Mathematical theorems", "Telecommunications engineering", "Physical quantities", "Applied mathematics", "Quantity", "Binary relations", "Computer science", "Entropy", "Information theory", "Mathematical relations", "Inequalities (mathematics)", "Quantum mechanical entr...
859,234
https://en.wikipedia.org/wiki/Mechanical%20energy
In physical sciences, mechanical energy is the sum of potential energy and kinetic energy. The principle of conservation of mechanical energy states that if an isolated system is subject only to conservative forces, then the mechanical energy is constant. If an object moves in the opposite direction of a conservative net force, the potential energy will increase; and if the speed (not the velocity) of the object changes, the kinetic energy of the object also changes. In all real systems, however, nonconservative forces, such as frictional forces, will be present, but if they are of negligible magnitude, the mechanical energy changes little and its conservation is a useful approximation. In elastic collisions, the kinetic energy is conserved, but in inelastic collisions some mechanical energy may be converted into thermal energy. The equivalence between lost mechanical energy and an increase in temperature was discovered by James Prescott Joule. Many devices are used to convert mechanical energy to or from other forms of energy, e.g. an electric motor converts electrical energy to mechanical energy, an electric generator converts mechanical energy into electrical energy and a heat engine converts heat to mechanical energy. General Energy is a scalar quantity, and the mechanical energy of a system is the sum of the potential energy (which is measured by the position of the parts of the system) and the kinetic energy (which is also called the energy of motion): The potential energy, U, depends on the position of an object subjected to gravity or some other conservative force. The gravitational potential energy of an object is equal to the weight W of the object multiplied by the height h of the object's center of gravity relative to an arbitrary datum: The potential energy of an object can be defined as the object's ability to do work and is increased as the object is moved in the opposite direction of the direction of the force. If F represents the conservative force and x the position, the potential energy of the force between the two positions x1 and x2 is defined as the negative integral of F from x1 to x2: The kinetic energy, K, depends on the speed of an object and is the ability of a moving object to do work on other objects when it collides with them. It is defined as one half the product of the object's mass with the square of its speed, and the total kinetic energy of a system of objects is the sum of the kinetic energies of the respective objects: The principle of conservation of mechanical energy states that if a body or system is subjected only to conservative forces, the mechanical energy of that body or system remains constant. The difference between a conservative and a non-conservative force is that when a conservative force moves an object from one point to another, the work done by the conservative force is independent of the path. On the contrary, when a non-conservative force acts upon an object, the work done by the non-conservative force is dependent of the path. Conservation of mechanical energy According to the principle of conservation of mechanical energy, the mechanical energy of an isolated system remains constant in time, as long as the system is free of friction and other non-conservative forces. In any real situation, frictional forces and other non-conservative forces are present, but in many cases their effects on the system are so small that the principle of conservation of mechanical energy can be used as a fair approximation. 
Though energy cannot be created or destroyed, it can be converted to another form of energy. Swinging pendulum In a mechanical system like a swinging pendulum subjected to the conservative gravitational force where frictional forces like air drag and friction at the pivot are negligible, energy passes back and forth between kinetic and potential energy but never leaves the system. The pendulum reaches greatest kinetic energy and least potential energy when in the vertical position, because it will have the greatest speed and be nearest the Earth at this point. On the other hand, it will have its least kinetic energy and greatest potential energy at the extreme positions of its swing, because it has zero speed and is farthest from Earth at these points. However, when taking the frictional forces into account, the system loses mechanical energy with each swing because of the negative work done on the pendulum by these non-conservative forces. Irreversibilities That the loss of mechanical energy in a system always resulted in an increase of the system's temperature has been known for a long time, but it was the amateur physicist James Prescott Joule who first experimentally demonstrated how a certain amount of work done against friction resulted in a definite quantity of heat which should be conceived as the random motions of the particles that comprise matter. This equivalence between mechanical energy and heat is especially important when considering colliding objects. In an elastic collision, mechanical energy is conserved – the sum of the mechanical energies of the colliding objects is the same before and after the collision. After an inelastic collision, however, the mechanical energy of the system will have changed. Usually, the mechanical energy before the collision is greater than the mechanical energy after the collision. In inelastic collisions, some of the mechanical energy of the colliding objects is transformed into kinetic energy of the constituent particles. This increase in kinetic energy of the constituent particles is perceived as an increase in temperature. The collision can be described by saying some of the mechanical energy of the colliding objects has been converted into an equal amount of heat. Thus, the total energy of the system remains unchanged though the mechanical energy of the system has reduced. Satellite A satellite of mass at a distance from the centre of Earth possesses both kinetic energy, , (by virtue of its motion) and gravitational potential energy, , (by virtue of its position within the Earth's gravitational field; Earth's mass is ). Hence, mechanical energy of the satellite-Earth system is given by If the satellite is in circular orbit, the energy conservation equation can be further simplified into since in circular motion, Newton's 2nd Law of motion can be taken to be Conversion Today, many technological devices convert mechanical energy into other forms of energy or vice versa. These devices can be placed in these categories: An electric motor converts electrical energy into mechanical energy. A generator converts mechanical energy into electrical energy. A hydroelectric powerplant converts the mechanical energy of water in a storage dam into electrical energy. An internal combustion engine is a heat engine that obtains mechanical energy from chemical energy by burning fuel. From this mechanical energy, the internal combustion engine often generates electricity. A steam engine converts the heat energy of steam into mechanical energy. 
A turbine converts the kinetic energy of a stream of gas or liquid into mechanical energy. Distinction from other types The classification of energy into different types often follows the boundaries of the fields of study in the natural sciences. Chemical energy is the kind of potential energy "stored" in chemical bonds and is studied in chemistry. Nuclear energy is energy stored in interactions between the particles in the atomic nucleus and is studied in nuclear physics. Electromagnetic energy is in the form of electric charges, magnetic fields, and photons. It is studied in electromagnetism. Various forms of energy are studied in quantum mechanics, e.g. the energy levels of electrons in an atom. References Notes Citations Bibliography Energy (physics) Mechanical quantities Articles containing video clips
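Returning to the definitions above, the following sketch evaluates U = mgh and K = ½mv² for a body released from rest and falling freely under gravity alone (no drag), showing that the mechanical energy U + K stays constant; the mass, release height, and sample times are arbitrary illustrative values.

```python
g = 9.81      # m/s^2, gravitational acceleration
m = 2.0       # kg, mass of the falling body
h0 = 20.0     # m, release height above the datum

# Sample the fall at several instants and show that U + K stays constant.
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    v = g * t                      # speed after time t (released from rest)
    h = h0 - 0.5 * g * t**2        # height above the datum
    U = m * g * h                  # gravitational potential energy
    K = 0.5 * m * v**2             # kinetic energy
    print(f"t={t:3.1f} s  U={U:7.2f} J  K={K:7.2f} J  U+K={U + K:7.2f} J")
# Every row ends with U + K = 392.40 J = m*g*h0, the initial mechanical energy.
```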
Mechanical energy
[ "Physics", "Mathematics" ]
1,467
[ "Mechanical quantities", "Physical quantities", "Quantity", "Energy (physics)", "Mechanics", "Wikipedia categories named after physical quantities" ]
859,285
https://en.wikipedia.org/wiki/Displacement%20%28fluid%29
In fluid mechanics, displacement occurs when an object is largely immersed in a fluid, pushing it out of the way and taking its place. The volume of the fluid displaced can then be measured, and from this, the volume of the immersed object can be deduced: the volume of the immersed object will be exactly equal to the volume of the displaced fluid. An object immersed in a liquid displaces an amount of fluid equal to the object's volume. Thus, buoyancy is expressed through Archimedes' principle, which states that the weight of the object is reduced by its volume multiplied by the density of the fluid. If the weight of the object is less than this displaced quantity, the object floats; if more, it sinks. The amount of fluid displaced is directly related (via Archimedes' principle) to its volume. In the case of an object that sinks (is totally submerged), the volume of the object is displaced. In the case of an object that floats, the weight of fluid displaced will be equal to the weight of the displacing object. Applications of displacement This method can be used to measure the volume of a solid object, even if its form is not regular. Several methods of such measuring exist. In one case the increase of liquid level is registered as the object is immersed in the liquid (usually water). In the second case, the object is immersed into a vessel full of liquid (called an overflow can), causing it to overflow. Then the spilled liquid is collected and its volume measured. In the third case, the object is suspended under the surface of the liquid and the increase of weight of the vessel is measured. The increase in weight is equal to the amount of liquid displaced by the object, which is the same as the volume of the suspended object times the density of the liquid. The concept of Archimedes' principle is that an object immersed in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the object. The weight of the displaced fluid can be found mathematically. The mass of the displaced fluid can be expressed in terms of the density and its volume, . The fluid displaced has a weight , where is acceleration due to gravity. Therefore, the weight of the displaced fluid can be expressed as . The weight of an object or substance can be measured by floating a sufficiently buoyant receptacle in the cylinder and noting the water level. After placing the object or substance in the receptacle, the difference in weight of the water level volumes will equal the weight of the object. See also Displacement (ship) References Physical quantities Fluid mechanics Volume
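A minimal sketch of the bookkeeping described above: an object denser than the fluid sinks and displaces its own volume, while a less dense object floats and displaces a weight of fluid equal to its own weight. The function name and the example densities are assumptions chosen for illustration.

```python
RHO_WATER = 1000.0   # kg/m^3, density of the fluid
G = 9.81             # m/s^2, gravitational acceleration

def displacement(mass, volume, rho_fluid=RHO_WATER):
    """Volume and weight of fluid displaced by an object placed in a fluid."""
    rho_object = mass / volume
    if rho_object > rho_fluid:
        displaced_volume = volume            # fully submerged: displaces its own volume
        floats = False
    else:
        displaced_volume = mass / rho_fluid  # floats: displaced fluid weighs as much as the object
        floats = True
    displaced_weight = rho_fluid * displaced_volume * G   # Archimedes: equal to the buoyant force
    return floats, displaced_volume, displaced_weight

print(displacement(mass=7800.0, volume=1.0))   # steel-like block: sinks, displaces 1 m^3
print(displacement(mass=500.0, volume=1.0))    # wooden block: floats, displaces 0.5 m^3
```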
Displacement (fluid)
[ "Physics", "Mathematics", "Engineering" ]
542
[ "Scalar physical quantities", "Physical phenomena", "Physical quantities", "Quantity", "Fluid mechanics", "Size", "Extensive quantities", "Civil engineering", "Volume", "Wikipedia categories named after physical quantities", "Physical properties" ]
859,590
https://en.wikipedia.org/wiki/CORDIC
CORDIC (coordinate rotation digital computer), Volder's algorithm, Digit-by-digit method, Circular CORDIC (Jack E. Volder), Linear CORDIC, Hyperbolic CORDIC (John Stephen Walther), and Generalized Hyperbolic CORDIC (GH CORDIC) (Yuanyong Luo et al.), is a simple and efficient algorithm to calculate trigonometric functions, hyperbolic functions, square roots, multiplications, divisions, and exponentials and logarithms with arbitrary base, typically converging with one digit (or bit) per iteration. CORDIC is therefore also an example of digit-by-digit algorithms. CORDIC and closely related methods known as pseudo-multiplication and pseudo-division or factor combining are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and field-programmable gate arrays or FPGAs), as the only operations they require are additions, subtractions, bitshift and lookup tables. As such, they all belong to the class of shift-and-add algorithms. In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks hardware multiply for cost or space reasons. History Similar mathematical techniques were published by Henry Briggs as early as 1624 and Robert Flower in 1771, but CORDIC is better optimized for low-complexity finite-state CPUs. CORDIC was conceived in 1956 by Jack E. Volder at the aeroelectronics department of Convair out of necessity to replace the analog resolver in the B-58 bomber's navigation computer with a more accurate and faster real-time digital solution. Therefore, CORDIC is sometimes referred to as a digital resolver. In his research Volder was inspired by a formula in the 1946 edition of the CRC Handbook of Chemistry and Physics: where is such that , and . His research led to an internal technical report proposing the CORDIC algorithm to solve sine and cosine functions and a prototypical computer implementing it. The report also discussed the possibility to compute hyperbolic coordinate rotation, logarithms and exponential functions with modified CORDIC algorithms. Utilizing CORDIC for multiplication and division was also conceived at this time. Based on the CORDIC principle, Dan H. Daggett, a colleague of Volder at Convair, developed conversion algorithms between binary and binary-coded decimal (BCD). In 1958, Convair finally started to build a demonstration system to solve radar fix–taking problems named CORDIC I, completed in 1960 without Volder, who had left the company already. More universal CORDIC II models A (stationary) and B (airborne) were built and tested by Daggett and Harry Schuss in 1962. Volder's CORDIC algorithm was first described in public in 1959, which caused it to be incorporated into navigation computers by companies including Martin-Orlando, Computer Control, Litton, Kearfott, Lear-Siegler, Sperry, Raytheon, and Collins Radio. Volder teamed up with Malcolm McMillan to build Athena, a fixed-point desktop calculator utilizing his binary CORDIC algorithm. The design was introduced to Hewlett-Packard in June 1965, but not accepted. Still, McMillan introduced David S. Cochran (HP) to Volder's algorithm and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM) had proposed as pseudo-multiplication and pseudo-division in 1961. Meggitt's method also suggested the use of base 10 rather than base 2, as used by Volder's CORDIC so far. 
These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside of Hewlett-Packard in 1966, built by and conceptually derived from Thomas E. Osborne's prototypical Green Machine, a four-function, floating-point desktop calculator he had completed in DTL logic in December 1964. This project resulted in the public demonstration of Hewlett-Packard's first desktop calculator with scientific functions, the HP 9100A in March 1968, with series production starting later that year. When Wang Laboratories found that the HP 9100A used an approach similar to the factor combining method in their earlier LOCI-1 (September 1964) and LOCI-2 (January 1965) Logarithmic Computing Instrument desktop calculators, they unsuccessfully accused Hewlett-Packard of infringement of one of An Wang's patents in 1968. John Stephen Walther at Hewlett-Packard generalized the algorithm into the Unified CORDIC algorithm in 1971, allowing it to calculate hyperbolic functions, natural exponentials, natural logarithms, multiplications, divisions, and square roots. The CORDIC subroutines for trigonometric and hyperbolic functions could share most of their code. This development resulted in the first scientific handheld calculator, the HP-35 in 1972. Based on hyperbolic CORDIC, Yuanyong Luo et al. further proposed a Generalized Hyperbolic CORDIC (GH CORDIC) to directly compute logarithms and exponentials with an arbitrary fixed base in 2019. Theoretically, Hyperbolic CORDIC is a special case of GH CORDIC. Originally, CORDIC was implemented only using the binary numeral system and despite Meggitt suggesting the use of the decimal system for his pseudo-multiplication approach, decimal CORDIC continued to remain mostly unheard of for several more years, so that Hermann Schmid and Anthony Bogacki still suggested it as a novelty as late as 1973 and it was found only later that Hewlett-Packard had implemented it in 1966 already. Decimal CORDIC became widely used in pocket calculators, most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost – and thus low chip gate count – is much more important than speed. CORDIC has been implemented in the ARM-based STM32G4, Intel 8087, 80287, 80387 up to the 80486 coprocessor series as well as in the Motorola 68881 and 68882 for some kinds of floating-point instructions, mainly as a way to reduce the gate counts (and complexity) of the FPU sub-system. Applications CORDIC uses simple shift-add operations for several computing tasks such as the calculation of trigonometric, hyperbolic and logarithmic functions, real and complex multiplications, division, square-root calculation, solution of linear systems, eigenvalue estimation, singular value decomposition, QR factorization and many others. As a consequence, CORDIC has been used for applications in diverse areas such as signal and image processing, communication systems, robotics and 3D graphics apart from general scientific and technical computation. Hardware The algorithm was used in the navigational system of the Apollo program's Lunar Roving Vehicle to compute bearing and range, or distance from the Lunar module. CORDIC was used to implement the Intel 8087 math coprocessor in 1980, avoiding the need to implement hardware multiplication. 
CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA or ASIC). In fact, CORDIC is a standard drop-in IP in FPGA development applications such as Vivado for Xilinx, while a power series implementation is not due to the specificity of such an IP, i.e. CORDIC can compute many different functions (general purpose) while a hardware multiplier configured to execute power series implementations can only compute the function it was designed for. On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC. In recent years, the CORDIC algorithm has been used extensively for various biomedical applications, especially in FPGA implementations. The STM32G4 series and certain STM32H7 series of MCUs implement a CORDIC module to accelerate computations in various mixed signal applications such as graphics for human-machine interface and field oriented control of motors. While not as fast as a power series approximation, CORDIC is indeed faster than interpolating table based implementations such as the ones provided by the ARM CMSIS and C standard libraries. Though the results may be slightly less accurate as the CORDIC modules provided only achieve 20 bits of precision in the result. For example, most of the performance difference compared to the ARM implementation is due to the overhead of the interpolation algorithm, which achieves full floating point precision (24 bits) and can likely achieve relative error to that precision. Another benefit is that the CORDIC module is a coprocessor and can be run in parallel with other CPU tasks. The issue with using Taylor series is that while they do provide small absolute error, they do not exhibit well behaved relative error. Other means of polynomial approximation, such as minimax optimization, may be used to control both kinds of error. Software Many older systems with integer-only CPUs have implemented CORDIC to varying extents as part of their IEEE floating-point libraries. As most modern general-purpose CPUs have floating-point registers with common operations such as add, subtract, multiply, divide, sine, cosine, square root, log10, natural log, the need to implement CORDIC in them with software is nearly non-existent. Only microcontroller or special safety and time-constrained software applications would need to consider using CORDIC. Modes of operation Rotation mode CORDIC can be used to calculate a number of different functions. This explanation shows how to use CORDIC in rotation mode to calculate the sine and cosine of an angle, assuming that the desired angle is given in radians and represented in a fixed-point format. To determine the sine or cosine for an angle , the y or x coordinate of a point on the unit circle corresponding to the desired angle must be found. Using CORDIC, one would start with the vector : In the first iteration, this vector is rotated 45° counterclockwise to get the vector . Successive iterations rotate the vector in one or the other direction by size-decreasing steps, until the desired angle has been achieved. Each step angle is for . 
More formally, every iteration calculates a rotation, which is performed by multiplying the vector v_i with the rotation matrix R_i: v_{i+1} = R_i v_i. The rotation matrix is given by R_i = [[cos γ_i, −sin γ_i], [sin γ_i, cos γ_i]]. Using the trigonometric identity tan γ_i = sin γ_i / cos γ_i, the cosine factor can be taken out to give R_i = cos γ_i · [[1, −tan γ_i], [tan γ_i, 1]]. The expression for the rotated vector then becomes x_{i+1} = cos γ_i (x_i − y_i tan γ_i) and y_{i+1} = cos γ_i (y_i + x_i tan γ_i), where x_i and y_i are the components of v_i. Setting the angle for each iteration such that tan γ_i = ±2^(−i) still yields a series that converges to every possible output value. The multiplication with the tangent can therefore be replaced by a division by a power of two, which is efficiently done in digital computer hardware using a bit shift. The expression then becomes x_{i+1} = K_i (x_i − σ_i 2^(−i) y_i) and y_{i+1} = K_i (σ_i 2^(−i) x_i + y_i), where K_i = cos γ_i, and σ_i is used to determine the direction of the rotation: if the remaining angle is positive, then σ_i is +1, otherwise it is −1. The following trigonometric identity can be used to replace the cosine: cos γ_i = 1/√(1 + tan² γ_i), giving this multiplier for each iteration: K_i = 1/√(1 + 2^(−2i)). The factors K_i can then be taken out of the iterative process and applied all at once afterwards with a scaling factor K(n) = ∏_{i=0}^{n−1} K_i = ∏_{i=0}^{n−1} 1/√(1 + 2^(−2i)), which is calculated in advance and stored in a table or as a single constant, if the number of iterations is fixed. This correction could also be made in advance, by scaling v_0 and hence saving a multiplication. Additionally, it can be noted that K = lim_{n→∞} K(n) ≈ 0.607252935, to allow further reduction of the algorithm's complexity. Some applications may avoid correcting for K altogether, resulting in a processing gain A = 1/K ≈ 1.64676. After a sufficient number of iterations, the vector's angle will be close to the wanted angle β. For most ordinary purposes, 40 iterations (n = 40) are sufficient to obtain the correct result to the 10th decimal place. The only task left is to determine whether the rotation should be clockwise or counterclockwise at each iteration (choosing the value of σ_i). This is done by keeping track of how much the angle was rotated at each iteration and subtracting that from the wanted angle; then in order to get closer to the wanted angle β, if the remaining angle β_i is positive, the rotation is clockwise, otherwise it is negative and the rotation is counterclockwise: β_{i+1} = β_i − σ_i γ_i, with β_0 = β. The values of γ_i = arctan(2^(−i)) must also be precomputed and stored. For small angles it can be approximated with arctan(2^(−i)) ≈ 2^(−i) to reduce the table size. As can be seen in the illustration above, the sine of the angle β is the y coordinate of the final vector, while the x coordinate is the cosine value. Vectoring mode The rotation-mode algorithm described above can rotate any vector (not only a unit vector aligned along the x axis) by an angle between −90° and +90°. Decisions on the direction of the rotation depend on the remaining angle β_i being positive or negative. The vectoring-mode of operation requires a slight modification of the algorithm. It starts with a vector whose x coordinate is positive whereas the y coordinate is arbitrary. Successive rotations have the goal of rotating the vector to the x axis (and therefore reducing the y coordinate to zero). At each step, the value of y determines the direction of the rotation. The final value of the angle accumulator contains the total angle of rotation. The final value of x will be the magnitude of the original vector scaled by K. So, an obvious use of the vectoring mode is the transformation from rectangular to polar coordinates. Implementation In Java the Math class has a scalb(double x, int scale) method to perform such a shift, C has the ldexp function, and the x86 class of processors have the fscale floating point operation. Software example (Python)
from math import atan2, sqrt, sin, cos, radians

ITERS = 16
theta_table = [atan2(1, 2**i) for i in range(ITERS)]


def compute_K(n):
    """
    Compute K(n) for n = ITERS.
    This could also be stored as an explicit constant if ITERS above is fixed.
    """
    k = 1.0
    for i in range(n):
        k *= 1 / sqrt(1 + 2 ** (-2 * i))
    return k


def CORDIC(alpha, n):
    assert n <= ITERS
    K_n = compute_K(n)
    theta = 0.0
    x = 1.0
    y = 0.0
    P2i = 1  # This will be 2**(-i) in the loop below
    for arc_tangent in theta_table[:n]:
        sigma = +1 if theta < alpha else -1
        theta += sigma * arc_tangent
        x, y = x - sigma * y * P2i, sigma * P2i * x + y
        P2i /= 2
    return x * K_n, y * K_n


if __name__ == "__main__":
    # Print a table of computed sines and cosines, from -90° to +90°, in steps of 15°,
    # comparing against the available math routines.
    print("  x       sin(x)      diff. sine     cos(x)     diff. cosine ")
    for x in range(-90, 91, 15):
        cos_x, sin_x = CORDIC(radians(x), ITERS)
        print(
            f"{x:+05.1f}°  {sin_x:+.8f} ({sin_x-sin(radians(x)):+.8f})  {cos_x:+.8f} ({cos_x-cos(radians(x)):+.8f})"
        )
Output
$ python cordic.py
  x       sin(x)      diff. sine     cos(x)     diff. cosine 
-90.0°  -1.00000000 (+0.00000000)  -0.00001759 (-0.00001759)
-75.0°  -0.96592181 (+0.00000402)  +0.25883404 (+0.00001499)
-60.0°  -0.86601812 (+0.00000729)  +0.50001262 (+0.00001262)
-45.0°  -0.70711776 (-0.00001098)  +0.70709580 (-0.00001098)
-30.0°  -0.50001262 (-0.00001262)  +0.86601812 (-0.00000729)
-15.0°  -0.25883404 (-0.00001499)  +0.96592181 (-0.00000402)
+00.0°  +0.00001759 (+0.00001759)  +1.00000000 (-0.00000000)
+15.0°  +0.25883404 (+0.00001499)  +0.96592181 (-0.00000402)
+30.0°  +0.50001262 (+0.00001262)  +0.86601812 (-0.00000729)
+45.0°  +0.70709580 (-0.00001098)  +0.70711776 (+0.00001098)
+60.0°  +0.86601812 (-0.00000729)  +0.50001262 (+0.00001262)
+75.0°  +0.96592181 (-0.00000402)  +0.25883404 (+0.00001499)
+90.0°  +1.00000000 (-0.00000000)  -0.00001759 (-0.00001759)
Hardware example The number of logic gates for the implementation of a CORDIC is roughly comparable to the number required for a multiplier, as both require combinations of shifts and additions. The choice for a multiplier-based or CORDIC-based implementation will depend on the context. The multiplication of two complex numbers represented by their real and imaginary components (rectangular coordinates), for example, requires 4 multiplications, but could be realized by a single CORDIC operating on complex numbers represented by their polar coordinates, especially if the magnitude of the numbers is not relevant (multiplying a complex vector with a vector on the unit circle actually amounts to a rotation). CORDICs are often used in circuits for telecommunications such as digital down converters. Double iterations CORDIC In two of the publications by Vladimir Baykov, it was proposed to use the double iterations method for the implementation of the functions: arcsine, arccosine, natural logarithm, exponential function, as well as for the calculation of the hyperbolic functions. The double iterations method differs from the classical CORDIC method, where the iteration step value changes on every iteration, in that the iteration step value is repeated twice and changes only every other iteration. Hence the designation for the degree indicator for double iterations appeared: i = 0, 0, 1, 1, 2, 2, …, whereas with ordinary iterations: i = 0, 1, 2, …. The double iteration method guarantees the convergence of the method throughout the valid range of argument changes.
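As a companion to the rotation-mode listing above, the vectoring mode described earlier can be sketched in the same style. This is an illustrative sketch only: the function name CORDIC_vectoring is ours and is not part of the original program, and the angle table and scaling constant are rebuilt here so the snippet runs on its own. It pseudo-rotates an input vector onto the positive x axis, so the outputs are the magnitude and angle of the vector, i.e. a rectangular-to-polar conversion for angles between −90° and +90°.
from math import atan2, sqrt

ITERS = 16
theta_table = [atan2(1, 2**i) for i in range(ITERS)]


def compute_K(n):
    # Same scaling factor K(n) as in the rotation-mode example above.
    k = 1.0
    for i in range(n):
        k *= 1 / sqrt(1 + 2 ** (-2 * i))
    return k


def CORDIC_vectoring(x, y, n=ITERS):
    # Drive y towards zero; the accumulated angle is the angle of (x, y).
    assert x > 0, "vectoring mode assumes a positive x coordinate"
    theta = 0.0
    P2i = 1.0  # 2**(-i) in the loop below
    for arc_tangent in theta_table[:n]:
        sigma = -1 if y > 0 else +1   # rotate towards the x axis
        x, y = x - sigma * y * P2i, sigma * P2i * x + y
        theta -= sigma * arc_tangent  # total rotation undone so far
        P2i /= 2
    return x * compute_K(n), theta    # (magnitude, angle in radians)
For example, CORDIC_vectoring(3.0, 4.0) returns approximately (5.0, 0.9273), i.e. the magnitude 5 and the angle atan2(4, 3) of about 53.13°, matching the rectangular-to-polar use mentioned above.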
The generalization of the CORDIC convergence problems for the arbitrary positional number system with radix showed that for the functions sine, cosine, arctangent, it is enough to perform iterations for each value of i (i = 0 or 1 to n, where n is the number of digits), i.e. for each digit of the result. For the natural logarithm, exponential, hyperbolic sine, cosine and arctangent, iterations should be performed for each value . For the functions arcsine and arccosine, two iterations should be performed for each number digit, i.e. for each value of . For inverse hyperbolic sine and arcosine functions, the number of iterations will be for each , that is, for each result digit. Related algorithms CORDIC is part of the class of "shift-and-add" algorithms, as are the logarithm and exponential algorithms derived from Henry Briggs' work. Another shift-and-add algorithm which can be used for computing many elementary functions is the BKM algorithm, which is a generalization of the logarithm and exponential algorithms to the complex plane. For instance, BKM can be used to compute the sine and cosine of a real angle (in radians) by computing the exponential of , which is . The BKM algorithm is slightly more complex than CORDIC, but has the advantage that it does not need a scaling factor (K). See also Methods of computing square roots IEEE 754 Floating-point units Digital Circuits/CORDIC in Wikibooks References Further reading (NB. DIVIC stands for DIgital Variable Increments Computer. Some sources erroneously refer to this as by J. M. Parini.) () () () () () (about CORDIC in TI-58/TI-59) (x+268 pages) External links Soft CORDIC IP (verilog HDL code) CORDIC Bibliography Site BASIC Stamp, CORDIC math implementation CORDIC implementation in verilog CORDIC Vectoring with Arbitrary Target Value Python CORDIC implementation Simple C code for fixed-point CORDIC Tutorial and MATLAB Implementation – Using CORDIC to Estimate Phase of a Complex Number (archive.org) Descriptions of hardware CORDICs in Arx with testbenches in C++ and VHDL An Introduction to the CORDIC algorithm Implementation of the CORDIC Algorithm in a Digital Down-Converter Implementation of the CORDIC Algorithm: fixed point C code for trigonometric and hyperbolic functions, C code for test and performance verification Digit-by-digit algorithms Shift-and-add algorithms Root-finding algorithms Computer arithmetic Numerical analysis Trigonometry
CORDIC
[ "Mathematics" ]
4,752
[ "Computational mathematics", "Computer arithmetic", "Arithmetic", "Mathematical relations", "Numerical analysis", "Approximations" ]
859,686
https://en.wikipedia.org/wiki/Supercommutative%20algebra
In mathematics, a supercommutative (associative) algebra is a superalgebra (i.e. a Z2-graded algebra) such that for any two homogeneous elements x, y we have yx = (−1)^(|x||y|) xy, where |x| denotes the grade of the element and is 0 or 1 (in Z) according to whether the grade is even or odd, respectively. Equivalently, it is a superalgebra where the supercommutator always vanishes. Algebraic structures which supercommute in the above sense are sometimes referred to as skew-commutative associative algebras to emphasize the anti-commutation, or, to emphasize the grading, graded-commutative or, if the supercommutativity is understood, simply commutative. Any commutative algebra is a supercommutative algebra if given the trivial gradation (i.e. all elements are even). Grassmann algebras (also known as exterior algebras) are the most common examples of nontrivial supercommutative algebras. The supercenter of any superalgebra is the set of elements that supercommute with all elements, and is a supercommutative algebra. The even subalgebra of a supercommutative algebra is always a commutative algebra. That is, even elements always commute. Odd elements, on the other hand, always anticommute. That is, xy + yx = 0 for odd x and y. In particular, the square of any odd element x vanishes whenever 2 is invertible: x² = 0. Thus a commutative superalgebra (with 2 invertible and nonzero degree one component) always contains nilpotent elements. A Z-graded anticommutative algebra with the property that x² = 0 for every element x of odd grade (irrespective of whether 2 is invertible) is called an alternating algebra. See also Graded-commutative ring Lie superalgebra References Algebras Super linear algebra
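To make the sign rule stated at the top of this article concrete, here is a minimal, self-contained Python sketch of a Grassmann (exterior) algebra on a few generators, the most common nontrivial example mentioned above. The helper names normalize and mul are ad hoc for this illustration and not from any standard library; an element is stored as a dictionary mapping a sorted tuple of generator indices to a coefficient.
from itertools import product


def normalize(indices):
    """Sort generator indices, tracking the sign of the permutation.
    Returns (sign, sorted tuple), or (0, ()) if an index repeats (e_i e_i = 0)."""
    idx = list(indices)
    sign = 1
    for i in range(len(idx)):          # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) != len(idx):
        return 0, ()
    return sign, tuple(idx)


def mul(a, b):
    """Multiply two elements given as {tuple_of_indices: coefficient} dicts."""
    out = {}
    for (ia, ca), (ib, cb) in product(a.items(), b.items()):
        sign, idx = normalize(ia + ib)
        if sign:
            out[idx] = out.get(idx, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}


e1, e2, e3 = {(1,): 1}, {(2,): 1}, {(3,): 1}

# Odd elements anticommute, and an odd element squares to zero:
assert mul(e1, e2) == {k: -v for k, v in mul(e2, e1).items()}
assert mul(e1, e1) == {}
# An even element, such as the grade-2 product e1*e2, commutes with an odd generator:
even = mul(e1, e2)
assert mul(even, e3) == mul(e3, even)
The three assertions check exactly the statements in the text: odd elements anticommute, an odd element squares to zero, and even elements commute.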
Supercommutative algebra
[ "Physics", "Mathematics" ]
413
[ "Mathematical structures", "Super linear algebra", "Algebras", "Algebraic structures", "Supersymmetry", "Symmetry" ]
859,798
https://en.wikipedia.org/wiki/Bottled%20gas
Bottled gas is a term used for substances which are gaseous at standard temperature and pressure (STP) and have been compressed and stored in carbon steel, stainless steel, aluminum, or composite containers known as gas cylinders. Gas state in cylinders There are four cases: either the substance remains a gas at standard temperature but increased pressure, the substance liquefies at standard temperature but increased pressure, the substance is dissolved in a solvent, or the substance is liquefied at reduced temperature and increased pressure. In the last case the bottle is constructed with an inner and outer shell separated by a vacuum (dewar flask) so that the low temperature can be maintained by evaporative cooling. Case I The substance remains a gas at standard temperature and increased pressure, its critical temperature being below standard temperature. Examples include: air argon fluorine helium hydrogen krypton nitrogen oxygen Case II The substance liquefies at standard temperature but increased pressure. Examples include: ammonia butane carbon dioxide (also packaged as a cryogenic gas, Case IV) chlorine nitrous oxide propane sulfur dioxide Case III The substance is dissolved at standard temperature in a solvent. Examples include: carbon dioxide in the form of a soft drink sulfur trioxide in the form of fuming sulfuric acid nitrogen dioxide in the form of red-fuming nitric acid hydrogen chloride in the form of muriatic acid Note: these four are most often found in containers other than metal bottles, and at low pressure, e.g. . acetylene Note: Acetylene cylinders contain an inert packing material, which may be agamassan, and are filled with a solvent such as acetone or dimethylformamide. The acetylene is pumped into the cylinder and it dissolves in the solvent. When the cylinder is opened the acetylene comes back out of solution, much like a carbonated beverage bubbles when opened. This is a workaround to acetylene's propensity to explode when pressurized above 200 kPa or liquified. Case IV The substance is liquefied at reduced temperature and increased pressure. These are also referred to as cryogenic gases. Examples include: liquid nitrogen (LN2) liquid hydrogen (LH2) liquid oxygen (LOX) carbon dioxide (also packaged as a liquefied gas, Case II) Note: cryogenic gases are typically equipped with some type of 'bleed' device to prevent overpressure from rupturing the bottle and to allow evaporative cooling to continue. Expansion and volume The general rule is that one unit volume of liquid will expand to approximately 800 unit volumes of gas at standard temperature and pressure with some variation due to intermolecular force and molecule size compared to an ideal gas. Normal high pressure gas cylinders will hold gas at pressures from . An ideal gas pressurised to 200 bar in a cylinder would contain 200 times as much as the volume of the cylinder at atmospheric pressure, but real gases will contain less than that by a few percent. At higher pressures, the shortfall is greater. Special handling considerations Because the contents are under high pressure and are sometimes hazardous, there are special safety regulations for handling bottled gases. These include chaining bottles to prevent falling and breaking, proper ventilation to prevent injury or death in case of leaks and signage to indicate the potential hazards. In the United States, the Compressed Gas Association (CGA) sells a number of booklets and pamphlets on safe handling and use of bottled gases. 
(Members of the CGA can get the pamphlets for free.) The European Industrial Gases Association and the British Compressed Gases Association provide similar facilities in Europe and the United Kingdom. Nomenclature differences In the United States, 'bottled gas' typically refers to liquefied petroleum gas. 'Bottled gas' is sometimes used in medical supply, especially for portable oxygen tanks. Packaged industrial gases are frequently called 'cylinder gas', though 'bottled gas' is sometimes used. The United Kingdom and other parts of Europe more commonly refer to 'bottled gas' when discussing any usage, whether industrial, medical or liquefied petroleum. In contrast, what the United States calls liquefied petroleum gas is known generically in the United Kingdom as 'LPG'; it may be ordered by one of several trade names, or specifically as butane or propane depending on the required heat output. Colour coding Different countries have different gas colour codes, but attempts are being made to standardise the colours of cylinder shoulders: Colours of cylinders for medical gases are covered by an International Organization for Standardization (ISO) standard, ISO 32; but not all countries use this standard. Within Europe, gas cylinder colours are being standardised according to EN 1089-3, the standard colours applying to the cylinder shoulder only; i.e., the top of the cylinder close to the pillar valve. In the United States, colour-coding is not regulated by law. The user should not rely on the colour of a cylinder to indicate what it contains. The label or decal should always be checked for product identification. European cylinder colours The colours below are specific shades, defined in the European Standard in terms of RAL coordinates. The requirements are based on a combination of a few named gases, otherwise on the primary hazard associated with the gas contents: Specific gases Based on gas properties Gas mixtures, mostly for diving Diving cylinders are left unpainted (for aluminium), or painted to prevent corrosion (for steel), often in bright colors, most often fluorescent yellow, to increase visibility. This should not be confused with industrial gases, where a yellow shoulder means chlorine. See also References Notes Standards ISO 32: Gas cylinders for medical use—Marking for identification of content. CEN EN 1089-3: Transportable gas cylinders, Part 3 - Colour Coding. External links Virtual Anesthesia Machine - 6 different color codes for medical gas cylinders, hoses and outlets British Compressed Gases Association – Colour Coding of Cylinders. Air Products – European Gas Cylinder Identification Chart. Compressed Gas Association (U.S.) Gases and Welding Distributors Association (U.S.) European Industrial Gases Association (E.U.) British Compressed Gases Association (UK) Gases Pressure vessels Gas technologies Industrial gases Fuel containers Color codes
Bottled gas
[ "Physics", "Chemistry", "Engineering" ]
1,279
[ "Structural engineering", "Matter", "Chemical equipment", "Phases of matter", "Physical systems", "Hydraulics", "Industrial gases", "Chemical process engineering", "Statistical mechanics", "Pressure vessels", "Gases" ]
860,138
https://en.wikipedia.org/wiki/Sesquilinear%20form
In mathematics, a sesquilinear form is a generalization of a bilinear form that, in turn, is a generalization of the concept of the dot product of Euclidean space. A bilinear form is linear in each of its arguments, but a sesquilinear form allows one of the arguments to be "twisted" in a semilinear manner, thus the name; which originates from the Latin numerical prefix sesqui- meaning "one and a half". The basic concept of the dot product – producing a scalar from a pair of vectors – can be generalized by allowing a broader range of scalar values and, perhaps simultaneously, by widening the definition of a vector. A motivating special case is a sesquilinear form on a complex vector space, . This is a map that is linear in one argument and "twists" the linearity of the other argument by complex conjugation (referred to as being antilinear in the other argument). This case arises naturally in mathematical physics applications. Another important case allows the scalars to come from any field and the twist is provided by a field automorphism. An application in projective geometry requires that the scalars come from a division ring (skew field), , and this means that the "vectors" should be replaced by elements of a -module. In a very general setting, sesquilinear forms can be defined over -modules for arbitrary rings . Informal introduction Sesquilinear forms abstract and generalize the basic notion of a Hermitian form on complex vector space. Hermitian forms are commonly seen in physics, as the inner product on a complex Hilbert space. In such cases, the standard Hermitian form on is given by where denotes the complex conjugate of This product may be generalized to situations where one is not working with an orthonormal basis for , or even any basis at all. By inserting an extra factor of into the product, one obtains the skew-Hermitian form, defined more precisely, below. There is no particular reason to restrict the definition to the complex numbers; it can be defined for arbitrary rings carrying an antiautomorphism, informally understood to be a generalized concept of "complex conjugation" for the ring. Convention Conventions differ as to which argument should be linear. In the commutative case, we shall take the first to be linear, as is common in the mathematical literature, except in the section devoted to sesquilinear forms on complex vector spaces. There we use the other convention and take the first argument to be conjugate-linear (i.e. antilinear) and the second to be linear. This is the convention used mostly by physicists and originates in Dirac's bra–ket notation in quantum mechanics. It is also consistent with the definition of the usual (Euclidean) product of as . In the more general noncommutative setting, with right modules we take the second argument to be linear and with left modules we take the first argument to be linear. Complex vector spaces Assumption: In this section, sesquilinear forms are antilinear in their first argument and linear in their second. Over a complex vector space a map is sesquilinear if for all and all Here, is the complex conjugate of a scalar A complex sesquilinear form can also be viewed as a complex bilinear map where is the complex conjugate vector space to By the universal property of tensor products these are in one-to-one correspondence with complex linear maps For a fixed the map is a linear functional on (i.e. an element of the dual space ). 
Likewise, the map is a conjugate-linear functional on Given any complex sesquilinear form on we can define a second complex sesquilinear form via the conjugate transpose: In general, and will be different. If they are the same then is said to be . If they are negatives of one another, then is said to be . Every sesquilinear form can be written as a sum of a Hermitian form and a skew-Hermitian form. Matrix representation If is a finite-dimensional complex vector space, then relative to any basis of a sesquilinear form is represented by a matrix and given by where is the conjugate transpose. The components of the matrix are given by Hermitian form The term Hermitian form may also refer to a different concept than that explained below: it may refer to a certain differential form on a Hermitian manifold. A complex Hermitian form (also called a symmetric sesquilinear form), is a sesquilinear form such that The standard Hermitian form on is given (again, using the "physics" convention of linearity in the second and conjugate linearity in the first variable) by More generally, the inner product on any complex Hilbert space is a Hermitian form. A minus sign is introduced in the Hermitian form to define the group SU(1,1). A vector space with a Hermitian form is called a Hermitian space. The matrix representation of a complex Hermitian form is a Hermitian matrix. A complex Hermitian form applied to a single vector is always a real number. One can show that a complex sesquilinear form is Hermitian if and only if the associated quadratic form is real for all Skew-Hermitian form A complex skew-Hermitian form (also called an antisymmetric sesquilinear form), is a complex sesquilinear form such that Every complex skew-Hermitian form can be written as the imaginary unit times a Hermitian form. The matrix representation of a complex skew-Hermitian form is a skew-Hermitian matrix. A complex skew-Hermitian form applied to a single vector is always a purely imaginary number. Over a division ring This section applies unchanged when the division ring is commutative. More specific terminology then also applies: the division ring is a field, the anti-automorphism is also an automorphism, and the right module is a vector space. The following applies to a left module with suitable reordering of expressions. Definition A -sesquilinear form over a right -module is a bi-additive map with an associated anti-automorphism of a division ring such that, for all in and all in , The associated anti-automorphism for any nonzero sesquilinear form is uniquely determined by . Orthogonality Given a sesquilinear form over a module and a subspace (submodule) of , the orthogonal complement of with respect to is Similarly, is orthogonal to with respect to , written (or simply if can be inferred from the context), when . This relation need not be symmetric, i.e. does not imply (but see below). Reflexivity A sesquilinear form is reflexive if, for all in , implies That is, a sesquilinear form is reflexive precisely when the derived orthogonality relation is symmetric. Hermitian variations A -sesquilinear form is called -Hermitian if there exists in such that, for all in , If , the form is called -Hermitian, and if , it is called -anti-Hermitian. (When is implied, respectively simply Hermitian or anti-Hermitian.) For a nonzero -Hermitian form, it follows that for all in , It also follows that is a fixed point of the map . The fixed points of this map form a subgroup of the additive group of . 
A σ-Hermitian form is reflexive, and every reflexive σ-sesquilinear form is ε-Hermitian for some ε. In the special case that σ is the identity map (i.e., σ = id), the ring is commutative, the form is a bilinear form and ε² = 1. Then for ε = 1 the bilinear form is called symmetric, and for ε = −1 it is called skew-symmetric. Example Let V be the three dimensional vector space over the finite field F = GF(q²), where q is a prime power. With respect to the standard basis we can write x = (x1, x2, x3) and y = (y1, y2, y3) and define the map φ by: φ(x, y) = x1 y1^q + x2 y2^q + x3 y3^q. The map σ: t ↦ t^q is an involutory automorphism of F. The map φ is then a σ-sesquilinear form. The matrix associated to this form is the identity matrix. This is a Hermitian form. In projective geometry Assumption: In this section, sesquilinear forms are antilinear (resp. linear) in their second (resp. first) argument. In a projective geometry G, a permutation δ of the subspaces that inverts inclusion, i.e. S ⊆ T implies T^δ ⊆ S^δ for all subspaces S, T of G, is called a correlation. A result of Birkhoff and von Neumann (1936) shows that the correlations of desarguesian projective geometries correspond to the nondegenerate sesquilinear forms on the underlying vector space. A sesquilinear form φ is nondegenerate if φ(x, y) = 0 for all y in V (if and) only if x = 0. To achieve full generality of this statement, and since every desarguesian projective geometry may be coordinatized by a division ring, Reinhold Baer extended the definition of a sesquilinear form to a division ring, which requires replacing vector spaces by R-modules. (In the geometric literature these are still referred to as either left or right vector spaces over skewfields.) Over arbitrary rings The specialization of the above section to skewfields was a consequence of the application to projective geometry, and not intrinsic to the nature of sesquilinear forms. Only the minor modifications needed to take into account the non-commutativity of multiplication are required to generalize the arbitrary field version of the definition to arbitrary rings. Let R be a ring, M an R-module and σ an antiautomorphism of R. A map φ: M × M → R is σ-sesquilinear if it is biadditive and φ(xα, yβ) = σ(α) φ(x, y) β for all x, y in M and all α, β in R. An element x is orthogonal to another element y with respect to the sesquilinear form φ (written x ⊥ y) if φ(x, y) = 0. This relation need not be symmetric, i.e. x ⊥ y does not imply y ⊥ x. A sesquilinear form φ is reflexive (or orthosymmetric) if φ(x, y) = 0 implies φ(y, x) = 0 for all x, y in M. A sesquilinear form φ is Hermitian if there exists σ such that φ(x, y) = σ(φ(y, x)) for all x, y in M. A Hermitian form is necessarily reflexive, and if it is nonzero, the associated antiautomorphism σ is an involution (i.e. of order 2). Since for an antiautomorphism σ we have σ(αβ) = σ(β)σ(α) for all α, β in R, if σ is the identity, then R must be commutative and φ is a bilinear form. In particular, if, in this case, R is a skewfield, then R is a field and M is a vector space with a bilinear form. An antiautomorphism σ: R → R can also be viewed as an isomorphism R → R^op, where R^op is the opposite ring of R, which has the same underlying set and the same addition, but whose multiplication operation (∗) is defined by a ∗ b = ba, where the product on the right is the product in R. It follows from this that a right (left) R-module M can be turned into a left (right) R^op-module, M^op. Thus, the sesquilinear form φ: M × M → R can be viewed as a bilinear form φ′: M^op × M → R. See also *-ring Notes References External links Functional analysis Linear algebra
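As a small numerical illustration of the complex Hermitian case discussed earlier in this article, the following NumPy sketch (the matrix A below is an arbitrary example, not taken from the article) builds a Hermitian matrix equal to its conjugate transpose, evaluates the form in the physics convention (conjugate-linear in the first argument, linear in the second), and checks that the value of the form is the complex conjugate of the swapped-argument value and that the form applied to a single vector is real.
import numpy as np


def form(A, x, y):
    # Physics convention: conjugate-linear in the first argument, linear in the second.
    return np.conj(x) @ A @ y


rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = B + B.conj().T   # A equals its conjugate transpose, so the form is Hermitian

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

assert np.isclose(form(A, x, y), np.conj(form(A, y, x)))  # value equals conjugate of swapped value
assert abs(form(A, x, x).imag) < 1e-12                    # the form on a single vector is real
Dropping the conjugation in form would give an ordinary bilinear map, for which neither property holds in general.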
Sesquilinear form
[ "Mathematics" ]
2,367
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Linear algebra", "Algebra" ]
13,120,740
https://en.wikipedia.org/wiki/Pulse%20tube%20refrigerator
The pulse tube refrigerator (PTR) or pulse tube cryocooler is a developing technology that emerged largely in the early 1980s with a series of other innovations in the broader field of thermoacoustics. In contrast with other cryocoolers (e.g. Stirling cryocooler and GM-refrigerators), this cryocooler can be made without moving parts in the low temperature part of the device, making the cooler suitable for a wide variety of applications. Uses Pulse tube cryocoolers are used in niche industrial applications such as semiconductor fabrication and superconducting radio-frequency circuits. They are also used in military applications such as for the cooling of infrared sensors. In research, PTRs are often used as precoolers of dilution refrigerators. They are also being developed for cooling of astronomical detectors where liquid cryogens are typically used, such as the Atacama Cosmology Telescope or the Qubic experiment (an interferometer for cosmology studies). Pulse tubes are particularly useful in space-based telescopes such as the James Webb Space Telescope where it is not possible to replenish the cryogens as they are depleted. It has also been suggested that pulse tubes could be used to liquefy oxygen on Mars. Principle of operation Figure 1 represents the Stirling-type single-orifice pulse-tube refrigerator (PTR), which is filled with a gas, typically helium at a pressure varying from 10 to 30 bar. From left to right the components are: a compressor, with a piston moving back and forth at room temperature TH a heat exchanger X1 where heat is released to the surroundings at room temperature a regenerator consisting of a porous medium with a large specific heat (which can be stainless steel wire mesh, copper wire mesh, phosphor bronze wire mesh, lead balls, lead shot, or rare earth materials) in which the gas flows back and forth a heat exchanger X2, cooled by the gas, where the useful cooling power is delivered at the low temperature TL, taken from the object to be cooled a tube in which the gas is pushed and pulled a heat exchanger X3 near room temperature where heat is released to the surroundings a flow resistance (often called orifice) a buffer volume (a large closed volume at practically constant pressure) The part in between X1 and X3 is thermally insulated from the surroundings, usually by vacuum. The pressure varies gradually and the velocities of the gas are low. So the name "pulse" tube cooler is misleading, since there are no pulses in the system. The piston moves periodically from left to right and back. As a result, the gas also moves from left to right and back while the pressure within the system increases and decreases. If the gas from the compressor space moves to the right, it enters the regenerator with temperature TH and leaves the regenerator at the cold end with temperature TL, hence heat is transferred into the regenerator material. On its return, the heat stored within the regenerator is transferred back into the gas. In the tube, the gas is thermally isolated (adiabatic), so the temperature of the gas in the tube varies with the pressure. At the cold end of the tube, the gas enters the tube via X2 when the pressure is high with temperature TL and returns when the pressure is low with a temperature below TL, hence taking up heat from X2: this gives the desired cooling effect at X2. 
To understand why the low-pressure gas returns at a lower temperature, look at figure 1 and consider gas molecules close to X3 (at the hot end), which move in and out of the tube through the orifice. Molecules flow into the tube (to the left) when the pressure in the tube is low (it is sucked into the tube via X3, coming from the orifice and the buffer). Upon entering the tube, it has the temperature TH. Later in the cycle, the same mass of gas is pushed out from the tube again when the pressure inside the tube is high. As a consequence, its temperature will be higher than TH. In the heat exchanger X3, it releases heat and cools down to the ambient temperature TH. Figure 3 shows a coaxial pulse tube, which is a more useful configuration in which the regenerator surrounds the central pulse tube. This is compact and places the cold head at an end, so it is easy to integrate with whatever is to be cooled. The displacer can be passively driven, and this recovers work that would otherwise be dissipated in the orifice. Performance The performance of the cooler is determined mainly by the quality of the regenerator. It has to satisfy conflicting requirements: it must have a low flow resistance (so it must be short with wide channels), but the heat exchange should also be good (so it must be long with narrow channels). The material must have a large heat capacity. At temperatures above 50K practically all materials are suitable. Bronze or stainless steel is often used. For temperatures between 10 and 50K lead is most suitable. Below 10K one uses magnetic materials which are specially developed for this application. The so-called coefficient of performance (COP; denoted ) of coolers is defined as the ratio between the cooling power and the compressor power P. In formula: . For a perfectly reversible cooler, is given by Carnot's theorem: However, a pulse-tube refrigerator is not perfectly reversible due to the presence of the orifice, which has flow resistance. Instead, the COP of an ideal PTR is given by which is lower than that of ideal coolers. Comparison with other coolers In most coolers gas is compressed and expanded periodically. Well-known coolers such as the Stirling engine coolers and the popular Gifford-McMahon coolers have a displacer that ensures that the cooling (due to expansion) takes place in a different region of the machine than the heating (due to compression). Due to its clever design, the PTR does not have such a displacer, making the construction of a PTR simpler, cheaper, and more reliable. Furthermore, there are no mechanical vibrations and no electro-magnetic interferences. The basic operation of cryocoolers and related thermal machines is described by De Waele History W. E. Gifford and R. C. Longsworth, in the 1960s, invented the so-called Basic Pulse Tube Refrigerator. The modern PTR was invented in 1984 by Mikulin who introduced an orifice to the basic pulse tube. He reached a temperature of 105K. Soon after that, PTRs became better due to the invention of new variations. This is shown in figure 4, where the lowest temperature for PTRs is plotted as a function of time. At the moment, the lowest temperature is below the boiling point of helium (4.2K). Originally this was considered to be impossible. 
For some time it looked as if it would be impossible to cool below the lambda point of 4He (2.17K), but the low-temperature group of the Eindhoven University of Technology managed to cool to a temperature of 1.73K by replacing the usual 4He as refrigerant by its rare isotope 3He. Later this record was broken by the Giessen Group that managed to get even below 1.3K. In a collaboration between the groups from Giessen and Eindhoven a temperature of 1.2K was reached by combining a PTR with a superfluid vortex cooler. Types For cooling, the source of the pressure variations is unimportant. PTRs for temperatures below 20K usually operate at frequencies of 1 to 2 Hz and with pressure variations from 10 to 25 bar. The swept volume of the compressor would be very high (up to one liter and more). Therefore, the compressor is uncoupled from the cooler. A system of valves (usually a rotating valve) alternately connects the high-pressure and the low-pressure side of the compressor to the hot end of the regenerator. As the high-temperature part of this type of PTR is the same as of GM-coolers, this type of PTR is called a GM-type PTR. The gas flows through the valves are accompanied by losses which are absent in the Stirling-type PTR. PTRs can be classified according to their shape. If the regenerator and the tube are in line (as in fig. 1) we talk about a linear PTR. The disadvantage of the linear PTR is that the cold spot is in the middle of the cooler. For many applications it is preferable that the cooling is produced at the end of the cooler. By bending the PTR we get a U-shaped cooler. Both hot ends can be mounted on the flange of the vacuum chamber at room temperature. This is the most common shape of PTRs. For some applications it is preferable to have a cylindrical geometry. In that case the PTR can be constructed in a coaxial way so that the regenerator becomes a ring-shaped space surrounding the tube. The lowest temperature reached with single-stage PTRs is just above 10K. However, one PTR can be used to precool the other. The hot end of the second tube is connected to room temperature and not to the cold end of the first stage. In this clever way it is avoided that the heat, released at the hot end of the second tube, is a load on the first stage. In applications the first stage also operates as a temperature-anchoring platform for e.g. shield cooling of superconducting-magnet cryostats. Matsubara and Gao were the first to cool below 4K with a three-stage PTR. With two-stage PTRs temperatures of 2.1K, so just above the λ-point of helium, have been obtained. With a three-stage PTR 1.73K has been reached using 3He as the working fluid. Prospects The coefficient of performance of PTRs at room temperature is low, so it is not likely that they will play a role in domestic cooling. However, below about 80K the coefficient of performance is comparable with other coolers (compare equations () and ()) and in the low-temperature region the advantages get the upper hand. PTRs are commercially available for temperatures in the region of 70K and 4K. They are applied in infrared detection systems, for reduction of thermal noise in devices based on (high-Tc) superconductivity such as SQUIDs, and filters for telecommunication. PTRs are also suitable for cooling MRI-systems and energy-related systems using superconducting magnets. In so-called dry magnets, coolers are used so that no cryoliquid is needed at all or for the recondensation of the evaporated helium. 
Also the combination of cryocoolers with 3He-4He dilution refrigerators for the temperature region down to 2mK is attractive since in this way the whole temperature range from room temperature to 2mK is easier to access. For many low temperature experiments, mechanical vibrations caused by PTRs can cause microphonics on measurement lines, which is a big disadvantage of PTRs. Particularly for scanning probe microscopy uses, PTR-based scanning tunneling microscopes (STMs) have historically difficult due to the extreme vibration sensitivity of STM. Use of an exchange gas above the vibration sensitive scanning head enabled the first PTR based low temperature STMs. Now, there are commercially available PTR-based, cryogen free scanning probe systems. See also Cryocooler Regenerative cooling Timeline of low-temperature technology References External links A Short History of Pulse Tube Refrigerators (NASA) SHI Cryogenics Group Home Cryomech Home Pulse-tube animation (Thales Cryogenics) The James Webb Space Telescope Cryocooler (JWST/NASA) Cooling technology Cryogenics Thermodynamic cycles
Pulse tube refrigerator
[ "Physics" ]
2,490
[ "Applied and interdisciplinary physics", "Cryogenics" ]
8,461,487
https://en.wikipedia.org/wiki/CCL8
Chemokine (C-C motif) ligand 8 (CCL8), also known as monocyte chemoattractant protein 2 (MCP2), is a protein that in humans is encoded by the CCL8 gene. CCL8 is a small cytokine belonging to the CC chemokine family. The CCL8 protein is produced as a precursor containing 109 amino acids, which is cleaved to produce mature CCL8 containing 75 amino acids. The gene for CCL8 is encoded by 3 exons and is located within a large cluster of CC chemokines on chromosome 17q11.2 in humans. MCP-2 is chemotactic for and activates many different immune cells, including mast cells, eosinophils and basophils (which are implicated in allergic responses), and monocytes, T cells, and NK cells (which are involved in the inflammatory response). CCL8 elicits its effects by binding to several different cell surface receptors called chemokine receptors. These receptors include CCR1, CCR2B, CCR3 and CCR5. CCL8 is a CC chemokine that utilizes multiple cellular receptors to attract and activate human leukocytes. CCL8 is a potent inhibitor of HIV1 by virtue of its high-affinity binding to the receptor CCR5, one of the major co-receptors for HIV1. In addition, CCL8 contributes to the growth of metastases in breast cancer cells. Manipulating the activity of this chemokine influences tumor histology and the steps that promote metastasis. CCL8 is also involved in attracting macrophages to the decidua in labor. References External links Further reading Cytokines
CCL8
[ "Chemistry" ]
370
[ "Cytokines", "Signal transduction" ]
8,461,989
https://en.wikipedia.org/wiki/CCL11
C-C motif chemokine 11, also known as eosinophil chemotactic protein and eotaxin-1, is a protein that in humans is encoded by the CCL11 gene. This gene is encoded on three exons and is located on chromosome 17. Function CCL11 is a small cytokine belonging to the CC chemokine family. CCL11 selectively recruits eosinophils by inducing their chemotaxis, and therefore is implicated in allergic responses. The effects of CCL11 are mediated by its binding to a G-protein-linked receptor known as a chemokine receptor. Chemokine receptors for which CCL11 is a ligand include CCR2, CCR3 and CCR5. However, it has been found that eotaxin-1 (CCL11) has a high degree of selectivity for its receptor, such that it is inactive on neutrophils and monocytes, which do not express CCR3. Clinical significance Increased CCL11 levels in blood plasma are associated with aging in mice and humans. Additionally, it has been demonstrated that exposing young mice to CCL11 or the blood plasma of older mice decreases their neurogenesis and cognitive performance on behavioural tasks thought to be dependent on neurogenesis in the hippocampus. Higher plasma concentrations of CCL11 have been found in current cannabis users compared to past users and those who had never used. CCL11 has also been found in higher concentrations in people with schizophrenia; cannabis is a known trigger of schizophrenia. CCL11 is also a biomarker for chronic traumatic encephalopathy (CTE), also known as punch-drunk syndrome. During periods of bone inflammation, CCL11 and CCR3 are upregulated. This is associated with an increase in osteoclast activity. In 2022, Monje et al. demonstrated that elevated levels of CCL11 may contribute to the brain fog associated with both chemotherapy and long COVID. References Further reading External links Cytokines Aging-related proteins
CCL11
[ "Chemistry", "Biology" ]
420
[ "Senescence", "Cytokines", "Aging-related proteins", "Signal transduction" ]
8,462,444
https://en.wikipedia.org/wiki/Xylenol%20orange
Xylenol orange is an organic reagent, most commonly used as its tetrasodium salt as an indicator for metal titrations. When used for metal titrations, it appears red in the titrand and becomes yellow once the endpoint is reached. Historically, commercial preparations of it have been notoriously impure, sometimes consisting of as little as 20% xylenol orange and containing large amounts of semi-xylenol orange and iminodiacetic acid. Purities as high as 90% are now available. It is fluorescent, with excitation maxima at 440 and 570 nm and an emission maximum at 610 nm. References Analytical reagents Triarylmethane dyes Benzoxathioles Acetic acids
Xylenol orange
[ "Chemistry" ]
162
[ "Organic compounds", "Organic compound stubs", "Analytical reagents", "Organic chemistry stubs" ]
8,464,397
https://en.wikipedia.org/wiki/Uniform%20star%20polyhedron
In geometry, a uniform star polyhedron is a self-intersecting uniform polyhedron. They are also sometimes called nonconvex polyhedra to imply self-intersecting. Each polyhedron can contain either star polygon faces, star polygon vertex figures, or both. The complete set of 57 nonprismatic uniform star polyhedra includes the 4 regular ones, called the Kepler–Poinsot polyhedra, 14 quasiregular ones, and 39 semiregular ones. There are also two infinite sets of uniform star prisms and uniform star antiprisms. Just as (nondegenerate) star polygons (which have polygon density greater than 1) correspond to circular polygons with overlapping tiles, star polyhedra that do not pass through the center have polytope density greater than 1, and correspond to spherical polyhedra with overlapping tiles; there are 47 nonprismatic such uniform star polyhedra. The remaining 10 nonprismatic uniform star polyhedra, those that pass through the center, are the hemipolyhedra as well as Miller's monster, and do not have well-defined densities. The nonconvex forms are constructed from Schwarz triangles. All the uniform polyhedra are listed below by their symmetry groups and subgrouped by their vertex arrangements. Regular polyhedra are labeled by their Schläfli symbol. Other nonregular uniform polyhedra are listed with their vertex configuration. An additional figure, the pseudo great rhombicuboctahedron, is usually not included as a truly uniform star polytope, despite consisting of regular faces and having the same vertices. Note: For nonconvex forms below an additional descriptor nonuniform is used when the convex hull vertex arrangement has same topology as one of these, but has nonregular faces. For example an nonuniform cantellated form may have rectangles created in place of the edges rather than squares. Dihedral symmetry See Prismatic uniform polyhedron. Tetrahedral symmetry There is one nonconvex form, the tetrahemihexahedron which has tetrahedral symmetry (with fundamental domain Möbius triangle (3 3 2)). There are two Schwarz triangles that generate unique nonconvex uniform polyhedra: one right triangle ( 3 2), and one general triangle ( 3 3). The general triangle ( 3 3) generates the octahemioctahedron which is given further on with its full octahedral symmetry. Octahedral symmetry There are 8 convex forms, and 10 nonconvex forms with octahedral symmetry (with fundamental domain Möbius triangle (4 3 2)). There are four Schwarz triangles that generate nonconvex forms, two right triangles ( 4 2), and ( 3 2), and two general triangles: ( 4 3), ( 4 4). Icosahedral symmetry There are 8 convex forms and 46 nonconvex forms with icosahedral symmetry (with fundamental domain Möbius triangle (5 3 2)). (or 47 nonconvex forms if Skilling's figure is included). Some of the nonconvex snub forms have reflective vertex symmetry. Degenerate cases Coxeter identified a number of degenerate star polyhedra by the Wythoff construction method, which contain overlapping edges or vertices. These degenerate forms include: Small complex icosidodecahedron Great complex icosidodecahedron Small complex rhombicosidodecahedron Great complex rhombicosidodecahedron Complex rhombidodecadodecahedron Skilling's figure One further nonconvex degenerate polyhedron is the great disnub dirhombidodecahedron, also known as Skilling's figure, which is vertex-uniform, but has pairs of edges which coincide in space such that four faces meet at some edges. 
It is counted as a degenerate uniform polyhedron rather than a uniform polyhedron because of its double edges. It has Ih symmetry. See also Star polygon List of uniform polyhedra List of uniform polyhedra by Schwarz triangle References Brückner, M. Vielecke und vielflache. Theorie und geschichte.. Leipzig, Germany: Teubner, 1900. Har'El, Z. Uniform Solution for Uniform Polyhedra., Geometriae Dedicata 47, 57-110, 1993. Zvi Har’El, Kaleido software, Images, dual images Mäder, R. E. Uniform Polyhedra. Mathematica J. 3, 48-57, 1993. Messer, Peter W. Closed-Form Expressions for Uniform Polyhedra and Their Duals., Discrete & Computational Geometry 27:353-375 (2002). External links Uniform polyhedra
Uniform star polyhedron
[ "Physics" ]
1,018
[ "Uniform polytopes", "Uniform polyhedra", "Symmetry" ]
8,464,940
https://en.wikipedia.org/wiki/Range%20of%20a%20projectile
In physics, a projectile launched with specific initial conditions will have a range. It may be more predictable assuming a flat Earth with a uniform gravity field, and no air resistance. The horizontal ranges of a projectile are equal for two complementary angles of projection with the same velocity. The following applies for ranges which are small compared to the size of the Earth. For longer ranges see sub-orbital spaceflight. The maximum horizontal distance travelled by the projectile, neglecting air resistance, can be calculated as follows: d = (v cos θ / g) · (v sin θ + √(v² sin² θ + 2 g y0)), where d is the total horizontal distance travelled by the projectile, v is the velocity at which the projectile is launched, g is the gravitational acceleration, usually taken to be 9.81 m/s² (32 ft/s²) near the Earth's surface, θ is the angle at which the projectile is launched, and y0 is the initial height of the projectile. If y0 is taken to be zero, meaning that the object is being launched on flat ground, the range of the projectile will simplify to: d = v² sin(2θ) / g. Ideal projectile motion Ideal projectile motion states that there is no air resistance and no change in gravitational acceleration. This assumption simplifies the mathematics greatly, and is a close approximation of actual projectile motion in cases where the distances travelled are small. Ideal projectile motion is also a good introduction to the topic before adding the complications of air resistance. Derivations A launch angle of 45 degrees displaces the projectile the farthest horizontally. This is due to the nature of right triangles. Additionally, from the equation for the range d = v² sin(2θ) / g: We can see that the range will be maximum when the value of sin(2θ) is the highest (i.e. when it is equal to 1). Clearly, 2θ has to be 90 degrees. That is to say, θ is 45 degrees. Flat ground First we examine the case where (y0) is zero. The horizontal position of the projectile is x(t) = v t cos θ. In the vertical direction y(t) = v t sin θ − (1/2) g t². We are interested in the time when the projectile returns to the same height it originated, i.e. when y = 0. Let tg be any time when the height of the projectile is equal to its initial value. By factoring: t = 0 or t = 2 v sin θ / g, but t = T = time of flight. The first solution corresponds to when the projectile is first launched. The second solution is the useful one for determining the range of the projectile. Plugging this value for (t) into the horizontal equation yields d = 2 v² cos θ sin θ / g. Applying the trigonometric identity sin(x + y) = sin x cos y + cos x sin y (if x and y are the same, this gives sin(2θ) = 2 sin θ cos θ) allows us to simplify the solution to d = v² sin(2θ) / g. Note that when (θ) is 45°, the solution becomes d = v² / g. Uneven ground Now we will allow (y0) to be nonzero. Our equations of motion are now x(t) = v t cos θ and y(t) = y0 + v t sin θ − (1/2) g t². Once again we solve for (t) in the case where the (y) position of the projectile is at zero (since this is how we defined our starting height to begin with). Again by applying the quadratic formula we find two solutions for the time. After several steps of algebraic manipulation: The square root must be a positive number, and since the velocity and the sine of the launch angle can also be assumed to be positive, the solution with the greater time will occur when the positive of the plus or minus sign is used. Thus, the solution is t = (v sin θ + √(v² sin² θ + 2 g y0)) / g. Solving for the range once again: d = (v cos θ / g) · (v sin θ + √(v² sin² θ + 2 g y0)). To maximize the range at any height: θ = arccos √((v² + 2 g y0) / (2 v² + 2 g y0)). Checking the limit as y0 approaches 0 recovers the 45° result. Angle of impact The angle ψ at which the projectile lands is given by: For maximum range, this results in the following equation: Rewriting the original solution for θ, we get: Multiplying with the equation for (tan ψ)^2 gives: Because of the trigonometric identity tan(θ + ψ) = (tan θ + tan ψ) / (1 − tan θ tan ψ), this means that θ + ψ must be 90 degrees.
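The two range expressions above translate directly into code. The following is a small illustrative Python sketch (the function names are ours, chosen for this example) that evaluates the flat-ground formula d = v² sin(2θ)/g and the general formula with a nonzero launch height, assuming no air resistance.
from math import sin, cos, sqrt, radians

g = 9.81  # gravitational acceleration in m/s^2


def range_flat(v, theta_deg):
    """Range on flat ground: d = v^2 * sin(2*theta) / g."""
    theta = radians(theta_deg)
    return v**2 * sin(2 * theta) / g


def range_with_height(v, theta_deg, y0):
    """Range from launch height y0:
    d = (v*cos(theta)/g) * (v*sin(theta) + sqrt((v*sin(theta))**2 + 2*g*y0))."""
    theta = radians(theta_deg)
    vy = v * sin(theta)
    return (v * cos(theta) / g) * (vy + sqrt(vy**2 + 2 * g * y0))


print(range_flat(20, 45))              # about 40.77 m for a 45° launch at 20 m/s
print(range_with_height(20, 45, 2.0))  # about 42.69 m when launched from a 2 m platform
With y0 = 0 the two functions agree, as expected from the simplification described above.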
Actual projectile motion In addition to air resistance, which slows a projectile and reduces its range, many other factors also have to be accounted for when actual projectile motion is considered. Projectile characteristics Generally speaking, a projectile with greater volume faces greater air resistance, reducing the range of the projectile. (And see Trajectory of a projectile.) Air resistance drag can be modified by the projectile shape: a tall and wide, but short projectile will face greater air resistance than a low and narrow, but long, projectile of the same volume. The surface of the projectile also must be considered: a smooth projectile will face less air resistance than a rough-surfaced one, and irregularities on the surface of a projectile may change its trajectory if they create more drag on one side of the projectile than on the other. However, certain irregularities such as dimples on a golf ball may actually increase its range by reducing the amount of turbulence caused behind the projectile as it travels. Mass also becomes important, as a more massive projectile will have more kinetic energy, and will thus be less affected by air resistance. The distribution of mass within the projectile can also be important, as an unevenly weighted projectile may spin undesirably, causing irregularities in its trajectory due to the magnus effect. If a projectile is given rotation along its axes of travel, irregularities in the projectile's shape and weight distribution tend to be cancelled out. See rifling for a greater explanation. Firearm barrels For projectiles that are launched by firearms and artillery, the nature of the gun's barrel is also important. Longer barrels allow more of the propellant's energy to be given to the projectile, yielding greater range. Rifling, while it may not increase the average (arithmetic mean) range of many shots from the same gun, will increase the accuracy and precision of the gun. Very large ranges Some cannons or howitzers have been created with a very large range. During World War I the Germans created an exceptionally large cannon, the Paris Gun, which could fire a shell more than 80 miles (130 km). North Korea has developed a gun known in the West as Koksan, with a range of 60 km using rocket-assisted projectiles. (And see Trajectory of a projectile.) Such cannons are distinguished from rockets, or ballistic missiles, which have their own rocket engines, which continue to accelerate the missile for a period after they have been launched. See also Trajectory Projectile motion Escape velocity References Ballistics
Range of a projectile
[ "Physics" ]
1,224
[ "Applied and interdisciplinary physics", "Ballistics" ]
8,465,178
https://en.wikipedia.org/wiki/Cockpit%20display%20system
The cockpit display system (CDS) provides the visible (and audible) portion of the Human Machine Interface (HMI) by which aircrew manage the modern Glass cockpit and thus interface with the aircraft avionics. History Prior to the 1970s, cockpits did not typically use any electronic instruments or displays (see Glass cockpit history). Improvements in computer technology, the need for enhanced situational awareness in more complex environments, and the rapid growth of commercial air transportation, together with continued military competitiveness, led to increased levels of integration in the cockpit. The average transport aircraft in the mid-1970s had more than one hundred cockpit instruments and controls; the primary flight instruments were already crowded with indicators, crossbars, and symbols, and the growing number of cockpit elements was competing for cockpit space and pilot attention. Architecture Glass cockpits routinely include high-resolution multi-color displays (often LCD displays) that present information relating to the various aircraft systems (such as flight management) in an integrated way. Integrated Modular Avionics (IMA) architecture allows the integration of the cockpit instruments and displays to be maximized at both the hardware and software levels. CDS software typically integrates with the underlying platform through APIs (for example, OpenGL to access the graphics drivers). This software may be written manually or with the help of COTS tools such as GL Studio, VAPS, VAPS XT or SCADE Display. Standards such as ARINC 661 specify the integration of the CDS at the software level with the aircraft system applications (called User Applications or UA). See also Acronyms and abbreviations in avionics Avionics software Integrated Modular Avionics References Embedded systems Aircraft instruments
Cockpit display system
[ "Technology", "Engineering" ]
349
[ "Computer engineering", "Embedded systems", "Measuring instruments", "Computer systems", "Computer science", "Aircraft instruments" ]
8,468,050
https://en.wikipedia.org/wiki/Gas%20Exporting%20Countries%20Forum
The Gas Exporting Countries Forum (GECF) is an intergovernmental organization currently comprising 19 Member Countries of the world's leading natural gas producers: Algeria, Bolivia, Egypt, Equatorial Guinea, Iran, Libya, Nigeria, Qatar, Russia, Trinidad and Tobago, and Venezuela are members and Angola, Azerbaijan, Iraq, Mozambique, Malaysia, Norway, Peru and the United Arab Emirates are observers. GECF members together control over 71% of the world's natural proven gas reserves, 44% of its marketed production, 53% of the pipeline, and 57% of the liquefied natural gas (LNG) exports across the globe. It is headquartered in Doha, Qatar. History The idea of creating a forum as an official organization was first discussed at the meeting in 2001 in Tehran, but it was legally instituted after the idea was supported by Russia. Vladimir Putin, on a visit to Qatar, one of the largest gas-producing countries, reached an agreement with the Emir Hamad bin Khalifa Al Thani to coordinate activities in the gas sector. Until 2007, the GECF was a platform for the exchange of experience in the gas sector, which did not have a permanent leadership, budget and headquarters. But within the framework of this platform, high-level meetings were regularly held. At the 6th Ministerial Meeting of the GECF in Doha, it was decided to create a working group under the leadership of the Ministry of Industry and Energy of Russia to coordinate actions to form a full-fledged organization. This step was perceived as the inevitability of creating a gas analogue of OPEC. As a result, the agreement on the establishment of the organization with the preservation of the name of the Gas Exporting Countries Forum was signed a year later on December 23, 2008 at the 7th Ministerial Meeting in Moscow. Since 2008, the Forum has had three governing tools: the Ministerial Meeting (held once a year), the Executive Board Meeting and the Secretariat. On December 9, 2009 the Secretary General of the GECF was elected vice-president of "Stroytransgaz" Leonid Bokhanovskiy, whose candidacy was put forward for a vote by Russia. November 13, 2011, Leonid Bokhanovskiy was re-elected as Secretary General of the Forum. On November 15, 2011, a declaration was adopted at the first GECF summit in Doha. It confirmed the importance of natural gas for the world economy, determined the course for deepening the coordination of exporting countries and the need to establish fair gas prices and the principle of balanced distribution of risks for gas producers and consumers. In November 2013, the Iranian diplomat Seyed Mohammad Hossein Adeli, was elected Secretary General of the GECF and in November 2015 he was re-elected for a second term. At the third summit in 2015, the GECF presented a forecast for the development of the gas market until 2050. According to GECF analysts, the key to the successful development of the global gas industry is the growth of the economy and population. Analysts have determined that by 2050 the population will grow by 2.2 billion people and reach 9.8 billion. The main trend for the gas industry: energy will become more affordable, and this will provide almost 30% of additional demand. However, in 2020, analysts announced that due to the minimum oil price and the consequences of the pandemic, this forecast could be revised. According to GECF experts, the Asia-Pacific region, North America and the Middle East will become the regions-drivers of demand. The growth of future demand will be 39%, 24% and 13%, respectively. 
Demand in Europe will grow until 2030, and then there will be a gradual decline. This gas market forecast until 2050 is updated annually. In January 2018, Yuri Sentyurin became the 3rd Secretary General of the GECF. Angola joined the GECF in 2019 and Malaysia in 2020. Prospective participants include Mozambique, Tanzania, Senegal, Mauritania, Turkmenistan and Uzbekistan. In 2021, GECF sent an official submission to the United Nations in the wake of the Glasgow climate talks, in which the GECF complained that gas exporters were a victim of "cancel culture." Gas OPEC Since the establishment of the GECF in 2001 there has always been speculation that some of the world's largest producers of natural gas, in particular Russia and Iran, intend to create a gas cartel equivalent to OPEC which would set quotas and prices. The idea of a gas OPEC was first floated by Russian President Vladimir Putin and backed by Kazakh President Nursultan Nazarbaev in 2002. In May 2006 Gazprom deputy chairman Alexander Medvedev threatened that Russia would create "an alliance of gas suppliers that will be more influential than OPEC" if Russia did not get its way in energy negotiations with Europe. Iranian officials have explicitly expressed strong support for a gas cartel and held official talks with Russia. Cartel speculation was again raised when the ministers met on 9 April 2007. The 6th Ministerial Meeting of the GECF established an expert group, chaired by Russia, to study how to strengthen the GECF. According to the Algerian Energy and Mines Minister Chakib Khelil, this meant that in the long term the GECF would move toward becoming a gas OPEC. On 11 December 2009, Russia's Energy Minister Sergey Shmatko stated: "Today we can speak about gas OPEC as a fully fledged international organization. By a unanimous decision a Russian national was elected its secretary general. This is to show that member countries expect Russia to use its political weight to promote it." Creation of the Gas OPEC was one of the topics of the first GECF summit. However, some GECF members are concerned that gas exports could become politicized. The GECF generally refrains from coordinating production rates. According to GECF Secretary General Yuri Sentyurin, the issue of creating a "gas OPEC" is regularly raised at ministerial meetings. But unlike the oil market, the gas market has no single global market or unified pricing. In addition, the forum was originally conceived as a discussion platform; therefore, without changing the Charter, it is premature to talk about practical instruments by analogy with OPEC. Organisational structure The highest body of the GECF is the ministerial meeting. Between ministerial meetings, the work is organized through the Secretariat, headquartered in Doha, Qatar. The 2009 chairman of the GECF was Abdullah bin Hamad Al Attiyah and the vice chairman was Chakib Khelil. The Secretary-General is Mohammad Hamel. Secretaries-General Ministerial Meetings This meeting of senior government officials in the energy sector is the supreme authority of the Forum. The GECF has held ministerial meetings since 2001: Heads of State and Government Summits The Gas Summit is a meeting of the Heads of State and Government of the member countries of the Gas Exporting Countries Forum. The decision to hold GECF summits was taken at the 10th Ministerial Meeting in Oran in 2010.
The first GECF summit was held in Doha on 15 November 2011, under the patronage of Emir Sheikh Hamad bin Khalifa al-Thani, following the thirteenth ministerial meeting held at the same place on 13 November 2011. The two main issues discussed at the summit were natural gas prices and a common approach to the natural gas market. It was agreed at the summit that the price of gas used to generate electricity was too low and that the gap between prices for gas and crude oil needed to be narrowed. The linking of gas prices to the oil price was considered. However, the GECF will not set output limits for its members. The final communique issued was the Doha Declaration, which read that GECF members "recognized the importance of long-term gas contracts to achieve a balanced risk sharing mechanism between producers and consumers" and "acknowledge the need to reach a fair price for natural gas based on gas to oil/oil products prices indexation with the objective of an oil and gas price convergence ..." Russian president Dmitry Medvedev made a statement calling the summit "an important event, which marked a new stage in the development of the global energy sector and the gas sector in particular." The 2nd Gas Summit was held in Moscow on July 1, 2013. The key outcomes of the 2nd GECF Summit were reflected in the Moscow Declaration: "Natural gas: the answer to the 21st century sustainable development challenges." The final communique stresses the importance of the fundamental principles of long-term contracts that guarantee the safety of investments for producers and preservation of prices for consumers. The 3rd GECF Summit was held on 23 November 2015 in Tehran. The main topics were the transfer of expertise among member countries and the pricing mechanism for natural gas. The participants also called for cooperation in ensuring the security of natural gas supplies to world markets. The 4th GECF Summit convened in Santa Cruz, Bolivia on November 24, 2017. The outcome of the Summit was the Declaration of Santa Cruz de la Sierra. Its basic principles are promoting gas as a reliable, secure, clean source of energy; attracting investment to the global natural gas market; and a fair price for natural gas considering its energy efficiency and environmental benefits. The outcome of the 5th GECF summit in Malabo was the Declaration of Malabo, which stressed the importance of the role of natural gas for African countries. The GECF members specified the terms of contracts between producers and consumers, to ensure that pricing based on oil indexation serves the benefit of the member countries and the implementation of their projects. The Sixth Gas Summit of Heads of State and Government of GECF Member Countries will convene in Doha, Qatar on 18 November 2021. Membership The members are Algeria, Bolivia, Egypt, Equatorial Guinea, Iran, Libya, Nigeria, Qatar, Russia, Trinidad and Tobago and Venezuela; Angola, Azerbaijan, Iraq, Kazakhstan, Malaysia, Norway, Peru and the United Arab Emirates are observers. Other countries such as Turkmenistan, Brunei, Indonesia, Malaysia, and Yemen have participated in different meetings. Yemen is interested in becoming a member of the organisation. Any gas exporting country can become a member; full membership is granted with the approval of at least three quarters of all members at the ministerial meeting. Also, to become an observer, a country can apply to the Secretariat.
Such a resolution is adopted by a majority of three quarters of the members at the ministerial meeting. Observer members may attend ministerial plenary meetings and participate without the right to vote. See also Energy security Energy superpower Petrochemical Exporting Countries Forum References Bibliography External links GECF.org Gas Exporting Countries Forum: The Russian-Iranian Gas Cartel 2001 establishments in Iran Energy economics Energy policy International energy organizations International trade organizations Intergovernmental organizations Gas Natural gas organizations Organizations established in 2001 Organisations based in Doha
Gas Exporting Countries Forum
[ "Engineering", "Environmental_science" ]
2,232
[ "Energy economics", "Energy policy", "International energy organizations", "Natural gas organizations", "Environmental social science", "Energy organizations" ]
8,469,271
https://en.wikipedia.org/wiki/Abacus%20Harmonicus
Abacus Harmonicus, or Abacum Arithmetico-Harmonicum, is a table and tabular method described in Athanasius Kircher's comprehensive 1650 work on music, the Musurgia Vniversalis. The purpose is to generate counterpoint combinations. Also mentioned in early editions of the Encyclopædia Britannica, it is best described by the author's caption: "wonderful table that reveals all the secret art of counterpoint". References Counterpoint Mathematics of music Athanasius Kircher
Abacus Harmonicus
[ "Mathematics" ]
112
[ "Applied mathematics", "Mathematics of music" ]
8,469,771
https://en.wikipedia.org/wiki/CCL13
Chemokine (C-C motif) ligand 13 (CCL13) is a small cytokine belonging to the CC chemokine family. Its gene is located on human chromosome 17 within a large cluster of other CC chemokines. CCL13 induces chemotaxis in monocytes, eosinophils, T lymphocytes, and basophils by binding cell surface G-protein linked chemokine receptors such as CCR2, CCR3 and CCR5. Activity of this chemokine has been implicated in allergic reactions such as asthma. CCL13 can be induced by the inflammatory cytokines interleukin-1 and TNF-α. References Cytokines
CCL13
[ "Chemistry" ]
157
[ "Cytokines", "Signal transduction" ]
8,470,143
https://en.wikipedia.org/wiki/Anti-idiotypic%20vaccine
Anti-idiotypic vaccines consist of antibodies that have three-dimensional immunogenic regions, termed idiotopes, which consist of protein sequences that bind to cell receptors. Idiotopes are aggregated into idiotypes specific to their target antigen. An example of an anti-idiotype antibody is Racotumomab. Production and use To produce an anti-idiotypic vaccine, antibodies that bind tumor-associated antigens (TAA) are isolated and injected into mice. To the murine immune system, the TAA antibodies are antigens and cause an immunogenic reaction, producing murine antibodies that can bind to the "TAA idiotype" and are said to be "anti-idiotypic". The resulting murine antibodies are harvested and used to vaccinate other mice. The resulting antibodies in the second set of mice have a three-dimensional binding site that mimics the original antibodies that bind tumor-associated antigens. These antibodies are combined with an adjuvant and given as a vaccine. The murine immune system essentially "amplifies" a small mass of TAA antibodies into a much larger mass used to vaccinate humans. Because the antibody produced using the "anti-idiotypic" process closely resembles the original epitope of the antigen, these antibodies can be used to induce immune responses, from cellular to antibody–antigen responses, against a given antigen, e.g., TAA, when administered as a vaccine to a human. They are mainly used for high-risk cancer patients. References Ansel's Pharmaceutical Dosage Forms and Drug Delivery System (page 513) Vaccines
Anti-idiotypic vaccine
[ "Biology" ]
334
[ "Vaccination", "Vaccines" ]
8,470,584
https://en.wikipedia.org/wiki/Circumzenithal%20arc
The circumzenithal arc, also called the circumzenith arc (CZA), the upside-down rainbow, and the Bravais arc, is an optical phenomenon similar in appearance to a rainbow, but belonging to the family of halos arising from refraction of sunlight through ice crystals, generally in cirrus or cirrostratus clouds, rather than from raindrops. The arc is located a considerable distance (approximately 46°) above the observed Sun and at most forms a quarter of a circle centered on the zenith. It has been called "a smile in the sky", its first impression being that of an upside-down rainbow. The CZA is one of the brightest and most colorful members of the halo family. Its colors, ranging from violet on top to red at the bottom, are purer than those of a rainbow because there is much less overlap in their formation. The intensity distribution along the circumzenithal arc requires consideration of several effects: Fresnel's reflection and transmission amplitudes, atmospheric attenuation, chromatic dispersion (i.e. the width of the arc), azimuthal angular dispersion (ray bundling), and geometrical constraints. In effect, the CZA is brightest when the Sun is observed at about 20°. Contrary to public awareness, the CZA is not a rare phenomenon, but it tends to be overlooked, since it occurs so far overhead. It is worthwhile to look out for it when sun dogs are visible, since the same type of ice crystals that cause them are responsible for the CZA. Formation CZA is caused by ice crystals that form plate-shaped hexagonal prisms, in horizontal orientation. The light that forms the CZA enters an ice crystal through its flat top face, and exits through a side prism face. The refraction of almost-parallel sunlight through what is essentially a 90-degree prism accounts for the wide color separation and the purity of color. The CZA can only form when the sun is at an altitude lower than 32.2°. The CZA is brightest when the sun is at 22° above the horizon, which causes sunlight to enter and exit the crystals at the minimum deviation angle; then it is also about 22° in radius, 1.5° in width. The CZA radius varies between 32.2° and 0°, getting smaller with rising solar altitude. It is best observed with solar altitudes of about 15°-25°; towards either extreme, it is vanishingly faint. When the Sun is observed above 32.2°, light exits the crystals through the bottom face instead, contributing to the almost colorless parhelic circle. Because the phenomenon also requires that the ice crystals have a common orientation, it occurs only in the absence of turbulence and when there is no significant up- or downdraft. Lunar circumzenithal arc As with all halos, the CZA can be caused by light from the Moon as well as from the Sun: the former is referred to as a lunar circumzenithal arc. Its occurrence is rarer than solar CZA, since it requires the Moon to be sufficiently bright, which is typically only the case around full moon. Artificial circumzenithal arc A water glass experiment (known at least since 1920, cf. image on the right) may be used to create an artificial circumzenithal arc. Illuminating the top air-water interface of a nearly completely water-filled cylindrical glass under a shallow angle will refract the light into the water. The glass should be situated at the edge of a table. The second refraction at the cylinder's side face is then a skew-ray refraction. 
The overall refraction turns out to be equivalent to the refraction through an upright hexagonal plate crystal when the rotational averaging is taken into account. A colorful artificial circumzenithal arc will then appear projected on the floor. Other artificial halos can be created by similar means. See also Circumhorizontal arc Circumscribed halo Kern arc Sun dog References David K. Lynch and William Livingston. Color and Light in Nature. 2nd ed, 2004 printing. External links Atmospheric Optics - About CZAs Atmospheric Optics - Circumzenithal Arc Gallery Circumzenithal arc over Rome, Italy Timelapse video of weak Circumzenithal Arc Physics of the circumzenithal arc Circumzenithal Arc Over Frisco, TX | 1-23-11 | Clouds 365 Project - Year 2 Spaceweather.com Atmospheric optics expert Les Cowley created a diagram labeling the halos Images of artificial circumzenithal, circumhorizontal and suncave Parry arcs Italian Aviation Meteo service Geometrical optics Atmospheric optical phenomena
Circumzenithal arc
[ "Physics" ]
1,008
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
25,469,138
https://en.wikipedia.org/wiki/International%20Plumbing%20Code
The International Plumbing Code is a plumbing code and standard that sets minimum requirements for plumbing systems in their design and function, and which sets out rules for the acceptance of new plumbing-related technologies. It is published by the International Code Council based in Washington, D.C., through the governmental consensus process and updated on a three-year cycle to include the latest advances in technology and safest plumbing practices. The current version of this code is the 2021 edition. The IPC protects public health and safety in buildings for all water and wastewater related design, installation and inspection by providing minimum safeguards for plumbers and people at homes, schools and workplaces. Water heaters, anti-scalding devices, back-flow prevention methods, water pipe sizing and many other such issues are addressed in the IPC. Adoption The IPC is the most widely used plumbing code in the United States and is also used as the basis for the plumbing code of several other countries. Wide adoptions are important as they help reduce manufacturer and end-user costs by allowing the use of materials across a wide user base, thus allowing economies of scale in the production of materials used in construction. Uniformity in the codes adopted across many areas also allows a broader sharing of best building practices and techniques and improves the transferability of experts such as architects, engineers, code officials, building inspectors, and other building professionals among those different areas. More adoptions also invite broader participation in the formulation of the codes, which leads to the incorporation of the latest and best building techniques that enhance the safety of citizens in the areas using the codes. Some jurisdictions have adopted the International Plumbing Code in a way that gives it the force of law, while others have their own codes. Applications Regulatory applications The International Plumbing Code is used in a variety of ways in both the public and private sectors. Most industry professionals are familiar with International Building Codes as the basis of laws and regulations in communities across the U.S. and in other countries. Non-regulatory applications The impact of the IPC extends well beyond regulatory frameworks, as it is used in a variety of non-regulatory settings. Non-regulatory uses of the IPC include: Voluntary compliance programs such as those promoting sustainability, energy efficiency and disaster resistance. The insurance industry, to estimate and manage risk, and as a tool in underwriting and rate decisions. Certification and credentialing of individuals involved in the fields of building design, construction and safety. Certification of building and construction-related products. U.S. federal agencies, to guide construction in an array of government-owned properties. Facilities management. Benchmarking of best practices for designers and builders. Use by plumbers who are engaged in projects in jurisdictions that do not have a formal regulatory system or a governmental enforcement mechanism. College, university and professional school textbooks and curricula. Reference works related to building design and construction. References Plumbing Safety codes
International Plumbing Code
[ "Engineering" ]
590
[ "Construction", "Plumbing" ]
1,318,031
https://en.wikipedia.org/wiki/Robot%20kinematics
In robotics, robot kinematics applies geometry to the study of the movement of multi-degree of freedom kinematic chains that form the structure of robotic systems. The emphasis on geometry means that the links of the robot are modeled as rigid bodies and its joints are assumed to provide pure rotation or translation. Robot kinematics studies the relationship between the dimensions and connectivity of kinematic chains and the position, velocity and acceleration of each of the links in the robotic system, in order to plan and control movement and to compute actuator forces and torques. The relationship between mass and inertia properties, motion, and the associated forces and torques is studied as part of robot dynamics. Kinematic equations A fundamental tool in robot kinematics is the kinematics equations of the kinematic chains that form the robot. These non-linear equations are used to map the joint parameters to the configuration of the robot system. Kinematics equations are also used in biomechanics of the skeleton and computer animation of articulated characters. Forward kinematics uses the kinematic equations of a robot to compute the position of the end-effector from specified values for the joint parameters. The reverse process that computes the joint parameters that achieve a specified position of the end-effector is known as inverse kinematics. The dimensions of the robot and its kinematics equations define the volume of space reachable by the robot, known as its workspace. There are two broad classes of robots and associated kinematics equations: serial manipulators and parallel manipulators. Other types of systems with specialized kinematics equations are air, land, and submersible mobile robots, hyper-redundant, or snake, robots and humanoid robots. Forward kinematics Forward kinematics specifies the joint parameters and computes the configuration of the chain. For serial manipulators this is achieved by direct substitution of the joint parameters into the forward kinematics equations for the serial chain. For parallel manipulators, substitution of the joint parameters into the kinematics equations requires solution of a set of polynomial constraints to determine the set of possible end-effector locations. Inverse kinematics Inverse kinematics specifies the end-effector location and computes the associated joint angles. For serial manipulators this requires solution of a set of polynomials obtained from the kinematics equations and yields multiple configurations for the chain. The case of a general 6R serial manipulator (a serial chain with six revolute joints) yields sixteen different inverse kinematics solutions, which are solutions of a sixteenth degree polynomial. For parallel manipulators, the specification of the end-effector location simplifies the kinematics equations, which yields formulas for the joint parameters. Robot Jacobian The time derivative of the kinematics equations yields the Jacobian of the robot, which relates the joint rates to the linear and angular velocity of the end-effector. The principle of virtual work shows that the Jacobian also provides a relationship between joint torques and the resultant force and torque applied by the end-effector. Singular configurations of the robot are identified by studying its Jacobian. Velocity kinematics The robot Jacobian results in a set of linear equations that relate the joint rates to the six-vector formed from the angular and linear velocity of the end-effector, known as a twist.
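As a concrete illustration of forward and inverse kinematics, here is a minimal sketch for a planar two-revolute-joint (2R) arm. It is not taken from the article; the link lengths, function names and test point are arbitrary, and a real spatial manipulator would require the full kinematics equations of its chain.

```python
import math

def forward_2r(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics: joint angles (rad) -> end-effector position."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_2r(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics: one of the two joint solutions reaching (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))      # one elbow configuration
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = inverse_2r(1.2, 0.8)
print(forward_2r(t1, t2))   # recovers (1.2, 0.8) up to rounding
```

The other elbow configuration is obtained by negating theta2 and recomputing theta1, which reflects the point made above that inverse kinematics generally yields multiple configurations for the chain.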
Specifying the joint rates yields the end-effector twist directly. The inverse velocity problem seeks the joint rates that provide a specified end-effector twist. This is solved by inverting the Jacobian matrix. It can happen that the robot is in a configuration where the Jacobian does not have an inverse. These are termed singular configurations of the robot. Static force analysis The principle of virtual work yields a set of linear equations that relate the resultant force-torque six vector, called a wrench, that acts on the end-effector to the joint torques of the robot. If the end-effector wrench is known, then a direct calculation yields the joint torques. The inverse statics problem seeks the end-effector wrench associated with a given set of joint torques, and requires the inverse of the Jacobian matrix. As in the case of inverse velocity analysis, at singular configurations this problem cannot be solved. However, near singularities small actuator torques result in a large end-effector wrench. Thus near singularity configurations robots have large mechanical advantage. Fields of study Robot kinematics also deals with motion planning, singularity avoidance, redundancy, collision avoidance, as well as the kinematic synthesis of robots. See also Robotics conventions Mobile robot Robot locomotion References
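Continuing the planar 2R example (again an illustrative sketch of my own, not from the article), the Jacobian below maps joint rates to end-effector linear velocity; its determinant l1*l2*sin(theta2) vanishes at the singular configurations theta2 = 0 or pi, and its transpose maps an end-effector force to joint torques, in line with the virtual-work relationship described above.

```python
import math

def jacobian_2r(theta1, theta2, l1=1.0, l2=1.0):
    """2x2 Jacobian of the planar 2R arm (end-effector position only)."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def matvec(J, q):
    return [J[0][0] * q[0] + J[0][1] * q[1],
            J[1][0] * q[0] + J[1][1] * q[1]]

J = jacobian_2r(0.3, 0.9)
print(matvec(J, [0.1, -0.2]))                    # end-effector velocity v = J * dtheta
print(J[0][0] * J[1][1] - J[0][1] * J[1][0])     # det(J) = sin(0.9) here: nonzero, so not singular
Jt = [[J[0][0], J[1][0]], [J[0][1], J[1][1]]]
print(matvec(Jt, [1.0, 0.0]))                    # joint torques tau = J^T * F for a unit force along x
```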
Robot kinematics
[ "Engineering" ]
970
[ "Robotics engineering", "Robot kinematics" ]
1,318,037
https://en.wikipedia.org/wiki/Screw%20theory
Screw theory is the algebraic calculation of pairs of vectors, also known as dual vectors – such as angular and linear velocity, or forces and moments – that arise in the kinematics and dynamics of rigid bodies. Screw theory provides a mathematical formulation for the geometry of lines which is central to rigid body dynamics, where lines form the screw axes of spatial movement and the lines of action of forces. The pair of vectors that form the Plücker coordinates of a line define a unit screw, and general screws are obtained by multiplication by a pair of real numbers and addition of vectors. Important theorems of screw theory include: the transfer principle proves that geometric calculations for points using vectors have parallel geometric calculations for lines obtained by replacing vectors with screws; Chasles' theorem proves that any change between two rigid object poses can be performed by a single screw; Poinsot's theorem proves that rotations about a rigid object's major and minor – but not intermediate – axes are stable. Screw theory is an important tool in robot mechanics, mechanical design, computational geometry and multibody dynamics. This is in part because of the relationship between screws and dual quaternions which have been used to interpolate rigid-body motions. Based on screw theory, an efficient approach has also been developed for the type synthesis of parallel mechanisms (parallel manipulators or parallel robots). Basic concepts A spatial displacement of a rigid body can be defined by a rotation about a line and a translation along the same line, called a screw motion. This is known as Chasles' theorem. The six parameters that define a screw motion are the four independent components of the Plücker vector that defines the screw axis, together with the rotation angle about and linear slide along this line, and form a pair of vectors called a screw. For comparison, the six parameters that define a spatial displacement can also be given by three Euler angles that define the rotation and the three components of the translation vector. Screw A screw is a six-dimensional vector constructed from a pair of three-dimensional vectors, such as forces and torques and linear and angular velocity, that arise in the study of spatial rigid body movement. The components of the screw define the Plücker coordinates of a line in space and the magnitudes of the vector along the line and moment about this line. Twist A twist is a screw used to represent the velocity of a rigid body as an angular velocity around an axis and a linear velocity along this axis. All points in the body have the same component of the velocity along the axis, however the greater the distance from the axis the greater the velocity in the plane perpendicular to this axis. Thus, the helicoidal field formed by the velocity vectors in a moving rigid body flattens out the further the points are radially from the twist axis. The points in a body undergoing a constant twist motion trace helices in the fixed frame. If this screw motion has zero pitch then the trajectories trace circles, and the movement is a pure rotation. If the screw motion has infinite pitch then the trajectories are all straight lines in the same direction. Wrench The force and torque vectors that arise in applying Newton's laws to a rigid body can be assembled into a screw called a wrench. A force has a point of application and a line of action, therefore it defines the Plücker coordinates of a line in space and has zero pitch.
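As a small illustration of the Plücker coordinates mentioned above (a sketch of my own, not from the article), a line through two points p and q can be encoded as the pair (direction, moment), and a force F applied at a point p gives a zero-pitch wrench (F, p × F) in the same way:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def plucker_line(p, q):
    """Plücker coordinates of the line through p and q: (direction d, moment p x d)."""
    d = tuple(qi - pi for pi, qi in zip(p, q))
    return d, cross(p, d)

def force_wrench(F, p):
    """Zero-pitch wrench of a force F acting at the point p."""
    return F, cross(p, F)

print(plucker_line((0, 0, 0), (1, 0, 0)))   # x-axis: ((1, 0, 0), (0, 0, 0))
print(plucker_line((0, 1, 0), (1, 1, 0)))   # parallel line through (0,1,0): ((1, 0, 0), (0, 0, -1))
print(force_wrench((0, 0, 2), (1, 0, 0)))   # force of 2 along z applied at x=1: ((0, 0, 2), (0, -2, 0))
```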
A torque, on the other hand, is a pure moment that is not bound to a line in space and is an infinite pitch screw. The ratio of these two magnitudes defines the pitch of the screw. Algebra of screws Let a screw be an ordered pair where and are three-dimensional real vectors. The sum and difference of these ordered pairs are computed componentwise. Screws are often called dual vectors. Now, introduce the ordered pair of real numbers , called a dual scalar. Let the addition and subtraction of these numbers be componentwise, and define multiplication as The multiplication of a screw by the dual scalar is computed componentwise to be, Finally, introduce the dot and cross products of screws by the formulas: which is a dual scalar, and which is a screw. The dot and cross products of screws satisfy the identities of vector algebra, and allow computations that directly parallel computations in the algebra of vectors. Let the dual scalar define a dual angle, then the infinite series definitions of sine and cosine yield the relations which are also dual scalars. In general, the function of a dual variable is defined to be , where ′(φ) is the derivative of (φ). These definitions allow the following results: Unit screws are Plücker coordinates of a line and satisfy the relation Let be the dual angle, where φ is the angle between the axes of S and T around their common normal, and d is the distance between these axes along the common normal, then Let N be the unit screw that defines the common normal to the axes of S and T, and is the dual angle between these axes, then Wrench A common example of a screw is the wrench associated with a force acting on a rigid body. Let P be the point of application of the force F and let P be the vector locating this point in a fixed frame. The wrench is a screw. The resultant force and moment obtained from all the forces Fi, , acting on a rigid body is simply the sum of the individual wrenches Wi, that is Notice that the case of two equal but opposite forces F and −F acting at points A and B respectively, yields the resultant This shows that screws of the form can be interpreted as pure moments. Twist In order to define the twist of a rigid body, we must consider its movement defined by the parameterized set of spatial displacements, , where [A] is a rotation matrix and d is a translation vector. This causes a point p that is fixed in moving body coordinates to trace a curve P(t) in the fixed frame given by The velocity of P is where v is velocity of the origin of the moving frame, that is dd/dt. Now substitute p =  [AT](P − d) into this equation to obtain, where [Ω] = [dA/dt][AT] is the angular velocity matrix and ω is the angular velocity vector. The screw is the twist of the moving body. The vector V = v + d × ω is the velocity of the point in the body that corresponds with the origin of the fixed frame. 
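The dual-scalar and screw products described in this section can be written out directly. The following is a minimal sketch under the usual conventions (screws as ordered pairs of 3-vectors, dual scalars as ordered pairs of reals with the componentwise rules given above); the function names and test values are mine, not the article's.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dual_mul(a, b):
    """Product of dual scalars (a0 + a1*eps)(b0 + b1*eps) with eps^2 = 0."""
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def screw_dot(S, T):
    """Dual-scalar dot product of screws S = (s, v) and T = (t, w)."""
    (s, v), (t, w) = S, T
    return (dot(s, t), dot(s, w) + dot(v, t))

def screw_cross(S, T):
    """Screw-valued cross product of S = (s, v) and T = (t, w)."""
    (s, v), (t, w) = S, T
    return (cross(s, t), tuple(a + b for a, b in zip(cross(s, w), cross(v, t))))

S = ((1, 0, 0), (0, 2, 0))
T = ((0, 1, 0), (0, 0, 3))
print(dual_mul((2, 1), (3, 4)))   # (6, 11)
print(screw_dot(S, T))            # (0, 2)
print(screw_cross(S, T))          # ((0, 0, 1), (0, -3, 0))
```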
There are two important special cases: (i) when d is constant, that is v = 0, then the twist is a pure rotation about a line, then the twist is and (ii) when [Ω] = 0, that is the body does not rotate but only slides in the direction v, then the twist is a pure slide given by Revolute joints For a revolute joint, let the axis of rotation pass through the point q and be directed along the vector ω, then the twist for the joint is given by, Prismatic joints For a prismatic joint, let the vector v pointing define the direction of the slide, then the twist for the joint is given by, Coordinate transformation of screws The coordinate transformations for screws are easily understood by beginning with the coordinate transformations of the Plücker vector of line, which in turn are obtained from the transformations of the coordinate of points on the line. Let the displacement of a body be defined by D = ([A], d), where [A] is the rotation matrix and d is the translation vector. Consider the line in the body defined by the two points p and q, which has the Plücker coordinates, then in the fixed frame we have the transformed point coordinates P = [A]p + d and Q = [A]q + d, which yield. Thus, a spatial displacement defines a transformation for Plücker coordinates of lines given by The matrix [D] is the skew-symmetric matrix that performs the cross product operation, that is [D]y = d × y. The 6×6 matrix obtained from the spatial displacement D = ([A], d) can be assembled into the dual matrix which operates on a screw s = (s.v) to obtain, The dual matrix [Â] = ([A], [DA]) has determinant 1 and is called a dual orthogonal matrix. Twists as elements of a Lie algebra Consider the movement of a rigid body defined by the parameterized 4x4 homogeneous transform, This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which is hopefully clear in context. The velocity of this movement is defined by computing the velocity of the trajectories of the points in the body, The dot denotes the derivative with respect to time, and because p is constant its derivative is zero. Substitute the inverse transform for p into the velocity equation to obtain the velocity of P by operating on its trajectory P(t), that is where Recall that [Ω] is the angular velocity matrix. The matrix [S] is an element of the Lie algebra se(3) of the Lie group SE(3) of homogeneous transforms. The components of [S] are the components of the twist screw, and for this reason [S] is also often called a twist. From the definition of the matrix [S], we can formulate the ordinary differential equation, and ask for the movement [T(t)] that has a constant twist matrix [S]. The solution is the matrix exponential This formulation can be generalized such that given an initial configuration g(0) in SE(n), and a twist ξ in se(n), the homogeneous transformation to a new location and orientation can be computed with the formula, where θ represents the parameters of the transformation. Screws by reflection In transformation geometry, the elemental concept of transformation is the reflection (mathematics). In planar transformations a translation is obtained by reflection in parallel lines, and rotation is obtained by reflection in a pair of intersecting lines. To produce a screw transformation from similar concepts one must use planes in space: the parallel planes must be perpendicular to the screw axis, which is the line of intersection of the intersecting planes that generate the rotation of the screw. 
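As a sketch of the joint twists just described (under one common sign convention, consistent with the point-velocity expression above; the function names are mine, not the article's), a revolute joint with axis direction ω through the point q has twist (ω, q × ω), and a prismatic joint sliding along v has twist (0, v):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def revolute_twist(omega, q):
    """Twist of a revolute joint: rotation axis omega passing through point q."""
    return (omega, cross(q, omega))

def prismatic_twist(v):
    """Twist of a prismatic joint sliding along v: zero angular part."""
    return ((0.0, 0.0, 0.0), v)

# Rotation about a vertical axis through the point (1, 0, 0):
print(revolute_twist((0, 0, 1), (1, 0, 0)))   # ((0, 0, 1), (0, -1, 0))
print(prismatic_twist((0.5, 0.0, 0.0)))       # pure slide along x
```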
Thus four reflections in planes effect a screw transformation. The tradition of inversive geometry borrows some of the ideas of projective geometry and provides a language of transformation that does not depend on analytic geometry. Homography The combination of a translation with a rotation effected by a screw displacement can be illustrated with the exponential mapping. Since ε2 = 0 for dual numbers, exp(aε) = 1 + aε, all other terms of the exponential series vanishing. Let F = {1 + εr : r ∈ H}, ε2 = 0. Note that F is stable under the rotation and under the translation for any vector quaternions r and s. F is a 3-flat in the eight-dimensional space of dual quaternions. This 3-flat F represents space, and the homography constructed, restricted to F, is a screw displacement of space. Let a be half the angle of the desired turn about axis r, and br half the displacement on the screw axis. Then form and . Now the homography is The inverse for z* is so, the homography sends q to Now for any quaternion vector p, , let , where the required rotation and translation are effected. Evidently the group of units of the ring of dual quaternions is a Lie group. A subgroup has Lie algebra generated by the parameters a r and b s, where , and . These six parameters generate a subgroup of the units, the unit sphere. Of course it includes F and the 3-sphere of versors. Work of forces acting on a rigid body Consider the set of forces F1, F2 ... Fn act on the points X1, X2 ... Xn in a rigid body. The trajectories of Xi, i = 1,...,n are defined by the movement of the rigid body with rotation [A(t)] and the translation d(t) of a reference point in the body, given by where xi are coordinates in the moving body. The velocity of each point Xi is where ω is the angular velocity vector and v is the derivative of d(t). The work by the forces over the displacement δri=viδt of each point is given by Define the velocities of each point in terms of the twist of the moving body to obtain Expand this equation and collect coefficients of ω and v to obtain Introduce the twist of the moving body and the wrench acting on it given by then work takes the form The 6×6 matrix [Π] is used to simplify the calculation of work using screws, so that where and [I] is the 3×3 identity matrix. Reciprocal screws If the virtual work of a wrench on a twist is zero, then the forces and torque of the wrench are constraint forces relative to the twist. The wrench and twist are said to be reciprocal, that is if then the screws W and T are reciprocal. Twists in robotics In the study of robotic systems the components of the twist are often transposed to eliminate the need for the 6×6 matrix [Π] in the calculation of work. In this case the twist is defined to be so the calculation of work takes the form In this case, if then the wrench W is reciprocal to the twist T. History The mathematical framework was developed by Sir Robert Stawell Ball in 1876 for application in kinematics and statics of mechanisms (rigid body mechanics). Felix Klein saw screw theory as an application of elliptic geometry and his Erlangen Program. He also worked out elliptic geometry, and a fresh view of Euclidean geometry, with the Cayley–Klein metric. The use of a symmetric matrix for a von Staudt conic and metric, applied to screws, has been described by Harvey Lipkin. Other prominent contributors include Julius Plücker, W. K. Clifford, F. M. Dimentberg, Kenneth H. Hunt, J. R. Phillips. 
The homography idea in transformation geometry was advanced by Sophus Lie more than a century ago. Even earlier, William Rowan Hamilton displayed the versor form of unit quaternions as exp(a r)= cos a + r sin a. The idea is also in Euler's formula parametrizing the unit circle in the complex plane. William Kingdon Clifford initiated the use of dual quaternions for kinematics, followed by Aleksandr Kotelnikov, Eduard Study (Geometrie der Dynamen), and Wilhelm Blaschke. However, the point of view of Sophus Lie has recurred. In 1940, Julian Coolidge described the use of dual quaternions for screw displacements on page 261 of A History of Geometrical Methods. He notes the 1885 contribution of Arthur Buchheim. Coolidge based his description simply on the tools Hamilton had used for real quaternions. See also Screw axis Newton–Euler equations uses screws to describe rigid body motions and loading. Twist (differential geometry) Twist (rational trigonometry) References External links Joe Rooney William Kingdon Clifford, Department of Design and Innovation, the Open University, London. Ravi Banavar notes on Robotics, Geometry and Control Mechanical engineering Mechanics Rigid bodies Kinematics fr:Torseur
Screw theory
[ "Physics", "Technology", "Engineering" ]
3,217
[ "Machines", "Kinematics", "Applied and interdisciplinary physics", "Physical phenomena", "Classical mechanics", "Physical systems", "Motion (physics)", "Mechanics", "Mechanical engineering" ]
1,320,360
https://en.wikipedia.org/wiki/Spectral%20power%20distribution
In radiometry, photometry, and color science, a spectral power distribution (SPD) measurement describes the power per unit area per unit wavelength of an illumination (radiant exitance). More generally, the term spectral power distribution can refer to the concentration, as a function of wavelength, of any radiometric or photometric quantity (e.g. radiant energy, radiant flux, radiant intensity, radiance, irradiance, radiant exitance, radiosity, luminance, luminous flux, luminous intensity, illuminance, luminous emittance). Knowledge of the SPD is crucial for optical-sensor system applications. Optical properties such as transmittance, reflectivity, and absorbance as well as the sensor response are typically dependent on the incident wavelength. Physics Mathematically, for the spectral power distribution of a radiant exitance or irradiance one may write M(λ) = ∂²Φ/(∂A ∂λ) ≈ Φ/(A Δλ), where M(λ) is the spectral irradiance (or exitance) of the light (SI units: W/m3 = kg·m−1·s−3); Φ is the radiant flux of the source (SI unit: watt, W); A is the area over which the radiant flux is integrated (SI unit: square meter, m2); and λ is the wavelength (SI unit: meter, m). (Note that it is more convenient to express the wavelength of light in terms of nanometers; spectral exitance would then be expressed in units of W·m−2·nm−1.) The approximation is valid when the area and wavelength interval are small. Relative SPD The ratio of spectral concentration (irradiance or exitance) at a given wavelength to the concentration at a reference wavelength provides the relative SPD. This can be written as M_rel(λ) = M(λ)/M(λ0), where λ0 is the reference wavelength. Because the luminance of lighting fixtures and other light sources is handled separately, a spectral power distribution may be normalized in some manner, often to unity at 555 or 560 nanometers, coinciding with the peak of the eye's luminosity function. Responsivity The SPD can be used to determine the response of a sensor at a specified wavelength. This compares the output power of the sensor to the input power as a function of wavelength, and can be generalized as the responsivity R(λ) = S(λ)/M(λ), where S(λ) is the sensor output. Knowing the responsivity is beneficial for the choice of illumination, interacting material components, and optical components to optimize the performance of a system's design. Source SPD and matter The spectral power distribution over the visible spectrum from a source can have varying concentrations of relative SPDs. The interactions between light and matter affect the absorption and reflectance properties of materials and subsequently produce a color that varies with source illumination. For example, the relative spectral power distribution of the sun produces a white appearance if observed directly, but when the sunlight illuminates the Earth's atmosphere the sky appears blue under normal daylight conditions. This stems from the optical phenomenon called Rayleigh scattering, which produces a concentration of shorter wavelengths and hence the blue color appearance. Source SPD and color appearance The human visual response relies on trichromacy to process color appearance. While the human visual response integrates over all wavelengths, the relative spectral power distribution will provide color appearance modeling information as the concentration of wavelength band(s) will become the primary contributors to the perceived color. This becomes useful in photometry and colorimetry as the perceived color changes with source illumination and spectral distribution and coincides with metamerisms where an object's color appearance changes.
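The relative-SPD normalization described above is straightforward to compute. The following is a small illustrative sketch; the sampled values are invented, not measured data.

```python
def relative_spd(spd, reference_wavelength=560):
    """Normalize an SPD (wavelength in nm -> spectral power) to 1.0 at the
    reference wavelength, as is commonly done near the peak of the
    luminosity function."""
    ref = spd[reference_wavelength]
    return {wl: value / ref for wl, value in spd.items()}

# Toy SPD sampled every 20 nm (arbitrary units):
spd = {500: 0.8, 520: 0.9, 540: 1.1, 560: 1.2, 580: 1.0, 600: 0.7}
print(relative_spd(spd))   # the value at 560 nm becomes 1.0
```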
The spectral makeup of the source can also coincide with color temperature producing differences in color appearance due to the source's temperature. See also References External links Spectral Power Distribution Curves, GE Lighting. Radiometry Color Lighting Physical quantities
Spectral power distribution
[ "Physics", "Mathematics", "Engineering" ]
752
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Quantity", "Physical properties", "Radiometry" ]
1,321,000
https://en.wikipedia.org/wiki/Carboplatin
Carboplatin, sold under the brand name Paraplatin among others, is a chemotherapy medication used to treat a number of forms of cancer. This includes ovarian cancer, lung cancer, head and neck cancer, brain cancer, and neuroblastoma. It is used by injection into a vein. Side effects generally occur. Common side effects include low blood cell levels, nausea, and electrolyte problems. Other serious side effects include allergic reactions and mutagenesis. It may be carcinogenic, but further research is needed to confirm this. Use during pregnancy may result in harm to the baby. Carboplatin is in the platinum-based antineoplastic family of medications and works by interfering with duplication of DNA. Carboplatin was developed as a less toxic analogue of cisplatin. It was patented in 1972 and approved for medical use in 1989. It is on the 2023 World Health Organization's List of Essential Medicines. Medical uses Carboplatin is used to treat a number of forms of cancer. This includes ovarian cancer, lung cancer, head and neck cancer, brain cancer, and neuroblastoma. It may be used for some types of testicular cancer but cisplatin is generally more effective. It has also been used to treat triple-negative breast cancer. Side effects Relative to cisplatin, the greatest benefit of carboplatin is its reduced side effects, particularly the elimination of nephrotoxic effects. Nausea and vomiting are less severe and more easily controlled. The main drawback of carboplatin is its myelosuppressive effect. This causes the blood cell and platelet output of bone marrow in the body to decrease quite dramatically, sometimes as low as 10% of its usual production levels. The nadir of this myelosuppression usually occurs 21–28 days after the first treatment, after which the blood cell and platelet levels in the blood begin to stabilize, often coming close to its pre-carboplatin levels. This decrease in white blood cells (neutropenia) can cause complications, and is sometimes treated with drugs like filgrastim. The most notable complication of neutropenia is increased probability of infection by opportunistic organisms, which necessitates hospital readmission and treatment with antibiotics.
Mechanism of action Carboplatin differs from cisplatin in that it has a bidentate dicarboxylate (the ligand is cyclobutane dicarboxylate, CBDCA) in place of the two chloride ligands. Both drugs are alkylating agents. CBDCA and chloride are the leaving groups in these respective drugs. Carboplatin exhibits slower aquation (replacement of CBDCA by water) and thus slower DNA binding kinetics, although it forms the same reaction products in vitro at equivalent doses with cisplatin. Unlike cisplatin, carboplatin may be susceptible to alternative mechanisms. Some results show that cisplatin and carboplatin cause different morphological changes in MCF-7 cell lines while exerting their cytotoxic behaviour. The diminished reactivity limits protein-carboplatin complexes, which are excreted. The lower excretion rate of carboplatin means that more is retained in the body, and hence its effects are longer lasting (a retention half-life of 30 hours for carboplatin, compared to 1.5-3.6 hours in the case of cisplatin). Like cisplatin, carboplatin binds to and cross-links DNA, interfering with the replication and suppressing growth of the cancer cell. Dose Calculation - Calvert Equation Prior to 1989, most carboplatin dosing used body surface area dosing as with other chemotherapy. However, toxicity from treatment was variable, and therefore Professor Hillary Calvert (University of Newcastle) developed a formula to dose carboplatin based on renal function. Calvert's formula considers the creatinine clearance and the desired area under the curve. After 24 hours, close to 70% of carboplatin is excreted in the urine unchanged. This means that the dose of carboplatin must be adjusted for any impairment in kidney function. Calvert formula: dose (mg) = target AUC (mg/mL·min) × (GFR (mL/min) + 25). The typical area under the curve (AUC) for carboplatin ranges from 3-7 (mg/ml)*min. GFR (glomerular filtration rate) is a measure or estimate of kidney function. This is either measured, by measuring clearance of a radioisotope, or estimated using serum and (sometimes) urine creatinine measurements. The Calvert formula was developed in 18 patients with GFR measurements up to 133 ml/min. Its applicability at very high doses of carboplatin has been challenged, and in the US the Food and Drug Administration has recommended capping GFR at 125 ml/min. This may be more important where dosing is based on calculations using more modern methods of creatinine measurement. The approach is not supported by all clinicians and certainly less so in those treating seminomas. Synthesis Cisplatin reacts with silver nitrate and then cyclobutane-1,1-dicarboxylic acid to form carboplatin. History Carboplatin, a cisplatin analogue, was developed by Bristol Myers Squibb and the Institute of Cancer Research in order to reduce the toxicity of cisplatin. It gained U.S. Food and Drug Administration (FDA) approval, under the brand name Paraplatin, in March 1989. Starting in October 2004, generic versions of the drug became available. Research Carboplatin has also been used for adjuvant therapy of stage 1 seminomatous testicular cancer. Research has indicated that it is not less effective than adjuvant radiotherapy for this treatment, while having fewer side effects. This has led to carboplatin based adjuvant therapy being generally preferred over adjuvant radiotherapy in clinical practice. Carboplatin combined with hexadecyl chain and polyethylene glycol appears to have increased liposolubility and PEGylation. This is useful in chemotherapy, specifically for non-small cell lung cancer.
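To illustrate the arithmetic of the Calvert formula discussed above, here is a small sketch for illustration only, not clinical guidance; the function name is mine, and the optional 125 mL/min cap reflects the FDA recommendation mentioned in the text.

```python
def carboplatin_dose_mg(target_auc, gfr_ml_min, cap_gfr=True):
    """Calvert formula: dose (mg) = target AUC (mg/mL*min) x (GFR + 25)."""
    gfr = min(gfr_ml_min, 125.0) if cap_gfr else gfr_ml_min
    return target_auc * (gfr + 25.0)

print(carboplatin_dose_mg(5, 90))    # 5 x (90 + 25) = 575 mg
print(carboplatin_dose_mg(6, 140))   # GFR capped at 125 -> 6 x 150 = 900 mg
```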
References Further reading Ammine complexes Coordination complexes Cyclobutanes Organoplatinum compounds Platinum(II) compounds Platinum-based antineoplastic agents World Health Organization essential medicines
Carboplatin
[ "Chemistry" ]
1,905
[ "Coordination chemistry", "Coordination complexes" ]
1,321,047
https://en.wikipedia.org/wiki/Commercial%20animal%20cloning
Commercial animal cloning is the cloning of animals for commercial purposes, including animal husbandry, medical research, competition camels and horses, pet cloning, and restoring populations of endangered and extinct animals. The practice was first demonstrated in 1996 with Dolly the sheep. Cloning methods Moving or copying all (or nearly all) genes from one animal to form a second, genetically nearly identical, animal is usually done using one of three methods: the Roslin technique, the Honolulu technique, or Artificial Twinning. The first two of these involve a process known as somatic cell nuclear transfer. In this process, an oocyte is taken from a surrogate mother and undergoes enucleation, a process that removes the nucleus from inside the oocyte. Somatic cells are then taken from the animal that is being cloned, transferred into the blank oocyte in order to provide genetic material, and fused with the oocyte using an electrical current. The oocyte is then activated and re-inserted into the surrogate mother. The result is the formation of an animal that is almost genetically identical to the animal the somatic cells were taken from. While somatic cell nuclear transfer was previously believed to only work using genetic material from somatic cells that were unfrozen or were frozen with cryoprotectant (to avoid cell damage caused by freezing), successful dog cloning in various breeds has now been shown using somatic cells from unprotected specimens that had been frozen for up to four days. The third method of cloning involves embryo splitting, the process of taking the blastomeres from a very early animal embryo and separating them before they become differentiated in order to create two or more separate organisms. When using embryo splitting, cloning must occur before the birth of the animal, and clones grow up at the same time (in a similar fashion to monozygotic twins). Livestock cloning The US Food and Drug Administration has concluded that "Food from cattle, swine, and goat clones is as safe to eat as food from any other cattle, swine, or goat." It has also been noted that the main use of agricultural clones is to produce breeding stock, not food. Clones allow farmers to upgrade the overall quality of their herds by producing more copies of the best animals in the herd. These animals are then used for conventional breeding, and the sexually reproduced offspring become the food producing animals. The goals of cloning listed by the FDA include "disease resistance ... suitability to climate ... quality body type .. fertility ... and market preference (leanness, tenderness, color, size of various cuts, etc.)" Milk productivity is another desirable trait that cloning is used for, including in the case of cloned "supercows". Medical uses Organs from cloned pigs have been transplanted into human patients. (See Xenotransplantation) Cancer-sniffing dogs have also been cloned. A review concluded that "qualified elite working dogs can be produced by cloning a working dog that exhibits both an appropriate temperament and good health." Other working animals with high performance Cloning of super sniffer dogs for airports was reported in 2011, four years after the dog that served as their genetic donor retired. Cloning of a successful rescue dog was reported in 2009 and of a police dog in 2019. Endangered and extinct animals The only extinct animal to be cloned as of 2022 is a Pyrenean ibex, born on July 30, 2003, in Spain, which died minutes later due to physical defects in the lungs. 
Some animals have been cloned to add genetic diversity to endangered species with small remaining populations, thereby avoiding inbreeding depression. Centers performing this include ViaGen, aided by the San Diego Frozen Zoo, and Revive & Restore. This is also referred to as "conservation cloning". Two examples are the black-footed ferret and Przewalski's horse. In 2022, the world's first cloned Arctic wolf, "Maya", was born in Beijing, cloned by Sinogene. Although Arctic wolves are no longer listed by the IUCN Red List as an endangered species, the technique can be used to help other animals at risk of extinction, such as Mexican gray wolves and red wolves. The Sinogene team plans to restore lost species or boost numbers in endangered animal populations. In a recent study using sturgeons, scientists have made improvements to a technique known as somatic cell nuclear transfer, with the ultimate goal of saving endangered species. Sturgeons are endangered due to high levels of poaching, habitat destruction, water pollution, and overfishing. The somatic cell nuclear transfer technique is a well-known cloning method that has been used for years but has focused on species that are thriving rather than endangered or extinct species. This technique usually uses a single somatic donor cell with a single manipulation and inserts it into a recipient egg of the species of interest. It has recently been found that the position at which that somatic cell sits inside the recipient egg is very important for successful cloning. By adjusting the original single-cell method and instead inserting multiple somatic donor cells into the recipient egg, the likelihood that a donor cell will occupy the crucial position on the egg increases greatly, resulting in higher cloning success rates. Research using this improved method is ongoing, but the data collected so far suggest it could help keep species such as sturgeons from becoming endangered and possibly prevent extinctions. Cloning long-extinct animals using current methods is impossible because DNA begins to degrade after death, meaning the entire genome of an extinct species is not available to be reproduced. However, new studies using genome editing have suggested it may be possible to "bring back" traits of extinct species by incorporating genes from the extinct species into the genome of a closely related living organism. Currently, George Church's lab at Harvard University's Wyss Institute is conducting research into genetically modifying Asian elephants to express genes from the extinct woolly mammoth. Their goals in doing this are to expand the habitat available to Asian elephants and reestablish the ecological interactions woolly mammoths played a role in prior to their extinction. History and commercialization ViaGen began offering cloning to the livestock and equine industry in 2003, and later, as ViaGen Pets, added cloning of cats and dogs in 2016. ViaGen's subsidiary, stART Licensing, owns a cloning patent which is licensed to its only competitor as of 2018, which also offers animal cloning services. (ViaGen is a subsidiary of Precigen.) The first commercially cloned pet was a cat named Little Nicky, produced in 2004 by Genetic Savings & Clone for a North Texas woman for a fee of US$50,000. 
On May 21, 2008, BioArts International announced a limited commercial dog cloning service (through a program it called Best Friends Again) in partnership with a Korean company named Sooam Biotech. This program came after the announcement of the successful cloning of a family dog named Missy, an achievement widely publicized in the Missyplicity Project. In September 2009, BioArts announced the end of its dog cloning service. In July 2008, the Seoul National University (creators of Snuppy, reputedly the world's first cloned dog, in 2005) created five clones of a dog named Booger for its Californian owner. The woman paid $50,000 for this service. Sooam Biotech continued developing proprietary techniques for cloning dogs based on a licence from ViaGen's subsidiary, stART Licensing, which owned the original patent for the process of animal cloning (although the animal itself is not patentable, the process is protected by a patent). Sooam created cloned puppies for owners whose dogs had died, charging $100,000 per clone. Sooam Biotech was reported to have cloned approximately 700 dogs by 2015 and to be producing 500 cloned embryos of various breeds a day in 2016. In 2015, the longest period after which Sooam Biotech could clone a puppy was 12 days from the death of the original pet dog. Sinogene Biotechnology created the first Chinese cloned dog in 2017 before commercializing the cloning service and joining the pet cloning market. In 2019, Sinogene successfully created the first Chinese cloned cat. In June 2022, the cloned horse "Zhuang Zhuang" was produced by the Beijing laboratory Sinogene. He is the first from the "warmblood" group of breeds to be born in China and to be officially approved by the China Horse Industry Association. Controversies Animal welfare The mortality rate for cloned animals is higher than for those born of natural processes. This includes discrepancies in survival rates and quality of life before, during, and after birth, leading to ethical concerns. Many of these discrepancies are thought to come from maternal mRNA already present in the oocyte prior to the transfer of genetic material as well as from DNA methylation, both of which contribute to the development of the animal in the womb of the surrogate. Some common issues seen with cloned animals are shortened telomeres, the repetitive end sequences of DNA whose decreasing length over the lifespan of an organism has been associated with aging; large offspring syndrome, the abnormal size of cloned individuals due to epigenetic (gene expression) changes; and methylation patterns of genetic material that are so abnormal compared to standard embryos of the species being cloned as to be incompatible with life. Pet cloning While pet cloning is sometimes advertised as a prospective way to regain a deceased companion animal, it does not result in an animal that is exactly like the previous pet (in looks or personality). Although the animal in question is cloned, there are still phenotypic differences that may affect its appearance or health. This issue was brought to light in the cloning of a cat named Rainbow. Rainbow's clone, later named CC, was genetically identical to Rainbow, yet CC's coloring patterns were not the same due to the development of the kitten inside the womb as well as random genetic disparities in the clone such as variable X-chromosome inactivation. 
Despite its controversies, the study of pet cloning holds the potential to contribute to scientific, veterinary, and medical knowledge, and it is a potential resource in efforts to preserve endangered cousins of the cat and dog. In 2005, California Assembly Member Lloyd Levine introduced a bill to ban the sale or transfer of pet clones in California. That bill was voted down. See also Biobank Cultivar: term used in botany to refer to specific breeds (made using selective cross breeding and sometimes genetic modification) that have distinct properties. Often reproduced using cloning to avoid properties being lost due to sexual propagation. List of animals that have been cloned Working animal References Pets Cloning
Commercial animal cloning
[ "Engineering", "Biology" ]
2,266
[ "Cloning", "Genetic engineering" ]
1,321,810
https://en.wikipedia.org/wiki/Jackal%20%28Marvel%20Comics%20character%29
The Jackal is an alias used by several supervillains appearing in American comic books published by Marvel Comics, usually depicted as enemies of the superhero Spider-Man. The original and best known incarnation, Miles Warren, was originally introduced in The Amazing Spider-Man #31 (December 1965) as a professor at the fictional Empire State University. Later storylines established him as also being a scientist researching genetics and biochemistry, and revealed an unhealthy romantic obsession he had for Gwen Stacy. Warren was driven mad with grief and jealousy so he created his Jackal alter-ego to seek revenge on Spider-Man, whom he blamed for Gwen's tragic death. To this end, he trained himself in martial arts, and created a green suit and gauntlets with claw-like razors. Although the Jackal initially didn't possess any superpowers, he later gained enhanced strength, speed and agility by mixing his genes with those of a jackal. The Jackal was introduced in The Amazing Spider-Man #129 (February 1974), but his human identity was not revealed until The Amazing Spider-Man #148 (September 1975). Originally one of Spider-Man's less popular rogues, the character rose to prominence after being one of the first in the Marvel Universe to master cloning technology, and creating various clones of Spider-Man, like the Scarlet Spiders Ben Reilly and Kaine Parker, as well as of other characters, including himself and the chimera Spider-Girl. His experiments went on to play a major role in several popular Spider-Man storylines, such as the "Clone Saga" (1994–1996), "Spider-Island" (2011), and "Dead No More: The Clone Conspiracy" (2016–2017), the latter storyline of which established Ben Reilly as the second Jackal; in Spider-Gwen, during the titular character's foray into Earth-616, the Jackal attempts to capture her to explicitly sexually assault her, after previously only subtextually expressing this desire towards her main continuity counterpart and her clones. In 2014, IGN ranked the Jackal as Spider-Man's 17th greatest enemy. The character has been featured in several media adaptations of Spider-Man, including animated series and video games. Publication history The character first appears in The Amazing Spider-Man #129 (February 1974), and was created by writer Gerry Conway and artist Ross Andru. In The Amazing Spider-Man #148 (September 1975), the Jackal's identity was revealed to be Professor Miles Warren who first appeared in The Amazing Spider-Man #31 (December 1965), and was created by writer Stan Lee and artist Steve Ditko. Prior to his Jackal reintroduction, his appearances were essentially limited to the occasional cameo in which he acts as simple background to Spider-Man's civilian life as a college student. When named at all in these early appearances, he is called only "Professor Warren". A "Mister Warren" had previously appeared in The Amazing Spider-Man #8 (January 1964) but he is a high school science teacher rather than a college professor, and is physically very distinct from Miles Warren. Despite this, Conway has said it was always his interpretation that "Mister Warren", "Professor Warren", and Professor Miles Warren/Jackal were the same character. The character was featured in the controversial 1990s "Clone Saga" story arc, the 2011 storyline "Spider-Island", and the 2016-2017 storyline "Dead No More: The Clone Conspiracy". Fictional character biography Miles Warren Miles Warren is a professor of biology at ESU/Empire State University, where he meets Peter Parker and Gwen Stacy. 
During his tenure there, Warren becomes secretly infatuated with the much younger Gwen to the point of obsession and jealousy of Parker. After Gwen is murdered by the Green Goblin, Warren swears vengeance on Spider-Man, since it was reported that it was Spider-Man who killed Gwen. Gwen's death drives Warren into depression, despair, and insanity as a mad geneticist who eventually turns into the Jackal. Miles is also the brother of science teacher Raymond Warren of Midtown High School. Early career Miles is an assistant of the High Evolutionary at Wundagore Mountain after earning his Ph.D. in biochemistry. Warren assists the High Evolutionary in experiments that involve turning animals into humans and vice versa. There is conflict between Warren and the High Evolutionary because Warren succeeds in creating "New Men" who looked practically human, whereas the High Evolutionary is not able to. Eventually, Warren evolves a jackal that exhibits a Jekyll and Hyde personality. When the test subject escapes, the High Evolutionary banishes Warren from Wundagore. Warren continues his research and eventually settles down with Monica, a woman who bears him two children who are all killed in what is originally believed to be a car crash; however, it is later revealed to be an assault by his highly evolved Man-Jackal envious of his creator. The Jackal's origin The day after Gwen's death, Warren's lab assistant Anthony Serba reveals that he successfully cloned a frog using their research technology. Warren gives Serba tissue samples of Gwen and Peter, telling Serba that they come from rat cells. Serba later confronts Warren, stating that the clones are human and must be destroyed immediately. Panicking, Warren attempts to cover Serba's mouth to shut him up, accidentally suffocating Serba. Unable to accept responsibility for his actions, Warren develops a second personality to carry the weight of his misdeeds dubbed "The Jackal". He further develops his alter ego by fashioning a green suit and gauntlets with sharp, claw-like razors on each finger, and by training himself athletically. During this time, he also continues to experiment with cloning humans. Kaine is the first successful clone of Peter despite suffering from a slow cloning degeneration and having regenerative abilities to repeatedly elude death. The Jackal's hatred for Spider-Man manifests in his belief that Spider-Man is solely responsible for allowing Gwen, whom he loved, to die at the Goblin's hands. He starts harassing Spider-Man, setting him up against other adversaries. Warren allies himself with the Punisher against Spider-Man. The Jackal next attempts to incite a gang war between Hammerhead and Doctor Octopus. Later, he equips wrestler Maxwell Markham with the Grizzly costume and a powerful exoskeleton to assassinate newspaper publisher J. Jonah Jameson. The Jackal then holds Parker hostage in a scheme to trap Spider-Man. He eventually learns Spider-Man's identity. Out of his numerous attempts to create clones of Peter, only one is a perfect copy of the original. He also creates two clones of himself, one a direct copy and the other a modified clone harboring the Carrion virus. The Jackal helps the Tarantula (Anton Miguel Rodriguez) escape prison, and the two become partners. The Jackal captures Spider-Man, but lets him go after proving that Spider-Man is no match for the Jackal in a fair fight. 
He then lures his nemesis to Shea Stadium and manipulates him into battling his perfect clone of Peter by binding Daily Bugle reporter Ned Leeds to a bomb that only the original Spider-Man can disarm. But when a clone of Gwen tears off the Jackal's mask and confronts him over his crimes, Warren accepts responsibility for his actions. He attempts to correct his wrongdoings by freeing Leeds, only to be caught in the bomb's explosion. Clone Saga During the "Clone Saga", it is revealed that Peter's clone had survived the explosion and gone into hiding under the alias of Ben Reilly. The Jackal who died at Shea Stadium was also a clone. Nearly five years later, another clone of the Jackal would marry Gwen's original clone, and the two would live under the assumed names Warren Miles and Gwen Miles. This clone of Warren eventually dies of the degeneration that afflicts most of the Jackal's clones. The real Jackal resurfaces, his experiments having mutated his own DNA and given him an actual jackal's attributes. When Reilly returns years later to New York City as the Scarlet Spider and allies with Spider-Man, the Jackal also returns to unleash his clone army and convinces both Parker and Reilly that the latter is the original Spider-Man and the former the clone. The Jackal creates clones of Peter who come into conflict with Spider-Man, the Scarlet Spider, and Kaine. Ultimately, the Jackal, in the process of attempting to kill and replace millions of people with clones that he can control, falls off a tall building while trying to save Gwen's clone and dies. Because of Norman Osborn, the Jackal and various others (including Kaine) had been tricked into thinking that Reilly was the original and that Peter was the clone. All of the Jackal's machinations were influenced by his incorrect belief that he knew who the real Peter was. Spider-Island The Jackal returns in the "Spider-Island" storyline, having been further genetically altered to the point where he frequently displays animalistic tendencies. His body is always cold, requiring him to wear a thick fur coat even in the hottest weather. He is now a crime lord calling himself "The Professor", and allies himself with Hammerhead, but the two eventually go to jail. He returns in the "Infestation" back-up feature of The Amazing Spider-Man, unleashing genetically engineered bedbugs to pass on spider-like abilities to thousands of citizens in Manhattan. He achieves this through the aid of human clones of himself, and funding from Adriana Soria. Although the bedbugs later die, the virus gives New York's citizens spider-powers and becomes airborne to infect the world and create a new race of Homo-Arachnus as part of his co-conspirator's plan to overtake the Great Web of Life. The Jackal has also enlisted the aid of his mutated henchman Tarantula. It is revealed that the clone of Gwen introduced in The Amazing Spider-Man #144 was actually Abby-L (Gwen's second clone and the first cloning attempt), a flawed clone with degenerative debilities. Before this seemingly perfect copy of Gwen died at Abby-L's hands, it is revealed that the copy actually had some degeneration on her hand, suggesting she was not perfect after all. Abby-L was also infected with the Carrion virus and had Carrion's same abilities. Abby-L was manipulated into killing Gwen's other clone, who was living in London under the alias of Joyce Delaney, and coming into conflict with the Jackal and Kaine. 
With his own ulterior motives, the Jackal manipulated gang leaders into donning duplicate costumes of Spider-Man to cause chaos in New York City. He experimented on the Spider King's host form to spread New York City's infestation on a global scale. The Jackal still knows Spider-Man's true identity despite the worldwide mindwipe of the rest of the world. After a cure is created by Mister Fantastic with Max Modell's Horizon Labs using Anti-Venom's antibodies, the Jackal assures Soria that no cure is possible. Soria, whose powers have been amplified into those of the god-like Spider-Queen, kills him after a frequency restores Spider-Man's spider-sense. However, the Jackal who dies turns out to be a clone; the real Jackal had kept his distance the entire time with his former self's surviving clones, anticipating the outcome and, unbeknownst to the Avengers and Spider-Man, obtaining a sample of his slain co-conspirator's DNA. It is later revealed that the Jackal has been monitoring Peter's accidental creation of Alpha, and has set his sights on Spider-Man's new protégé. The Jackal resurfaces accompanied by his cloned mutated human-spider hybrids of Spider-Queen and is bent on harvesting Alpha's powers for himself to clone a race of Alpha males. But his plans fail as the Alpha energy cannot be cloned, resulting in powerless, near-mindless copies of Alpha, all of which are destroyed when the enraged Alpha breaks free. It is then revealed that the two versions of the Jackal that Spider-Man and Alpha fought were also clones. Superior Spider-Man era When Otto Octavius's mind possesses Spider-Man's body, the X-Men battle a 30 ft human-spider hybrid attacking New York, which turns out to be a human mutant created by the Jackal using Mister Sinister's works, a "daughter" of Scott Summers, Gwen, and Soria named "Gwen Warren" / "Spider-Girl". The Jackal later attacks the Superior Spider-Man and the new Scarlet Spider with mutant-powered spider-clones. Superior Spider-Man kills the clones by destroying the Jackal's hideout, but the Jackal escapes. It is revealed that he kept samples of Scarlet Spider's DNA. The Jackal tells Carrion that he is prepared to develop Spidercide 2.0. Dead No More: The Clone Conspiracy Ben's dissolved remains are collected by the Jackal, who resurrects him through a new cloning process. However, the Jackal finds problems with the cellular degradation and has Ben killed 26 more times, each of which makes Ben's life (and most of Peter's) flash before his eyes. The ordeal of repeated death causes Ben to become mentally unbalanced and morally ambiguous, his very soul damaged from being removed and replaced over and over. Ben eventually breaks free and knocks out the Jackal. After improving Warren's formula, Ben makes clones of Miles and persuades the Jackal that he is a clone, thereby making it nearly impossible to tell who is the real one. Now free, with a number of clones of Miles as servants, Ben acts as the new Jackal during the "Dead No More: The Clone Conspiracy" storyline. He is determined to use the Jackal's technology to repay the people who have heavily influenced Ben's and Peter's lives, to make sure that no one has to suffer again and that those who have can become whole, and does this by establishing New U Technologies. 
When Spider-Man activates the Webware to stabilize the human and clone cells all across the world that were in danger of succumbing to clone degradation, the various clones of Miles melt as Ben fights Doctor Octopus. The so-called clone that does not melt realizes that he is the true Warren and vows to have revenge on Ben as the true Jackal. Ben returns to a safe-house and finds Miles in his Jackal outfit waiting in the living room. The Jackal proceeds to burn Ben's safe-house down and engages him in one final battle. Ben defeats the Jackal and leaves him in the burning house to die. The Jackal, however, survives the fire and targets the neural net built by Dr. Yesenia Rosario while she is presenting it at Empire State University. The Jackal is defeated by Spider-Man and Ms. Marvel as Dr. Rosario destroys her own invention by setting it to self-destruct. He later surfaces at Empire State University once more, under the guise of "Professor Guarinus", and is shocked to bump into Ghost-Spider, who had recently enrolled in this universe's ESU. He injects himself with actual jackal DNA, allowing him to transform into a form resembling his iconic green costume. The Jackal recruits another new student named Benji to befriend the alternate Gwen Stacy in exchange for the possibility of great power if she does her job well; while Benji is able to tell Miles that Gwen is a costumed hero from another world, she somehow fails Warren and is punished for it. Spectacular Spider-Men The Jackal impersonated Raymond Warren by utilizing his cloning technology to take on his brother's likeness and continue his research without arousing suspicion. While acting as an acquaintance of Raymond's colleagues and friends, he kept a secret lab in ESU containing clones of the Jackal, which were defeated by the original Spider-Man and the other Spider-Man, and investigated by Carlie Cooper. While posing as Raymond, he manipulated the A.I. Arcadium created by Arcade, Mentallo and Hammerhead. The Jackal formed an alliance with the A.I. in its organic form, K.N.A.I.V.E. However, the Spider-Men defeat K.N.A.I.V.E. and the Jackal's identity theft is revealed, whereupon he is punched by his brother's former student and arrested by the police. Ben Reilly Benjamin "Ben" Reilly is the second major character to use the Jackal alias. Powers and abilities Prior to his regeneration, Miles Warren is a genius in the fields of biochemistry, genetics, and cloning. The Jackal is a talented martial artist and gymnast. He later spliced his genes with the DNA of a jackal, amplifying his strength, durability, speed, and agility to inhuman levels. Warren has access to state-of-the-art laboratories, as well as advanced gadgets or devices if needed. Copies of Jackal Prior to the death of the Warren clone at Shea Stadium, he had created a clone of himself. The clone remained in stasis within a cloning casket that malfunctioned and super-aged the clone beyond death. Eventually, it emerged and became known as Carrion, a being that wielded power and had no conscience about its actions. He was the first carrier of the Carrion virus, which Warren designed to destroy humanity. Carrion contained all of Warren's memories, which were held within his RNA, including his hatred of Spider-Man and knowledge of Spider-Man's secret identity. Carrion wielded the power to create a Red Dust that would spread as pestilence, as well as a touch that would incapacitate or even cause organic matter to degenerate to the point of disintegration. 
The original Carrion intended to kill Spider-Man with a spider-amoeba, but failed as Carrion was absorbed by the amoeba, engulfed in flames that ensued from his battle. Much later, fellow ESU rival Malcolm McBride stumbled across Warren's old lair, where he was infected with a strain of the Carrion virus and became the second incarnation of Carrion. The virus allowed McBride to become endowed with the knowledge of Spider-Man's secret identity; however, he was unsure whether he was Dr. Warren's first clone or Malcolm McBride. Eventually, McBride teamed with the likes of Demogoblin and Carnage, but was later cured of his condition and incarcerated in Ravencroft Asylum. A man dressed as the Jackal once attacked Alpha Flight and claimed to be Miles Warren's son. It was later indicated that this Jackal was the Ani-Man Warren created that ultimately murdered the Professor's family. A version of the Jackal dubbed as "The Professor" fought Daredevil and Punisher. He used multiple stand-ins, such as the clones of his human form and the Jackal in "Spider-Island". There was also an additional clone accompanying the Jackal in "Sibling Rivalry" after targeting the Superior Spider-Man and Scarlet Spider. Ben Reilly later made clones of Miles Warren to help run New U Technologies. Other clones The following clones were created by the Jackal: The clone of Miles Warren who died at Shea Stadium in The Amazing Spider-Man #149. The clone of Miles Warren who married the clone of Gwen Stacy and died of clone degeneration in Web of Spider-Man #125. The clone of Miles Warren in the Daredevil/Punisher limited series. The original clone of Miles Warren who is Carrion. The clone of Gwen Stacy introduced in The Amazing Spider-Man #144. She went by the aliases Joyce Delaney and Gwen Miles. Abby-L – The original clone of Gwen who is also infected with the Carrion virus; introduced in Spider Island: Deadly Foes. The clone of Gwen introduced in The Amazing Spider-Man #399 who dies of clone degeneration. Ben Reilly a.k.a. the Scarlet Spider/Spider-Man – A clone of Peter Parker. Kaine Parker a.k.a. the Tarantula/Scarlet Spider – The first clone of Peter Parker who suffers from clone degeneration. Spidercide – A clone of Peter who has control over his own molecules, used by the Jackal, like Jack and Guardian, as muscle. Died fighting Ben Reilly and Peter above the Daily Bugle before falling to its death. Jack – A clone of Peter who was the Jackal's diminutive henchman, armed with claw-like fingernails (much like Guardian) who dies from clone degeneration. Guardian – A clone of Peter with dense skin, super-strength, and claw-like fingernails who guarded the entrance to one of the Jackal's headquarters who also died of clone degeneration. The clone of Spider-Man whose skeleton was found in the smokestack that Ben Reilly was dumped down at the original Clone Story's end. The army of clones of Spider-Man in Maximum Clonage. The various clones of his human form featured in Spider-Island. Most, in human form, acted as henchmen for the Jackal and Adriana Soria. Two later kidnapped Alpha and his family which Spider-Man fought. The Spider clones that were harvested from Adriana Soria's DNA sent to fight Spider-Man. The Alpha clones created to harvest/clone the Parker Particles. Using the works of Mister Sinister, the Jackal created a clone that was a girl who can turn into a mutant spider. This girl can shoot mucus from her mouth and shoot optic blasts when in spider form. 
When the girl was defeated by Superior Spider-Man and Storm, she was taken into the X-Men's custody where Beast claims that three DNA sources were used to create her. In January 2023, she later adopted the name of Gwen Warren and the alias of Spider-Girl.<ref>X-Men Unlimited Infinity Comic #69. Marvel Comics.</ref> Cerebra later confirmed that Jackal used the DNA samples of Cyclops, Gwen Stacy, and Spider-Queen to create her. The Jackal then used the DNA samples of the X-Men that he obtained from one of Mister Sinister's laboratories to create mutant-powered spider-clones. One clone has optic blasts like Cyclops, one clone can use electrical attacks like Storm, one clone can teleport like Nightcrawler, and the final clone can do cryokinesis like Iceman. These mutant-powered spider-clones were killed when Superior Spider-Man blew up the Jackal's hideout. Reception J. M. DeMatteis, the creator of the "Clone Saga", claimed in an interview that he thought Jackal is "a terrific villain...one of his favorites", and that it "was a blast bringing the character back, if only for this one story." Dan Slott claimed in an interview with Newsarama about the "Spider-Island" saga that the Jackal is "one of the wonderful mad scientists of Spider-Man's world." Other versions Marvel Zombies In the Marvel Zombies universe, when the Zombie Galacti left the Earth (after eating Galactus), Wilson Fisk (Kingpin) makes an empire. The zombiefied Jackal plays an important part in it, creating human clones to feed the remaining Marvel Zombies. This process utilizes Inhuman technology. Spider-Man: Clone Saga Jackal appears in the re-imagining of the Clone Saga by Tom DeFalco, who was exploring the storyline as it was originally conceived. He infects both May Parker and Mary Jane Watson with a genetic virus. When Kaine betrays Jackal and leads Spider-Man and Scarlet Spider to his lair, all three are captured. The Jackal then reveals his plan to create an army of Spider-Clones to take over the world and clone Gwen Stacy. The clones prove unstable, however, and the Jackal comes to the conclusion that Ben is the original. Before he can do anything, Kaine breaks free and burns his mark onto the Jackal's face before breaking his neck. Ultimate Marvel The Ultimate Marvel version of Miles Warren is a hypnotherapist for Harry Osborn to help repress memories about the Green Goblin. Later in the Deadpool story arc of Ultimate Spider-Man, he was revealed to be dating May Parker. Additionally, his Clone Saga involvement has been taken over by Doctor Octopus. He last appeared when May tried to introduce him to Peter, but they had to leave town because of Norman Osborn and he had a patient to handle. Spider-Man: Life StorySpider-Man: Life Story features an alternate continuity where the characters naturally age after Peter Parker becomes Spider-Man in 1962. After the 60s, Miles eventually leaves Empire State and forms his own bio-engineering company. Peter Parker's wife, Gwen Stacy, became his chief biologist. During this time, he was also hired by Norman Osborn to create clones of Norman and Peter, but he also secretly created a clone of Gwen. In 1977, Norman gets word of Warren's additional clone and sends Harry Osborn as the Black Goblin to attack Warren's company, revealing the clones to Harry, Peter, and Gwen in the process. Harry blows up the containment tubes containing the clones, which kills all of them except for Peter's clone. 
However, Warren reveals that the "Gwen" that Peter was with was actually her clone, while the real Gwen died in the explosion, as he wanted to keep her for himself. Spider-Verse In the Spider-Verse storyline, the Miles Warren of Earth-802 is one of the top scientists working for Jennix of the Inheritors. Jennix once told Miles, "I keep you around because you were once the most brilliant mind on the planet." Spider-Man of Earth-94, Scarlet Spider, and Black Widow of Earth-1610 later encounter Miles Warren when they infiltrate the Baxter Building to disable Jennix' cloning device (which is used to create new bodies for the Inheritors if they get killed in action). Secret Wars During the Secret Wars storyline, Spider-Gwen encounters the Jackal of Arachnia and covers him with webbing as he is robbing a grave, after which he exclaims that he is the best geneticist of his generation. What If? In "What If The Punisher Had Killed Spider-Man?", Warren successfully dupes the Punisher into killing Spider-Man and abandons him to take the fall in his place. Becoming a hunted fugitive, the Punisher eventually tracks Warren down and intends to surrender him to the police. But when the NYPD is about to arrest the Punisher instead, threatening to kill him should he shoot Warren, Warren is executed (off-panel) by the Punisher after the latter gleefully concludes the story with the words: "See you on the other side, Jackal." Dead No More: The Clone Conspiracy When Warren reveals his plans for New U, Kaine and the Gwen Stacy of Earth-65 step in to stop him from winning Peter to his side. Kaine later tells Spider-Man that they have visited various unidentified alternate universes where Peter's agreement to the Jackal's plans for New U Technologies has led to catastrophe in the form of the Carrion Virus. In other media Television Miles Warren appears in the Spider-Man (1994) two-part episode "The Return of Hydro-Man", voiced by Jonathan Harris. This version is a scientist whose cloning experiments were banned by the government. In an attempt to stabilize his clones, he uses a sample of Hydro-Man to create clones of the latter and Mary Jane Watson before being discovered by Spider-Man, after which his clones evaporate. An alternate universe version of Warren appears in the episode "I Really, Really Hate Clones", in which he captures and clones the web-slinger and tampers with both versions' memories, with one becoming the Scarlet Spider and the other later turning into Spider-Carnage. Miles Warren appears in The Spectacular Spider-Man, voiced by Brian George. This version is East Indian. While conducting research at the ESU labs with a grant from Norman Osborn, Warren turns Kraven the Hunter into a leonine creature and Mark Allan into Molten Man, and blackmails Doctors Curt and Martha Connors into moving away so he can take the Connors' lab for himself. The Jackal appears in Spider-Man (2017), voiced by John DiMaggio. This version is Raymond Warren, whose DNA is mixed with that of his namesake animal and who has mastered cloning technology, creating numerous clones of himself in case he is ever exposed or caught. Video games The Jackal appears as a boss in the PS2 and PSP versions of Spider-Man: Web of Shadows, voiced by Greg Baldwin. 
References External links Jackal at Marvel.com Characters created by Gerry Conway Characters created by Ross Andru Characters created by Stan Lee Characters created by Steve Ditko Clone characters in comics Comics characters introduced in 1965 Comics characters introduced in 1974 Fictional biochemists Fictional characters from New York City Fictional geneticists Fictional prison escapees Fictional academics Genetically engineered characters in comics Marvel Comics characters who can move at superhuman speeds Marvel Comics characters with superhuman durability or invulnerability Marvel Comics characters with superhuman strength Marvel Comics hybrids Marvel Comics martial artists Marvel Comics mutates Marvel Comics scientists Marvel Comics supervillains Spider-Man characters
Jackal (Marvel Comics character)
[ "Chemistry" ]
6,172
[ "Fictional biochemists", "Biochemists" ]
1,323,240
https://en.wikipedia.org/wiki/Electrohydrodynamics
Electrohydrodynamics (EHD), also known as electro-fluid-dynamics (EFD) or electrokinetics, is the study of the dynamics of electrically charged fluids. Electrohydrodynamics (EHD) is a joint domain of electrodynamics and fluid dynamics mainly focused on the fluid motion induced by electric fields. EHD, in its simplest form, involves the application of an electric field to a fluid medium, resulting in manipulation of the fluid's flow, form, or properties. These mechanisms arise from the interaction between the electric fields and charged particles or polarization effects within the fluid. The generation and movement of charge carriers (ions) in a fluid subjected to an electric field are the underlying physics of all EHD-based technologies. The electric force acting on the particles consists of the electrostatic (Coulomb) or electrophoretic force (first term in the following equation), the dielectrophoretic force (second term), and the electrostrictive force (third term): \mathbf{f}_e = \rho_q \mathbf{E} - \tfrac{1}{2}\,\varepsilon_0 E^2 \nabla \varepsilon_r + \tfrac{1}{2}\,\varepsilon_0 \nabla\!\left[\rho \left(\frac{\partial \varepsilon_r}{\partial \rho}\right) E^2\right], where \rho_q is the free charge density, \mathbf{E} the electric field, \varepsilon_0 the vacuum permittivity, \varepsilon_r the relative permittivity of the fluid, and \rho the fluid mass density. This electrical force is then inserted into the Navier–Stokes equation as a body (volumetric) force. EHD covers the following types of particle and fluid transport mechanisms: electrophoresis, electrokinesis, dielectrophoresis, electro-osmosis, and electrorotation. In general, the phenomena relate to the direct conversion of electrical energy into kinetic energy, and vice versa. In the first instance, shaped electrostatic fields (ESFs) create hydrostatic pressure (HSP, or motion) in dielectric media. When such media are fluids, a flow is produced. If the dielectric is a vacuum or a solid, no flow is produced. Such flow can be directed against the electrodes, generally to move the electrodes. In such a case, the moving structure acts as an electric motor. Practical fields of interest of EHD are the common air ioniser, electrohydrodynamic thrusters and EHD cooling systems. In the second instance, the converse takes place. A powered flow of medium within a shaped electrostatic field adds energy to the system, which is picked up as a potential difference by electrodes. In such a case, the structure acts as an electrical generator. Electrokinesis Electrokinesis is the particle or fluid transport produced by an electric field acting on a fluid having a net mobile charge. (See -kinesis for explanation and further uses of the -kinesis suffix.) Electrokinesis was first observed by Ferdinand Frederic Reuss in 1808, in the electrophoresis of clay particles. The effect was also noticed and publicized in the 1920s by Thomas Townsend Brown, who called it the Biefeld–Brown effect, although he seems to have misidentified it as an electric field acting on gravity. The flow rate in such a mechanism is linear in the electric field. Electrokinesis is of considerable practical importance in microfluidics, because it offers a way to manipulate and convey fluids in microsystems using only electric fields, with no moving parts. The force acting on the fluid is given by the equation F = I d / k, where F is the resulting force, measured in newtons, I is the current, measured in amperes, d is the distance between electrodes, measured in metres, and k is the ion mobility coefficient of the dielectric fluid, measured in m2/(V·s). If the electrodes are free to move within the fluid, while keeping their distance fixed from each other, then such a force will actually propel the electrodes with respect to the fluid. 
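As a minimal numerical illustration of the electrokinetic force relation F = I d / k given above, the following Python sketch evaluates the force for an assumed ion current, electrode gap, and ion mobility. The specific values are placeholder assumptions chosen only to show the orders of magnitude involved; they are not measurements from any particular device.

```python
def ehd_force_newtons(current_a, gap_m, ion_mobility_m2_per_vs):
    """Electrokinetic (ionic wind) force F = I * d / k.

    current_a               -- ion current in amperes
    gap_m                   -- distance between the electrodes in metres
    ion_mobility_m2_per_vs  -- ion mobility coefficient of the fluid, m^2/(V*s)
    """
    return current_a * gap_m / ion_mobility_m2_per_vs

# Illustrative (assumed) values: a 1 mA current across a 30 mm gap in air,
# with an ion mobility of roughly 2e-4 m^2/(V*s), gives
# F = 1e-3 * 0.03 / 2e-4 = 0.15 N, i.e. on the order of 15 grams-force.
print(ehd_force_newtons(1e-3, 0.03, 2e-4))  # 0.15
```

Because the force scales linearly with current and gap and inversely with mobility, heavier ions (lower mobility) produce more thrust for the same current, which is one reason EHD thruster designs pay attention to the working fluid.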
Electrokinesis has also been observed in biology, where it was found to cause physical damage to neurons by inciting movement in their membranes. It is discussed in R. J. Elul's "Fixed charge in the cell membrane" (1967). Water electrokinetics In October 2003, Dr. Daniel Kwok, Dr. Larry Kostiuk and two graduate students from the University of Alberta discussed a method to convert hydrodynamic to electrical energy by exploiting the natural electrokinetic properties of a liquid such as ordinary tap water, by pumping fluid through tiny micro-channels with a pressure difference. This technology could lead to a practical and clean energy storage device, replacing batteries for devices such as mobile phones or calculators which would be charged up by simply compressing water to high pressure. Pressure would then be released on demand, for the fluid to flow through micro-channels. When water travels, or streams over a surface, the ions in the water "rub" against the solid, leaving the surface slightly charged. Kinetic energy from the moving ions would thus be converted to electrical energy. Although the power generated from a single channel is extremely small, millions of parallel micro-channels can be used to increase the power output. This streaming potential, water-flow phenomenon was discovered in 1859 by German physicist Georg Hermann Quincke. Electrokinetic instabilities The fluid flows in microfluidic and nanofluidic devices are often stable and strongly damped by viscous forces (with Reynolds numbers of order unity or smaller). However, heterogeneous ionic conductivity fields in the presence of applied electric fields can, under certain conditions, generate an unstable flow field owing to electrokinetic instabilities (EKI). Conductivity gradients are prevalent in on-chip electrokinetic processes such as preconcentration methods (e.g. field amplified sample stacking and isoelectric focusing), multidimensional assays, and systems with poorly specified sample chemistry. The dynamics and periodic morphology of electrokinetic instabilities are similar to other systems with Rayleigh–Taylor instabilities. The particular case of a flat plane geometry with homogeneous ions injection in the bottom side leads to a mathematical frame identical to the Rayleigh–Bénard convection. EKI's can be leveraged for rapid mixing or can cause undesirable dispersion in sample injection, separation and stacking. These instabilities are caused by a coupling of electric fields and ionic conductivity gradients that results in an electric body force. This coupling results in an electric body force in the bulk liquid, outside the electric double layer, that can generate temporal, convective, and absolute flow instabilities. Electrokinetic flows with conductivity gradients become unstable when the electroviscous stretching and folding of conductivity interfaces grows faster than the dissipative effect of molecular diffusion. Since these flows are characterized by low velocities and small length scales, the Reynolds number is below 0.01 and the flow is laminar. The onset of instability in these flows is best described as an electric "Rayleigh number". Misc Liquids can be printed at nanoscale by pyro-EHD. See also Magnetohydrodynamic drive Magnetohydrodynamics Electrodynamic droplet deformation Electrospray Electrokinetic phenomena Optoelectrofluidics Electrostatic precipitator List of textbooks in electromagnetism References External links Dr. Larry Kostiuk's website. Science-daily article about the discovery. 
BBC article with graphics. Electrodynamics Energy conversion Fluid dynamics
Electrohydrodynamics
[ "Chemistry", "Mathematics", "Engineering" ]
1,482
[ "Electrodynamics", "Chemical engineering", "Piping", "Fluid dynamics", "Dynamical systems" ]
1,323,481
https://en.wikipedia.org/wiki/Synthetic%20instrument
In metrology (test and measurement science), a synthetic instrument is software that performs a specific synthesis, analysis, or measurement function. A Synthetic Measurement System (SMS) is a common, general purpose, physical hardware platform that is intended to perform many kinds of synthesis, analysis, or measurement functions using Synthetic Instruments. Typically the generic SMS hardware is a dual cascade of three subsystems: digital processing and control, analog-to-digital or digital-to-analog conversion (codec), and signal conditioning. One cascade is for stimulus, one for response. Sandwiched between them is the device under test (DUT) that is being measured. A synthetic instrument is the opposite of the retronym natural instrument. Although the word “synthetic” in the phrase synthetic instrument might seem to imply that synthetic instruments are synthesizers, i.e. that they only do synthesis, this is incorrect. The instrument itself is being synthesized; nothing is implied about what the instrument does. A synthetic instrument might indeed be a synthesizer, but it could just as easily be an analyzer, or some hybrid of the two. Synthetic instruments are implemented on generic hardware, generic meaning that the underlying hardware is not explicitly designed to perform the particular measurement. This is probably the most salient characteristic of a synthetic instrument. Measurement specificity is encapsulated totally in software. The hardware does not define the measurement. An analogy to this relationship between specific measurement hardware and generic hardware with its function totally defined in software is the relationship between specific digital circuits and a general purpose CPU. A specific digital circuit can be designed and hardwired with digital logic parts to perform a specific calculation. Alternatively, a microprocessor (or, better yet, a gate array) could be used to perform the same calculation using appropriate software. One case is specific, the other generic, with the specificity encapsulated in software. At the software level, portability of the measurement description is the key attribute that distinguishes a synthetic instrument from the more commonly found instrumentation software, which is limited to hardware scripting and data flow processing. Not all measurement-related software systems inherently provide for the abstract, portable synthesis of measurements. Even if they do have such provisions, they may not typically be applied that way by users, especially if the system encourages non-abstracted access to hardware. Application software packages such as Measure Foundry and LabVIEW are typically used with explicit structural links to the natural measurements made by specific hardware and therefore usually are not synthesizing measurements from an abstract description. On the other hand, should a software system be used to synthesize measurement functions as descriptive behavioral constructs, rather than hardware-referenced structural data flow descriptions, this is true measurement synthesis. An analogy here is the distinction between a non-portable structural description and an abstract behavioral description of digital logic that we see in HDL systems like Verilog. Synthetic instruments in test and measurement are conceptually related to the software synthesizer in audio or music. A musical instrument synthesizer synthesizes the sound of specific instruments from generic hardware. 
Of course, a significant difference in these concepts is that musical instrument synthesizers typically only generate musical sound, whereas a synthetic instrument in test and measurement may be equally likely to generate or to measure some signal or parameter. A similar term commonly used in test and measurement, Virtual instrumentation, is a superset of synthetic instrumentation. All synthetic instruments are virtual instruments; however, the two terms are different when virtual instrument software mirrors and augments non-generic instrument hardware, providing a soft front panel, or managing the data flow to and from a natural instrument. In this case, the PC and accompanying software are supplementing the analysis and presentation capabilities of the natural instrument. The essential point is this: synthetic instruments are synthesized. The whole is greater than the sum of the parts. To use Buckminster Fuller's word, synthetic instruments are synergistic instruments. Like a triangle is more than three lines, synthetic instruments are more than the triangle of hardware (Control, Codec, Conditioning) they are implemented on. Therefore, one way to tell if you have a true synthetic instrument is to examine the hardware design alone and to try to figure out what sort of instrument it might be. If all you can determine are basic category facts, like the fact that it can be categorized as a stimulus or response instrument, but not anything about what it is particularly designed to create or measure (that is, if the measurement specificity is all hidden in software), then you likely have a true synthetic instrument. The DoD has created a standards body called the Synthetic Instrument Working Group (SIWG) whose role is to define standards for interoperability of synthetic instrument systems. The SIWG defines a synthetic instrument (SI) as: A reconfigurable system that links a series of elemental hardware and software components with standardized interfaces to generate signals or make measurements using numeric processing techniques. See also Synthetic measure References External links Synthetic Instruments Book-Blog Measurement
Synthetic instrument
[ "Physics", "Mathematics" ]
1,002
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
1,323,604
https://en.wikipedia.org/wiki/Energy%20accounting
Energy accounting is a system used to measure, analyze and report the energy consumption of different activities on a regular basis. This is done to improve energy efficiency, and to monitor the environmental impact of energy consumption. Energy management Energy accounting is a system used in energy management systems to measure and analyze energy consumption to improve energy efficiency within an organization. Organisations such as Intel Corporation use these systems to track energy usage. Various energy transformations are possible. An energy balance can be used to track energy through a system. This becomes a useful tool for determining resource use and environmental impacts. How much energy is needed at each point in a system is measured, as well as the form of that energy. An accounting system keeps track of energy in, energy out, and non-useful energy versus work done, and transformations within a system. Non-useful work is often what is responsible for environmental problems. Newer systems aim to build predictive models of consumption. Some companies have introduced AI-based predictive models that analyze large amounts of data to forecast energy usage patterns, identify inefficiencies, and optimize energy distribution in real time. Using machine learning algorithms, these systems can learn from historical data and improve their accuracy over time, supporting more proactive energy management. Energy balance Energy returned on energy invested (EROEI) is the ratio of energy delivered by an energy technology to the energy invested to set up the technology. See also Anthropogenic metabolism Energy and Environment Energy management Energy management software Energy management system Energy quality Energy transformation EROEI Industrial metabolism Social metabolism Urban metabolism References External links Accounting: Facility Energy Use Energy accounting in the context of environmental accounting Thermodynamics Energy economics Ecological economics
Energy accounting
[ "Physics", "Chemistry", "Mathematics", "Environmental_science" ]
387
[ "Environmental social science", "Energy economics", "Thermodynamics", "Dynamical systems" ]
1,323,803
https://en.wikipedia.org/wiki/Rapid%20amplification%20of%20cDNA%20ends
Rapid amplification of cDNA ends (RACE) is a technique used in molecular biology to obtain the full-length sequence of an RNA transcript found within a cell. RACE results in the production of a cDNA copy of the RNA sequence of interest, produced through reverse transcription, followed by PCR amplification of the cDNA copies (see RT-PCR). The amplified cDNA copies are then sequenced and, if long enough, should map to a unique genomic region. RACE is commonly followed by cloning and then sequencing of what were originally individual RNA molecules. A more high-throughput alternative, which is useful for identification of novel transcript structures, is to sequence the RACE products with next-generation sequencing technologies. Process RACE can provide the sequence of an RNA transcript from a small known sequence within the transcript to the 5' end (5' RACE-PCR) or 3' end (3' RACE-PCR) of the RNA. This technique is sometimes called one-sided PCR or anchored PCR. The first step in RACE is to use reverse transcription to produce a cDNA copy of a region of the RNA transcript. In this process, an unknown end portion of a transcript is copied using a known sequence from the center of the transcript. The copied region is bounded by the known sequence, at either the 5' or 3' end. The protocols for 5' and 3' RACE differ slightly. 5' RACE-PCR begins using mRNA as a template for a first round of cDNA synthesis (or reverse transcription) reaction using an anti-sense (reverse) oligonucleotide primer that recognizes a known sequence in the middle of the gene of interest; the primer is called a gene-specific primer (GSP). The primer binds to the mRNA, and the enzyme reverse transcriptase adds base pairs to the 3' end of the primer to generate a specific single-stranded cDNA product; this is the reverse complement of the mRNA. Following cDNA synthesis, the enzyme terminal deoxynucleotidyl transferase (TdT) is used to add a string of identical nucleotides, known as a homopolymeric tail, to the 3' end of the cDNA. (There are some other ways to add the 3'-terminal sequence for the first strand of the de novo cDNA synthesis which are much more efficient than homopolymeric tailing, but the principle of the method remains the same.) PCR is then carried out, which uses a second anti-sense gene-specific primer (GSP2) that binds to the known sequence, and a sense (forward) universal primer (UP) that binds the homopolymeric tail added to the 3' ends of the cDNAs to amplify a cDNA product from the 5' end. 3' RACE-PCR uses the natural polyA tail that exists at the 3' end of all eukaryotic mRNAs for priming during reverse transcription, so this method does not require the addition of nucleotides by TdT. cDNAs are generated using an oligo-dT adaptor primer (a primer with a short sequence of deoxythymine nucleotides) that complements the polyA stretch and adds a special adaptor sequence to the 5' end of each cDNA. PCR is then used to amplify 3' cDNA from a known region using a sense GSP, and an anti-sense primer complementary to the adaptor sequence. RACE-sequencing The cDNA molecules generated by RACE can be sequenced using high-throughput sequencing technologies (also called RACE-seq). High-throughput sequencing characterization of RACE fragments is highly time-efficient, more sensitive, less costly and more technically feasible compared to traditional characterization of RACE fragments with molecular cloning followed by Sanger sequencing of a few clones. 
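As a toy illustration of the 3' RACE logic described in the Process section above (reverse transcription primed on the poly(A) tail with an oligo-dT adaptor primer, before amplification between a gene-specific primer and the adaptor), the following Python sketch builds a first-strand cDNA from a made-up mRNA sequence. The mRNA, adaptor sequence, and helper names are invented for the example and do not correspond to any real transcript, kit, or protocol.

```python
# Toy model of first-strand cDNA synthesis in 3' RACE.
COMPLEMENT = str.maketrans("ACGU", "TGCA")  # RNA base -> complementary DNA base

def reverse_transcribe(mrna: str) -> str:
    """Return the cDNA strand (reverse complement, in the DNA alphabet) of an mRNA."""
    return mrna.translate(COMPLEMENT)[::-1]

def first_strand_3race(mrna: str, adaptor: str = "GACTCGAGTCGACATCG", dt_len: int = 12) -> str:
    """Prime on the poly(A) tail with an adaptor/oligo-dT primer and copy the mRNA.

    The returned string is the adaptor followed by the cDNA; in a real reaction
    the oligo-dT stretch anneals to the poly(A) tail and reverse transcriptase
    extends towards the 5' end of the message.
    """
    if not mrna.endswith("A" * dt_len):
        raise ValueError("template lacks a poly(A) tail long enough for oligo-dT priming")
    return adaptor + reverse_transcribe(mrna)

# Invented example transcript: a known internal sequence, an unknown 3' region,
# and a poly(A) tail that the oligo-dT adaptor primer can anneal to.
mrna = "AUGGCCAUUGUAAUGGGCCGC" + "UUCGCAGUA" + "A" * 20
cdna = first_strand_3race(mrna)
print(cdna[:40], "...")
```

The subsequent PCR step is not modelled here; conceptually it amplifies only the region between the gene-specific primer (inside the known sequence) and the adaptor, which is what anchors the reaction to the unknown 3' end.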
History and applications

RACE can be used to amplify unknown 5' (5'-RACE) or 3' (3'-RACE) parts of RNA molecules where part of the RNA sequence is known and targeted by a gene-specific primer. Combined with high-throughput sequencing for characterization of the amplified RACE products, the approach can be applied to characterize any type of coding or non-coding RNA molecule. The idea of combining RACE with high-throughput sequencing was first introduced in 2009 as Deep-RACE to map the transcription start sites (TSS) of 17 genes in a single cell line. In a 2014 study that accurately mapped the cleavage sites in target RNAs directed by synthetic siRNAs, the approach was first named RACE-seq. The methodology was further used to characterize unknown full-length parts of novel transcripts and fusion transcripts in colorectal cancer. In another study, aiming to characterize unknown transcript structures of lncRNAs, RACE was used in combination with semi-long 454 sequencing.

References

"Rapid amplification of 5' cDNA ends," in Molecular Cloning: A Laboratory Manual (eds. Sambrook, J. & Russell, D.W.), Chapter 8, Protocol 9, pp. 8.54−8.60 (Cold Spring Harbor Laboratory Press, Cold Spring Harbor, New York, USA, 2001)
Principles of Gene Manipulation and Genomics, S. B. Primrose and R. M. Twyman, Blackwell Publishing, 2006 (7th ed.), pp. 111–112.

External links

Blog article about RACE-seq

Biochemistry detection methods Genetic mapping Genetics techniques Molecular biology
Rapid amplification of cDNA ends
[ "Chemistry", "Engineering", "Biology" ]
1,149
[ "Biochemistry methods", "Genetics techniques", "Genetic engineering", "Chemical tests", "Biochemistry detection methods", "Molecular biology", "Biochemistry" ]
1,324,357
https://en.wikipedia.org/wiki/Yukawa%20interaction
In particle physics, Yukawa's interaction or Yukawa coupling, named after Hideki Yukawa, is an interaction between particles according to the Yukawa potential. Specifically, it is an interaction between a scalar field (or pseudoscalar field) φ and a Dirac field ψ of the type g ψ̄φψ (scalar) or g ψ̄iγ^5φψ (pseudoscalar). The Yukawa interaction was developed to model the strong force between hadrons. A Yukawa interaction is thus used to describe the nuclear force between nucleons mediated by pions (which are pseudoscalar mesons). A Yukawa interaction is also used in the Standard Model to describe the coupling between the Higgs field and massless quark and lepton fields (i.e., the fundamental fermion particles). Through spontaneous symmetry breaking, these fermions acquire a mass proportional to the vacuum expectation value of the Higgs field. This Higgs–fermion coupling was first described by Steven Weinberg in 1967 to model lepton masses.

Classical potential

If two fermions interact through a Yukawa interaction mediated by a Yukawa particle of mass μ, the potential between the two particles, known as the Yukawa potential, will be V(r) = −(g²/4π) e^(−μr)/r (in units where ħ = c = 1), which is the same as a Coulomb potential except for the sign and the exponential factor. The sign makes the interaction attractive between all particles (the electromagnetic interaction is repulsive between particles of the same electrical charge). This is explained by the fact that the Yukawa particle has spin zero, and even spin always results in an attractive potential. (It is a non-trivial result of quantum field theory that the exchange of even-spin bosons like the pion (spin 0, Yukawa force) or the graviton (spin 2, gravity) results in forces that are always attractive, while odd-spin bosons like the gluons (spin 1, strong interaction), the photon (spin 1, electromagnetic force) or the rho meson (spin 1, Yukawa-like interaction) yield a force that is attractive between opposite charges and repulsive between like charges.) The negative sign in the exponential gives the interaction a finite effective range, so that particles at great distances will hardly interact any longer (interaction forces fall off exponentially with increasing separation).

As for other forces, the form of the Yukawa potential has a geometrical interpretation in terms of the field line picture introduced by Faraday: the 1/r part results from the dilution of the field line flux in space. The force is proportional to the number of field lines crossing an elementary surface. Since the field lines are emitted isotropically from the force source, and since the distance r between the elementary surface and the source varies the apparent size of the surface (the solid angle) as 1/r², the force also follows the 1/r² dependence. This is equivalent to the 1/r part of the potential. In addition, the exchanged mesons are unstable and have a finite lifetime. The disappearance (radioactive decay) of the mesons causes a reduction of the flux through the surface, which results in the additional exponential factor of the Yukawa potential. Massless particles such as photons are stable and thus yield only 1/r potentials. (Note however that other massless particles such as gluons or gravitons do not generally yield 1/r potentials because they interact with each other, distorting their field pattern. When this self-interaction is negligible, such as in weak-field gravity (Newtonian gravitation) or for very short distances in the strong interaction (asymptotic freedom), the 1/r potential is restored.)
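The statement that the exponential factor gives the interaction a finite effective range can be made quantitative with a standard estimate (not part of the original article text above): restoring the factors of ħ and c, the potential decays on the scale of the mediating particle's reduced Compton wavelength.

```latex
% Yukawa potential with explicit range lambda set by the mediator mass m:
\[
  V(r) = -\frac{g^{2}}{4\pi}\,\frac{e^{-r/\lambda}}{r},
  \qquad
  \lambda = \frac{\hbar}{m c}.
\]
% For pion exchange, m_pi c^2 ~ 140 MeV and hbar*c ~ 197 MeV fm, so
\[
  \lambda_{\pi} \approx \frac{197\ \mathrm{MeV\,fm}}{140\ \mathrm{MeV}} \approx 1.4\ \mathrm{fm},
\]
% consistent with the femtometre-scale range of the nuclear force.
```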
The action

The Yukawa interaction is an interaction between a scalar field (or pseudoscalar field) φ and a Dirac field ψ of the type g ψ̄φψ (scalar) or g ψ̄iγ^5φψ (pseudoscalar). The action for a meson field φ interacting with a Dirac baryon field ψ is

S[φ, ψ] = ∫ d^d x [ L_meson(φ) + L_Dirac(ψ) + L_Yukawa(φ, ψ) ],

where the integration is performed over d dimensions; for typical four-dimensional spacetime, d = 4. The meson Lagrangian is given by

L_meson(φ) = ½ ∂^μφ ∂_μφ − V(φ).

Here, V(φ) is a self-interaction term. For a free-field massive meson, one would have V(φ) = ½ μ²φ², where μ is the mass of the meson. For a (renormalizable, polynomial) self-interacting field, one will have V(φ) = ½ μ²φ² + (λ/4!) φ⁴, where λ is a coupling constant. This potential is explored in detail in the article on the quartic interaction. The free-field Dirac Lagrangian is given by

L_Dirac(ψ) = ψ̄(iγ^μ∂_μ − m)ψ,

where m is the real-valued, positive mass of the fermion. The Yukawa interaction term is L_Yukawa(φ, ψ) = −g ψ̄φψ, where g is the (real) coupling constant for scalar mesons; for pseudoscalar mesons the term is −g ψ̄iγ^5φψ. Putting it all together, one can write the above more explicitly as

S[φ, ψ] = ∫ d^d x [ ½ ∂^μφ ∂_μφ − V(φ) + ψ̄(iγ^μ∂_μ − m)ψ − g ψ̄φψ ].

Yukawa coupling to the Higgs in the Standard Model

A Yukawa coupling term to the Higgs field effecting spontaneous symmetry breaking in the Standard Model is responsible for fermion masses in a symmetric manner. Suppose that the potential V(φ) has its minimum not at φ = 0 but at some non-zero value v. This can happen, for example, with a quartic "Mexican hat" form such as V(φ) = λ(φ² − v²)². In this case, the Lagrangian exhibits spontaneous symmetry breaking, because the field acquires a non-zero vacuum expectation value ⟨φ⟩ = v. In the Standard Model, this non-zero expectation value is responsible for the fermion masses despite the chiral symmetry of the model apparently excluding them. To exhibit the mass term, the action can be re-expressed in terms of the shifted field φ̃ = φ − v, where v is constructed to be independent of position (a constant). This means that the Yukawa term includes a component −g v ψ̄ψ and, since both g and v are constants, it presents as a mass term for the fermion with equivalent mass g v. This mechanism is the means by which spontaneous symmetry breaking gives mass to fermions. The scalar field φ is known as the Higgs field. The Yukawa coupling for any given fermion in the Standard Model is an input to the theory. The ultimate reason for these couplings is not known: it would be something that a better, deeper theory should explain.

Majorana form

It is also possible to have a Yukawa interaction between a scalar and a Majorana field. In fact, the Yukawa interaction involving a scalar and a Dirac spinor can be thought of as a Yukawa interaction involving a scalar with two Majorana spinors of the same mass. Broken out in terms of the two chiral Majorana spinors, the action takes an analogous form with a complex coupling constant and a complex mass parameter, with the number of dimensions d as above.

See also

The article Yukawa potential provides a simple example of the Feynman rules and a calculation of a scattering amplitude from a Feynman diagram involving a Yukawa interaction.

References

Quantum field theory Standard Model Electroweak theory
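To supplement the Higgs-coupling discussion above, the appearance of the fermion mass term can be written out in one step. This uses the simplified single-real-scalar notation of this article; the final line quotes the conventional Standard Model normalization (Higgs doublet with vacuum expectation value v/√2), stated here without derivation.

```latex
% Expanding the Yukawa term about the vacuum expectation value v:
\[
  \phi = v + \tilde{\phi}
  \quad\Longrightarrow\quad
  -g\,\bar{\psi}\,\phi\,\psi
  = \underbrace{-\,g v\,\bar{\psi}\psi}_{\text{mass term},\; m = g v}
  \;-\; g\,\bar{\psi}\,\tilde{\phi}\,\psi .
\]
% In the Standard Model, with Yukawa coupling y_f and <H> = v/sqrt(2):
\[
  m_f = \frac{y_f\, v}{\sqrt{2}}, \qquad v \approx 246\ \mathrm{GeV}.
\]
```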
Yukawa interaction
[ "Physics" ]
1,353
[ "Quantum field theory", "Physical phenomena", "Standard Model", "Quantum mechanics", "Electroweak theory", "Fundamental interactions", "Particle physics" ]
4,951,574
https://en.wikipedia.org/wiki/Archaeoacoustics
Archaeoacoustics is a sub-field of archaeology and acoustics which studies the relationship between people and sound throughout history. It is an interdisciplinary field with methodological contributions from acoustics, archaeology, and computer simulation, and is broadly related to topics within cultural anthropology such as experimental archaeology and ethnomusicology. Since many cultural practices have sonic components, applying acoustical methods to the study of archaeological sites and artifacts may reveal new information on the civilizations examined.

Disciplinary methodology

As the study of archaeoacoustics is concerned with a variety of cultural phenomena, the methodologies depend on the subject of inquiry. The majority of archaeoacoustic studies can be grouped into either a study of artefacts (like musical instruments) or places (buildings or sites). For archaeological or historical sites that still exist in the present day, measurement methods from the realm of architectural acoustics may be used to characterise the behaviour of the site's acoustic field (a simple example of one such measurement calculation is sketched at the end of this article). When sites have been altered from their original state, a mixed-methodology approach may be used where acoustic measurements are combined with virtual reconstructions and simulations. The output of these simulations may be used to listen to the historical state of the site (via auralization), to aid in analysis based on the principles of psychoacoustics, and for societal contextualization when historically relevant sources are taken into consideration. For archaeological objects, acoustic measurements and simulations may be used to investigate the possible acoustic behaviour of artifacts found at a site, as in the case of an acoustic jar. In a similar vein, the relationship between cultural uses of a space and artifacts found within it can be examined experimentally, as with lithophones and ringing rocks.

Notable applications

Natural formations

Iegor Reznikoff and Michel Dauvois studied the prehistoric painted caves of France, and found links between the artworks' positioning and acoustic effects. An AHRC project headed by Rupert Till of Huddersfield University, Chris Scarre of Durham University, and Bruno Fazenda of Salford University studies similar relationships in the prehistoric painted caves in northern Spain. More recently, archaeologists Margarita Díaz-Andreu, Carlos García Benito and Tommaso Mattioli have undertaken work on rock art landscapes in Italy, France and Spain, paying particular attention to echolocation and the augmented audibility of distant sounds that is experienced at some rock art sites. Steven Waller has also studied the links between rock art and sound.

Structures and buildings

Stonehenge

In 1999, Aaron Watson undertook work on the acoustics of numerous archaeological sites, including Stonehenge, and investigated several chamber tombs and other stone circles. Rupert Till (Huddersfield) and Bruno Fazenda (Salford) also explored Stonehenge's acoustics. At a 2011 conference, Steven Waller argued that acoustic interference patterns were used to design the blueprint of Stonehenge.
Almost a decade later, in a detailed study published in 2020 in the Journal of Archaeological Science, a team led by Trevor Cox and Bruno Fazenda (Salford) employed an acoustic scale-model reconstruction of Stonehenge to examine the acoustics within and around the site at different historical stages of the monument. The study applied architectural acoustics methods and knowledge to prehistoric archaeology, offering novel insight into how speech and musical sounds were altered by the acoustics of Stonehenge.

Chavín de Huantar

Miriam Kolar and colleagues (Stanford) studied various spatial and perceptual attributes of Chavín de Huantar. They identified spaces within the site that hold the same resonance as that produced by pututu shells (which were used as instruments in the Chavín culture).

Chichen Itza

Scientific research conducted since 1998 suggests that the Kukulkan pyramid at Chichen Itza mimics the chirping sound of the quetzal bird when people clap their hands in front of it. The researchers argue that this phenomenon is not accidental, and that the builders of the pyramid felt divinely rewarded by the echoing effect of the structure. Acoustically, the clapping noise rings out and scatters against the temple's high and narrow limestone steps, producing a chirp-like tone that declines in frequency.

Artifacts

Archaeologist Paul Devereux's work (2001) has looked at ringing rocks, Avebury and various other subjects, which he details in his book Stone Age Soundtracks. Ian Cross of the University of Cambridge has explored lithoacoustics, the use of stones as musical instruments. Archaeologist Cornelia Kleinitz has studied the sound of rock gongs in Sudan with Rupert Till and Brenda Baker. Panagiotis Karampatzakis and Vasilios Zafranas investigated the acoustic properties of the Necromanteion of Acheron, Aristoxenus' acoustic vases, and the evolution of acoustics in the ancient Greek and Roman odea.

Study groups

The activity of research groups in the field of archaeoacoustics (sometimes called "acoustic heritage") and the related field of music archaeology is determined by the availability of funding, though some groups maintain a long-term presence in the field. In the past twenty years, many researchers have undertaken both seminal methodological work to identify, conserve, or recreate aspects of historical acoustic environments and case studies at relevant heritage sites.

The Acoustics and Music of British Prehistory Research Network was funded by the Arts and Humanities Research Council and the Engineering and Physical Sciences Research Council, and was led by Rupert Till and Chris Scarre, as well as Professor Jian Kang of Sheffield University's Department of Architecture. It has a list of researchers working in the field, and links to many other relevant sites. An e-mail list, set up by Victor Reijs as a result of the First Pan-American/Iberian Meeting on Acoustics, has been discussing the subject since 2002.

Based in the US, the OTS Foundation has conducted several international conferences specifically on archaeoacoustics, with a focus on the human experience of sound in ancient ritual and ceremonial spaces. The published papers represent a broader multidisciplinary study and include input from the realms of archaeology, architecture, acoustic engineering, rock art, and psychoacoustics, as well as reports of fieldwork from Gobekli Tepe and southern Turkey, Malta, and elsewhere around the world.
The European Music Archeology Project is a multi-million euro project to recreate ancient instruments and their sounds, and also the environments in which they would have been played.

Discredited theories

Prior to the establishment of archaeoacoustics as a formal area of study, the possibility of unintentionally recorded sound contained in ancient artifacts held great interest for some theorists. Phonograph cylinders store sound as engravings in the surface of the cylinder, which can be played back by a phonograph with the proper settings. It was hypothesized that this process could have been accidentally replicated during the creation of a ceramic pot or vase, and that such artifacts could be sonified to recover the sounds contained within the elastic medium. In 1902, Charles Sanders Peirce expressed this idea when he wrote: "Give science only a hundred more centuries of increase in geometrical progression, and she may be expected to find that the sound waves of Aristotle's voice have somehow recorded themselves."

The concept continued to be of interest throughout the second half of the century, with David E. H. Jones discussing the subject in his "Daedalus" column in the 6 February 1969 issue of New Scientist magazine. Jones subsequently received a letter from Richard G. Woodbridge III, who claimed to have already been working on the idea and stated that he had sent a paper on the subject to the journal Nature. The paper never appeared in Nature, but the Proceedings of the IEEE printed a letter from Woodbridge entitled "Acoustic Recordings from Antiquity" in its August 1969 edition. In this letter, the author called attention to what he called "Acoustic Archaeology" and described some early experiments in the field. He then described his experiments with making clay pots and oil paintings from which sound could then be replayed, using a conventional record-player cartridge connected directly to a set of headphones. He claimed to have extracted the hum of the potter's wheel from the grooves of a pot, and the word "blue" from an analysis of a patch of blue colour in a painting.

In 1993, the idea was further explored by archaeology professor Paul Åström and acoustics professor Mendel Kleiner, who reported that they could recover some sounds from pottery, mainly in the upper frequencies. As discussed in an episode of MythBusters (Episode 62: Killer Cable Snaps, Pottery Record), while some generic acoustic phenomena can be found on pottery, it is unlikely that any discernible sounds (like someone talking) could be recorded on the pots, unless ancient people had the technical knowledge to deliberately put the sounds on the artifacts.

In popular culture

A 1955 episode of the syndicated U.S. TV series Science Fiction Theatre, called "The Frozen Sound", involves a stone of hardened lava, long kept as a paperweight, that turns out to have recorded the voices of a panicking crowd in Pompeii almost 2000 years before. Nigel Kneale's 1972 BBC television play The Stone Tape helped to popularize the term 'stone tape theory'. Arthur C. Clarke discussed the idea at a NASA conference on the future of technology in the early 1970s. An episode of Mysteryquest on History called Stonehenge featured Rupert Till and Bruno Fazenda conducting acoustic tests at Stonehenge and at the Maryhill Monument, a full-sized replica of Stonehenge in the USA.
Gregory Benford's 1979 short story "Time Shards" concerns a researcher who recovers thousand-year-old sound from a piece of pottery thrown on a wheel and inscribed with a fine wire as it spun. The sound is then analyzed to reveal conversations between the potter and his assistant in Middle English. Rudy Rucker's 1981 short story "Buzz" includes a small section of audio recovered from ancient Egyptian pottery. A 2000 episode of The X-Files, "Hollywood A.D.", features "The Lazarus Bowl", a mythical piece of pottery reputed to have recorded on it the words that Jesus Christ spoke when he raised Lazarus from the dead. In the 1996 game Amber: Journeys Beyond, this phenomenon is referred to as 'stone tape theory' and is a key part of the game's plot. CSI: Crime Scene Investigation used this idea in the 2005 episode "Committed", where an inmate's conversation is partially recorded on a clay jar. In the first-season episode of Fringe entitled "The Road Not Taken", an electron microscope is used to reproduce sounds captured on a partially melted window. In Jurassic Park III, a 3D-printed larynx replica is used to communicate with velociraptors.

See also

International Study Group on Music Archaeology Music archaeology Ancient music Prehistoric music Ernst Chladni Phonautograph Echolocation Room acoustics and Architectural acoustics

References

Archaeological sub-disciplines Acoustics
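The acoustic-field characterization mentioned under Disciplinary methodology typically begins with a measured impulse response of the space, from which reverberation time can be estimated. The sketch below is a minimal, illustrative implementation of Schroeder backward integration (computing a T20 fit and extrapolating to RT60); it assumes a mono impulse response is already available as a NumPy array and omits the octave-band filtering and noise compensation used in practice.

```python
# Minimal sketch: estimate reverberation time (RT60 via T20) from an impulse
# response using Schroeder backward integration. Illustrative only.
import numpy as np

def rt60_from_impulse_response(ir, fs):
    """Return an RT60 estimate in seconds from a mono impulse response."""
    energy = ir.astype(float) ** 2
    # Schroeder curve: backward-integrated energy, expressed in dB.
    edc = np.cumsum(energy[::-1])[::-1]
    edc_db = 10.0 * np.log10(edc / edc.max())
    t = np.arange(len(ir)) / fs
    # Fit a line to the decay between -5 dB and -25 dB (T20 range).
    mask = (edc_db <= -5.0) & (edc_db >= -25.0)
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
    # Time to decay by 60 dB, extrapolated from the fitted slope (dB/s).
    return -60.0 / slope

if __name__ == "__main__":
    fs = 48000
    t = np.arange(0, 2.0, 1 / fs)
    # Synthetic exponentially decaying noise with a known RT60 of ~1.2 s.
    ir = np.random.randn(len(t)) * 10 ** (-3 * t / 1.2)
    print(f"Estimated RT60: {rt60_from_impulse_response(ir, fs):.2f} s")
```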
Archaeoacoustics
[ "Physics" ]
2,238
[ "Classical mechanics", "Acoustics" ]
4,952,672
https://en.wikipedia.org/wiki/Hydrogen%E2%80%93deuterium%20exchange
Hydrogen–deuterium exchange (also called H–D or H/D exchange) is a chemical reaction in which a covalently bonded hydrogen atom is replaced by a deuterium atom, or vice versa. It can be applied most easily to exchangeable protons and deuterons, where such a transformation occurs in the presence of a suitable deuterium source, without any catalyst. The use of acid, base or metal catalysts, coupled with conditions of increased temperature and pressure, can facilitate the exchange of non-exchangeable hydrogen atoms, so long as the substrate is robust to the conditions and reagents employed. This often results in perdeuteration: hydrogen–deuterium exchange of all non-exchangeable hydrogen atoms in a molecule. An example of exchangeable protons commonly examined in this way is the protons of the amides in the backbone of a protein. The method gives information about the solvent accessibility of various parts of the molecule, and thus the tertiary structure of the protein. The theoretical framework for understanding hydrogen exchange in proteins was first described by Kaj Ulrik Linderstrøm-Lang, and he was the first to apply H/D exchange to the study of proteins.

Exchange reaction

In protic solution, exchangeable protons such as those in hydroxyl or amine groups exchange with the solvent. If D2O is the solvent, deuterons will be incorporated at these positions. The exchange reaction can be followed using a variety of methods (see Detection). Since this exchange is an equilibrium reaction, the molar amount of deuterium should be high compared to the exchangeable protons of the substrate. For instance, deuterium is added to a protein in H2O by diluting the H2O solution with D2O (e.g. tenfold). Usually exchange is performed at physiological pH (7.0–8.0), where proteins are in their most native ensemble of conformational states.

The H/D exchange reaction can also be catalysed by acid, base or metal catalysts such as platinum. For the backbone amide hydrogen atoms of proteins, the minimum exchange rate occurs at approximately pH 2.6, on average. By performing the exchange at neutral pH and then rapidly changing the pH, the exchange rates of the backbone amide hydrogens can be dramatically slowed, or quenched. The pH at which the reaction is quenched depends on the analysis method. For detection by NMR, the pH may be moved to around 4.0–4.5. For detection by mass spectrometry, the pH is dropped to the minimum of the exchange curve, pH 2.6. In the most basic experiment, the reaction is allowed to take place for a set time before it is quenched.

The deuteration pattern of a molecule that has undergone H/D exchange can be maintained in aprotic environments. However, some methods of deuteration analysis for molecules such as proteins are performed in aqueous solution, which means that exchange will continue at a slow rate even after the reaction is quenched. Undesired deuterium–hydrogen exchange is referred to as back-exchange, and various methods have been devised to correct for it.

Detection

H–D exchange was originally measured by the father of hydrogen exchange, Kaj Ulrik Linderstrøm-Lang, using density gradient tubes. In modern times, H–D exchange has primarily been monitored by three methods: NMR spectroscopy, mass spectrometry and neutron crystallography. Each of these methods has its advantages and drawbacks.

NMR spectroscopy

Hydrogen and deuterium nuclei are grossly different in their magnetic properties. Thus it is possible to distinguish between them by NMR spectroscopy.
Deuterons will not be observed in a 1H NMR spectrum and, conversely, protons will not be observed in a 2H NMR spectrum. Where small signals are observed in a 1H NMR spectrum of a highly deuterated sample, these are referred to as residual signals. They can be used to calculate the level of deuteration in a molecule. Analogous signals are not observed in 2H NMR spectra because of the low sensitivity of this technique compared to the 1H analysis. Deuterons typically exhibit very similar chemical shifts to their analogous protons. Analysis via 13C NMR spectroscopy is also possible: the different spin values of hydrogen (1/2) and deuterium (1) give rise to different splitting multiplicities. NMR spectroscopy can be used to determine site-specific deuteration of molecules.

Another method uses HSQC spectra. Typically, HSQC spectra are recorded at a series of timepoints while the hydrogen is exchanging with the deuterium. Since the HSQC experiment is specific for hydrogen, the signal will decay exponentially as the hydrogen exchanges. It is then possible to fit an exponential function to the data and obtain the exchange rate constant (an illustrative fit of this kind is sketched at the end of this article). This method gives residue-specific information for all the residues in the protein simultaneously. The major drawback is that it requires a prior assignment of the spectrum for the protein in question. This can be very labor-intensive, and usually limits the method to proteins smaller than 25 kDa. Because it takes minutes to hours to record an HSQC spectrum, amides that exchange quickly must be measured using other pulse sequences.

Mass spectrometry

Hydrogen–deuterium exchange mass spectrometry (HX-MS or HDX-MS) can determine the overall deuterium content of molecules which have undergone H/D exchange. Because of the sample preparation required, it is typically considered to provide an accurate measurement of non-exchangeable hydrogen atoms only. It can also involve H/D exchange in the gas phase, or solution-phase exchange prior to ionization. It has several advantages over NMR spectroscopy with respect to the analysis of H–D exchange reactions: much less material is needed, the sample concentration can be very low (as low as 0.1 μM), the size limit is much greater, and data can usually be collected and interpreted much more quickly.

The deuterium nucleus is twice as heavy as the hydrogen nucleus because it contains a neutron as well as a proton. Thus a molecule that contains some deuterium will be heavier than one that contains all hydrogen. As a protein is increasingly deuterated, the molecular mass increases correspondingly. Detecting the change in the mass of a protein upon deuteration was made possible by modern protein mass spectrometry, first reported in 1991 by Katta and Chait.

Determining site-specific deuteration by mass spectrometry is more complicated than by NMR spectroscopy. For example, the location and relative amount of deuterium exchange along the peptide backbone can be determined roughly by subjecting the protein to proteolysis after the exchange reaction has been quenched. Individual peptides are then analyzed for the overall deuteration of each peptide fragment. Using this technique, the resolution of deuterium exchange is determined by the size of the peptides produced during digestion. Pepsin, an acid protease, is commonly used for proteolysis, as the quench pH must be maintained during the proteolytic reaction. To minimize back-exchange, proteolysis and the subsequent mass spectrometry analysis must be done as quickly as possible.
HPLC separation of the peptic digest is often carried out at low temperature just prior to electrospray mass spectrometry to minimize back-exchange. More recently, UPLC has been used due to its superior separation capabilities. It was proposed in 1999 that it might be possible to achieve single-residue resolution by using collision-induced dissociation (CID) fragmentation of deuterated peptides in conjunction with tandem mass spectrometry. It was soon discovered that CID causes "scrambling" of the deuterium position within the peptides. However, fragmentation produced by MALDI in-source decay (ISD), electron capture dissociation (ECD), and electron transfer dissociation (ETD) proceeds with little or no scrambling under the correct experimental conditions. Scrambling of the isotopic labeling is caused by collisional heating prior to dissociation of the ion, and while CID does cause scrambling, collisional heating can also occur during ionization and ion transport. However, by careful optimization of the instrument parameters that cause ion heating, hydrogen scrambling can be minimized to a degree that preserves the solution-phase isotopic labeling until fragmentation can be performed using a technique in which scrambling does not occur. More recently, ultraviolet photodissociation (UVPD) has also been investigated as a possible fragmentation technique to localize deuterium within peptides and proteins. In this regard, the conclusions have been mixed: while it is possible to obtain UVPD fragments that have not undergone scrambling under certain conditions, others have shown that scrambling can occur for both peptides and proteins during the UVPD fragmentation step itself. The theory consolidating these apparent contradictions has to do with the dual fragmentation pathways that may arise from UV irradiation of peptides and proteins, i.e. direct and statistical dissociation. That is, if experimental conditions favor direct dissociation and the precursor ion is kept at low internal energy before and during fragmentation, the deuterium level of the resulting fragments will correspond to that of the non-scrambled precursor. However, experimental conditions may favor statistical dissociation during UV irradiation, especially at long irradiation times and low gas pressure, leading to internal conversion of the electronic excitation energy contributed by the UV photons. The result is vibrational excitation of the irradiated molecule, which in turn undergoes scrambling.

Neutron crystallography

Hydrogen–deuterium exchange of fast-exchanging species (e.g. hydroxyl groups) can be measured quantitatively at atomic resolution by neutron crystallography, and in real time if the exchange is conducted during the diffraction experiment. High-intensity neutron beams are generally generated by spallation at linac particle accelerators such as the Spallation Neutron Source. Neutrons are diffracted by crystals similarly to X-rays and can be used for structure determination. Hydrogen atoms, which have zero or one electron in a biological setting, diffract X-rays poorly and are effectively invisible under normal experimental conditions. Neutrons scatter from atomic nuclei and are therefore capable of detecting hydrogen and deuterium atoms. Hydrogen atoms are routinely replaced with deuterium, which introduces a strong and positive scattering factor. It is often sufficient to replace only the solvent and labile hydrogen atoms in a protein crystal by vapor diffusion.
In such a structure, the occupancy of an exchangeable deuterium atom will refine to a value between 0 and 100%, directly quantifying the amount of exchange.

Applications

Neutron scattering

Perdeuteration of one component of a multi-component system can provide contrast for neutron scattering experiments where the contrast obtained by using deuterated solvents is insufficient.

Protein structure

Except by neutron crystallography, it is not possible to determine the structure of a protein with H/D exchange, nor is it possible to define secondary structural elements. The reasons for this are related to the way in which protein structure slows exchange. Exchange rates are a function of two parameters: solvent accessibility and hydrogen bonding. Thus an amide that is part of an intramolecular hydrogen bond will exchange slowly, if at all, while an amide on the surface of a protein that is hydrogen bonded to water will exchange rapidly. Amides buried from the solvent but not hydrogen bonded may also have very slow exchange rates. Because both solvent accessibility and hydrogen bonding contribute to the rate of exchange, it becomes difficult to attribute a given exchange rate to a structural element without crystallographic or NMR structural data.

H–D exchange has been used to characterize the folding pathway of proteins, by refolding the protein under exchange conditions. In a forward exchange experiment (H to D), a pulse of deuterium is added after various amounts of refolding time. The parts of the structure that form rapidly will be protected and thus not exchanged, whereas areas that fold late in the pathway will be exposed to exchange for longer periods of time. Thus H/D exchange can be used to determine the sequence of various folding events. Factors determining the time resolution of this approach are the efficiency of mixing and how quickly the quench can be performed after the labeling.

H–D exchange has also been used to characterize protein structures and protein–protein interactions. The exchange reaction needs to be carried out with the isolated proteins and with the complex, and the exchanging regions are then compared. If a region is buried by the binding, the amides in this region may be protected in the complex and exchange slowly. However, one must bear in mind that H–D exchange cannot be used to locate binding interfaces for all protein–protein interactions. Some protein–protein interactions are driven by electrostatic forces of side chains and are unlikely to change the exchange rate of backbone amide hydrogens, particularly if the amide hydrogens are located in stable structural elements such as alpha helices.

Lastly, H–D exchange can be used to monitor conformational changes in proteins as they relate to protein function. If the conformation is altered as a result of post-translational modification, enzyme activation, drug binding or other functional events, there will likely be a change in H/D exchange that can be detected.

HDXsite online webserver

HDXsite is an online webserver that includes applications such as HDX modeller, which increases the resolution of experimental HDX data and models protection factors for individual residues.

References

Further reading

Chemical reactions Protein structure Spectroscopy Mass spectrometry Deuterium
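As an illustration of the kinetic analysis described in the NMR spectroscopy section above, the following sketch fits a single-exponential decay to the peak intensities of one residue across a series of timepoints and converts the observed rate into a protection factor and an opening free energy. It is a minimal example with placeholder numbers; the intrinsic (unprotected) exchange rate is supplied as an assumed input rather than computed from sequence and conditions, as dedicated tools would do.

```python
# Minimal sketch: observed H/D exchange rate from HSQC peak-intensity decay,
# then a protection factor and free energy of opening. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J mol^-1 K^-1

def decay(t, i0, k_obs):
    """Single-exponential decay of a 1H-detected peak as H exchanges for D."""
    return i0 * np.exp(-k_obs * t)

def fit_exchange(times_s, intensities):
    """Fit I(t) = I0 * exp(-k_obs * t) and return k_obs in s^-1."""
    p0 = (intensities[0], 1.0 / times_s[-1])
    (i0, k_obs), _ = curve_fit(decay, times_s, intensities, p0=p0)
    return k_obs

def protection_factor(k_intrinsic, k_obs, temp_k=293.0):
    """Protection factor P = k_int / k_obs and Delta G of opening = RT ln P."""
    p = k_intrinsic / k_obs
    dg = R * temp_k * np.log(p)  # J/mol
    return p, dg

if __name__ == "__main__":
    times = np.array([60, 300, 900, 1800, 3600, 7200], dtype=float)  # s
    peaks = np.array([0.95, 0.80, 0.55, 0.33, 0.12, 0.02])           # arbitrary units
    k_obs = fit_exchange(times, peaks)
    # Assumed intrinsic rate for an unprotected amide under these conditions.
    p, dg = protection_factor(k_intrinsic=1.0, k_obs=k_obs)
    print(f"k_obs = {k_obs:.2e} s^-1, P = {p:.1e}, dG = {dg / 1000:.1f} kJ/mol")
```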
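For the mass spectrometry readout, deuterium uptake of a peptide is commonly computed from centroid masses, and back-exchange can be corrected using fully protonated (0%) and fully deuterated (100%) control samples. The sketch below implements that widely used correction in its simplest form; all numbers are placeholders rather than data from any particular experiment.

```python
# Minimal sketch: deuterium uptake of a peptide from centroid masses,
# with back-exchange correction using 0% and 100% deuterated controls.

def deuterium_uptake(m_t, m_0):
    """Uncorrected uptake (Da): centroid mass at time t minus undeuterated mass."""
    return m_t - m_0

def corrected_uptake(m_t, m_0, m_100, n_exchangeable):
    """Back-exchange-corrected number of deuterons.

    D = N * (m_t - m_0) / (m_100 - m_0), where N is the number of exchangeable
    backbone amides in the peptide (prolines and the first one or two residues
    are usually excluded from the count).
    """
    return n_exchangeable * (m_t - m_0) / (m_100 - m_0)

if __name__ == "__main__":
    m_0, m_100, m_t = 1520.78, 1528.31, 1524.95  # placeholder centroid masses (Da)
    n = 10                                        # assumed exchangeable amides
    print(f"Raw uptake:       {deuterium_uptake(m_t, m_0):.2f} Da")
    print(f"Corrected uptake: {corrected_uptake(m_t, m_0, m_100, n):.2f} D")
```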
Hydrogen–deuterium exchange
[ "Physics", "Chemistry" ]
2,811
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Structural biology", "nan", "Protein structure", "Spectroscopy", "Matter" ]
4,954,001
https://en.wikipedia.org/wiki/Membrane%20biology
Membrane biology is the study of the biological and physicochemical characteristics of membranes, with applications in the study of cellular physiology. Membrane bioelectrical impulses are described by the Hodgkin cycle.

Biophysics

Membrane biophysics is the study of biological membrane structure and function using physical, computational, mathematical, and biophysical methods. A combination of these methods can be used to create phase diagrams of different types of membranes, which yields information on the thermodynamic behavior of a membrane and its components. As opposed to membrane biology, membrane biophysics focuses on quantitative information and modeling of various membrane phenomena, such as lipid raft formation, rates of lipid and cholesterol flip-flop, protein-lipid coupling, and the effect of the bending and elasticity functions of membranes on inter-cell connections.

References

Biophysics
Membrane biology
[ "Physics", "Chemistry", "Biology" ]
173
[ "Membrane biology", "Applied and interdisciplinary physics", "Biophysics", "Molecular biology" ]