Dataset schema: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
21,519,760
https://en.wikipedia.org/wiki/FortSP
FortSP is a software package for solving stochastic programming (SP) problems. It solves scenario-based SP problems with recourse as well as problems with chance constraints and integrated chance constraints. FortSP is available as a standalone executable that accepts input in SMPS format and as a library with an interface in the C programming language. The solution algorithms provided by FortSP include Benders' decomposition and a variant of level decomposition for two-stage problems, nested Benders' decomposition for multistage problems and reformulation of the problem as a deterministic equivalent. There is also an implementation of a cutting-plane algorithm for integrated chance constraints. FortSP supports external linear programming solvers such as CPLEX and FortMP through their library interfaces or nl files. These solvers are used to optimize the deterministic equivalent problem and also the subproblems in the decomposition methods. References External links OptiRisk Systems home page FortSP home page Numerical software Mathematical optimization software
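FortSP's own C interface is not reproduced here; instead, as a minimal sketch of the deterministic-equivalent reformulation mentioned above, the following invented two-stage problem (a first-stage order quantity plus per-scenario shortage recourse) is written out in extensive form and handed to SciPy's LP solver. All names and numbers are illustrative assumptions, not FortSP's API.

```python
import numpy as np
from scipy.optimize import linprog

# First stage: order quantity x at unit cost 1.
# Scenarios: demand d_s with probability p_s; unmet demand u_s is
# penalized at unit cost 2 in the second stage.
p = np.array([0.3, 0.5, 0.2])   # scenario probabilities
d = np.array([4.0, 6.0, 9.0])   # scenario demands

# Variables: [x, u_1, u_2, u_3]; minimize x + sum_s p_s * 2 * u_s
c = np.concatenate(([1.0], 2.0 * p))

# Recourse constraints u_s >= d_s - x, rewritten as -x - u_s <= -d_s
A_ub = np.hstack((-np.ones((3, 1)), -np.eye(3)))
b_ub = -d

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("first-stage decision x =", res.x[0])   # expected: 6.0
print("expected total cost    =", res.fun)    # expected: 7.2
```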
FortSP
[ "Mathematics" ]
206
[ "Numerical software", "Mathematical software" ]
3,880,390
https://en.wikipedia.org/wiki/Physical%20coefficient
A physical coefficient is a number that characterizes some physical property of a technical or scientific object under specified conditions. Stoichiometric coefficient of a chemical compound To find the stoichiometric coefficients in a chemical equation, the elements involved must be balanced. For example, consider water, H2O. Hydrogen (H) and oxygen (O) both occur as diatomic molecules, H2 and O2. To form water, the O2 molecule splits, and each O atom reacts with one H2 molecule to form H2O; since one O2 molecule thus yields two water molecules, the coefficient 2 is placed in front of H2O, and balancing the hydrogen then requires two H2 molecules. The total reaction is thus 2 H2 + O2 → 2 H2O. Examples of physical coefficients Coefficient of thermal expansion (thermodynamics) (units of K−1) - Relates the change in temperature to the change in a material's dimensions. Partition coefficient (KD) (chemistry) - The ratio of concentrations of a compound in the two phases of a mixture of two immiscible solvents at equilibrium. Hall coefficient (electrical physics) - Relates a magnetic field applied to an element to the voltage created, the amount of current and the element thickness. It is a characteristic of the material from which the conductor is made. Lift coefficient (CL or CZ) (aerodynamics) (dimensionless) - Relates the lift generated by an airfoil with the dynamic pressure of the fluid flow around the airfoil, and the planform area of the airfoil. Ballistic coefficient (BC) (aerodynamics) (units of kg/m2) - A measure of a body's ability to overcome air resistance in flight. BC is a function of mass, diameter, and drag coefficient; a small worked sketch follows below. Transmission coefficient (quantum mechanics) (dimensionless) - Represents the probability flux of a transmitted wave relative to that of an incident wave. It is often used to describe the probability of a particle tunnelling through a barrier. Damping factor, a.k.a. viscous damping coefficient (physical engineering) (units of newton-seconds per meter) - Relates a damping force with the velocity of the object whose motion is being damped. References Physical quantities
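The ballistic coefficient entry above is the one item with an explicit recipe, so here is a minimal sketch, assuming the common convention BC = m / (Cd · A) with a circular cross-section A = π d² / 4; the projectile values are invented for illustration.

```python
import math

def ballistic_coefficient(mass_kg: float, diameter_m: float, drag_coefficient: float) -> float:
    """Ballistic coefficient BC = m / (Cd * A) in kg/m^2,
    using the projectile's cross-sectional area A = pi * d^2 / 4."""
    area = math.pi * diameter_m ** 2 / 4.0
    return mass_kg / (drag_coefficient * area)

# Example: a 9.5 g, 7.82 mm projectile with Cd = 0.3 (made-up values)
print(ballistic_coefficient(0.0095, 0.00782, 0.3))  # ~659 kg/m^2
```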
Physical coefficient
[ "Physics", "Mathematics" ]
494
[ "Physical phenomena", "Quantity", "Physical quantities", "Physical properties" ]
3,881,572
https://en.wikipedia.org/wiki/Singapore%20Combat%20Engineers
The Singapore Combat Engineers (SCE) is a formation of the Singapore Army. Combat Engineers provide mobility by bridging gaps and clearing minefields to facilitate the speedy advance of troops into enemy territory, and counter-mobility by constructing obstacles such as anti-tank ditches to impede the enemy's movement. The Combat Engineers also construct trenches, drainage systems and other related infrastructure to enhance the survivability of troops during operations. History When the Singapore Armed Forces Training Institute (SAFTI) was set up in 1967 as the first military training institute to train officers and non-commissioned officers (NCOs, now known as Specialists), an Engineer Training Wing was incorporated into the plan. Two young officers, 2LT Gurcharan Singh and 2LT Chng Teow Hua, were selected to attend a basic engineer officer's course in Fort Belvoir, Virginia, United States. Upon completion of their course, these two officers, with the Commanding Officer, MAJ George Mitchell, conducted the first Engineer Commanders' Course from April to August 1968. The graduate officers and NCOs from the course formed the nucleus of the SCE. As its role became more defined and its responsibilities expanded, the Engineer Wing was renamed the School of Field Engineers and moved from SAFTI (now Pasir Laba Camp) to new premises on Pulau Blakang Mati (now Sentosa) in the same year, subsequently branching out to other camps such as Gillman Camp and Loyang Camp (both now defunct). In April 1970, the Engineer Headquarters (EHQ) was established with MAJ Mitchell as the Senior Engineer Officer. The EHQ was renamed HQ Singapore Combat Engineer in 1974 and the commander's designation was changed to Chief Engineer Officer. Motto The Formation's motto, "Advance and Overcome", is derived from the Combat Engineers' fundamental role in providing mobility for advancing troops by overcoming all obstacles. The Combat Engineers believe they are advancing in terms of technology and techniques, overcoming adversities along the way – all part of their efforts to better fulfill their missions of providing mobility and counter-mobility and ensuring survivability for the Army. Insignia The gold colour stands for the sterling qualities of the Combat Engineers – their steadfast spirit and durable nature. The black is for their ability to provide continuous support throughout hours of darkness. The castle is a symbol of the construction power of the Combat Engineers as seen in the bridges, fortifications, roads and obstacles often built by them. The interlocking bricks show the strength, endurance and high degree of teamwork required to accomplish engineer tasks. The bayonet represents the offensive spirit of the Engineers in piercing the enemy defences, while the twin bolts of lightning stand for the destructive demolition power of the Combat Engineers. Headquarters Jurong Camp I - 30SCE Nee Soon Camp - HQ SCE, ETI, 36SCE, 39SCE & HQ CBRE DG Seletar Camp - HQ ARMCEG, 35SCE Sungei Gedong Camp - 38SCE Combat Engineers Colours On 22 January 1977, the first Combat Engineers Colours were presented to the Combat Engineers formation by then-President Benjamin Henry Sheares at Jurong Town Stadium. The presentation of Colours signifies esprit de corps, pride and identity. "The brown base colour represented the harsh terrain that Engineers must always advance through and overcome. 
The sword, wings and anchor depicted the support given to the land, airborne and amphibious forces while the laurel and words formed a golden circle representing unity." It was replaced with a different design in October 1991. Structure The Combat Engineers formation consists of two headquarters, a training institute, five active battalions, and ten reservist battalions. Each brigade of the army has an organic company of field engineers, deployed at the discretion of the brigade commander, while each infantry battalion has an organic platoon of pioneers to support battalion movement. Unlike the usual infantry sections of seven men, a field engineer section consists of six men. There are two specialists in a section: the section commander and the section 2IC. Headquarters Singapore Combat Engineers (HQ SCE) Specialist HQ to the SAF on all matters pertaining to Combat Engineer operations. Responsible for developing core engineer capabilities in terms of mobility, counter-mobility and survivability. Engineer Training Institute (ETI) The Engineer Training Institute is the combination of three former Combat Engineer training schools, namely the School of Combat Engineers (SOCE), the Division Engineer Training Centre (DETC) and the Armoured Engineer Training Centre (AETC). AETC was removed from ETI in 1997 and re-established as an active unit, now officially known as the 38th Battalion, Singapore Combat Engineers (38SCE). ETI currently comprises the Engineer Commander Training School (ECTS), Engineer Vocational Training School (EVTS), Division Engineer Training Centre (DETC) and Engineer Staff Training Centre (ESTC). The motto for ETI is "Seek. Strive. Excel." 30th Battalion, Singapore Combat Engineers (30SCE) Originally formed as the 30th Combat Engineer Battalion (30 CEB) on 1 November 1968, the 30th Battalion, Singapore Combat Engineers (30SCE) provides the combat engineering capability of the 3rd Singapore Division, as well as Field and Plant Engineer support to the divisions and brigades of the Singapore Army. Typical Field Engineer tasks include demolition, fortification and the building of wire obstacles and minefields, while Plant Engineers operate heavy construction machinery. The battalion consists of three field companies and a Mechanized Equipment Company, and is responsible for the clearing of obstacles in the paths of advancing forces, the opening of main and alternate supply routes, and ensuring the mobility of the army's manoeuvre elements (i.e. armour and infantry forces). They also construct obstacles to deny movement to the enemy during retrograde operations, and field fortifications for the protection of friendly forces. Field Engineers employ the Medium Girder Bridge, introduced to the Singapore Combat Engineers in 1975. A field engineer company of around 100 men would take seven hours to construct a bridge spanning more than 50 metres. It was eventually replaced by the Foldable Longspan Bridge (FLB) in 2001, with which 12 men require three hours to construct a 46-metre span of bridge. Also used is the Cobra Projection Line Charge (PLC), a man-portable, rocket-propelled minefield lane clearing charge used to clear infantry lanes through minefields. Plant Engineers operate commercial construction equipment such as excavators, shovels, bulldozers and cranes. The battalion's motto is "Overcome With Speed, Fortify With Strength", and its mascot is the polar bear. 
Army Combat Engineer Group (ARMCEG) The Army Combat Engineer Group (ARMCEG) was formed on 31 March 1993 as an operational command for bridging engineers in the SAF. It ensures that the SAF is equipped with the necessary mobility and counter-mobility capabilities so that troops are able to remain mobile and overcome obstacles in their missions. 35th Battalion, Singapore Combat Engineers (35SCE) 35SCE is the battalion specialised in military bridging. Consisting of a number of companies, the battalion provides the transportation means in the form of float bridges, rafts and assault boats for the projection of combat troops and vehicles across rivers and water obstacles to facilitate troop movements. Established in 1969, the 35th Battalion, Singapore Combat Engineers was first called 35 CEB, based at Loyang Camp with ten officers and 30 NCOs. In 1971, the battalion relocated to its current base at Seletar Camp. The 35th battalion uses a variety of bridging equipment. Each company is split into three platoons, and each bridging platoon operates four M3G Float Bridges, each requiring an operating crew of four men (two sergeants and two pioneers). Each raft consists of two rigs, which when coupled together form a Class 60 raft or a float bridge. Separately, the battalion houses a specialised boat unit that comprises an undisclosed number of platoons. This unit operates in key areas such as rapid deployment alongside an elite fighting force for projection across water bodies and the execution of tactical coastal hook operations (an offensive flanking manoeuvre). It operates with two different open-top launch boats, namely the Assault Boat ("AB") and the General Support Boat ("GSB"). The company's motto is "Forged in Toughness", reflecting mission-critical success regardless of time, terrain, tide or weather. Accompanying the battalion is a Combat Support company with a Singapore Signals platoon, a Surveillance platoon and a Plant (heavy equipment) platoon. It is tasked with supporting the overall operations of the bridging battalion. In January 2005, 35SCE was deployed to Meulaboh as part of Operation Flying Eagle, Singapore's response to the 2004 Indian Ocean earthquake and tsunami. A 45-man combat engineering team was sent ashore to prepare landing points for supplies and equipment to be offloaded from the three Endurance-class landing platform dock ships of the Republic of Singapore Navy anchored off Meulaboh. The engineers also assisted in the clearing of debris and roads and the creation of helicopter landing points. The mascot of 35SCE is the crocodile, and its motto is Power Projection. 38th Battalion, Singapore Combat Engineers (38SCE) The unit was renamed 38 SCE on 29 April 2009 following its reprofiling as an active Armoured Engineer unit. The battalion's motto is Steadfast and Gallant, with the African elephant as its mascot. The battalion employs the Leopard 2 Vehicle Launched Bridge, Bionix Vehicle Launched Bridge, Trailblazer, Bronco ATTC, M113 Land Assault Mine Breaching Equipment, FV180 Combat Engineer Tractor and M728 Combat Engineer Vehicle. Chemical, Biological, Radiological and Explosives Defence Group (CBRE DG) The Chemical, Biological, Radiological and Explosives Defence Group (CBRE DG) was established in October 2002 with the integration of the operations and training of CBRD and EOD under a unified command. The CBRE DG manages all issues concerning counter-terrorist CBRE development. 
Since its inception, CBRE DG has been an integral part of the SAF's ongoing effort in the build-up of a comprehensive counter-terrorism capability against conventional and non-conventional threats, and conducts preventive and response CBRE operations in conjunction with the Home Team agencies on both the home and international fronts. On 4 April 2019, CBRE DG bid farewell to Seletar Camp and moved to its new home with 36 SCE and 39 SCE within the CBRE Cluster in Nee Soon Camp. The CBRE DG comprises the 36th and 39th Battalions, Singapore Combat Engineers, and the Medical Response Force (MRF) from the SAF Medical Corps, which provides on-scene medical treatment for casualties of chemical and biological agents. Its motto is Prepared and Vigilant. 36th Battalion, Singapore Combat Engineers (36SCE) Formed in 1969 as the Bomb Disposal Unit (BDU), the 36th Battalion, Singapore Combat Engineers is the Explosive Ordnance Disposal (EOD) unit of the Singapore Armed Forces, specialising in defusing explosive devices. In peacetime, 36SCE handles security sweeps and attends to discovered old war relics and improvised explosive devices. In 1978, 36 SCE sent a team to Bangladesh to aid in the disposal of a 500 lb aerial bomb. On 4 April 2019, 36 SCE bid farewell to Selarang Camp and moved to its new home with 39 SCE and CBRE DG within the CBRE Cluster in Nee Soon Camp. Its motto is Towards Perfection. Its mascot is the German Shepherd Dog. 39th Battalion, Singapore Combat Engineers (39SCE) The 39th Battalion, Singapore Combat Engineers is the chemical, biological and radiological defence (CBRD) unit of the Singapore Armed Forces, formed to improve the survivability of troops in a chemical warfare environment. It decontaminates incident sites that contain chemical or biological hazards and provides a sustained, multi-incident response capability. 39SCE also works closely with the Singapore Civil Defence Force (SCDF) in the event of a chemical attack. The Gulf War highlighted the increased threat of chemical weapons, prompting the SAF to begin individual chemical defence familiarisation training for its servicemen in 1991. In response, 39SCE was raised on 1 December 1993 as a company-strength unit at Seletar East Camp to develop a chemical defence capability, and to conduct training and experimentation in the areas of chemical protection, detection and decontamination. By 1996, the SAF had developed a limited chemical response capability, which it fielded for the first time during the World Trade Organization conference held in Singapore. Following the September 11 attacks, the Singapore Combat Engineers' EOD and CBRD battalions have worked with Home Affairs agencies to provide security coverage for significant international events. On 4 April 2019, 39 SCE bid farewell to Seletar Camp and moved to its new home with 36 SCE and CBRE DG within the CBRE Cluster in Nee Soon Camp. Its motto is Protect & Preserve. Its mascot is the mongoose. See also Sapper References Combat Engineers Military units and formations established in 1967 Military engineer corps
Singapore Combat Engineers
[ "Engineering" ]
2,684
[ "Engineering units and formations", "Military engineer corps" ]
3,882,239
https://en.wikipedia.org/wiki/Diffusion%20flame
In combustion, a diffusion flame is a flame in which the oxidizer and fuel are separated before burning. Contrary to its name, a diffusion flame involves both diffusion and convection processes. The name diffusion flame was first suggested by S.P. Burke and T.E.W. Schumann in 1928, to differentiate it from the premixed flame, in which fuel and oxidizer are premixed prior to burning. The diffusion flame is also referred to as a nonpremixed flame. Its burning rate is, however, still limited by the rate of diffusion. Diffusion flames tend to burn more slowly and to produce more soot than premixed flames because there may not be sufficient oxidizer for the reaction to go to completion, although there are some exceptions to the rule. The soot typically produced in a diffusion flame becomes incandescent from the heat of the flame and lends the flame its readily identifiable orange-yellow color. Diffusion flames tend to have a less-localized flame front than premixed flames. The contexts for diffusion may vary somewhat. For instance, a candle uses the heat of the flame itself to vaporize its wax fuel, and the oxidizer (oxygen) diffuses into the flame from the surrounding air, while a gaslight flame (or the safety flame of a Bunsen burner) uses fuel already in the form of a vapor. Diffusion flames are often studied in counterflow (also called opposed-jet) burners. Their interest is due to possible application in the flamelet model for turbulent combustion. Furthermore, they provide a convenient way to examine strained flames and flames with holes. These are also known under the name of "edge flames", characterized by a local extinction on their axis because of the high strain rates in the vicinity of the stagnation point. Diffusion flames have an entirely different appearance in a microgravity environment. There is no convection to carry the hot combustion products away from the fuel source, which results in a spherical flame front, as in a candle burned in microgravity. This is a rare example of a diffusion flame which does not produce much soot and therefore does not have the typical yellow color. See also Burke–Schumann flame Liñán's diffusion flame theory Emmons problem Premixed flame Oxidizing and reducing flames Oxy-fuel combustion process References External links Diffusion flames in microgravity, NASA Diffusion and edge flames in opposed-jet burners Fire Combustion engineering
Diffusion flame
[ "Chemistry", "Engineering" ]
487
[ "Combustion", "Fire", "Combustion engineering", "Industrial engineering" ]
3,882,879
https://en.wikipedia.org/wiki/Erosion%20control
Erosion control is the practice of preventing or controlling wind or water erosion in agriculture, land development, coastal areas, river banks and construction. Effective erosion controls handle surface runoff and are important techniques in preventing water pollution, soil loss, wildlife habitat loss and human property loss. Usage Erosion controls are used in natural areas, agricultural settings or urban environments. In urban areas erosion controls are often part of stormwater runoff management programs required by local governments. The controls often involve the creation of a physical barrier, such as vegetation or rock, to absorb some of the energy of the wind or water that is causing the erosion. They also involve building and maintaining storm drains. On construction sites they are often implemented in conjunction with sediment controls such as sediment basins and silt fences. Bank erosion is a natural process: without it, rivers would not meander and change course. However, land management patterns that change the hydrograph and/or vegetation cover can act to increase or decrease channel migration rates. In many places, whether or not the banks are unstable due to human activities, people try to keep a river in a single place. This can be done for environmental reclamation or to prevent a river from changing course into land that is being used by people. One way that this is done is by placing riprap or gabions along the bank. Examples Examples of erosion control methods include the following: cellular confinement systems, crop rotation, conservation tillage, contour plowing, contour trenching, cover crops, fiber rolls (also called straw wattles), gabions, hydroseeding, level spreaders, mulching, perennial crops, plasticulture, polyacrylamide (as a coagulant), reforestation, riparian buffers, riprap, strip farming, sand fences, vegetated waterways (bioswales), terracing and windbreaks. Mathematical modeling Since the 1920s and 1930s scientists have been creating mathematical models for understanding the mechanisms of soil erosion and the resulting sediment surface runoff, including an early paper by Albert Einstein applying Baer's law. These models have addressed both gully and sheet erosion. The earliest models were simple sets of linked equations which could be evaluated by manual calculation. By the 1970s the models had expanded to complex computer models addressing nonpoint source pollution with thousands of lines of computer code. The more complex models were able to address nuances in micrometeorology, soil particle size distributions and micro-terrain variation. See also Bridge scour Burned area emergency response Certified Professional in Erosion and Sediment Control Coastal management Dust Bowl Natural Resources Conservation Service (United States) Tillage erosion Universal Soil Loss Equation Vetiver System Notes References Albert Einstein. 1926. Die Ursache der Mäanderbildung der Flußläufe und des sogenannten Baerschen Gesetzes, Die Naturwissenschaften, 11, S. 223–224 C. Michael Hogan, Leda Patmore, Gary Latshaw, Harry Seidman et al. 1973. Computer modeling of pesticide transport in the soil for five instrumented watersheds, U.S. Environmental Protection Agency Southeast Water laboratory, Athens, Ga. by ESL Inc., Sunnyvale, California Robert E. Horton. 1933. The Horton Papers U.S. Natural Resources Conservation Service (NRCS). Washington, DC. "National Conservation Practice Standards." National Handbook of Conservation Practices. Accessed 2009-03-28. 
External links "Saving Runaway Farm Land", November 1930, Popular Mechanics One of the first articles on the problem of soil erosion control Erosion Control Technology Council - a trade organization for the erosion control industry International Erosion Control Association - Professional Association, Publications, Training Soil Bioengineering and Biotechnical Slope Stabilization - Erosion Control subsection of a website on Riparian Habitat Restoration Construction Soil erosion Earthworks (engineering) Riparian zone Sustainable design Water pollution
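The Universal Soil Loss Equation listed under See also is one of the simplest of the soil erosion models described in the Mathematical modeling section. A minimal sketch of it follows, with invented input values; the factor definitions are the standard USLE ones.

```python
def usle_soil_loss(R: float, K: float, LS: float, C: float, P: float) -> float:
    """Universal Soil Loss Equation: A = R * K * LS * C * P.
    R  rainfall-runoff erosivity, K soil erodibility,
    LS slope length-steepness factor, C cover-management factor,
    P  support-practice factor. Returns average annual soil loss
    in the units implied by R and K (e.g. tons/acre/year)."""
    return R * K * LS * C * P

# Illustrative (made-up) values:
print(usle_soil_loss(R=170, K=0.32, LS=1.2, C=0.25, P=0.5))  # ~8.16
```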
Erosion control
[ "Chemistry", "Engineering", "Environmental_science" ]
768
[ "Construction", "Riparian zone", "Hydrology", "Water pollution" ]
3,885,155
https://en.wikipedia.org/wiki/Replica%20trick
In the statistical physics of spin glasses and other systems with quenched disorder, the replica trick is a mathematical technique based on the application of the formula ln Z = lim_{n→0} (Z^n − 1)/n, or equivalently ln Z = lim_{n→0} ∂Z^n/∂n, where Z is most commonly the partition function, or a similar thermodynamic function. It is typically used to simplify the calculation of ⟨ln Z⟩, the expected value of ln Z over the disorder (angle brackets denote disorder averages throughout), reducing the problem to calculating the disorder average ⟨Z^n⟩, where n is assumed to be an integer. This is physically equivalent to averaging over n copies or replicas of the system, hence the name. The crux of the replica trick is that while the disorder averaging is done assuming n to be an integer, to recover the disorder-averaged logarithm one must send n continuously to zero. This apparent contradiction at the heart of the replica trick has never been formally resolved; however, in all cases where the replica method can be compared with other exact solutions, the methods lead to the same results. (A natural way to rigorously prove that the replica trick works would be to check that the assumptions of Carlson's theorem hold, especially that the ratio ⟨Z^n⟩/⟨Z⟩^n, as a function of n, is of exponential type less than π.) It is occasionally necessary to require the additional property of replica symmetry breaking (RSB) in order to obtain physical results, which is associated with the breakdown of ergodicity. General formulation The trick is generally used for computations involving analytic functions (functions that can be expanded in power series). Expand such a function f(Z) using its power series, f(Z) = Σ_k a_k Z^k, into powers of Z, or in other words replicas of Z, and perform on the powers ⟨Z^k⟩ the same computation which is to be done on ⟨f(Z)⟩. A particular case which is of great use in physics is in averaging the thermodynamic free energy, F = −k_B T ⟨ln Z⟩, over the values of a disorder variable with a certain probability distribution, typically Gaussian. The partition function is then given by Z[J] = Tr exp(−βH[J]), with J the disorder variable. Notice that if we were calculating just ⟨Z[J]⟩ (or more generally, any power of Z[J]) and not its logarithm, the resulting integral (assuming a Gaussian distribution) is just a standard Gaussian integral which can be easily computed (e.g. by completing the square). To calculate the free energy, we use the replica trick ⟨ln Z⟩ = lim_{n→0} (⟨Z^n⟩ − 1)/n, which reduces the complicated task of averaging the logarithm to solving a relatively simple Gaussian integral, provided n is an integer. The replica trick postulates that if ⟨Z^n⟩ can be calculated for all positive integers n, then this may be sufficient to allow the limiting behavior as n → 0 to be calculated. Clearly, such an argument poses many mathematical questions, and the resulting formalism for performing the limit typically introduces many subtleties. When using mean-field theory to perform one's calculations, taking this limit often requires introducing extra order parameters, a property known as "replica symmetry breaking", which is closely related to ergodicity breaking and slow dynamics within disordered systems. Physical applications The replica trick is used in determining ground states of statistical mechanical systems, in the mean-field approximation. Typically, for systems in which the determination of the ground state is easy, one can analyze fluctuations near the ground state. Otherwise one uses the replica method. An example is the case of a quenched disorder in a system like a spin glass with different types of magnetic links between spins, leading to many different configurations of spins having the same energy. 
In the statistical physics of systems with quenched disorder, any two states with the same realization of the disorder (or, in the case of spin glasses, with the same distribution of ferromagnetic and antiferromagnetic bonds) are called replicas of each other. For systems with quenched disorder, one typically expects that macroscopic quantities will be self-averaging, whereby any macroscopic quantity for a specific realization of the disorder will be indistinguishable from the same quantity calculated by averaging over all possible realizations of the disorder. Introducing replicas allows one to perform this average over different disorder realizations. In the case of a spin glass, we expect the free energy per spin (or any self-averaging quantity) in the thermodynamic limit to be independent of the particular values of the ferromagnetic and antiferromagnetic couplings between individual sites, across the lattice. So, we explicitly find the free energy as a function of the disorder parameter (in this case, the parameters of the distribution of ferromagnetic and antiferromagnetic bonds) and average the free energy over all realizations of the disorder (all values of the couplings between sites, each with its corresponding probability, given by the distribution function). The free energy thus takes the form F = −k_B T ⟨ln Z[J]⟩, where J describes the disorder (for spin glasses, it describes the nature of the magnetic interaction between individual sites i and j) and the average is taken over all values of the couplings described in J, weighted with a given distribution. To perform the averaging over the logarithm function, the replica trick comes in handy, replacing the logarithm with its limit form mentioned above. In this case, the quantity ⟨Z^n⟩ represents the joint partition function of n identical systems. REM: the easiest replica problem The random energy model (REM) is one of the simplest models of statistical mechanics of disordered systems, and probably the simplest model to show the meaning and power of the replica trick at the level of one-step replica symmetry breaking. The model is especially suitable for this introduction because an exact result by a different procedure is known, and the replica trick can be proved to work by crosschecking of results. Alternative methods The cavity method is an alternative method, often of simpler use than the replica method, for studying disordered mean-field problems. It has been devised to deal with models on locally tree-like graphs. Another alternative method is the supersymmetric method. The use of the supersymmetry method provides a mathematically rigorous alternative to the replica trick, but only in non-interacting systems; see, for example, Efetov's book Supersymmetry in Disorder and Chaos. Also, it has been demonstrated that the Keldysh formalism provides a viable alternative to the replica approach. Remarks The first of the above identities is easily understood via Taylor expansion: Z^n = exp(n ln Z) = 1 + n ln Z + O(n²), so that (Z^n − 1)/n → ln Z as n → 0. For the second identity, one simply uses the definition of the derivative: ∂Z^n/∂n = ∂ exp(n ln Z)/∂n = Z^n ln Z → ln Z as n → 0. References S Edwards (1971), "Statistical mechanics of rubber". In Polymer networks: structural and mechanical properties (eds A. J. Chompff & S. Newman). New York: Plenum Press, ISBN 978-1-4757-6210-5. Papers on spin glasses Books on spin glasses References to other approaches Statistical mechanics
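As an illustration added here (not part of the original article), the replica identity can be checked numerically on a toy model where both sides are known exactly: take Z = exp(g) with Gaussian g of mean mu and variance sigma², so that the quenched average of ln Z is mu, while ⟨Z^n⟩ = exp(n·mu + n²·sigma²/2).

```python
import numpy as np

# Toy check of  <ln Z> = lim_{n->0} (<Z^n> - 1)/n  for Z = exp(g),
# g ~ N(mu, sigma^2): exactly, <ln Z> = mu and <Z^n> = exp(n*mu + n^2*sigma^2/2).
rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0
g = rng.normal(mu, sigma, size=1_000_000)

print("quenched average <ln Z> =", g.mean())  # ln Z = g for this toy model; ~1.5
for n in (0.5, 0.1, 0.01):
    replica_estimate = (np.exp(n * g).mean() - 1.0) / n  # (<Z^n> - 1)/n
    print(f"n = {n:>5}: (<Z^n> - 1)/n = {replica_estimate:.4f}")
# The estimate approaches <ln Z> = 1.5 as n -> 0
# (about 4.98, 1.85 and 1.53 for the n values above).
```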
Replica trick
[ "Physics" ]
1,304
[ "Statistical mechanics" ]
1,436,104
https://en.wikipedia.org/wiki/Maximum%20principle
In the mathematical fields of differential equations and geometric analysis, the maximum principle is one of the most useful and best known tools of study. Solutions of a differential inequality in a domain D satisfy the maximum principle if they achieve their maxima at the boundary of D. The maximum principle enables one to obtain information about solutions of differential equations without any explicit knowledge of the solutions themselves. In particular, the maximum principle is a useful tool in the numerical approximation of solutions of ordinary and partial differential equations and in the determination of bounds for the errors in such approximations. In a simple two-dimensional case, consider a function u of two variables such that ∂²u/∂x² + ∂²u/∂y² = 0. The weak maximum principle, in this setting, says that for any open precompact subset M of the domain of u, the maximum of u on the closure of M is achieved on the boundary of M. The strong maximum principle says that, unless u is a constant function, the maximum cannot also be achieved anywhere on M itself. Such statements give a striking qualitative picture of solutions of the given differential equation. Such a qualitative picture can be extended to many kinds of differential equations. In many situations, one can also use such maximum principles to draw precise quantitative conclusions about solutions of differential equations, such as control over the size of their gradient. There is no single or most general maximum principle which applies to all situations at once. In the field of convex optimization, there is an analogous statement which asserts that the maximum of a convex function on a compact convex set is attained on the boundary. Intuition A partial formulation of the strong maximum principle Here we consider the simplest case, although the same thinking can be extended to more general scenarios. Let M be an open subset of Euclidean space and let u be a C² function on M such that Σᵢⱼ aᵢⱼ ∂²u/∂xᵢ∂xⱼ = 0, where for each i and j between 1 and n, aᵢⱼ is a function on M with aᵢⱼ = aⱼᵢ. Fix some choice of x in M. According to the spectral theorem of linear algebra, all eigenvalues of the matrix [aᵢⱼ(x)] are real, and there is an orthonormal basis of Rⁿ consisting of eigenvectors. Denote the eigenvalues by λᵢ and the corresponding eigenvectors by vᵢ, for i from 1 to n. Then the differential equation, at the point x, can be rephrased as Σᵢ λᵢ D²u(x)(vᵢ, vᵢ) = 0, a weighted sum of the directional second derivatives of u. The essence of the maximum principle is the simple observation that if each eigenvalue is positive (which amounts to a certain formulation of "ellipticity" of the differential equation) then the above equation imposes a certain balancing of the directional second derivatives of the solution. In particular, if one of the directional second derivatives is negative, then another must be positive. At a hypothetical point where u is maximized, all directional second derivatives are automatically nonpositive, and the "balancing" represented by the above equation then requires all directional second derivatives to be identically zero. This elementary reasoning could be argued to represent an infinitesimal formulation of the strong maximum principle, which states, under some extra assumptions (such as the continuity of the aᵢⱼ), that u must be constant if there is a point of M where u is maximized. Note that the above reasoning is unaffected if one considers the more general partial differential equation Σᵢⱼ aᵢⱼ ∂²u/∂xᵢ∂xⱼ + Σᵢ bᵢ ∂u/∂xᵢ = 0, since the added first-order term is automatically zero at any hypothetical maximum point. 
The reasoning is also unaffected if one considers the more general condition Σᵢⱼ aᵢⱼ ∂²u/∂xᵢ∂xⱼ + Σᵢ bᵢ ∂u/∂xᵢ ≥ 0, in which one can even note the extra phenomenon of having an outright contradiction if there is a strict inequality (> rather than ≥) in this condition at the hypothetical maximum point. This phenomenon is important in the formal proof of the classical weak maximum principle. Non-applicability of the strong maximum principle However, the above reasoning no longer applies if one considers the condition Σᵢⱼ aᵢⱼ ∂²u/∂xᵢ∂xⱼ ≤ 0, since now the "balancing" condition, as evaluated at a hypothetical maximum point of u, only says that a weighted average of manifestly nonpositive quantities is nonpositive. This is trivially true, and so one cannot draw any nontrivial conclusion from it. This is reflected by any number of concrete examples, such as the fact that the function u(x, y) = −x² − y² satisfies ∂²u/∂x² + ∂²u/∂y² ≤ 0, and on any open region containing the origin this function certainly has a maximum. The classical weak maximum principle for linear elliptic PDE The essential idea Let M denote an open subset of Euclidean space. If a smooth function u is maximized at a point x, then one automatically has ∇u(x) = 0 and D²u(x) ≤ 0, the latter as a matrix inequality. One can view a partial differential equation as the imposition of an algebraic relation between the various derivatives of a function. So, if u is the solution of a partial differential equation, then it is possible that the above conditions on the first and second derivatives of u form a contradiction to this algebraic relation. This is the essence of the maximum principle. Clearly, the applicability of this idea depends strongly on the particular partial differential equation in question. For instance, if u solves the differential condition Δu > 0, then it is clearly impossible to have ∇u = 0 and D²u ≤ 0 at any point of the domain. So, following the above observation, it is impossible for u to take on a maximum value. If, instead, u solved the differential equation Δu = 0, then one would not have such a contradiction, and the analysis given so far does not imply anything interesting. If u solved the differential condition Δu < 0, then the same analysis would show that u cannot take on a minimum value. The possibility of such analysis is not even limited to partial differential equations. For instance, if u is a function whose Laplacian is prescribed to equal a manifestly positive integral expression involving u itself, which is a sort of "non-local" differential equation, then the automatic strict positivity of the right-hand side shows, by the same analysis as above, that u cannot attain a maximum value. There are many methods to extend the applicability of this kind of analysis in various ways. For instance, if u is a harmonic function, then the above sort of contradiction does not directly occur, since the existence of a point x where ∇u(x) = 0 and D²u(x) ≤ 0 is not in contradiction to the requirement Δu = 0 everywhere. However, one could consider, for an arbitrary real number ε, the function u_ε defined by u_ε(x) = u(x) + ε|x|². It is straightforward to see that Δu_ε = 2nε, where n is the dimension. By the above analysis, if ε > 0 then u_ε cannot attain a maximum value. One might wish to consider the limit as ε tends to 0 in order to conclude that u also cannot attain a maximum value. However, it is possible for the pointwise limit of a sequence of functions without maxima to have a maximum. 
Nonetheless, if the domain of u has a boundary such that the domain together with its boundary is compact, then, supposing that u can be continuously extended to the boundary, it follows immediately that both u and u_ε attain a maximum value on the compact closure. Since we have shown that u_ε, as a function on the open domain, does not have a maximum, it follows that the maximum point of u_ε, for any ε > 0, is on the boundary. By the sequential compactness of the boundary, it follows that the maximum of u is attained on the boundary. This is the weak maximum principle for harmonic functions. This does not, by itself, rule out the possibility that the maximum of u is also attained somewhere on the open set itself. That is the content of the "strong maximum principle," which requires further analysis. The use of the specific function ε|x|² above was inessential. All that mattered was to have a function which extends continuously to the boundary and whose Laplacian is strictly positive. So we could have used, for instance, εe^{x₁} with the same effect. The classical strong maximum principle for linear elliptic PDE Summary of proof Let be an open subset of Euclidean space. Let be a twice-differentiable function which attains its maximum value . Suppose that one can find (or prove the existence of): a compact subset of , with nonempty interior, such that for all in the interior of , and such that there exists on the boundary of with . a continuous function which is twice-differentiable on the interior of and with and such that one has on the boundary of with Then on with on the boundary of ; according to the weak maximum principle, one has on . This can be reorganized to say for all in . If one can make the choice of so that the right-hand side has a manifestly positive nature, then this will provide a contradiction to the fact that is a maximum point of on , so that its gradient must vanish. Proof The above "program" can be carried out. Choose to be a spherical annulus; one selects its center to be a point closer to the closed set than to the closed set , and the outer radius is selected to be the distance from this center to ; let be a point on this latter set which realizes the distance. The inner radius is arbitrary. Define Now the boundary of consists of two spheres; on the outer sphere, one has ; due to the selection of , one has on this sphere, and so holds on this part of the boundary, together with the requirement . On the inner sphere, one has . Due to the continuity of and the compactness of the inner sphere, one can select such that . Since is constant on this inner sphere, one can select such that on the inner sphere, and hence on the entire boundary of . Direct calculation shows There are various conditions under which the right-hand side can be guaranteed to be nonnegative; see the statement of the theorem below. Lastly, note that the directional derivative of at along the inward-pointing radial line of the annulus is strictly positive. As described in the above summary, this will ensure that a directional derivative of at is nonzero, in contradiction to being a maximum point of on the open set . Statement of the theorem The following is the statement of the theorem in the books of Morrey and Smoller, following the original statement of Hopf (1927): The point of the continuity assumption is that continuous functions are bounded on compact sets, the relevant compact set here being the spherical annulus appearing in the proof. Furthermore, by the same principle, there is a number such that for all in the annulus, the matrix has all eigenvalues greater than or equal to . 
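As a small numerical illustration added here (not part of the original text), one can solve the discrete Laplace equation on a square grid by Jacobi iteration and observe that the maximum of the resulting discrete harmonic function sits on the boundary, in line with the weak maximum principle. The boundary data below are arbitrary.

```python
import numpy as np

n = 50
u = np.zeros((n, n))
# Dirichlet boundary data: some arbitrary smooth values
x = np.linspace(0.0, 1.0, n)
u[0, :], u[-1, :] = np.sin(3 * x), x**2
u[:, 0], u[:, -1] = x, 1.0 - x

for _ in range(20_000):  # Jacobi sweeps: each interior point becomes
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]   # the average of
                            + u[1:-1, 2:] + u[1:-1, :-2])  # its 4 neighbours

interior_max = u[1:-1, 1:-1].max()
boundary_max = max(u[0, :].max(), u[-1, :].max(), u[:, 0].max(), u[:, -1].max())
print(interior_max <= boundary_max + 1e-12)  # True: the maximum is on the boundary
```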
One then takes the constant, as appearing in the proof, to be large relative to these bounds. Evans's book has a slightly weaker formulation, in which there is assumed to be a positive number which is a lower bound of the eigenvalues of for all in . These continuity assumptions are clearly not the most general possible in order for the proof to work. For instance, the following is Gilbarg and Trudinger's statement of the theorem, following the same proof: One cannot naively extend these statements to the general second-order linear elliptic equation, as already seen in the one-dimensional case. For instance, the ordinary differential equation y″ + 2y = 0 has sinusoidal solutions, which certainly have interior maxima. This extends to the higher-dimensional case, where one often has solutions to "eigenfunction" equations which have interior maxima. The sign of c is relevant, as also seen in the one-dimensional case; for instance the solutions to y″ − 2y = 0 are exponentials, and the character of the maxima of such functions is quite different from that of sinusoidal functions. See also Maximum modulus principle Hopf maximum principle Notes References Research articles Calabi, E. An extension of E. Hopf's maximum principle with an application to Riemannian geometry. Duke Math. J. 25 (1958), 45–56. Cheng, S.Y.; Yau, S.T. Differential equations on Riemannian manifolds and their geometric applications. Comm. Pure Appl. Math. 28 (1975), no. 3, 333–354. Gidas, B.; Ni, Wei Ming; Nirenberg, L. Symmetry and related properties via the maximum principle. Comm. Math. Phys. 68 (1979), no. 3, 209–243. Gidas, B.; Ni, Wei Ming; Nirenberg, L. Symmetry of positive solutions of nonlinear elliptic equations in Rⁿ. Mathematical analysis and applications, Part A, pp. 369–402, Adv. in Math. Suppl. Stud., 7a, Academic Press, New York-London, 1981. Hamilton, Richard S. Four-manifolds with positive curvature operator. J. Differential Geom. 24 (1986), no. 2, 153–179. E. Hopf. Elementare Bemerkungen über die Lösungen partieller Differentialgleichungen zweiter Ordnung vom elliptischen Typus. Sitber. Preuss. Akad. Wiss. Berlin 19 (1927), 147–152. Hopf, Eberhard. A remark on linear elliptic differential equations of second order. Proc. Amer. Math. Soc. 3 (1952), 791–793. Nirenberg, Louis. A strong maximum principle for parabolic equations. Comm. Pure Appl. Math. 6 (1953), 167–177. Omori, Hideki. Isometric immersions of Riemannian manifolds. J. Math. Soc. Jpn. 19 (1967), 205–214. Yau, Shing Tung. Harmonic functions on complete Riemannian manifolds. Comm. Pure Appl. Math. 28 (1975), 201–228. Kreyberg, H. J. A. On the maximum principle of optimal control in economic processes, 1969 (Trondheim, NTH, Sosialøkonomisk institutt https://www.worldcat.org/title/on-the-maximum-principle-of-optimal-control-in-economic-processes/oclc/23714026) Textbooks Evans, Lawrence C. Partial differential equations. Second edition. Graduate Studies in Mathematics, 19. American Mathematical Society, Providence, RI, 2010. xxii+749 pp. Friedman, Avner. Partial differential equations of parabolic type. Prentice-Hall, Inc., Englewood Cliffs, N.J. 1964 xiv+347 pp. Gilbarg, David; Trudinger, Neil S. Elliptic partial differential equations of second order. Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. xiv+517 pp. Ladyženskaja, O. A.; Solonnikov, V. A.; Uralʹceva, N. N. Linear and quasilinear equations of parabolic type. Translated from the Russian by S. Smith. Translations of Mathematical Monographs, Vol. 
23 American Mathematical Society, Providence, R.I. 1968 xi+648 pp. Ladyzhenskaya, Olga A.; Ural'tseva, Nina N. Linear and quasilinear elliptic equations. Translated from the Russian by Scripta Technica, Inc. Translation editor: Leon Ehrenpreis. Academic Press, New York-London 1968 xviii+495 pp. Lieberman, Gary M. Second order parabolic differential equations. World Scientific Publishing Co., Inc., River Edge, NJ, 1996. xii+439 pp. Morrey, Charles B., Jr. Multiple integrals in the calculus of variations. Reprint of the 1966 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2008. x+506 pp. Protter, Murray H.; Weinberger, Hans F. Maximum principles in differential equations. Corrected reprint of the 1967 original. Springer-Verlag, New York, 1984. x+261 pp. Smoller, Joel. Shock waves and reaction-diffusion equations. Second edition. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 258. Springer-Verlag, New York, 1994. xxiv+632 pp. Harmonic functions Partial differential equations Mathematical principles
Maximum principle
[ "Mathematics" ]
3,147
[ "Mathematical principles" ]
1,436,150
https://en.wikipedia.org/wiki/Plasma%20torch
A plasma torch (also known as a plasma arc, plasma gun, plasma cutter, or plasmatron) is a device for generating a directed flow of plasma. The plasma jet can be used for applications including plasma cutting, plasma arc welding, plasma spraying, and plasma gasification for waste disposal. Types Thermal plasmas are generated in plasma torches by direct current (DC), alternating current (AC), radio-frequency (RF) and other discharges. DC torches are the most commonly used and researched, because when compared to AC: "there is less flicker generation and noise, a more stable operation, better control, a minimum of two electrodes, lower electrode consumption, slightly lower refractory [heat] wear and lower power consumption". Transferred vs. non-transferred There are two types of DC torches: non-transferred and transferred. In non-transferred DC torches, the electrodes are inside the body/housing of the torch itself (creating the arc there), whereas in a transferred torch one electrode is outside (and is usually the conductive material to be treated), allowing the arc to form outside of the torch over a larger distance. A benefit of transferred DC torches is that the plasma arc is formed outside the water-cooled body, preventing the heat loss that occurs in non-transferred torches, whose electrical-to-thermal efficiency can be as low as 50% (though the hot water can itself be utilized). Furthermore, transferred DC torches can be used in a twin-torch setup, where one torch is cathodic and the other anodic; this retains the benefit of a regular transferred single-torch system but allows use with non-conductive materials, as there is no need for the treated material to form the other electrode. However, these types of setups are rare, as most common non-conductive materials do not require the precise cutting ability of a plasma torch. In addition, the discharge generated by this particular plasma source configuration is characterized by a complex shape and fluid dynamics that require a 3D description in order to be predicted, making performance unsteady. The electrodes of non-transferred torches are larger, because they suffer more wear from the plasma arc. The quality of plasma produced is a function of density (pressure), temperature and torch power (the greater the better). With regard to the efficiency of the torch itself, this can vary among manufacturers and torch technologies; for example, Leal-Quirós reports that for Westinghouse Plasma Corp. torches "a thermal efficiency of 90% is easily possible; the efficiency represents the percentage of arc power that exits the torch and enters the process". Thermal plasma DC torches, non-transferred arc, hot cathode In a DC torch, the electric arc is formed between the electrodes (which can be made of copper, tungsten, graphite, silver etc.), and the thermal plasma is formed from the continual input of carrier/working gas, projecting outward as a plasma jet/flame. In DC torches, the carrier gas can be, for example, oxygen, nitrogen, argon, helium, air, or hydrogen; and although termed such, it does not have to be a gas (and is thus better termed a carrier fluid). For example, a research plasma torch at the Institute of Plasma Physics (IPP) in Prague, Czech Republic, functions with an H2O vortex (plus a small addition of argon to ignite the arc), and produces a high-temperature, high-velocity plasma flame. In fact, early studies of arc stabilization employed a water vortex. 
Overall, the electrode materials and carrier fluids have to be specifically matched to avoid excessive electrode corrosion or oxidation (and contamination of materials to be treated), while maintaining ample power and function. Furthermore, the flow-rate of the carrier gas can be raised to promote a larger, more projecting plasma jet, provided that the arc current is sufficiently increased; and vice versa. The plasma flame of a real plasma torch is a few inches long at most; it is to be distinguished from fictional long-range plasma weapons. See also Plasma source References Plasma technology and applications Sustainable technologies
Plasma torch
[ "Physics" ]
860
[ "Plasma technology and applications", "Plasma physics" ]
1,436,317
https://en.wikipedia.org/wiki/DNA%20construct
A DNA construct is an artificially-designed segment of DNA borne on a vector that can be used to incorporate genetic material into a target tissue or cell. A DNA construct contains a DNA insert, called a transgene, delivered via a transformation vector which allows the insert sequence to be replicated and/or expressed in the target cell. This gene can be cloned from a naturally occurring gene, or synthetically constructed. The vector can be delivered using physical, chemical or viral methods. Typically, the vectors used in DNA constructs contain an origin of replication, a multiple cloning site, and a selectable marker. Certain vectors can carry additional regulatory elements based on the expression system involved. DNA constructs can be as small as a few thousand base pairs (kbp) of DNA carrying a single gene, using vectors such as plasmids or bacteriophages, or as large as hundreds of kbp for large-scale genomic studies using an artificial chromosome. A DNA construct may express wildtype protein, prevent the expression of certain genes by expressing competitors or inhibitors, or express mutant proteins, such as deletion mutations or missense mutations. DNA constructs are widely used in molecular biology research for techniques such as DNA sequencing, protein expression, and RNA studies. History The first standardized vector, pBR322, was designed in 1977 by researchers in Herbert Boyer's lab. The plasmid contains various restriction enzyme sites and a stable antibiotic-resistance gene free from transposon activities. In 1982, Jeffrey Vieira and Joachim Messing described the development of M13mp7-derived pUC vectors that contain a multiple cloning site and allow for more efficient sequencing and cloning using a set of universal M13 primers. Three years later, the currently popular pUC19 plasmid was engineered by the same scientists. Construction The gene on a DNA sequence of interest can either be cloned from an existing sequence or developed synthetically. To clone a naturally occurring sequence in an organism, the organism's DNA is first cut with restriction enzymes, which recognize specific DNA sequences and cut them, around the target gene. The gene can then be amplified using the polymerase chain reaction (PCR). Typically, this process includes using short sequences known as primers to initially hybridize to the target sequence; in addition, point mutations can be introduced in the primer sequences and then copied in each cycle in order to modify the target sequence. It is also possible to synthesize a target DNA strand for a DNA construct. Short strands of DNA known as oligonucleotides can be developed using column-based synthesis, in which bases are added one at a time to a strand of DNA attached to a solid phase. Each base has a protecting group, not removed until the next base is ready to be added, to prevent incorrect linkage and ensure that the bases are joined in the correct sequence. Oligonucleotides can also be synthesized on a microarray, which allows for tens of thousands of sequences to be synthesized at once, in order to reduce cost. To synthesize a larger gene, oligonucleotides are developed with overlapping sequences on the ends and then joined together. The most common method is called polymerase cycling assembly (PCA): fragments hybridize at the overlapping regions and are extended, and larger fragments are created in each cycle. Once a sequence has been isolated, it must be inserted into a vector. 
The easiest way to do this is to cut the vector DNA using restriction enzymes; if the same enzymes were used to isolate the target sequence, then the same "overhang" sequences will be created on each end, allowing for hybridization. Once the target gene has hybridized to the vector DNA, the two can be joined using a DNA ligase. An alternative strategy uses recombination between homologous sites on the target gene and the vector sequence, eliminating the need for restriction enzymes. Modes of delivery There are three general categories of DNA construct delivery: physical, chemical, and viral. Physical methods, which deliver the DNA by physically penetrating the cell, include microinjection, electroporation, and biolistics. Chemical methods rely on chemical reactions to deliver the DNA and include transformation with cells made competent using calcium phosphate as well as delivery via lipid nanoparticles. Viral methods use a variety of viral vectors to deliver the DNA, including adenovirus, lentivirus, and herpes simplex virus. Vector structure In addition to the target gene, there are three important elements in a vector: an origin of replication, a selectable marker, and a multiple cloning site. An origin of replication is a DNA sequence that starts the process of DNA replication, allowing the vector to clone itself. A multiple cloning site contains binding sites for several restriction enzymes, making it easier to insert different DNA sequences into the vector. A selectable marker confers some trait that can be easily selected for in a host cell, so that it can be determined whether transformation was successful. The most common selectable markers are genes for antibiotic resistance, so that host cells without the construct will die off when exposed to the antibiotic and only host cells with the construct will remain. Types of DNA constructs Bacterial plasmids are circular sections of DNA that naturally replicate in bacteria. Plasmids are capable of holding inserts up to approximately 20 kbp in length. These types of constructs typically contain a gene offering antibiotic resistance, an origin of replication, regulatory elements such as Lac inhibitors, a polylinker, and a protein tag which facilitates protein purification. Bacteriophage vectors are viruses that can infect bacteria and replicate their own DNA. Artificial chromosomes are commonly used in genome project studies due to their ability to hold inserts up to 350 kbp. These vectors are derived from the F plasmid, taking advantage of the high stability and conjugational ability introduced by the F factor. Fosmids are a hybrid between bacterial F plasmids and λ phage cloning techniques. Inserts are pre-packaged into phage particles, then inserted into the host cell, with the ability to hold ~45 kbp. They are typically used to generate a DNA library due to their increased stability. Applications DNA constructs can be used to produce proteins, including both naturally occurring proteins and engineered mutant proteins. These proteins can be used to make therapeutic products, such as pharmaceuticals and antibodies. DNA constructs can also change the expression levels of other genes by expressing regulatory sequences such as promoters and inhibitors. Additionally, DNA constructs can be used for research such as creating genomic libraries, sequencing cloned DNA, and studying RNA and protein expression. See also Vector (molecular biology) References DNA Genome editing
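As a toy illustration of the cut-and-ligate workflow described above, here is a minimal pure-Python sketch; no real cloning toolkit (such as Biopython) is assumed, and the sequences are hypothetical. It cuts a strand at an EcoRI recognition site, producing the sticky ends that a ligase can later join.

```python
# EcoRI recognizes GAATTC and cuts the top strand between G and AATTC,
# leaving a 4-base 5' overhang (AATT) on each fragment.
SITE = "GAATTC"
CUT_OFFSET = 1  # position of the cut within the recognition site

def cut_at_ecori_site(strand: str) -> tuple[str, str]:
    """Return the two top-strand fragments produced by cutting at the
    first EcoRI site; raises ValueError if no site is present."""
    i = strand.find(SITE)
    if i == -1:
        raise ValueError("no EcoRI site found")
    cut = i + CUT_OFFSET
    return strand[:cut], strand[cut:]

vector_top = "TTTTGAATTCAAAA"   # hypothetical vector sequence
insert_top = "CCGAATTCGG"       # hypothetical insert sequence

print(cut_at_ecori_site(vector_top))  # ('TTTTG', 'AATTCAAAA')
print(cut_at_ecori_site(insert_top))  # ('CCG', 'AATTCGG')
# Because both were cut by the same enzyme, the fragment ends expose
# complementary AATT overhangs, which is what lets a DNA ligase join
# insert and vector, as described in the text.
```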
DNA construct
[ "Engineering", "Biology" ]
1,380
[ "Genetics techniques", "Genetic engineering", "Genome editing" ]
1,436,650
https://en.wikipedia.org/wiki/Nucleotide%20diversity
Nucleotide diversity is a concept in molecular genetics which is used to measure the degree of polymorphism within a population. One commonly used measure of nucleotide diversity was first introduced by Nei and Li in 1979. This measure is defined as the average number of nucleotide differences per site between two DNA sequences in all possible pairs in the sample population, and is denoted by $\pi$. An estimator for $\pi$ is given by:

$$\hat{\pi} = \frac{n}{n-1} \sum_{i} \sum_{j<i} 2\, x_i x_j \pi_{ij}$$

where $x_i$ and $x_j$ are the respective frequencies of the $i$th and $j$th sequences, $\pi_{ij}$ is the number of nucleotide differences per nucleotide site between the $i$th and $j$th sequences, and $n$ is the number of sequences in the sample. The term $n/(n-1)$ in front of the sums guarantees an unbiased estimator, which does not depend on how many sequences you sample. Nucleotide diversity is a measure of genetic variation. It is usually associated with other statistical measures of population diversity, and is similar to expected heterozygosity. This statistic may be used to monitor diversity within or between ecological populations, to examine the genetic variation in crops and related species, or to determine evolutionary relationships. Nucleotide diversity can be calculated by examining the DNA sequences directly, or may be estimated from molecular marker data, such as Random Amplified Polymorphic DNA (RAPD) data and Amplified Fragment Length Polymorphism (AFLP) data. Software DnaSP — DNA Sequence Polymorphism, is a software package for the analysis of nucleotide polymorphism from aligned DNA sequence data. MEGA, Molecular Evolutionary Genetics Analysis, is a software package used for estimating rates of molecular evolution, as well as generating phylogenetic trees, and aligning DNA sequences. Available for Windows, Linux and Mac OS X (since ver. 5.x). Arlequin3 software can be used for calculations of nucleotide diversity and a variety of other statistical tests for intra-population and inter-population analyses. Available for Windows. Variscan R package PopGenome pixy R package QSutils References Molecular genetics
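As a concrete illustration of the estimator above, the short script below computes $\pi$ for a toy set of aligned sequences by averaging pairwise per-site differences, with every sampled sequence given frequency $1/n$. It is a minimal sketch written from the definition given here, not a re-implementation of DnaSP, MEGA, or any of the other packages listed; the example sequences are invented, and sites containing gaps or ambiguous bases are simply skipped.

```python
from itertools import combinations

def pairwise_diff_per_site(a: str, b: str) -> float:
    """pi_ij: fraction of compared sites at which two aligned sequences differ."""
    sites = [(x, y) for x, y in zip(a, b) if x in "ACGT" and y in "ACGT"]
    if not sites:
        return 0.0
    return sum(x != y for x, y in sites) / len(sites)

def nucleotide_diversity(seqs) -> float:
    """Unbiased estimator: (n/(n-1)) * sum over pairs of 2 * x_i * x_j * pi_ij,
    with each sampled sequence assigned frequency 1/n."""
    n = len(seqs)
    if n < 2:
        return 0.0
    total = sum(2 * (1 / n) * (1 / n) * pairwise_diff_per_site(a, b)
                for a, b in combinations(seqs, 2))
    return n / (n - 1) * total

if __name__ == "__main__":
    sample = [
        "ACGTACGTAC",
        "ACGTACGTAT",   # differs from the first sequence at 1 of 10 sites
        "ACGAACGTAC",   # differs from the first sequence at 1 of 10 sites
    ]
    print(f"pi = {nucleotide_diversity(sample):.4f}")
```

With sample frequencies of $1/n$ the estimator reduces to the average per-site difference over all pairs, which for the toy data above is 0.1333.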
Nucleotide diversity
[ "Chemistry", "Biology" ]
406
[ "Molecular genetics", "Molecular biology" ]
1,436,668
https://en.wikipedia.org/wiki/Voigt%20notation
In mathematics, Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order. There are a few variants and associated names for this idea: Mandel notation, Mandel–Voigt notation and Nye notation are others found. Kelvin notation is a revival by Helbig of old ideas of Lord Kelvin. The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application. For example, a 2×2 symmetric tensor X has only three distinct elements, the two on the diagonal and the other being off-diagonal. Thus it can be expressed as the vector

$$\langle x_{11},\, x_{22},\, x_{12} \rangle.$$

As another example: The stress tensor (in matrix notation) is given as

$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix}.$$

In Voigt notation it is simplified to a 6-dimensional vector:

$$\tilde{\boldsymbol{\sigma}} = (\sigma_{xx},\, \sigma_{yy},\, \sigma_{zz},\, \sigma_{yz},\, \sigma_{xz},\, \sigma_{xy}) \equiv (\sigma_{1},\, \sigma_{2},\, \sigma_{3},\, \sigma_{4},\, \sigma_{5},\, \sigma_{6}).$$

The strain tensor, similar in nature to the stress tensor—both are symmetric second-order tensors—is given in matrix form as

$$\boldsymbol{\epsilon} = \begin{bmatrix} \epsilon_{xx} & \epsilon_{xy} & \epsilon_{xz} \\ \epsilon_{yx} & \epsilon_{yy} & \epsilon_{yz} \\ \epsilon_{zx} & \epsilon_{zy} & \epsilon_{zz} \end{bmatrix}.$$

Its representation in Voigt notation is

$$\tilde{\boldsymbol{\epsilon}} = (\epsilon_{xx},\, \epsilon_{yy},\, \epsilon_{zz},\, \gamma_{yz},\, \gamma_{xz},\, \gamma_{xy}),$$

where $\gamma_{yz} = 2\epsilon_{yz}$, $\gamma_{xz} = 2\epsilon_{xz}$, and $\gamma_{xy} = 2\epsilon_{xy}$ are engineering shear strains. The benefit of using different representations for stress and strain is that the scalar invariance $\boldsymbol{\sigma} : \boldsymbol{\epsilon} = \sigma_{ij}\epsilon_{ij} = \tilde{\boldsymbol{\sigma}} \cdot \tilde{\boldsymbol{\epsilon}}$ is preserved. Likewise, a three-dimensional symmetric fourth-order tensor can be reduced to a 6×6 matrix. Mnemonic rule A simple mnemonic rule for memorizing Voigt notation is as follows: Write down the second order tensor in matrix form (in the example, the stress tensor) Strike out the diagonal Continue on the third column Go back to the first element along the first row. Voigt indexes are numbered consecutively from the starting point to the end. Mandel notation For a symmetric tensor of second rank only six components are distinct, the three on the diagonal and the others being off-diagonal. Thus it can be expressed, in Mandel notation, as the vector

$$\tilde{\boldsymbol{\sigma}}^{M} = \left(\sigma_{11},\, \sigma_{22},\, \sigma_{33},\, \sqrt{2}\,\sigma_{23},\, \sqrt{2}\,\sigma_{13},\, \sqrt{2}\,\sigma_{12}\right).$$

The main advantage of Mandel notation is to allow the use of the same conventional operations used with vectors, for example:

$$\boldsymbol{\sigma} : \boldsymbol{\sigma} = \tilde{\boldsymbol{\sigma}}^{M} \cdot \tilde{\boldsymbol{\sigma}}^{M} = \sigma_{11}^{2} + \sigma_{22}^{2} + \sigma_{33}^{2} + 2\sigma_{23}^{2} + 2\sigma_{13}^{2} + 2\sigma_{12}^{2}.$$

A symmetric tensor of rank four satisfying $D_{ijkl} = D_{jikl}$ and $D_{ijkl} = D_{ijlk}$ has 81 components in three-dimensional space, but only 36 components are distinct. Thus, in Mandel notation, it can be expressed as

$$\tilde{D}^{M} = \begin{pmatrix}
D_{1111} & D_{1122} & D_{1133} & \sqrt{2}\,D_{1123} & \sqrt{2}\,D_{1113} & \sqrt{2}\,D_{1112} \\
D_{2211} & D_{2222} & D_{2233} & \sqrt{2}\,D_{2223} & \sqrt{2}\,D_{2213} & \sqrt{2}\,D_{2212} \\
D_{3311} & D_{3322} & D_{3333} & \sqrt{2}\,D_{3323} & \sqrt{2}\,D_{3313} & \sqrt{2}\,D_{3312} \\
\sqrt{2}\,D_{2311} & \sqrt{2}\,D_{2322} & \sqrt{2}\,D_{2333} & 2D_{2323} & 2D_{2313} & 2D_{2312} \\
\sqrt{2}\,D_{1311} & \sqrt{2}\,D_{1322} & \sqrt{2}\,D_{1333} & 2D_{1323} & 2D_{1313} & 2D_{1312} \\
\sqrt{2}\,D_{1211} & \sqrt{2}\,D_{1222} & \sqrt{2}\,D_{1233} & 2D_{1223} & 2D_{1213} & 2D_{1212}
\end{pmatrix}.$$

Applications The notation is named after the physicists Woldemar Voigt and John Nye. It is useful, for example, in calculations involving constitutive models to simulate materials, such as the generalized Hooke's law, as well as finite element analysis, and Diffusion MRI. Hooke's law has a symmetric fourth-order stiffness tensor with 81 components (3×3×3×3), but because the application of such a rank-4 tensor to a symmetric rank-2 tensor must yield another symmetric rank-2 tensor, not all of the 81 elements are independent. Voigt notation enables such a rank-4 tensor to be represented by a 6×6 matrix. However, Voigt's form does not preserve the sum of the squares, which in the case of Hooke's law has geometric significance. This explains why weights are introduced (to make the mapping an isometry). A discussion of invariance of Voigt's notation and Mandel's notation can be found in Helnwein (2001). See also Vectorization (mathematics) Hooke's law References Tensors Mathematical notation Solid mechanics
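To make the two mappings concrete, the snippet below converts a symmetric 3×3 stress-like tensor into its Voigt and Mandel vectors and checks that only the Mandel form preserves the tensor double contraction, as discussed above. It is a small NumPy sketch following the component ordering used in this entry; ordering conventions vary between texts and software packages, so treat it as one reasonable choice rather than a universal standard.

```python
import numpy as np

def to_voigt(t: np.ndarray) -> np.ndarray:
    """Voigt vector (t11, t22, t33, t23, t13, t12) of a symmetric 3x3 tensor."""
    return np.array([t[0, 0], t[1, 1], t[2, 2], t[1, 2], t[0, 2], t[0, 1]])

def to_mandel(t: np.ndarray) -> np.ndarray:
    """Mandel vector: same ordering, off-diagonal components weighted by sqrt(2)."""
    s = np.sqrt(2.0)
    return np.array([t[0, 0], t[1, 1], t[2, 2],
                     s * t[1, 2], s * t[0, 2], s * t[0, 1]])

if __name__ == "__main__":
    sigma = np.array([[10.0, 4.0, 0.0],
                      [ 4.0, 5.0, 2.0],
                      [ 0.0, 2.0, 1.0]])   # symmetric example tensor
    print("Voigt :", to_voigt(sigma))
    print("Mandel:", to_mandel(sigma))
    # The Mandel vector reproduces the double contraction sigma_ij * sigma_ij ...
    print(np.isclose(np.sum(sigma * sigma), to_mandel(sigma) @ to_mandel(sigma)))
    # ... whereas the plain Voigt vector does not (it drops the factor of 2
    # that each off-diagonal pair contributes), which is why weights are introduced.
    print(np.isclose(np.sum(sigma * sigma), to_voigt(sigma) @ to_voigt(sigma)))
```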
Voigt notation
[ "Physics", "Mathematics", "Engineering" ]
696
[ "Solid mechanics", "Tensors", "Mechanics", "nan" ]
1,437,020
https://en.wikipedia.org/wiki/Guiding%20center
In physics, the motion of an electrically charged particle such as an electron or ion in a plasma in a magnetic field can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation. Gyration If the magnetic field is uniform and all other forces are absent, then the Lorentz force will cause a particle to undergo a constant acceleration perpendicular to both the particle velocity and the magnetic field. This does not affect particle motion parallel to the magnetic field, but results in circular motion at constant speed in the plane perpendicular to the magnetic field. This circular motion is known as the gyromotion. For a particle with mass $m$ and charge $q$ moving in a magnetic field with strength $B$, it has a frequency, called the gyrofrequency or cyclotron frequency, of

$$\omega_{c} = \frac{|q| B}{m}.$$

For a speed perpendicular to the magnetic field of $v_{\perp}$, the radius of the orbit, called the gyroradius or Larmor radius, is

$$\rho_{L} = \frac{m v_{\perp}}{|q| B}.$$

Parallel motion Since the magnetic Lorentz force is always perpendicular to the magnetic field, it has no influence (to lowest order) on the parallel motion. In a uniform field with no additional forces, a charged particle will gyrate around the magnetic field according to the perpendicular component of its velocity and drift parallel to the field according to its initial parallel velocity, resulting in a helical orbit. If there is a force with a parallel component, the particle and its guiding center will be correspondingly accelerated. If the field has a parallel gradient, a particle with a finite Larmor radius will also experience a force in the direction away from the larger magnetic field. This effect is known as the magnetic mirror. While it is closely related to guiding center drifts in its physics and mathematics, it is nevertheless considered to be distinct from them. General force drifts Generally speaking, when there is a force on the particles perpendicular to the magnetic field, then they drift in a direction perpendicular to both the force and the field. If $\boldsymbol{F}$ is the force on one particle, then the drift velocity is

$$\boldsymbol{v}_{f} = \frac{1}{q} \frac{\boldsymbol{F} \times \boldsymbol{B}}{B^{2}}.$$

These drifts, in contrast to the mirror effect and the non-uniform B drifts, do not depend on finite Larmor radius, but are also present in cold plasmas. This may seem counterintuitive. If a particle is stationary when a force is turned on, where does the motion perpendicular to the force come from and why doesn't the force produce a motion parallel to itself? The answer is the interaction with the magnetic field. The force initially results in an acceleration parallel to itself, but the magnetic field deflects the resulting motion in the drift direction. Once the particle is moving in the drift direction, the magnetic field deflects it back against the external force, so that the average acceleration in the direction of the force is zero. There is, however, a one-time displacement in the direction of the force equal to $(F/m)\,\omega_{c}^{-2}$, which should be considered a consequence of the polarization drift (see below) while the force is being turned on. The resulting motion is a cycloid. More generally, the superposition of a gyration and a uniform perpendicular drift is a trochoid. All drifts may be considered special cases of the force drift, although this is not always the most useful way to think about them.
The obvious cases are electric and gravitational forces. The grad-B drift can be considered to result from the force on a magnetic dipole in a field gradient. The curvature, inertia, and polarisation drifts result from treating the acceleration of the particle as fictitious forces. The diamagnetic drift can be derived from the force due to a pressure gradient. Finally, other forces such as radiation pressure and collisions also result in drifts. Gravitational field A simple example of a force drift is a plasma in a gravitational field, e.g. the ionosphere. The drift velocity is Because of the mass dependence, the gravitational drift for the electrons can normally be ignored. The dependence on the charge of the particle implies that the drift direction is opposite for ions as for electrons, resulting in a current. In a fluid picture, it is this current crossed with the magnetic field that provides that force counteracting the applied force. Electric field This drift, often called the (E-cross-B) drift, is a special case because the electric force on a particle depends on its charge (as opposed, for example, to the gravitational force considered above). As a result, ions (of whatever mass and charge) and electrons both move in the same direction at the same speed, so there is no net current (assuming quasineutrality of the plasma). In the context of special relativity, in the frame moving with this velocity, the electric field vanishes. The value of the drift velocity is given by Nonuniform E If the electric field is not uniform, the above formula is modified to read Nonuniform B Guiding center drifts may also result not only from external forces but also from non-uniformities in the magnetic field. It is convenient to express these drifts in terms of the parallel and perpendicular kinetic energies In that case, the explicit mass dependence is eliminated. If the ions and electrons have similar temperatures, then they also have similar, though oppositely directed, drift velocities. Grad-B drift When a particle moves into a larger magnetic field, the curvature of its orbit becomes tighter, transforming the otherwise circular orbit into a cycloid. The drift velocity is Curvature drift In order for a charged particle to follow a curved field line, it needs a drift velocity out of the plane of curvature to provide the necessary centripetal force. This velocity is where is the radius of curvature pointing outwards, away from the center of the circular arc which best approximates the curve at that point. where is the unit vector in the direction of the magnetic field. This drift can be decomposed into the sum of the curvature drift and the term In the important limit of stationary magnetic field and weak electric field, the inertial drift is dominated by the curvature drift term. Curved vacuum drift In the limit of small plasma pressure, Maxwell's equations provide a relationship between gradient and curvature that allows the corresponding drifts to be combined as follows For a species in thermal equilibrium, can be replaced by ( for and for ). The expression for the grad-B drift above can be rewritten for the case when is due to the curvature. This is most easily done by realizing that in a vacuum, Ampere's Law is . 
In cylindrical coordinates chosen such that the azimuthal direction is parallel to the magnetic field and the radial direction is parallel to the gradient of the field, this becomes Since is a constant, this implies that and the grad-B drift velocity can be written Polarization drift A time-varying electric field also results in a drift given by Obviously this drift is different from the others in that it cannot continue indefinitely. Normally an oscillatory electric field results in a polarization drift oscillating 90 degrees out of phase. Because of the mass dependence, this effect is also called the inertia drift. Normally the polarization drift can be neglected for electrons because of their relatively small mass. Diamagnetic drift The diamagnetic drift is not actually a guiding center drift and resembles averaged (fluid) behavior of large collection of particles. A pressure gradient does not cause any single particle to drift. Nevertheless, the fluid velocity is defined by counting the particles moving through a reference area, and a pressure gradient results in more particles in one direction than in the other. The net velocity of the fluid is given by Drift Currents With the important exception of the drift, the drift velocities of differently charged particles will be different. This difference in velocities results in a current, while the mass dependence of the drift velocity can result in chemical separation. References Plasma theory and modeling
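As a numerical illustration of the scales involved, the snippet below evaluates the gyrofrequency, Larmor radius, and E×B drift speed for a proton and an electron. It is a back-of-the-envelope sketch using the standard textbook expressions ω_c = |q|B/m, r_L = m v_⊥/(|q|B) and v_E = E×B/B² referred to in this entry; the field strengths, the example temperature, and the use of the thermal speed as a stand-in for v_⊥ are arbitrary assumptions, not data from any specific device.

```python
import math
import numpy as np

E_CHARGE = 1.602e-19      # elementary charge, C
M_PROTON = 1.673e-27      # kg
M_ELECTRON = 9.109e-31    # kg
K_BOLTZMANN = 1.381e-23   # J/K

def gyrofrequency(q, m, B):
    """Angular cyclotron frequency omega_c = |q| B / m (rad/s)."""
    return abs(q) * B / m

def larmor_radius(m, v_perp, q, B):
    """Gyroradius r_L = m v_perp / (|q| B) (m)."""
    return m * v_perp / (abs(q) * B)

def exb_drift(E_vec, B_vec):
    """E x B drift velocity (m/s); identical for ions and electrons."""
    return np.cross(E_vec, B_vec) / np.dot(B_vec, B_vec)

if __name__ == "__main__":
    B = 0.1                              # tesla (example value)
    T = 1.0e5                            # kelvin (example temperature)
    E_vec = np.array([1.0e3, 0.0, 0.0])  # V/m, along x (example value)
    B_vec = np.array([0.0, 0.0, B])      # magnetic field along z

    for name, m in (("proton", M_PROTON), ("electron", M_ELECTRON)):
        v_th = math.sqrt(2 * K_BOLTZMANN * T / m)   # thermal speed as a proxy for v_perp
        print(f"{name}: omega_c = {gyrofrequency(E_CHARGE, m, B):.3e} rad/s, "
              f"r_L = {larmor_radius(m, v_th, E_CHARGE, B):.3e} m")

    print("E x B drift:", exb_drift(E_vec, B_vec), "m/s  (same for both species)")
```

The output makes the mass dependence discussed above concrete: the electron gyrates far faster and on a much smaller circle than the proton, while the E×B drift comes out identical for both.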
Guiding center
[ "Physics" ]
1,655
[ "Plasma theory and modeling", "Plasma physics" ]
1,437,371
https://en.wikipedia.org/wiki/Fatty%20acid%20metabolism
Fatty acid metabolism consists of various metabolic processes involving or closely related to fatty acids, a family of molecules classified within the lipid macronutrient category. These processes can mainly be divided into (1) catabolic processes that generate energy and (2) anabolic processes where they serve as building blocks for other compounds. In catabolism, fatty acids are metabolized to produce energy, mainly in the form of adenosine triphosphate (ATP). When compared to other macronutrient classes (carbohydrates and protein), fatty acids yield the most ATP on an energy per gram basis, when they are completely oxidized to CO2 and water by beta oxidation and the citric acid cycle. Fatty acids (mainly in the form of triglycerides) are therefore the foremost storage form of fuel in most animals, and to a lesser extent in plants. In anabolism, intact fatty acids are important precursors to triglycerides, phospholipids, second messengers, hormones and ketone bodies. For example, phospholipids form the phospholipid bilayers out of which all the membranes of the cell are constructed from fatty acids. Phospholipids comprise the plasma membrane and other membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus. In another type of anabolism, fatty acids are modified to form other compounds such as second messengers and local hormones. The prostaglandins made from arachidonic acid stored in the cell membrane are probably the best-known of these local hormones. Fatty acid catabolism Fatty acids are stored as triglycerides in the fat depots of adipose tissue. Between meals they are released as follows: Lipolysis, the removal of the fatty acid chains from the glycerol to which they are bound in their storage form as triglycerides (or fats), is carried out by lipases. These lipases are activated by high epinephrine and glucagon levels in the blood (or norepinephrine secreted by sympathetic nerves in adipose tissue), caused by declining blood glucose levels after meals, which simultaneously lowers the insulin level in the blood. Once freed from glycerol, the free fatty acids enter the blood, which transports them, attached to plasma albumin, throughout the body. Long-chain free fatty acids enter metabolizing cells (i.e. most living cells in the body except red blood cells and neurons in the central nervous system) through specific transport proteins, such as the SLC27 family fatty acid transport protein. Red blood cells do not contain mitochondria and are therefore incapable of metabolizing fatty acids; the tissues of the central nervous system cannot use fatty acids, despite containing mitochondria, because long-chain fatty acids (as opposed to medium-chain fatty acids) cannot cross the blood-brain barrier into the interstitial fluids that bathe these cells. Once inside the cell, long-chain-fatty-acid—CoA ligase catalyzes the reaction between a fatty acid molecule with ATP (which is broken down to AMP and inorganic pyrophosphate) to give a fatty acyl-adenylate, which then reacts with free coenzyme A to give a fatty acyl-CoA molecule. In order for the acyl-CoA to enter the mitochondrion the carnitine shuttle is used: Acyl-CoA is transferred to the hydroxyl group of carnitine by carnitine palmitoyltransferase I, located on the cytosolic faces of the outer and inner mitochondrial membranes. Acyl-carnitine is shuttled inside by a carnitine-acylcarnitine translocase, as a carnitine is shuttled outside. 
Acyl-carnitine is converted back to acyl-CoA by carnitine palmitoyltransferase II, located on the interior face of the inner mitochondrial membrane. The liberated carnitine is shuttled back to the cytosol, as an acyl-CoA is shuttled into the mitochondrial matrix. Beta oxidation, in the mitochondrial matrix, then cuts the long carbon chains of the fatty acids (in the form of acyl-CoA molecules) into a series of two-carbon (acetate) units, which, combined with co-enzyme A, form molecules of acetyl CoA, which condense with oxaloacetate to form citrate at the "beginning" of the citric acid cycle. It is convenient to think of this reaction as marking the "starting point" of the cycle, as this is when fuel - acetyl-CoA - is added to the cycle, which will be dissipated as CO and HO with the release of a substantial quantity of energy captured in the form of ATP, during the course of each turn of the cycle and subsequent oxidative phosphorylation. Briefly, the steps in beta oxidation are as follows: Dehydrogenation by acyl-CoA dehydrogenase, yielding 1 FADH Hydration by enoyl-CoA hydratase Dehydrogenation by 3-hydroxyacyl-CoA dehydrogenase, yielding 1 NADH + H+ Cleavage by thiolase, yielding 1 acetyl-CoA and a fatty acid that has now been shortened by 2 carbons (forming a new, shortened acyl-CoA) This beta oxidation reaction is repeated until the fatty acid has been completely reduced to acetyl-CoA or, in the case of fatty acids with odd numbers of carbon atoms, acetyl-CoA and 1 molecule of propionyl-CoA per molecule of fatty acid. Each beta oxidative cut of the acyl-CoA molecule eventually yields 5 ATP molecules in oxidative phosphorylation. The acetyl-CoA produced by beta oxidation enters the citric acid cycle in the mitochondrion by combining with oxaloacetate to form citrate. Coupled to oxidative phosphorylation this results in the complete combustion of the acetyl-CoA to CO and water. The energy released in this process is captured in the form of 1 GTP and 11 ATP molecules per acetyl-CoA molecule oxidized. This is the fate of acetyl-CoA wherever beta oxidation of fatty acids occurs, except under certain circumstances in the liver. The propionyl-CoA is later converted into succinyl-CoA through biotin-dependant propionyl-CoA carboxylase (PCC) and Vitamin B12-dependant methylmalonyl-CoA mutase (MCM), sequentially. Succinyl-CoA is first converted to malate, and then to pyruvate where it is then transported to the matrix to enter the citric acid cycle. In the liver oxaloacetate can be wholly or partially diverted into the gluconeogenic pathway during fasting, starvation, a low carbohydrate diet, prolonged strenuous exercise, and in uncontrolled type 1 diabetes mellitus. Under these circumstances, oxaloacetate is hydrogenated to malate, which is then removed from the mitochondria of the liver cells to be converted into glucose in the cytoplasm of the liver cells, from where it is released into the blood. In the liver, therefore, oxaloacetate is unavailable for condensation with acetyl-CoA when significant gluconeogenesis has been stimulated by low (or absent) insulin and high glucagon concentrations in the blood. Under these conditions, acetyl-CoA is diverted to the formation of acetoacetate and beta-hydroxybutyrate. Acetoacetate, beta-hydroxybutyrate, and their spontaneous breakdown product, acetone, are frequently, but confusingly, known as ketone bodies (as they are not "bodies" at all, but water-soluble chemical substances). The ketones are released by the liver into the blood. 
All cells with mitochondria can take up ketones from the blood and reconvert them into acetyl-CoA, which can then be used as fuel in their citric acid cycles, as no other tissue can divert its oxaloacetate into the gluconeogenic pathway in the way that this can occur in the liver. Unlike free fatty acids, ketones can cross the blood–brain barrier and are therefore available as fuel for the cells of the central nervous system, acting as a substitute for glucose, on which these cells normally survive. The occurrence of high levels of ketones in the blood during starvation, a low carbohydrate diet, prolonged heavy exercise, or uncontrolled type 1 diabetes mellitus is known as ketosis, and, in its extreme form, in out-of-control type 1 diabetes mellitus, as ketoacidosis. The glycerol released by lipase action is phosphorylated by glycerol kinase in the liver (the only tissue in which this reaction can occur), and the resulting glycerol 3-phosphate is oxidized to dihydroxyacetone phosphate. The glycolytic enzyme triose phosphate isomerase converts this compound to glyceraldehyde 3-phosphate, which is oxidized via glycolysis, or converted to glucose via gluconeogenesis. Fatty acids as an energy source Fatty acids, stored as triglycerides in an organism, are a concentrated source of energy because they contain little oxygen and are anhydrous. The energy yield from a gram of fatty acids is approximately 9 kcal (37 kJ), much higher than the 4 kcal (17 kJ) for carbohydrates. Since the hydrocarbon portion of fatty acids is hydrophobic, these molecules can be stored in a relatively anhydrous (water-free) environment. Carbohydrates, on the other hand, are more highly hydrated. For example, 1 g of glycogen binds approximately 2 g of water, which translates to 1.33 kcal/g (4 kcal/3 g). This means that fatty acids can hold more than six times the amount of energy per unit of stored mass. Put another way, if the human body relied on carbohydrates to store energy, then a person would need to carry 31 kg (67.5 lb) of hydrated glycogen to have the energy equivalent to 4.6 kg (10 lb) of fat. Hibernating animals provide a good example for utilization of fat reserves as fuel. For example, bears hibernate for about 7 months, and during this entire period, the energy is derived from degradation of fat stores. Migrating birds similarly build up large fat reserves before embarking on their intercontinental journeys. The fat stores of young adult humans average between about 10–20 kg, but vary greatly depending on gender and individual disposition. By contrast, the human body stores only about 400 g of glycogen, of which 300 g is locked inside the skeletal muscles and is unavailable to the body as a whole. The 100 g or so of glycogen stored in the liver is depleted within one day of starvation. Thereafter the glucose that is released into the blood by the liver for general use by the body tissues has to be synthesized from the glucogenic amino acids and a few other gluconeogenic substrates, which do not include fatty acids. Nonetheless, lipolysis releases glycerol which can enter the pathway of gluconeogenesis. Carbohydrate synthesis from glycerol and fatty acids Fatty acids are broken down to acetyl-CoA by means of beta oxidation inside the mitochondria, whereas fatty acids are synthesized from acetyl-CoA outside the mitochondria, in the cytosol. The two pathways are distinct, not only in where they occur, but also in the reactions that occur, and the substrates that are used. 
The two pathways are mutually inhibitory, preventing the acetyl-CoA produced by beta-oxidation from entering the synthetic pathway via the acetyl-CoA carboxylase reaction. It can also not be converted to pyruvate as the pyruvate dehydrogenase complex reaction is irreversible. Instead the acetyl-CoA produced by the beta-oxidation of fatty acids condenses with oxaloacetate, to enter the citric acid cycle. During each turn of the cycle, two carbon atoms leave the cycle as CO in the decarboxylation reactions catalyzed by isocitrate dehydrogenase and alpha-ketoglutarate dehydrogenase. Thus each turn of the citric acid cycle oxidizes an acetyl-CoA unit while regenerating the oxaloacetate molecule with which the acetyl-CoA had originally combined to form citric acid. The decarboxylation reactions occur before malate is formed in the cycle. Only plants possess the enzymes to convert acetyl-CoA into oxaloacetate from which malate can be formed to ultimately be converted to glucose. However, acetyl-CoA can be converted to acetoacetate, which can decarboxylate to acetone (either spontaneously, or catalyzed by acetoacetate decarboxylase). It can then be further metabolized to isopropanol which is excreted in breath/urine, or by CYP2E1 into hydroxyacetone (acetol). Acetol can be converted to propylene glycol. This converts to pyruvate (by two alternative enzymes), or propionaldehyde, or to L-lactaldehyde then L-lactate (the common lactate isomer). Another pathway turns acetol to methylglyoxal, then to pyruvate, or to D-lactaldehyde (via S-D-lactoyl-glutathione or otherwise) then D-lactate. D-lactate metabolism (to glucose) is slow or impaired in humans, so most of the D-lactate is excreted in the urine; thus D-lactate derived from acetone can contribute significantly to the metabolic acidosis associated with ketosis or isopropanol intoxication. L-Lactate can complete the net conversion of fatty acids into glucose. The first experiment to show conversion of acetone to glucose was carried out in 1951. This, and further experiments used carbon isotopic labelling. Up to 11% of the glucose can be derived from acetone during starvation in humans. The glycerol released into the blood during the lipolysis of triglycerides in adipose tissue can only be taken up by the liver. Here it is converted into glycerol 3-phosphate by the action of glycerol kinase which hydrolyzes one molecule of ATP per glycerol molecule which is phosphorylated. Glycerol 3-phosphate is then oxidized to dihydroxyacetone phosphate, which is, in turn, converted into glyceraldehyde 3-phosphate by the enzyme triose phosphate isomerase. From here the three carbon atoms of the original glycerol can be oxidized via glycolysis, or converted to glucose via gluconeogenesis. Other functions and uses of fatty acids Intracellular signaling Fatty acids are an integral part of the phospholipids that make up the bulk of the plasma membranes, or cell membranes, of cells. These phospholipids can be cleaved into diacylglycerol (DAG) and inositol trisphosphate (IP) through hydrolysis of the phospholipid, phosphatidylinositol 4,5-bisphosphate (PIP), by the cell membrane bound enzyme phospholipase C (PLC). Eicosanoid paracrine hormones One product of fatty acid metabolism are the prostaglandins, compounds having diverse hormone-like effects in animals. Prostaglandins have been found in almost every tissue in humans and other animals. They are enzymatically derived from arachidonic acid, a 20-carbon polyunsaturated fatty acid. 
Every prostaglandin therefore contains 20 carbon atoms, including a 5-carbon ring. They are a subclass of eicosanoids and form the prostanoid class of fatty acid derivatives. The prostaglandins are synthesized in the cell membrane by the cleavage of arachidonate from the phospholipids that make up the membrane. This is catalyzed either by phospholipase A acting directly on a membrane phospholipid, or by a lipase acting on DAG (diacyl-glycerol). The arachidonate is then acted upon by the cyclooxygenase component of prostaglandin synthase. This forms a cyclopentane ring roughly in the middle of the fatty acid chain. The reaction also adds 4 oxygen atoms derived from two molecules of O. The resulting molecule is prostaglandin G, which is converted by the hydroperoxidase component of the enzyme complex into prostaglandin H. This highly unstable compound is rapidly transformed into other prostaglandins, prostacyclin and thromboxanes. These are then released into the interstitial fluids surrounding the cells that have manufactured the eicosanoid hormone. If arachidonate is acted upon by a lipoxygenase instead of cyclooxygenase, hydroxyeicosatetraenoic acids and leukotrienes are formed. They also act as local hormones. Prostaglandins have two derivatives: prostacyclins and thromboxanes. Prostacyclins are powerful locally acting vasodilators and inhibit the aggregation of blood platelets. Through their role in vasodilation, prostacyclins are also involved in inflammation. They are synthesized in the walls of blood vessels and serve the physiological function of preventing needless clot formation, as well as regulating the contraction of smooth muscle tissue. Conversely, thromboxanes (produced by platelet cells) are vasoconstrictors and facilitate platelet aggregation. Their name comes from their role in clot formation (thrombosis). Dietary sources of fatty acids, their digestion, absorption, transport in the blood and storage A significant proportion of the fatty acids in the body are obtained from the diet, in the form of triglycerides of either animal or plant origin. The fatty acids in the fats obtained from land animals tend to be saturated, whereas the fatty acids in the triglycerides of fish and plants are often polyunsaturated and therefore present as oils. These triglycerides cannot be absorbed by the intestine. They are broken down into mono- and di-glycerides plus free fatty acids (but no free glycerol) by pancreatic lipase, which forms a 1:1 complex with a protein called colipase (also a constituent of pancreatic juice), which is necessary for its activity. The activated complex can work only at a water-fat interface. Therefore, it is essential that fats are first emulsified by bile salts for optimal activity of these enzymes. The digestion products consisting of a mixture of tri-, di- and monoglycerides and free fatty acids, which, together with the other fat soluble contents of the diet (e.g. the fat soluble vitamins and cholesterol) and bile salts form mixed micelles, in the watery duodenal contents (see diagrams on the right). The contents of these micelles (but not the bile salts) enter the enterocytes (epithelial cells lining the small intestine) where they are resynthesized into triglycerides, and packaged into chylomicrons which are released into the lacteals (the capillaries of the lymph system of the intestines). 
These lacteals drain into the thoracic duct which empties into the venous blood at the junction of the left jugular and left subclavian veins on the lower left hand side of the neck. This means that the fat-soluble products of digestion are discharged directly into the general circulation, without first passing through the liver, unlike all other digestion products. The reason for this peculiarity is unknown. The chylomicrons circulate throughout the body, giving the blood plasma a milky or creamy appearance after a fatty meal. Lipoprotein lipase on the endothelial surfaces of the capillaries, especially in adipose tissue, but to a lesser extent also in other tissues, partially digests the chylomicrons into free fatty acids, glycerol and chylomicron remnants. The fatty acids are absorbed by the adipocytes, but the glycerol and chylomicron remnants remain in the blood plasma, ultimately to be removed from the circulation by the liver. The free fatty acids released by the digestion of the chylomicrons are absorbed by the adipocytes, where they are resynthesized into triglycerides using glycerol derived from glucose in the glycolytic pathway. These triglycerides are stored, until needed for the fuel requirements of other tissues, in the fat droplet of the adipocyte. The liver absorbs a proportion of the glucose from the blood in the portal vein coming from the intestines. After the liver has replenished its glycogen stores (which amount to only about 100 g of glycogen when full) much of the rest of the glucose is converted into fatty acids as described below. These fatty acids are combined with glycerol to form triglycerides which are packaged into droplets very similar to chylomicrons, but known as very low-density lipoproteins (VLDL). These VLDL droplets are processed in exactly the same manner as chylomicrons, except that the VLDL remnant is known as an intermediate-density lipoprotein (IDL), which is capable of scavenging cholesterol from the blood. This converts IDL into low-density lipoprotein (LDL), which is taken up by cells that require cholesterol for incorporation into their cell membranes or for synthetic purposes (e.g. the formation of the steroid hormones). The remainder of the LDLs is removed by the liver. Adipose tissue and lactating mammary glands also take up glucose from the blood for conversion into triglycerides. This occurs in the same way as in the liver, except that these tissues do not release the triglycerides thus produced as VLDL into the blood. Adipose tissue cells store the triglycerides in their fat droplets, ultimately to release them again as free fatty acids and glycerol into the blood (as described above), when the plasma concentration of insulin is low, and that of glucagon and/or epinephrine is high. Mammary glands discharge the fat (as cream fat droplets) into the milk that they produce under the influence of the anterior pituitary hormone prolactin. All cells in the body need to manufacture and maintain their membranes and the membranes of their organelles. Whether they rely entirely on free fatty acids absorbed from the blood, or are able to synthesize their own fatty acids from blood glucose, is not known. The cells of the central nervous system will almost certainly have the capability of manufacturing their own fatty acids, as these molecules cannot reach them through the blood–brain barrier. 
However, it is unknown how they are reached by the essential fatty acids, which mammals cannot synthesize themselves but are nevertheless important components of cell membranes (and other functions described above). Fatty acid synthesis Much like beta-oxidation, straight-chain fatty acid synthesis occurs via the six recurring reactions shown below, until the 16-carbon palmitic acid is produced. The diagrams presented show how fatty acids are synthesized in microorganisms and list the enzymes found in Escherichia coli. These reactions are performed by fatty acid synthase II (FASII), which in general contains multiple enzymes that act as one complex. FASII is present in prokaryotes, plants, fungi, and parasites, as well as in mitochondria. In animals as well as some fungi such as yeast, these same reactions occur on fatty acid synthase I (FASI), a large dimeric protein that has all of the enzymatic activities required to create a fatty acid. FASI is less efficient than FASII; however, it allows for the formation of more molecules, including "medium-chain" fatty acids via early chain termination. Enzymes, acyltransferases and transacylases, incorporate fatty acids in phospholipids, triacylglycerols, etc. by transferring fatty acids between an acyl acceptor and donor. They also have the task of synthesizing bioactive lipids as well as their precursor molecules. Once a 16:0 carbon fatty acid has been formed, it can undergo a number of modifications, resulting in desaturation and/or elongation. Elongation, starting with stearate (18:0), is performed mainly in the endoplasmic reticulum by several membrane-bound enzymes. The enzymatic steps involved in the elongation process are principally the same as those carried out by fatty acid synthesis, but the four principal successive steps of the elongation are performed by individual proteins, which may be physically associated. Abbreviations: ACP – Acyl carrier protein, CoA – Coenzyme A, NADP – Nicotinamide adenine dinucleotide phosphate. Note that during fatty synthesis the reducing agent is NADPH, whereas NAD is the oxidizing agent in beta-oxidation (the breakdown of fatty acids to acetyl-CoA). This difference exemplifies a general principle that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. (Thus NADPH is also required for the synthesis of cholesterol from acetyl-CoA; while NADH is generated during glycolysis.) The source of the NADPH is two-fold. When malate is oxidatively decarboxylated by “NADP+-linked malic enzyme" pyruvate, CO and NADPH are formed. NADPH is also formed by the pentose phosphate pathway which converts glucose into ribose, which can be used in synthesis of nucleotides and nucleic acids, or it can be catabolized to pyruvate. Glycolytic end products are used in the conversion of carbohydrates into fatty acids In humans, fatty acids are formed from carbohydrates predominantly in the liver and adipose tissue, as well as in the mammary glands during lactation. The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into cytosol where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. 
To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA is carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids. Regulation of fatty acid synthesis Acetyl-CoA is formed into malonyl-CoA by acetyl-CoA carboxylase, at which point malonyl-CoA is destined to feed into the fatty acid synthesis pathway. Acetyl-CoA carboxylase is the point of regulation in saturated straight-chain fatty acid synthesis, and is subject to both phosphorylation and allosteric regulation. Regulation by phosphorylation occurs mostly in mammals, while allosteric regulation occurs in most organisms. Allosteric control occurs as feedback inhibition by palmitoyl-CoA and activation by citrate. When there are high levels of palmitoyl-CoA, the final product of saturated fatty acid synthesis, it allosterically inactivates acetyl-CoA carboxylase to prevent a build-up of fatty acids in cells. Citrate acts to activate acetyl-CoA carboxylase under high levels, because high levels indicate that there is enough acetyl-CoA to feed into the Krebs cycle and produce energy. High plasma levels of insulin in the blood plasma (e.g. after meals) cause the dephosphorylation and activation of acetyl-CoA carboxylase, thus promoting the formation of malonyl-CoA from acetyl-CoA, and consequently the conversion of carbohydrates into fatty acids, while epinephrine and glucagon (released into the blood during starvation and exercise) cause the phosphorylation of this enzyme, inhibiting lipogenesis in favor of fatty acid oxidation via beta-oxidation. Disorders Disorders of fatty acid metabolism can be described in terms of, for example, hypertriglyceridemia (too high level of triglycerides), or other types of hyperlipidemia. These may be familial or acquired. Familial types of disorders of fatty acid metabolism are generally classified as inborn errors of lipid metabolism. These disorders may be described as fatty acid oxidation disorders or as a lipid storage disorders, and are any one of several inborn errors of metabolism that result from enzyme or transport protein defects affecting the ability of the body to oxidize fatty acids in order to produce energy within muscles, liver, and other cell types. When a fatty acid oxidation disorder affects the muscles, it is a metabolic myopathy. Moreover, cancer cells can display irregular fatty acid metabolism with regard to both fatty acid synthesis and mitochondrial fatty acid oxidation (FAO) that are involved in diverse aspects of tumorigenesis and cell growth. References Metabolism Fatty acids Hepatology
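The ATP bookkeeping described in the catabolism section of this entry (5 ATP per beta-oxidative cut and the equivalent of 12 ATP, as 1 GTP plus 11 ATP, per acetyl-CoA oxidized in the citric acid cycle) can be turned into a small calculation. The sketch below uses those classical whole-number yields and subtracts the two ATP equivalents consumed when the fatty acid is activated to its acyl-CoA; it only covers even-chain saturated fatty acids, and the per-cofactor values are the older textbook figures rather than the lower modern estimates.

```python
def atp_yield_even_saturated(n_carbons: int) -> int:
    """Approximate net ATP from complete oxidation of an even-chain saturated
    fatty acid, using the classical yields quoted in the text:
      - each beta-oxidation cycle ("cut")  -> 5 ATP (1 FADH2 + 1 NADH)
      - each acetyl-CoA through the cycle  -> 12 ATP (1 GTP + 11 ATP)
      - activation to acyl-CoA             -> costs 2 ATP equivalents
    """
    if n_carbons % 2 or n_carbons < 4:
        raise ValueError("sketch only covers even-chain fatty acids with >= 4 carbons")
    cuts = n_carbons // 2 - 1          # number of beta-oxidation cycles
    acetyl_coa = n_carbons // 2        # two-carbon units produced
    return cuts * 5 + acetyl_coa * 12 - 2

if __name__ == "__main__":
    for name, carbons in (("butyrate", 4), ("laurate", 12),
                          ("palmitate", 16), ("stearate", 18)):
        print(f"{name} (C{carbons}): ~{atp_yield_even_saturated(carbons)} ATP")
```

For palmitate this reproduces the familiar figure of roughly 129 ATP per molecule, which is the arithmetic behind the statement that fat is by far the most energy-dense fuel store.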
Fatty acid metabolism
[ "Chemistry", "Biology" ]
6,536
[ "Biochemistry", "Metabolism", "Cellular processes" ]
1,437,415
https://en.wikipedia.org/wiki/Glyceraldehyde%203-phosphate
Glyceraldehyde 3-phosphate, also known as triose phosphate or 3-phosphoglyceraldehyde and abbreviated as G3P, GA3P, GADP, GAP, TP, GALP or PGAL, is a metabolite that occurs as an intermediate in several central pathways of all organisms. With the chemical formula H(O)CCH(OH)CH2OPO32-, this anion is a monophosphate ester of glyceraldehyde. An intermediate in both glycolysis and gluconeogenesis Formation D-glyceraldehyde 3-phosphate is formed from the following three compounds in reversible reactions: Fructose-1,6-bisphosphate (F1,6BP), catalyzed by aldolase. The numbering of the carbon atoms indicates the fate of the carbons according to their position in fructose 6-phosphate. Dihydroxyacetone phosphate (DHAP), catalyzed by triose phosphate isomerase. 1,3-bisphosphoglycerate (1,3BPG), catalyzed by glyceraldehyde 3-phosphate dehydrogenase. As a substrate To produce 1,3-bisphospho-D-glycerate in glycolysis. D-glyceraldehyde 3-phosphate is also of some importance since this is how glycerol (as DHAP) enters the glycolytic and gluconeogenic pathways. Furthermore, it is a participant in and a product of the pentose phosphate pathway. Interactive pathway map | An intermediate in photosynthesis During plant photosynthesis, 2 equivalents of glycerate 3-phosphate (GP; also known as 3-phosphoglycerate) are produced by the first step of the light-independent reactions when ribulose 1,5-bisphosphate (RuBP) and carbon dioxide are catalysed by the rubisco enzyme. The GP is converted to D-glyceraldehyde 3-phosphate (G3P) using the energy in ATP and the reducing power of NADPH as part of the Calvin cycle. This returns ADP, phosphate ions Pi, and NADP+ to the light-dependent reactions of photosynthesis for their continued function. RuBP is regenerated for the Calvin cycle to continue. G3P is generally considered the prime end-product of photosynthesis and it can be used as an immediate food nutrient, combined and rearranged to form monosaccharide sugars, such as glucose, which can be transported to other cells, or packaged for storage as insoluble polysaccharides such as starch. Balance sheet 6 CO2 + 6 RuBP (+ energy from 12 ATP and 12 NADPH) →12 G3P (3-carbon) 10 G3P (+ energy from 6 ATP) → 6 RuBP (i.e. starting material regenerated) 2 G3P → glucose (6-carbon). In tryptophan biosynthesis Glyceraldehyde 3-phosphate occurs as a byproduct in the biosynthesis pathway of tryptophan, an essential amino acid that cannot be produced by the human body. In thiamine biosynthesis Glyceraldehyde 3-phosphate occurs as a reactant in the biosynthesis pathway of thiamine (Vitamin B1), another substance that cannot be produced by the human body. References External links D-Glyceraldehyde 3-phosphate and the reactions and pathways it participates in, from the KEGG PATHWAY Database Glyceraldehyde 3-phosphate and the reactions and pathways it participates in, from the KEGG PATHWAY Database Photosynthesis Organophosphates Pentose phosphate pathway Phosphate esters Glycolysis Metabolic intermediates
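The balance sheet above can be checked mechanically: the snippet below verifies that carbon atoms are conserved at each step and totals the ATP and NADPH consumed per glucose produced. It is a toy bookkeeping script based only on the stoichiometry quoted in this entry (carbon counts of 1 for CO2, 3 for G3P, 5 for RuBP and 6 for glucose), not a model of the actual enzyme kinetics of the Calvin cycle.

```python
# Carbon atoms per molecule, taken from the balance sheet in the text.
CARBONS = {"CO2": 1, "RuBP": 5, "G3P": 3, "glucose": 6}

def carbons(species: dict) -> int:
    """Total carbon atoms in a dict of {molecule: count}."""
    return sum(CARBONS[name] * count for name, count in species.items())

steps = [
    # (reactants, products, ATP used, NADPH used)
    ({"CO2": 6, "RuBP": 6}, {"G3P": 12}, 12, 12),     # fixation and reduction
    ({"G3P": 10},           {"RuBP": 6},  6,  0),     # regeneration of RuBP
    ({"G3P": 2},            {"glucose": 1}, 0, 0),    # net product
]

atp_total = nadph_total = 0
for reactants, products, atp, nadph in steps:
    assert carbons(reactants) == carbons(products), "carbon not conserved"
    atp_total += atp
    nadph_total += nadph

print(f"per glucose: {atp_total} ATP and {nadph_total} NADPH consumed")
```

Running it confirms the usual totals implied by the balance sheet: 18 ATP and 12 NADPH are consumed for every glucose assembled from two molecules of G3P.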
Glyceraldehyde 3-phosphate
[ "Chemistry", "Biology" ]
832
[ "Carbohydrate metabolism", "Pentose phosphate pathway", "Glycolysis", "Photosynthesis", "Metabolic intermediates", "Biomolecules", "Biochemistry", "Metabolism" ]
1,438,121
https://en.wikipedia.org/wiki/Anti-neutrophil%20cytoplasmic%20antibody
Anti-neutrophil cytoplasmic antibodies (ANCAs) are a group of autoantibodies, mainly of the IgG type, against antigens in the cytoplasm of neutrophils (the most common type of white blood cell) and monocytes. They are detected as a blood test in a number of autoimmune disorders, but are particularly associated with systemic vasculitis, so called ANCA-associated vasculitides (AAV). ANCA IF patterns Immunofluorescence (IF) on ethanol-fixed neutrophils is used to detect ANCA, although formalin-fixed neutrophils may be used to help differentiate ANCA patterns. ANCA can be divided into four patterns when visualised by IF; cytoplasmic ANCA (c-ANCA), C-ANCA (atypical), perinuclear ANCA (p-ANCA) and atypical ANCA (a-ANCA), also known as x-ANCA. c-ANCA shows cytoplasmic granular fluorescence with central interlobular accentuation. C-ANCA (atypical) shows cytoplasmic staining that is usually uniform and has no interlobular accentuation. p-ANCA has three subtypes, classical p-ANCA, p-ANCA without nuclear extension and granulocyte specific-antinuclear antibody (GS-ANA). Classical p-ANCA shows perinuclear staining with nuclear extension, p-ANCA without nuclear extension has perinuclear staining without nuclear extension and GS-ANA shows nuclear staining on granulocytes only. a-ANCA often shows combinations of both cytoplasmic and perinuclear staining. ANCA antigens The c-ANCA antigen is specifically proteinase 3 (PR3). p-ANCA antigens include myeloperoxidase (MPO) and bacterial permeability increasing factor Bactericidal/permeability-increasing protein (BPI). Other antigens exist for c-ANCA (atypical), however many are as yet unknown. Classical p-ANCA occurs with antibodies directed to MPO. p-ANCA without nuclear extension occurs with antibodies to BPI, cathepsin G, elastase, lactoferrin and lysozyme. GS-ANA are antibodies directed to granulocyte specific nuclear antigens. Atypical ANCA are thought to be antigens similar to that of the p-ANCAs, however may occur due to differences in neutrophil processing. Other less common antigens include HMG1 (p-ANCA pattern), HMG2 (p-ANCA pattern), alpha enolase (p and c-ANCA pattern), catalase (p and c-ANCA pattern), beta glucuronidase (p-ANCA pattern), azurocidin (p and c-ANCA pattern), actin (p and a-ANCA) and h-lamp-2 (c-ANCA). ELISA Enzyme-linked immunosorbent assay (ELISA) is used in diagnostic laboratories to detect ANCAs. Although IF can be used to screen for many ANCAs, ELISA is used to detect antibodies to individual antigens. The most common antigens used on an ELISA microtitre plate are MPO and PR3, which are usually tested for after a positive IF test. Development It is poorly understood how ANCA are developed, although several hypotheses have been suggested. There is probably a genetic contribution, particularly in genes controlling the level of immune response – although genetic susceptibility is likely to be linked to an environmental factor, some possible factors including vaccination or exposure to silicates. Two possible mechanisms of ANCA development are postulated, although neither of these theories answers the question of how the different ANCA specificities are developed, and there is much research still being undertaken on the development of ANCA. Theory of molecular mimicry Microbial superantigens are molecules expressed by bacteria and other microorganisms that have the power to stimulate a strong immune response by activation of T-cells. 
These molecules generally have regions that resemble self-antigens that promote a residual autoimmune response – this is the theory of molecular mimicry. Staphylococcal and streptococcal superantigens have been characterized in autoimmune diseases – the classical example in post group A streptococcal rheumatic heart disease, where there is similarity between M proteins of Streptococcus pyogenes to cardiac myosin and laminin. It has also been shown that up to 70% of patients with granulomatosis with polyangiitis are chronic nasal carriers of Staphylococcus aureus, with carriers having an eight times increased risk of relapse. This would therefore be considered a type II hypersensitivity reaction. Theory of defective apoptosis Neutrophil apoptosis, or programmed cell death, is vital in controlling the duration of the early inflammatory response, thus restricting damage to tissues by the neutrophils. ANCA may be developed either via ineffective apoptosis or ineffective removal of apoptotic cell fragments, leading to the exposure of the immune system to molecules normally sequestered inside the cells. This theory solves the paradox of how it could be possible for antibodies to be raised against the intracellular antigenic targets of ANCA. Role in disease Disease associations ANCAs are associated with small vessel vasculitides including granulomatosis with polyangiitis, microscopic polyangiitis, primary pauci-immune necrotizing crescentic glomerulonephritis (a type of renal-limited microscopic polyangiitis), eosinophilic granulomatosis with polyangiitis and drug induced vasculitides. ANCA-associated vasculitides (AAV) have new classification criteria, updated in 2022. PR3 directed c-ANCA is present in 80-90% of granulomatosis with polyangiitis, 20-40% of microscopic polyangiitis, 20-40% of pauci-immune crescentic glomerulonephritis and 35% of eosinophilic granulomatosis with polyangiitis. c-ANCA (atypical) is present in 80% of cystic fibrosis (with BPI as the target antigen) and also in inflammatory bowel disease, primary sclerosing cholangitis and rheumatoid arthritis (with antibodies to multiple antigenic targets). p-ANCA with MPO specificity is found in 50% of microscopic polyangiitis, 50% of primary pauci-immune necrotizing crescentic glomerulonephritis and 35% of eosinophilic granulomatosis with polyangiitis. p-ANCA with specificity to other antigens are associated with inflammatory bowel disease, rheumatoid arthritis, drug-induced vasculitis, autoimmune liver disease, drug induced syndromes and parasitic infections. Atypical ANCA is associated with drug-induced systemic vasculitis, inflammatory bowel disease and rheumatoid arthritis. The ANCA-positive rate is much higher in patients with type 1 diabetes mellitus than in healthy individuals. Levamisole, which is a common adulterant of cocaine, can cause an ANCA positive vasculitis. The presence or absence of ANCA cannot indicate presence or absence of disease and results are correlated with clinical features. The association of ANCA and disease activity remains controversial; however, the reappearance of ANCA after treatment can indicate a relapse. Pathogenesis Although the pathogenic role of ANCA is still controversial, in vitro and animal models support the idea that the antibodies have a direct pathological role in the formation of small vessel vasculitides. 
MPO and PR3 specific ANCA can activate neutrophils and monocytes through their Fc and Fab'2 receptors, which can be enhanced by cytokines which cause neutrophils to display MPO and PR3 on their surface. Aberrant glycosylation of the MPO and PR3 specific ANCA enhances their ability to interact with activating Fc receptors on neutrophils. The activated neutrophils can then adhere to endothelial cells where degranulation occurs. This releases free oxygen radicals and lytic enzymes, resulting in damage to the endothelium via the induction of necrosis and apoptosis. Furthermore, neutrophils release chemoattractive signalling molecules that recruit more neutrophils to the endothelium, acting as a positive feedback loop. Animal models have shown that MPO antibodies can induce necrotizing crescentic glomerulonephritis and systemic small vessel vasculitis. In these animal models the formation of glomerulonephritis and vasculitis can occur in the absence of T-cells, however neutrophils must be present. Although ANCA titres have been noted to have limited correlation with disease activity, except for kidney disease, and with risk of relapse, this is explained by differences in the epitopes and affinity of ANCAs. ANCAs induce excess activation of neutrophils, resulting in the production of neutrophil extracellular traps (NETs), which cause damage to small blood vessels. In addition, in patients with active disease, treated with Rituximab, an anti-CD20 antibody which remove circulating B-cells, clinical remission correlates more to the decreasing number of circulating B-cells than decrease in ANCA titre, which in some patient does not change during treatment. The same study found that clinical relapse in some patients were in association with the return of circulating B-cells. Based on the above observations and that ANCA reactive B-cells can be found in circulation in patients with AAV, an alternative hypothesis have been proposed assigning a direct pathogenic role of these cells, whereby activated neutrophils and ANCA-reactive B-cells engage in intercellular cross-talk, which leads not only to neutrophil degranulation and inflammation but also to the proliferation and differentiation of ANCA-reactive B-cells. However, this hypothesis remains to be tested. Treatment Avacopan was approved for medical use in the United States to treat anti-neutrophil cytoplasmic autoantibody-associated vasculitis in October 2021. History ANCAs were originally described in Davies et al. in 1982 in segmental necrotising glomerulonephritis. The Second International ANCA Workshop, held in The Netherlands in May 1989, fixed the nomenclature on perinuclear vs. cytoplasmic patterns, and the antigens MPO and PR3 were discovered in 1988 and 1989, respectively. International ANCA Workshops have been carried out every two years. References External links images of pANCA and cANCA fluorescence images of ANCA Autoantibodies Blood tests Chemical pathology Immunologic tests
Anti-neutrophil cytoplasmic antibody
[ "Chemistry", "Biology" ]
2,387
[ "Biochemistry", "Blood tests", "Chemical pathology", "Immunologic tests" ]
1,438,605
https://en.wikipedia.org/wiki/Jacques%20Pucheran
Jacques Pucheran (2 June 1817 – 13 January 1895) was a French zoologist born in Clairac. He was a grandnephew to physiologist Étienne Serres (1786-1868). Pucheran accompanied the expedition on the Astrolabe between 1837 and 1840, under the command of Jules Dumont d'Urville, with fellow-naturalists Jacques Bernard Hombron and Honoré Jacquinot. On his return he contributed the ornithological section (with Jacquinot) of "Voyage au Pôle sud et dans l'Océanie sur les corvettes L'Astrolabe et La Zélée" (1841–1854). Pucheran worked as a zoologist and naturalist at the Muséum national d'histoire naturelle. He was the author of many works in the fields of ornithology, mammalogy, anthropology, etc. With Florent Prévost and Isidore Geoffroy Saint-Hilaire, he published a catalog involving species of mammals and birds kept in the collections at the museum, titled "Muséum d'histoire naturelle de Paris. Catalogue méthodique," etc. (1851). He was a member of the Société d'Agen académique, and a chevalier in the Légion d'honneur and the Ordre de la Conception de Portugal. He classified numerous zoological taxa, and the following are a few ornithological species that are named after him. Black-cheeked woodpecker, Melanerpes pucherani (Malherbe 1849) Crested guineafowl, Guttera pucherani (Hartlaub 1861) Red-billed ground-cuckoo, Neomorphus pucheranii (Deville 1851). References 1817 births 1894 deaths French zoologists Taxon authorities French ornithologists People from Lot-et-Garonne National Museum of Natural History (France) people
Jacques Pucheran
[ "Biology" ]
392
[ "Taxon authorities", "Taxonomy (biology)" ]
1,440,307
https://en.wikipedia.org/wiki/Dielectrophoresis
Dielectrophoresis (DEP) is a phenomenon in which a force is exerted on a dielectric particle when it is subjected to a non-uniform electric field. This force does not require the particle to be charged. All particles exhibit dielectrophoretic activity in the presence of electric fields. However, the strength of the force depends strongly on the medium and particles' electrical properties, on the particles' shape and size, as well as on the frequency of the electric field. Consequently, fields of a particular frequency can manipulate particles with great selectivity. This has allowed, for example, the separation of cells or the orientation and manipulation of nanoparticles and nanowires. Furthermore, a study of the change in DEP force as a function of frequency can allow the electrical (or electrophysiological in the case of cells) properties of the particle to be elucidated. Background and properties Although the phenomenon we now call dielectrophoresis was described in passing as far back as the early 20th century, it was only subject to serious study, named and first understood by Herbert Pohl in the 1950s. Recently, dielectrophoresis has been revived due to its potential in the manipulation of microparticles, nanoparticles and cells. Dielectrophoresis occurs when a polarizable particle is suspended in a non-uniform electric field. The electric field polarizes the particle, and the poles then experience a force along the field lines, which can be either attractive or repulsive according to the orientation of the dipole. Since the field is non-uniform, the pole experiencing the greatest electric field will dominate over the other, and the particle will move. The orientation of the dipole is dependent on the relative polarizability of the particle and medium, in accordance with Maxwell–Wagner–Sillars polarization. Since the direction of the force is dependent on field gradient rather than field direction, DEP will occur in AC as well as DC electric fields; polarization (and hence the direction of the force) will depend on the relative polarizabilities of particle and medium. If the particle moves in the direction of increasing electric field, the behavior is referred to as positive DEP (sometimes pDEP); if acting to move the particle away from high field regions, it is known as negative DEP (or nDEP). As the relative polarizabilities of the particle and medium are frequency-dependent, varying the energizing signal and measuring the way in which the force changes can be used to determine the electrical properties of particles; this also allows the elimination of electrophoretic motion of particles due to inherent particle charge. Phenomena associated with dielectrophoresis are electrorotation and traveling wave dielectrophoresis (TWDEP). These require complex signal generation equipment in order to create the required rotating or traveling electric fields, and as a result of this complexity have found less favor among researchers than conventional dielectrophoresis. Dielectrophoretic force The simplest theoretical model is that of a homogeneous sphere surrounded by a conducting dielectric medium. For a homogeneous sphere of radius $r$ and complex permittivity $\varepsilon_p^*$ in a medium with complex permittivity $\varepsilon_m^*$ the (time-averaged) DEP force is:

$$\langle \boldsymbol{F}_{\mathrm{DEP}} \rangle = 2\pi r^{3} \varepsilon_{m} \operatorname{Re}\!\left\{ \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\varepsilon_m^*} \right\} \nabla \left| \boldsymbol{E}_{\mathrm{rms}} \right|^{2}$$

The factor in curly brackets is known as the complex Clausius-Mossotti function and contains all the frequency dependence of the DEP force.
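The frequency dependence carried by the Clausius-Mossotti factor is easy to explore numerically. The sketch below evaluates Re{K(ω)} for a lossy dielectric sphere in a conductive medium, taking the usual complex permittivity ε* = ε − jσ/ω as an assumption, and scans for the crossover frequency at which the DEP force changes sign. All particle and medium properties are invented example values, not measurements of any real cell or bead.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_permittivity(eps_rel, sigma, omega):
    """Lossy dielectric: eps* = eps - j * sigma / omega."""
    return eps_rel * EPS0 - 1j * sigma / omega

def clausius_mossotti(eps_p_rel, sigma_p, eps_m_rel, sigma_m, omega):
    """K(omega) = (eps_p* - eps_m*) / (eps_p* + 2 eps_m*) for a homogeneous sphere."""
    ep = complex_permittivity(eps_p_rel, sigma_p, omega)
    em = complex_permittivity(eps_m_rel, sigma_m, omega)
    return (ep - em) / (ep + 2 * em)

if __name__ == "__main__":
    # Invented example: a weakly conductive polymer-like bead in a dilute salt solution.
    particle = dict(eps_p_rel=2.5, sigma_p=1e-2)    # relative permittivity, conductivity (S/m)
    medium = dict(eps_m_rel=78.0, sigma_m=1e-3)

    freqs = np.logspace(3, 9, 400)                  # 1 kHz .. 1 GHz
    re_k = np.array([clausius_mossotti(**particle, **medium, omega=2 * np.pi * f).real
                     for f in freqs])

    # A sign change in Re{K} marks the crossover between positive and negative DEP.
    for i in np.where(np.diff(np.sign(re_k)) != 0)[0]:
        print(f"DEP crossover near {freqs[i]:.3e} Hz")
    print(f"Re K at 1 kHz: {re_k[0]:+.3f},  at 1 GHz: {re_k[-1]:+.3f}")
```

With these example values the factor is positive at low frequency (conductivity-dominated, positive DEP) and negative at high frequency (permittivity-dominated, negative DEP), which is the kind of crossover measurement described later in this entry as a way of characterising particles.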
Where the particle consists of nested spheres – the most common example of which is the approximation of a spherical cell composed of an inner part (the cytoplasm) surrounded by an outer layer (the cell membrane) – then this can be represented by nested expressions for the shells and the way in which they interact, allowing the properties to be elucidated where there are sufficient parameters related to the number of unknowns being sought. For a more general field-aligned ellipsoid of radius r and length l with complex dielectric constant \varepsilon_p^* in a medium with complex dielectric constant \varepsilon_m^*, the time-dependent dielectrophoretic force is given by: F_\mathrm{DEP} = \frac{\pi r^2 l}{3}\varepsilon_m \operatorname{Re}\left\{\frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_m^*}\right\}\nabla\left|\vec{E}\right|^2 The complex dielectric constant is \varepsilon^* = \varepsilon + \frac{\sigma}{i\omega}, where \varepsilon is the dielectric constant, \sigma is the electrical conductivity, \omega is the field frequency, and i is the imaginary unit. This expression has been useful for approximating the dielectrophoretic behavior of particles such as red blood cells (as oblate spheroids) or long thin tubes (as prolate ellipsoids), allowing the approximation of the dielectrophoretic response of carbon nanotubes or tobacco mosaic viruses in suspension. These equations are accurate for particles when the electric field gradients are not very large (e.g., close to electrode edges) or when the particle is not moving along an axis in which the field gradient is zero (such as at the center of an axisymmetric electrode array), as the equations only take into account the dipole formed and not higher-order polarization. When the electric field gradients are large, or when there is a field null running through the center of the particle, higher-order terms become relevant and result in higher forces. To be precise, the time-dependent equation only applies to lossless particles, because loss creates a lag between the field and the induced dipole. When averaged, the effect cancels out and the equation holds true for lossy particles as well. An equivalent time-averaged equation can easily be obtained by replacing E with Erms or, for sinusoidal voltages, by dividing the right-hand side by 2. These models ignore the fact that cells have a complex internal structure and are heterogeneous. A multi-shell model in a low-conductivity medium can be used to obtain information on the membrane conductivity and the permittivity of the cytoplasm. For a cell with a shell surrounding a homogeneous core, with its surrounding medium considered as a layer, the overall dielectric response is obtained from a combination of the properties of the shell and core: \varepsilon_\mathrm{eff}^* = \varepsilon_2^*\,\frac{\left(\frac{r_2}{r_1}\right)^3 + 2\left(\frac{\varepsilon_1^* - \varepsilon_2^*}{\varepsilon_1^* + 2\varepsilon_2^*}\right)}{\left(\frac{r_2}{r_1}\right)^3 - \left(\frac{\varepsilon_1^* - \varepsilon_2^*}{\varepsilon_1^* + 2\varepsilon_2^*}\right)} where 1 is the core (in cellular terms, the cytoplasm), 2 is the shell (in a cell, the membrane), r1 is the radius from the centre of the sphere to the inside of the shell, and r2 is the radius from the centre of the sphere to the outside of the shell. Applications Dielectrophoresis can be used to manipulate, transport, separate and sort different types of particles. DEP is being applied in fields such as medical diagnostics, drug discovery, cell therapeutics, and particle filtration. DEP has also been used in conjunction with semiconductor chip technology for the development of DEP array technology for the simultaneous management of thousands of cells in microfluidic devices. Single microelectrodes on the floor of a flow cell are managed by a CMOS chip to form thousands of dielectrophoretic "cages", each capable of capturing and moving one single cell under control of routing software. As biological cells have dielectric properties, dielectrophoresis has many biological and medical applications. 
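The multi-shell idea can be sketched in the same style: the core and shell are first combined into a single effective complex permittivity (using the smeared-out single-shell combination given above), and that effective sphere is then fed into the Clausius-Mossotti factor. All membrane, cytoplasm and medium values below are hypothetical placeholders for illustration, not measured cell parameters.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def eps_star(eps_r, sigma, omega):
    """Complex permittivity eps* = eps_r*EPS0 - i*sigma/omega."""
    return eps_r * EPS0 - 1j * sigma / omega

def shelled_sphere(eps_core, eps_shell, r1, r2):
    """Effective complex permittivity of a core (1) covered by a shell (2),
    using the smeared-out single-shell combination given in the text."""
    g3 = (r2 / r1) ** 3
    K12 = (eps_core - eps_shell) / (eps_core + 2 * eps_shell)
    return eps_shell * (g3 + 2 * K12) / (g3 - K12)

def cell_cm_factor(freq, r1, r2, cyto=(60.0, 0.5), memb=(6.0, 1e-7), medium=(78.0, 1e-3)):
    """Clausius-Mossotti factor of a shelled 'cell'; each property pair is
    (relative permittivity, conductivity in S/m) and purely illustrative."""
    omega = 2 * np.pi * freq
    e_cell = shelled_sphere(eps_star(*cyto, omega), eps_star(*memb, omega), r1, r2)
    e_med = eps_star(*medium, omega)
    return (e_cell - e_med) / (e_cell + 2 * e_med)

# A hypothetical 5 um radius "cell" with a 5 nm membrane.
for f in (1e4, 1e5, 1e6, 1e7, 1e8):
    K = cell_cm_factor(f, r1=4.995e-6, r2=5e-6)
    print(f"{f:10.0f} Hz   Re(K) = {K.real:+.3f}")
```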
Instruments capable of separating cancer cells from healthy cells have been made, as have instruments for isolating single cells from forensic mixed samples. Platelets have been separated from whole blood with a DEP-activated cell sorter. DEP has made it possible to characterize and manipulate biological particles like blood cells, stem cells, neurons, pancreatic β cells, DNA, chromosomes, proteins and viruses. DEP can be used to separate particles with different sign polarizabilities, as they move in different directions at a given frequency of the applied AC field. DEP has been applied for the separation of live and dead cells, with the remaining live cells still viable after separation, or to force contact between selected single cells to study cell-cell interaction. DEP has been used to separate strains of bacteria and viruses. DEP can also be used to detect apoptosis soon after drug induction by measuring the changes in electrophysiological properties. As a cell characterisation tool DEP is mainly used to characterise cells by measuring the changes in their electrical properties. To do this, many techniques are available to quantify the dielectrophoretic response, as it is not possible to measure the DEP force directly. These techniques rely on indirect measures, giving a response proportional to the strength and direction of the force that then needs to be scaled to the model spectrum; most models therefore only consider the Clausius-Mossotti factor of a particle. The most used techniques are: collection rate measurements – the simplest and most used technique, in which electrodes are submerged in a suspension with a known concentration of particles and the particles that collect at the electrode are counted; crossover measurements – the crossover frequency between positive and negative DEP is measured to characterise particles; this technique is used for smaller particles (e.g. viruses) that are difficult to count with the previous technique; particle velocity measurements – the velocity and direction of the particles in an electric field gradient are measured; measurement of the levitation height – the levitation height of a particle is proportional to the negative DEP force that is applied, so this technique is good for characterising single particles and is mainly used for larger particles such as cells; and impedance sensing – particles collecting at the electrode edge influence the impedance of the electrodes, and this change can be monitored to quantify DEP. In order to study larger populations of cells, the properties can be obtained by analysing the dielectrophoretic spectra. Implementation Electrode geometries At the start, electrodes were made mainly from wires or metal sheets. Nowadays, the electric field in DEP is created by means of electrodes which minimize the magnitude of the voltage needed. This has been made possible using fabrication techniques such as photolithography, laser ablation and electron beam patterning. These small electrodes allow the handling of small bioparticles. The most used electrode geometries are isometric, polynomial, interdigitated, and crossbar. Isometric geometry is effective for particle manipulation with DEP, but repelled particles do not collect in well-defined areas, so separation into two homogeneous groups is difficult. The polynomial geometry is newer, producing well-defined differences between regions of high and low force, so that particles can be collected by positive and negative DEP. 
This electrode geometry showed that the electrical field was highest at the middle of the inter-electrode gaps. Interdigitated geometry comprises alternating electrode fingers of opposing polarities and is mainly used for dielectrophoretic trapping and analysis. Crossbar geometry is potentially useful for networks of interconnects. DEP-well electrodes These electrodes were developed to offer a high-throughput yet low-cost alternative to conventional electrode structures for DEP. Rather than use photolithographic methods or other microengineering approaches, DEP-well electrodes are constructed by stacking successive conductive and insulating layers in a laminate, after which multiple "wells" are drilled through the structure. If one examines the walls of these wells, the layers appear as interdigitated electrodes running continuously around the walls of the tube. When alternating conducting layers are connected to the two phases of an AC signal, a field gradient formed along the walls moves cells by DEP. DEP-wells can be used in two modes: for analysis or for separation. In the first, the dielectrophoretic properties of cells can be monitored by light absorption measurements: positive DEP attracts the cells to the wall of the well, so when the well is probed with a light beam the transmitted light intensity increases; the opposite is true for negative DEP, in which the light beam becomes obscured by the cells. Alternatively, the approach can be used to build a separator, where mixtures of cells are forced through large numbers (>100) of wells in parallel; those experiencing positive DEP are trapped in the device whilst the rest are flushed. Switching off the field allows release of the trapped cells into a separate container. The highly parallel nature of the approach means that the chip can sort cells at much higher speeds, comparable to those achieved by MACS and FACS. This approach offers many advantages over conventional, photolithography-based devices, including reduced cost, an increase in the amount of sample that can be analysed simultaneously, and the simplicity of cell motion, which is reduced to one dimension (cells can only move radially towards or away from the centre of the well). Devices manufactured to use the DEP-well principle are marketed under the DEPtech brand. Dielectrophoresis field-flow fractionation The utilization of the difference between dielectrophoretic forces exerted on different particles in nonuniform electric fields is known as DEP separation. The exploitation of DEP forces has been classified into two groups: DEP migration and DEP retention. DEP migration uses DEP forces that exert opposite signs of force on different particle types to attract some of the particles and repel others. DEP retention uses the balance between DEP and fluid-flow forces. Particles experiencing repulsive and weak attractive DEP forces are eluted by fluid flow, whereas particles experiencing strong attractive DEP forces are trapped at electrode edges against flow drag. Dielectrophoresis field-flow fractionation (DEP-FFF), introduced by Davis and Giddings, is a family of chromatographic-like separation methods. In DEP-FFF, DEP forces are combined with drag flow to fractionate a sample of different types of particles. Particles are injected into a carrier flow that passes through the separation chamber, with an external separating force (a DEP force) applied perpendicular to the flow. 
By means of different factors, such as diffusion and steric, hydrodynamic, dielectric and other effects, or a combination thereof, particles (<1 μm in diameter) with different dielectric or diffusive properties attain different positions away from the chamber wall, which, in turn, exhibit different characteristic concentration profiles. Particles that move further away from the wall reach higher positions in the parabolic velocity profile of the liquid flowing through the chamber and will be eluted from the chamber at a faster rate. Optical dielectrophoresis The use of photoconductive materials (for example, in lab-on-chip devices) allows for localized inducement of dielectrophoretic forces through the application of light. In addition, one can project an image to induce forces in a patterned illumination area, allowing for some complex manipulations. When manipulating living cells, optical dielectrophoresis provides a non-damaging alternative to optical tweezers, as the intensity of the light is about 1000 times lower. References Further reading External links AES Electrophoresis Society - Learning Center Biological cell separation using dielectrophoresis in a microfluidic device Sandia's dielectrophoresis device may revolutionize sample preparation Electrophoresis Analytical chemistry Nanotechnology Colloidal chemistry
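A rough feel for the DEP-retention balance used in DEP-FFF can be obtained by comparing an attractive DEP force near an electrode edge with the Stokes drag exerted by the carrier flow. The sketch below does this for one set of purely illustrative numbers; the particle size, Re{K}, field-gradient term and flow velocity are all assumptions, not values from the article.

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m
ETA_WATER = 1e-3   # dynamic viscosity of water, Pa*s

def dep_force(radius, eps_m_rel, re_K, grad_E2):
    """Time-averaged DEP force on a sphere: 2*pi*r^3 * eps_m * Re{K} * grad|Erms|^2."""
    return 2 * np.pi * radius**3 * eps_m_rel * EPS0 * re_K * grad_E2

def stokes_drag(radius, velocity, eta=ETA_WATER):
    """Stokes drag on a sphere moving at 'velocity' relative to the fluid."""
    return 6 * np.pi * eta * radius * velocity

# Illustrative numbers only: a 5 um particle with Re(K) = +0.5 near an electrode
# edge (grad|Erms|^2 = 5e13 V^2/m^3) in a 100 um/s carrier flow.
r = 5e-6
F_dep = dep_force(r, eps_m_rel=78.0, re_K=0.5, grad_E2=5e13)
F_drag = stokes_drag(r, velocity=100e-6)
print(f"F_DEP  = {F_dep:.2e} N")
print(f"F_drag = {F_drag:.2e} N")
print("retained at the electrode" if F_dep > F_drag else "eluted by the flow")
```

If the drag term dominates, the particle is eluted; if the DEP term dominates, it is retained at the electrode until the field is switched off.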
Dielectrophoresis
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
3,095
[ "Colloidal chemistry", "Instrumental analysis", "Materials science", "Colloids", "Surface science", "Biochemical separation processes", "Molecular biology techniques", "nan", "Nanotechnology", "Electrophoresis" ]
38,605,597
https://en.wikipedia.org/wiki/Coronal%20rain
Coronal rain is a phenomenon that occurs in the Sun's corona when hot plasma cools and condenses in strong magnetic fields and falls to the photosphere. It is usually associated with active regions. Coronal rain forms when impulsive heating from magnetic reconnection occurs. The material that makes up the coronal rain can be up to hundreds of times cooler than the surrounding environment. References External links July 2012: Coronal Rain The Sun's Coronal Rain Puzzle Solved : Discovery News Solar phenomena Articles containing video clips
Coronal rain
[ "Physics" ]
109
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
38,610,814
https://en.wikipedia.org/wiki/MALBAC
Multiple Annealing and Looping Based Amplification Cycles (MALBAC) is a quasilinear whole genome amplification method. Unlike conventional DNA amplification methods that are non-linear or exponential (in each cycle, DNA copied can serve as template for subsequent cycles), MALBAC utilizes special primers that allow amplicons to have complementary ends and therefore to loop, preventing DNA from being copied exponentially. This results in amplification of only the original genomic DNA and therefore reduces amplification bias. MALBAC is “used to create overlapped shotgun amplicons covering most of the genome”. For next generation sequencing, MALBAC is followed by regular PCR which is used to further amplify amplicons. Technological platform Prior to MALBAC, a single cell is isolated by various methods including laser capture microdissection, microfluidic devices, flow cytometry, or micro pipetting, then lysed. MALBAC single-cell whole-genome amplification involves 5 cycles of quenching, extending, melting, and looping. MALBAC primers The major advantage of MALBAC is that DNA is amplified almost linearly. The utilization of specialized primers enables looping of amplicons which then prevents them from being further amplified in subsequent cycles of MALBAC. These primers are 35 nucleotides long, with 8 variable nucleotides that hybridize to the templates and 27 common nucleotides. The common nucleotide sequence is GTG AGT GAT GGT TGA GGT AGT GTG GAG. The 8 variable nucleotides anneal randomly to the single stranded genomic DNA molecule. After one extension, semi-amplicon, an amplicon containing the common nucleotide sequence on only the 5’ end, is made. This semi-amplicon is used as a template for another round of extension, which then results in a full-amplicon, an amplicon where the 3’ end is complementary to the sequence on the 5’ end. Strand displacement MALBAC primers have variable components which allow them to randomly bind to the template DNA. This means that on a single fragment at any cycle, there could be multiple primers annealed to the fragment. A DNA polymerase such as one derived from Bacillus stearothermophilus (Bst polymerase) is able to displace the 5’ end of another upstream strand growing in the same direction. Error rate Bst DNA polymerase has an error rate of 1/10000 bases. Experimental workflow Single-cell isolation and lysis – pg of genomic DNA fragments (10 to 100 kb) isolated from a single-cell are used as templates. Melting – At 94 °C, double-stranded DNA molecules are melted into single stranded forms. Quenching – After melting, the reaction is immediately quenched to 0 °C, and MALBAC primers are added to the reaction. Extension – Bst DNA Polymerase (Large Fragment) extends the primers at 65 °C for 2 mins, creating semi-amplicons. Melting – The reaction is heated to back to 94 °C to separate the semi-amplicon from the genomic DNA template. Quenching - The reaction is quickly quenched at 0 °C, and followed by the addition of the same polymerase mix. The MALBAC primers efficiently bind to both semi-amplicons and genomic DNA template. Extension - Bst DNA Polymerase (Large Fragment) extends the primers at 65 °C for 2 mins. At this step, full-amplicons are made for those that used semi-amplicons as templates, and also semi-amplicons are made for those that used the genomic DNA template as templates. Melting – The reaction is heated to 94 °C to separate the amplicons from the template. Looping – For full amplicons, the 3’ end sequence is now complementary to the 5’ end. 
At 58 °C, the two ends hybridize forming a looped DNA. This prevents the full amplicon from being used as a template in subsequent MALBAC cycles. Repeat steps 6-9 five times – 5 cycles of linear MALBAC amplification. Regular PCR – The MALBAC product is further amplified by PCR. By using the 27 common nucleotides as primers, only the full amplicons are amplified. At the end of PCR, picograms of genetic material is amplified to microgram of DNA, yielding enough DNA to be sequenced. Applications MALBAC offers an unbiased approach to the amplification of DNA from a single cell. This method of single cell sequencing has a vast number of applications, many of which have yet to be exploited. MALBAC may aid in the analysis of forensic specimens, in pre-natal screening for genetic diseases, in understanding the development of reproductive cells, or in elucidating the complexity of a tumour. At its foundation, this technology allows researchers to observe the frequency with which mutations accumulate in single cells. Moreover, it permits the detection of chromosomal abnormalities and gene copy number variations (CNVs) within and between cells, and further facilitates the detection of uncommon mutations that result in single nucleotide polymorphisms (SNPs). In the field of cancer research, MALBAC has many applications. It may be used to examine intratumor heterogeneity, to identify genes which may confer an aggressive or metastatic phenotype, or to evaluate the potential for a tumour to develop drug resistance. A pioneering application of MALBAC was published in a December 2012 issue of Science and described the use of this technology to measure the mutation rate of the colon cancer cell line SW4802. By sequencing the amplified DNA of three kindred colon cancer cells in parallel with unrelated colon cancer cells from a different lineage, SNPs were identified with no false positives detected. It was also observed that purine-pyrimidine transversions occurred at a high frequency among the SNPs. The characterization of copy number and single nucleotide variations of single colon cancer cells highlighted the heterogeneity present within a tumour. MALBAC has been applied as a method to examine the genetic diversity amongst reproductive cells. By sequencing the genomes of 99 individual human sperm cells from an anonymous donor, MALBAC was used to examine genetic recombination events involving single gametes and ultimately provide insight into the dynamics of genetic recombination and its contribution to male infertility. Additionally, within an individual sperm, MALBAC identified duplicated or missing chromosomes, as well as SNPs or CNVs which could negatively affect fertility. Advantages MALBAC has resulted in many significant advances over other single cell sequencing techniques, foremost that it can report 93% of the genome of a single human cell. Some advantages of this technology include reduced amplification bias and increased genome coverage, the requirement for very little template DNA, and low rates of false positive and false negative mutations. Reduces amplification bias and increases genome coverage MALBAC is a form of whole genome sequencing which reduces the bias associated with exponential PCR amplification by using a quasilinear phase of pre-amplification. 
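The looping step described in the workflow can be illustrated with a toy simulation: build a primer from the 27-nt common sequence quoted above plus 8 random bases, form a mock semi-amplicon and full amplicon, and check that the full amplicon's 3' end is the reverse complement of its 5' common end. The annealing and extension helpers below are deliberate simplifications for illustration only, not a model of the published protocol.

```python
import random

COMMON = "GTGAGTGATGGTTGAGGTAGTGTGGAG"  # 27-nt common sequence from the article
BASES = "ACGT"
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMP)[::-1]

def make_primer():
    """MALBAC primer: 27 common nt followed by 8 random (variable) nt."""
    return COMMON + "".join(random.choice(BASES) for _ in range(8))

def extend(primer, template):
    """Toy extension: append the reverse complement of a template region
    downstream of where the primer's variable end annealed (illustrative only)."""
    site = random.randrange(0, len(template) - 200)
    return primer + revcomp(template[site:site + 200])

random.seed(0)
genome = "".join(random.choice(BASES) for _ in range(5000))

# Cycle 1: primer + genomic template -> semi-amplicon (common sequence at the 5' end only)
semi = extend(make_primer(), genome)

# Cycle 2: a new primer anneals to the semi-amplicon and is extended to its 5' end,
# so the product's 3' end carries the reverse complement of the common sequence.
full = make_primer() + revcomp(semi)

# Looping check: the 3' end is complementary to the 5' common sequence,
# so the two ends can hybridize and the full amplicon is protected from re-copying.
print("5' end :", full[:27])
print("3' end :", full[-27:])
print("loops  :", full[-27:] == revcomp(full[:27]))
```

Because the two ends hybridize, the looped amplicon removes itself from the template pool, which is what keeps the pre-amplification quasilinear.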
MALBAC utilizes five cycles of pre-amplification and primers containing a 27 nucleotide common sequence and an 8 nucleotide variable sequence to produce fragments of amplified DNA (amplicons) which loop back on themselves to prevent additional copying and cross-hybridization. These loops cannot be used as a template for amplification during MALBAC and therefore reduce the amplification bias commonly associated with the uneven exponential amplification of DNA fragments by polymerase chain reaction. MALBAC has been described to have better amplification uniformity than other methods of single sequencing, such as multiple displacement amplification (MDA). MDA does not utilize DNA looping and amplifies DNA in an exponential fashion, resulting in bias. Accordingly, the amplification bias associated with other single cell sequencing methods results in low coverage of the genome. The reduced bias associated with MALBAC has generated better genome sequence coverage than other single cell sequencing methods. Requires very little template DNA MALBAC can be used to amplify and subsequently sequence DNA when only one or a few cells are available, such as in the analysis of circulating tumour cells, pre-natal screens or forensic samples. Only a small amount of starting template (picograms of DNA) is required to initiate the process, and therefore it is an ideal method for the sequencing of a single human cell. Low incidence of false positive and false negative mutations Single cell sequencing often has a high rate of false negative mutations. A false negative mutation rate is defined as the probability of not detecting a real mutation, and this may occur due to amplification bias resulting from the loss, or drop-out, of an allele. The sequence coverage uniformity of MALBAC in comparison to other single cell sequencing techniques has enhanced the detection of SNPs and reduced allele dropout rate. Allelic dropout rate increases when an allele of a heterozygote fails to amplify resulting in identification of a ‘false homozygote.’ This may occur due to low concentration of DNA template, or the uneven amplification of template resulting in one allele of a heterozygote being copied more than the other. The allele dropout rate of MALBAC has been shown to be much lower (approximately 1%) compared to MDA which is approximately 65%. In contrast to MDA which has been shown to have a 41% SNP detection efficiency in comparison with bulk sequencing, MALBAC has been reported to have SNP detection efficiency of 76%. MALBAC has also been reported to have a low false positive rate. False positive mutations generated by MALBAC largely result from errors introduced by DNA polymerase during the first cycle of amplification that are further propagated during subsequent cycles. This false positive rate can be eliminated by sequencing 2-3 cells within a lineage derived from a single cell to verify the presence of a SNP, and by eliminating sequencing and amplification errors by sequencing unrelated cells from a separate lineage. Limitations Due to the requirement for a low quantity of template DNA, contamination of target DNA by the operator or environment can potentially confound sequencing results. In order to completely rule out false positives, it is necessary to compare the cell sequencing results to those obtained from 2-3 cells within the same lineage, as well as to cells from an unrelated lineage. 
DNA polymerase used to amplify the template DNA is error prone and can introduce sequencing errors in the first cycle of MALBAC which are subsequently propagated. Genome coverage at a single cell level using MALBAC is less uniform than bulk sequencing. Although MALBAC has improved the detection efficiency of single cell sequencing, it is unable to detect approximately one third of SNPs compared to bulk sequencing. External links NCBI database Yikon Genomics:Whole Genome Amplification (WGA) and Multiple Annealing and Looping-based Amplification Cycles (MALBAC) References Genomics DNA sequencing Biotechnology
MALBAC
[ "Chemistry", "Biology" ]
2,345
[ "nan", "Molecular biology techniques", "DNA sequencing", "Biotechnology" ]
2,894,560
https://en.wikipedia.org/wiki/History%20of%20artificial%20intelligence
The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain. The field of AI research was founded at a workshop held on the campus of Dartmouth College in 1956. Attendees of the workshop became the leaders of AI research for decades. Many of them predicted that machines as intelligent as humans would exist within a generation. The U.S. government provided millions of dollars with the hope of making this vision come true. Eventually, it became obvious that researchers had grossly underestimated the difficulty of this feat. In 1974, criticism from James Lighthill and pressure from the U.S. Congress led the U.S. and British Governments to stop funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese Government and the success of expert systems reinvigorated investment in AI, and by the late 1980s, the industry had grown into a billion-dollar enterprise. However, investors' enthusiasm waned in the 1990s, and the field was criticized in the press and avoided by industry (a period known as an "AI winter"). Nevertheless, research and funding continued to grow under other names. In the early 2000s, machine learning was applied to a wide range of problems in academia and industry. The success was due to the availability of powerful computer hardware, the collection of immense data sets, and the application of solid mathematical methods. Soon after, deep learning proved to be a breakthrough technology, eclipsing all other methods. The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, amongst other use cases. Investment in AI boomed in the 2020s. The recent AI boom, initiated by the development of transformer architecture, led to the rapid scaling and public releases of large language models (LLMs) like ChatGPT. These models exhibit human-like traits of knowledge, attention, and creativity, and have been integrated into various sectors, fueling exponential investment in AI. However, concerns about the potential risks and ethical implications of advanced AI have also emerged, prompting debate about the future of AI and its impact on society. Precursors Mythical, fictional, and speculative precursors Myth and legend In Greek mythology, Talos was a giant made of bronze who acted as guardian for the island of Crete. He would throw boulders at the ships of invaders and would complete 3 circuits around the island's perimeter daily. According to pseudo-Apollodorus' Bibliotheke, Hephaestus forged Talos with the aid of a cyclops and presented the automaton as a gift to Minos. In the Argonautica, Jason and the Argonauts defeated Talos by removing a plug near his foot, causing the vital ichor to flow out from his body and rendering him lifeless. Pygmalion was a legendary king and sculptor of Greek mythology, famously represented in Ovid's Metamorphoses. In the 10th book of Ovid's narrative poem, Pygmalion becomes disgusted with women when he witnesses the way in which the Propoetides prostitute themselves. 
Despite this, he makes offerings at the temple of Venus asking the goddess to bring to him a woman just like a statue he carved. Medieval legends of artificial beings In Of the Nature of Things, the Swiss alchemist Paracelsus describes a procedure that he claims can fabricate an "artificial man". By placing the "sperm of a man" in horse dung, and feeding it the "Arcanum of Mans blood" after 40 days, the concoction will become a living infant. The earliest written account regarding golem-making is found in the writings of Eleazar ben Judah of Worms in the early 13th century. During the Middle Ages, it was believed that the animation of a Golem could be achieved by insertion of a piece of paper with any of God's names on it, into the mouth of the clay figure. Unlike legendary automata like Brazen Heads, a Golem was unable to speak. Takwin, the artificial creation of life, was a frequent topic of Ismaili alchemical manuscripts, especially those attributed to Jabir ibn Hayyan. Islamic alchemists attempted to create a broad range of life through their work, ranging from plants to animals. In Faust: The Second Part of the Tragedy by Johann Wolfgang von Goethe, an alchemically fabricated homunculus, destined to live forever in the flask in which he was made, endeavors to be born into a full human body. Upon the initiation of this transformation, however, the flask shatters and the homunculus dies. Modern fiction By the 19th century, ideas about artificial men and thinking machines became a popular theme in fiction. Notable works like Mary Shelley's Frankenstein and Karel Čapek's R.U.R. (Rossum's Universal Robots) explored the concept of artificial life. Speculative essays, such as Samuel Butler's "Darwin among the Machines", and Edgar Allan Poe's "Maelzel's Chess Player" reflected society's growing interest in machines with artificial intelligence. AI remains a common topic in science fiction today. Automata Realistic humanoid automata were built by craftsman from many civilizations, including Yan Shi, Hero of Alexandria, Al-Jazari, Haroun al-Rashid, Jacques de Vaucanson, Leonardo Torres y Quevedo, Pierre Jaquet-Droz and Wolfgang von Kempelen. The oldest known automata were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsman had imbued these figures with very real minds, capable of wisdom and emotion—Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it". English scholar Alexander Neckham asserted that the Ancient Roman poet Virgil had built a palace with automaton statues. During the early modern period, these legendary automata were said to possess the magical ability to answer questions put to them. The late medieval alchemist and proto-Protestant Roger Bacon was purported to have fabricated a brazen head, having developed a legend of having been a wizard. These legends were similar to the Norse myth of the Head of Mímir. According to legend, Mímir was known for his intellect and wisdom, and was beheaded in the Æsir-Vanir War. Odin is said to have "embalmed" the head with herbs and spoke incantations over it such that Mímir's head remained able to speak wisdom to Odin. Odin then kept the head near him for counsel. Formal reasoning Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical—or "formal"—reasoning has a long history. 
Chinese, Indian and Greek philosophers all developed structured methods of formal deduction by the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to the word algorithm) and European scholastic philosophers such as William of Ockham and Duns Scotus. Spanish philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means; Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical meanings, in such ways as to produce all the possible knowledge. Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas. In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in Leviathan: "For reason ... is nothing but reckoning, that is adding and subtracting". Leibniz envisioned a universal language of reasoning, the characteristica universalis, which would reduce argumentation to calculation so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, down to their slates, and to say each other (with a friend as witness, if they liked): Let us calculate." These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research. The study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell's success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: "can all of mathematical reasoning be formalized?" His question was answered by Gödel's incompleteness proof, Turing's machine and Church's Lambda calculus. Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine—a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines. Computer science Calculating machines were designed or built in antiquity and throughout history by many people, including Gottfried Leibniz, Joseph Marie Jacquard, Charles Babbage, Percy Ludgate, Leonardo Torres Quevedo, Vannevar Bush, and others. Ada Lovelace speculated that Babbage's machine was "a thinking or ... reasoning machine", but warned "It is desirable to guard against the possibility of exaggerated ideas that arise as to the powers" of the machine. 
The first modern computers were the massive machines of the Second World War (such as Konrad Zuse's Z3, Alan Turing's Heath Robinson and Colossus, Atanasoff and Berry's and ABC and ENIAC at the University of Pennsylvania). ENIAC was based on the theoretical foundation laid by Alan Turing and developed by John von Neumann, and proved to be the most influential. Birth of artificial intelligence (1941-56) The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an "electronic brain". In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) explored several research directions that would be vital to later AI research. Alan Turing was among the first people to seriously investigate the theoretical possibility of "machine intelligence". The field of "artificial intelligence research" was founded as an academic discipline in 1956. Turing Test In 1950 Turing published a landmark paper "Computing Machinery and Intelligence", in which he speculated about the possibility of creating machines that think. In the paper, he noted that "thinking" is difficult to define and devised his famous Turing Test: If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was "thinking". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible and the paper answered all the most common objections to the proposition. The Turing Test was the first serious proposal in the philosophy of artificial intelligence. Artificial neural networks Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions in 1943. They were the first to describe what later researchers would call a neural network. The paper was influenced by Turing's paper 'On Computable Numbers' from 1936 using similar two-state boolean 'neurons', but was the first to apply it to neuronal function. One of the students inspired by Pitts and McCulloch was Marvin Minsky who was a 24-year-old graduate student at the time. In 1951 Minsky and Dean Edmonds built the first neural net machine, the SNARC. Minsky would later become one of the most important leaders and innovators in AI. Cybernetic robots Experimental robots such as W. Grey Walter's turtles and the Johns Hopkins Beast, were built in the 1950s. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry. Game AI In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. 
Arthur Samuel's checkers program, the subject of his 1959 paper "Some Studies in Machine Learning Using the Game of Checkers", eventually achieved sufficient skill to challenge a respectable amateur. Samuel's program was among the first uses of what would later be called machine learning. Game AI would continue to be used as a measure of progress in AI throughout its history. Symbolic reasoning and the Logic Theorist When access to digital computers became possible in the mid-fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines. In 1955, Allen Newell and future Nobel Laureate Herbert A. Simon created the "Logic Theorist", with help from J. C. Shaw. The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some. Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind." The symbolic reasoning paradigm they introduced would dominate AI research and funding until the middle 90s, as well as inspire the cognitive revolution. Dartmouth Workshop The Dartmouth workshop of 1956 was a pivotal event that marked the formal inception of AI as an academic discipline. It was organized by Marvin Minsky and John McCarthy, with the support of two senior scientists Claude Shannon and Nathan Rochester of IBM. The proposal for the conference stated they intended to test the assertion that "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it". The term "Artificial Intelligence" was introduced by John McCarthy at the workshop. The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research. At the workshop Newell and Simon debuted the "Logic Theorist". The workshop was the moment that AI gained its name, its mission, its first major success and its key players, and is widely considered the birth of AI. Cognitive revolution In the autumn of 1956, Newell and Simon also presented the Logic Theorist at a meeting of the Special Interest Group in Information Theory at the Massachusetts Institute of Technology (MIT). At the same meeting, Noam Chomsky discussed his generative grammar, and George Miller described his landmark paper "The Magical Number Seven, Plus or Minus Two". Miller wrote "I left the symposium with a conviction, more intuitive than rational, that experimental psychology, theoretical linguistics, and the computer simulation of cognitive processes were all pieces from a larger whole." This meeting was the beginning of the "cognitive revolution"—an interdisciplinary paradigm shift in psychology, philosophy, computer science and neuroscience. It inspired the creation of the sub-fields of symbolic artificial intelligence, generative linguistics, cognitive science, cognitive psychology, cognitive neuroscience and the philosophical schools of computationalism and functionalism. All these fields used related tools to model the mind and results discovered in one field were relevant to the others. 
The cognitive approach allowed researchers to consider "mental objects" like thoughts, plans, goals, facts or memories, often analyzed using high level symbols in functional networks. These objects had been forbidden as "unobservable" by earlier paradigms such as behaviorism. Symbolic mental objects would become the major focus of AI research and funding for the next several decades. Early successes (1956-1974) The programs developed in the years after the Dartmouth Workshop were, to most people, simply "astonishing": computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all. Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years. Government agencies like the Defense Advanced Research Projects Agency (DARPA, then known as "ARPA") poured money into the field. Artificial Intelligence laboratories were set up at a number of British and US universities in the latter 1950s and early 1960s. Approaches There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these: Reasoning, planning and problem solving as search Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. The principal difficulty was that, for many problems, the number of possible paths through the "maze" was astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics that would eliminate paths that were unlikely to lead to a solution. Newell and Simon tried to capture a general version of this algorithm in a program called the "General Problem Solver". Other "searching" programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and Symbolic Automatic Integrator (SAINT), written by Minsky's student James Slagle in 1961. Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of the robot Shakey. Natural language An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems. A semantic net represents concepts (e.g. "house", "door") as nodes, and relations among concepts as links between the nodes (e.g. "has-a"). The first AI program to use a semantic net was written by Ross Quillian and the most successful (and controversial) version was Roger Schank's Conceptual dependency theory. Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a computer program (see ELIZA effect). But in fact, ELIZA simply gave a canned response or repeated back what was said to it, rephrasing its response with a few grammar rules. ELIZA was the first chatbot. Micro-worlds In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. 
They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface. This paradigm led to innovative work in machine vision by Gerald Sussman, Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. Terry Winograd's SHRDLU could communicate in ordinary English sentences about the micro-world, plan operations and execute them. Perceptrons and early neural networks In the 1960s funding was primarily directed towards laboratories researching symbolic AI, however several people still pursued research in neural networks. The perceptron, a single-layer neural network was introduced in 1958 by Frank Rosenblatt (who had been a schoolmate of Marvin Minsky at the Bronx High School of Science). Like most AI researchers, he was optimistic about their power, predicting that a perceptron "may eventually be able to learn, make decisions, and translate languages." Rosenblatt was primarily funded by Office of Naval Research. Bernard Widrow and his student Ted Hoff built ADALINE (1960) and MADALINE (1962), which had up to 1000 adjustable weights. A group at Stanford Research Institute led by Charles A. Rosen and Alfred E. (Ted) Brain built two neural network machines named MINOS I (1960) and II (1963), mainly funded by U.S. Army Signal Corps. MINOS II had 6600 adjustable weights, and was controlled with an SDS 910 computer in a configuration named MINOS III (1968), which could classify symbols on army maps, and recognize hand-printed characters on Fortran coding sheets. Most of neural network research during this early period involved building and using bespoke hardware, rather than simulation on digital computers. However, partly due to lack of results and partly due to competition from symbolic AI research, the MINOS project ran out of funding in 1966. Rosenblatt failed to secure continued funding in the 1960s. In 1969, research came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Rosenblatt's predictions had been grossly exaggerated. The effect of the book was that virtually no research was funded in connectionism for 10 years. The competition for government funding ended with the victory of symbolic AI approaches over neural networks. Minsky (who had worked on SNARC) became a staunch objector to pure connectionist AI. Widrow (who had worked on ADALINE) turned to adaptive signal processing. The SRI group (which worked on MINOS) turned to symbolic AI and robotics. The main problem was the inability to train multilayered networks (versions of backpropagation had already been used in other fields but it was unknown to these researchers). The AI community became aware of backpropogation in the 80s, and, in the 21st century, neural networks would become enormously successful, fulfilling all of Rosenblatt's optimistic predictions. Rosenblatt did not live to see this, however, as he died in a boating accident in 1971. Optimism The first generation of AI researchers made these predictions about their work: 1958, H. A. 
Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem." 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do." 1967, Marvin Minsky: "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved." 1970, Marvin Minsky (in Life magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being." Financing In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (ARPA, later known as DARPA). The money was used to fund project MAC which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide $3 million each year until the 70s. DARPA made similar grants to Newell and Simon's program at Carnegie Mellon University and to Stanford University's AI Lab, founded by John McCarthy in 1963. Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965. These four institutions would continue to be the main centers of AI research and funding in academia for many years. The money was given with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them. This created a freewheeling atmosphere at MIT that gave birth to the hacker culture, but this "hands off" approach did not last. First AI Winter (1974–1980) In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised public expectations impossibly high, and when the promised results failed to materialize, funding targeted at AI was severely reduced. The lack of success indicated the techniques being used by AI researchers at the time were insufficient to achieve their goals. These setbacks did not affect the growth and progress of the field, however. The funding cuts only impacted a handful of major laboratories and the critiques were largely ignored. General public interest in the field continued to grow, the number of researchers increased dramatically, and new ideas were explored in logic programming, commonsense reasoning and many other areas. Historian Thomas Haigh argued in 2023 that there was no winter, and AI researcher Nils Nilsson described this period as the most "exciting" time to work in AI. Problems In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys". AI researchers had begun to run into several limits that would be only conquered decades later, and others that still stymie the field in the 2020s: Limited computer power: There was not enough memory or processing speed to accomplish anything truly useful. For example: Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only 20 words, because that was all that would fit in memory. Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. 
Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy. "With enough horsepower," he wrote, "anything will fly". Intractability and the combinatorial explosion: In 1972 Richard Karp (building on Stephen Cook's 1971 theorem) showed there are many problems that can only be solved in exponential time. Finding optimal solutions to these problems requires extraordinary amounts of computer time, except when the problems are trivial. This limitation applied to all symbolic AI programs that used search trees and meant that many of the "toy" solutions used by AI would never scale to useful systems. Moravec's paradox: Early AI research had been very successful at getting computers to do "intelligent" tasks like proving theorems, solving geometry problems and playing chess. Their success at these intelligent tasks convinced them that the problem of intelligent behavior had been largely solved. However, they utterly failed to make progress on "unintelligent" tasks like recognizing a face or crossing a room without bumping into anything. By the 1980s, researchers would realize that symbolic reasoning was utterly unsuited for these perceptual and sensorimotor tasks and that there were limits to this approach. The breadth of commonsense knowledge: Many important artificial intelligence applications like vision or natural language require enormous amounts of information about the world: the program needs to have some idea of what it might be looking at or what it is talking about. This requires that the program know most of the same things about the world that a child does. Researchers soon discovered that this was a vast amount of information with billions of atomic facts. No one in 1970 could build a database large enough and no one knew how a program might learn so much information. Representing commonsense reasoning: A number of related problems appeared when researchers tried to represent commonsense reasoning using formal logic or symbols. Descriptions of very ordinary deductions tended to get longer and longer the more one worked on them, as more and more exceptions, clarifications and distinctions were required. However, when people thought about ordinary concepts they did not rely on precise definitions, rather they seemed to make hundreds of imprecise assumptions, correcting them when necessary using their entire body of commonsense knowledge. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise." Decrease in funding The agencies which funded AI research, such as the British government, DARPA and the National Research Council (NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected AI research. The pattern began in 1966 when the Automatic Language Processing Advisory Committee (ALPAC) report criticized machine translation efforts. After spending $20 million, the NRC ended all support. In 1973, the Lighthill report on the state of AI research in the UK criticized the failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country. (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.) DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of $3 million. Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. 
"Many researchers were caught up in a web of increasing exaggeration." However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research". Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA, which instead directed money at specific projects with clear objectives, such as autonomous tanks and battle management systems. The major laboratories (MIT, Stanford, CMU and Edinburgh) had been receiving generous support from their governments, and when it was withdrawn, these were the only places that were seriously impacted by the budget cuts. The thousands of researchers outside these institutions and the many more thousands that were joining the field were unaffected. Philosophical and ethical critiques Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could. Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how". John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as "thinking". These critiques were not taken seriously by AI researchers. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know how" or "intentionality" made to an actual computer program. MIT's Minsky said of Dreyfus and Searle "they misunderstand, and should be ignored." Dreyfus, who also taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me." Joseph Weizenbaum, the author of ELIZA, was also an outspoken critic of Dreyfus' positions, but he "deliberately made it plain that [his AI colleagues' treatment of Dreyfus] was not the way to treat a human being," and was unprofessional and childish. Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote a "computer program which can conduct psychotherapeutic dialogue" based on ELIZA. Weizenbaum was disturbed that Colby saw a mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life. Logic at Stanford, CMU and Edinburgh Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal. In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems. 
A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with the French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog. Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition. Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided supporting evidence. McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems—not machines that think as people do. MIT's "anti-logic" approach Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. MIT chose instead to focus on writing programs that solved a given task without using high-level abstract definitions or general theories of cognition, and measured performance by iterative testing, rather than arguments from first principles. Schank described their "anti-logic" approaches as scruffy, as opposed to the neat paradigm used by McCarthy, Kowalski, Feigenbaum, Newell and Simon. In 1975, in a seminal paper, Minsky noted that many of his fellow researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on (none of which are true for all birds). Minsky associated these assumptions with the general category and they could be inherited by the frames for subcategories and individuals, or over-ridden as necessary. He called these structures frames. Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English. Frames would eventually be widely used in software engineering under the name object-oriented programming. The logicians rose to the challenge. Pat Hayes claimed that "most of 'frames' is just a new syntax for parts of first-order logic." But he noted that "there are one or two apparently minor details which give a lot of trouble, however, especially defaults". Ray Reiter admitted that "conventional logics, such as first-order logic, lack the expressive power to adequately represent the knowledge required for reasoning by default". He proposed augmenting first-order logic with a closed world assumption that a conclusion holds (by default) if its contrary cannot be shown. He showed how such an assumption corresponds to the common sense assumption made in reasoning with frames. He also showed that it has its "procedural equivalent" as negation as failure in Prolog. 
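A minimal sketch of the frame idea just described, showing default properties inherited from a general category and overridden by more specific frames; the frame names and slots are invented for illustration.

```python
# Frames as dictionaries with a parent link; slot lookups fall back to the
# parent's defaults unless a more specific frame overrides them.

frames = {
    "bird":    {"parent": None,   "flies": True,  "eats": "worms"},
    "penguin": {"parent": "bird", "flies": False},   # override the default
    "tweety":  {"parent": "bird"},                    # inherits everything
}

def lookup(frame_name, slot):
    """Walk up the inheritance chain until the slot is found."""
    while frame_name is not None:
        frame = frames[frame_name]
        if slot in frame:
            return frame[slot]
        frame_name = frame["parent"]
    return None  # nothing known: treat as not holding

print(lookup("tweety", "flies"))   # True  (default inherited from "bird")
print(lookup("penguin", "flies"))  # False (default overridden)
```

Treating a slot that appears nowhere in the chain as simply not holding is loosely the same move as Reiter's closed world assumption and Prolog's negation as failure: what cannot be shown is assumed false.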
The closed world assumption, as formulated by Reiter, "is not a first-order notion. (It is a meta notion.)" However, Keith Clark showed that negation as finite failure can be understood as reasoning implicitly with definitions in first-order logic including a unique name assumption that different terms denote different individuals. During the late 1970s and throughout the 1980s, a variety of logics and extensions of first-order logic were developed both for negation as failure in logic programming and for default reasoning more generally. Collectively, these logics have become known as non-monotonic logics. Boom (1980–1987) In the 1980s, a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. Governments provided substantial funding, such as Japan's fifth generation computer project and the U.S. Strategic Computing Initiative. "Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988." Expert systems become widely used An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach. Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point. In 1980, an expert system called R1 was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986. Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion. Government funding increases In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. Much to the chagrin of scruffies, they initially chose Prolog as the primary computer language for the project. Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large scale projects in AI and information technology. DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988. Knowledge revolution The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. 
"AI researchers were beginning to suspect—reluctantly, for it violated the scientific canon of parsimony—that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways," writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay". Knowledge based systems and knowledge engineering became a major focus of AI research in the 1980s. It was hoped that vast databases would solve the commonsense knowledge problem and provide the support that commonsense reasoning required. In the 1980s some researchers attempted to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started a database called Cyc, argued that there is no shortcut ― the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. New directions in the 1980s Although symbolic knowledge representation and logical reasoning produced useful applications in the 80s and received massive amounts of funding, it was still unable to solve problems in perception, robotics, learning and common sense. A small number of scientists and engineers began to doubt that the symbolic approach would ever be sufficient for these tasks and developed other approaches, such as "connectionism", robotics, "soft" computing and reinforcement learning. Nils Nilsson called these approaches "sub-symbolic". Revival of neural networks: "connectionism" In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information, and provably converges after enough time under any fixed condition. It was a breakthrough, as it was previously thought that nonlinear networks would, in general, evolve chaotically. Around the same time, Geoffrey Hinton and David Rumelhart popularized a method for training neural networks called "backpropagation". These two developments helped to revive the exploration of artificial neural networks. Neural networks, along with several other similar models, received widespread attention after the 1986 publication of the Parallel Distributed Processing, a two volume collection of papers edited by Rumelhart and psychologist James McClelland. The new field was christened "connectionism" and there was a considerable debate between advocates of symbolic AI and the "connectionists". Hinton called symbols the "luminous aether of AI" – that is, an unworkable and misleading model of intelligence. This was a direct attack on the principles that inspired the cognitive revolution. In 1990, Yann LeCun at Bell Labs used convolutional neural networks to recognize handwritten digits. The system was used widely in 90s, reading zip codes and personal checks. This was the first genuinely useful application of neural networks. Robotics and embodied reason Rodney Brooks, Hans Moravec and others argued that, in order to show real intelligence, a machine needs to have a body — it needs to perceive, move, survive and deal with the world. Sensorimotor skills are essential to higher level skills such as commonsense reasoning. They can't be efficiently implemented using abstract symbolic reasoning, so AI should solve the problems of perception, mobility, manipulation and survival without using symbolic representation at all. 
These robotics researchers advocated building intelligence "from the bottom up". A precursor to this idea was David Marr, who had come to MIT in the late 1970s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.) In his 1990 paper "Elephants Don't Play Chess," robotics researcher Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough." In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the "embodied mind thesis". Soft computing and probabilistic reasoning Soft computing uses methods that work with incomplete and imprecise information. They do not attempt to give precise, logical answers, but give results that are only "probably" correct. This allowed them to solve problems that precise symbolic methods could not handle. Press accounts often claimed these tools could "think like a human". Judea Pearl's Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, an influential 1988 book, brought probability and decision theory into AI. Fuzzy logic, developed by Lotfi Zadeh in the 60s, began to be more widely used in AI and robotics. Evolutionary computation and artificial neural networks also handle imprecise information, and are classified as "soft". In the 90s and early 2000s many other soft computing tools were developed and put into use, including Bayesian networks, hidden Markov models, information theory and stochastic modeling. These tools in turn depended on advanced mathematical techniques such as classical optimization. For a time in the 1990s and early 2000s, these soft tools were studied by a subfield of AI called "computational intelligence". Reinforcement learning Reinforcement learning gives an agent a reward every time it performs a desired action well, and may give negative rewards (or "punishments") when it performs poorly. It was described in the first half of the twentieth century by psychologists using animal models, such as Thorndike, Pavlov and Skinner. In the 1950s, Alan Turing and Arthur Samuel foresaw the role of reinforcement learning in AI. A successful and influential research program was led by Richard Sutton and Andrew Barto beginning in 1972. Their collaboration revolutionized the study of reinforcement learning and decision making over the following four decades. In 1988, Sutton described machine learning in terms of decision theory (i.e., the Markov decision process). This gave the subject a solid theoretical foundation and access to a large body of theoretical results developed in the field of operations research. Also in 1988, Sutton and Barto developed the "temporal difference" (TD) learning algorithm, where the agent is rewarded only when its predictions about the future show improvement. It significantly outperformed previous algorithms. TD-learning was used by Gerald Tesauro in 1992 in the program TD-Gammon, which played backgammon as well as the best human players. 
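A minimal sketch of the tabular TD(0) update at the heart of the temporal-difference method described above; the states, rewards, and the step-size and discount values are illustrative assumptions.

```python
# Tabular TD(0): nudge the value estimate of the current state toward the
# observed reward plus the discounted estimate of the next state.

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    td_error = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * td_error
    return V

# One tiny episode over three states, with all values initialized to zero.
V = {"s0": 0.0, "s1": 0.0, "s2": 0.0}
for (s, r, s_next) in [("s0", 0.0, "s1"), ("s1", 1.0, "s2")]:
    V = td0_update(V, s, r, s_next)
print(V)  # s1 moves toward the reward; s0 catches up over later episodes
```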
The program learned the game by playing against itself with zero prior knowledge. In an interesting case of interdisciplinary convergence, neurologists discovered in 1997 that the dopamine reward system in brains also uses a version of the TD-learning algorithm. TD learning would become highly influential in the 21st century, used in both AlphaGo and AlphaZero. Second AI winter The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. As dozens of companies failed, the perception in the business world was that the technology was not viable. The damage to AI's reputation would last into the 21st century. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence". Over the next 20 years, AI consistently delivered working solutions to specific isolated problems. By the late 1990s, it was being used throughout the technology industry, although somewhat behind the scenes. The success was due to increasing computer power, by collaboration with other fields (such as mathematical optimization and statistics) and using the highest standards of scientific accountability. By 2000, AI had achieved some of its oldest goals. The field was both more cautious and more successful than it had ever been. AI winter The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow. Their fears were well founded: in the late 1980s and early 1990s, AI suffered a series of financial setbacks. The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight. Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, and they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs). Expert systems proved useful, but only in a few special contexts. In the late 1980s, the Strategic Computing Initiative cut funding to AI "deeply and brutally". New leadership at DARPA had decided that AI was not "the next wave" and directed funds towards projects that seemed more likely to produce immediate results. By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation" would not be accomplished for another 40 years. As with other AI projects, expectations had run much higher than what was actually possible. Over 300 AI companies had shut down, gone bankrupt, or been acquired by the end of 1993, effectively ending the first commercial wave of AI. 
In 1994, HP Newquist stated in The Brain Makers that "The immediate future of artificial intelligence—in its commercial form—seems to rest in part on the continued success of neural networks." AI behind the scenes In the 1990s, algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems and their solutions proved to be useful throughout the technology industry, such as data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google's search engine. The field of AI received little or no credit for these successes in the 1990s and early 2000s. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science. Nick Bostrom explains: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, "cognitive systems" or computational intelligence. In part, this may have been because they considered their field to be fundamentally different from AI, but also because the new names helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers." Mathematical rigor, greater collaboration and a narrow focus AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past. Most of the new directions in AI relied heavily on mathematical models, including artificial neural networks, probabilistic reasoning, soft computing and reinforcement learning. In the 90s and 2000s, many other highly mathematical tools were adapted for AI. These tools were applied to machine learning, perception and mobility. There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like statistics, mathematics, electrical engineering, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous "scientific" discipline. Another key reason for the success in the 90s was that AI researchers focussed on specific problems with verifiable solutions (an approach later derided as narrow AI). This provided useful tools in the present, rather than speculation about the future. Intelligent agents A new paradigm called "intelligent agents" became widely accepted during the 1990s. Although earlier researchers had proposed modular "divide and conquer" approaches to AI, the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell, Leslie P. Kaelbling, and others brought concepts from decision theory and economics into the study of AI. When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete. An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. 
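A minimal sketch of the agent abstraction just defined: a mapping from percepts to actions run in a perceive-act loop. The thermostat-style environment and the method names are illustrative assumptions, not a standard API.

```python
# The intelligent-agent abstraction: a percept comes in, an action goes out,
# chosen to further the agent's goal.

class ThermostatAgent:
    """A trivially simple agent whose goal is to keep a room near 20 degrees C."""

    def __init__(self, target=20.0):
        self.target = target

    def act(self, percept_temperature):
        if percept_temperature < self.target - 0.5:
            return "heat_on"
        if percept_temperature > self.target + 0.5:
            return "heat_off"
        return "do_nothing"

def run(agent, temperatures):
    """Perceive-act loop over a sequence of sensed temperatures."""
    return [agent.act(t) for t in temperatures]

print(run(ThermostatAgent(), [18.0, 19.8, 21.0]))
# ['heat_on', 'do_nothing', 'heat_off']
```

Under this definition even a thermostat counts as a (very simple) intelligent agent, which is exactly the generality the paradigm was after.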
By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence. The paradigm gave researchers license to study isolated problems and to disagree about methods, but still retain hope that their work could be combined into an agent architecture that would be capable of general intelligence. Milestones and Moore's law On May 11, 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to traffic laws. These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous increase in the speed and capacity of computers by the 90s. In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951. This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers double every two years. The fundamental problem of "raw computer power" was slowly being overcome. Big data, deep learning, AGI (2005–2017) In the first decades of the 21st century, access to large amounts of data (known as "big data"), cheaper and faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. A turning point was the success of deep learning around 2012 which improved the performance of machine learning on many tasks, including image and video processing, text analysis, and speech recognition. Investment in AI increased along with its capabilities, and by 2016, the market for AI-related products, hardware, and software reached more than $8 billion, and the New York Times reported that interest in AI had reached a "frenzy". In 2002, Ben Goertzel and others became concerned that AI had largely abandoned its original goal of producing versatile, fully intelligent machines, and argued in favor of more direct research into artificial general intelligence. By the mid-2010s several companies and institutions had been founded to pursue Artificial General Intelligence (AGI), such as OpenAI and Google's DeepMind. During the same period, new insights into superintelligence raised concerns that AI was an existential threat. The risks and unintended consequences of AI technology became an area of serious academic research after 2016. Big data and big machines The success of machine learning in the 2000s depended on the availability of vast amounts of training data and faster computers. Russell and Norvig wrote that the "improvement in performance obtained by increasing the size of the data set by two or three orders of magnitude outweighs any improvement that can be made by tweaking the algorithm." Geoffrey Hinton recalled that back in the 90s, the problem was that "our labeled datasets were thousands of times too small. [And] our computers were millions of times too slow." 
This was no longer true by 2010. The most useful data in the 2000s came from curated, labeled data sets created specifically for machine learning and AI. In 2007, a group at UMass Amherst released Labeled Faces in the Wild, an annotated set of images of faces that was widely used to train and test face recognition systems in the years that followed. Fei-Fei Li developed ImageNet, a database of three million images captioned by volunteers using the Amazon Mechanical Turk. Released in 2009, it was a useful body of training data and a benchmark for testing for the next generation of image processing systems. Google released word2vec in 2013 as an open source resource. It used large amounts of text scraped from the internet and word embedding to create a numeric vector to represent each word. Users were surprised at how well it was able to capture word meanings, for example, ordinary vector arithmetic would give equivalences like China + river ≈ Yangtze and London − England + France ≈ Paris. This technique in particular would be essential for the development of large language models in the late 2010s. The explosive growth of the internet gave machine learning programs access to billions of pages of text and images that could be scraped. And, for specific problems, large privately held databases contained the relevant data. McKinsey Global Institute reported that "by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data". This collection of information was known in the 2000s as big data. In a Jeopardy! exhibition match in February 2011, IBM's question answering system Watson defeated the two best Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Watson's expertise would have been impossible without the information available on the internet. Deep learning In 2012, AlexNet, a deep learning model, developed by Alex Krizhevsky, won the ImageNet Large Scale Visual Recognition Challenge, with significantly fewer errors than the second-place winner. Krizhevsky worked with Geoffrey Hinton at the University of Toronto. This was a turning point in machine learning: over the next few years dozens of other approaches to image recognition were abandoned in favor of deep learning. Deep learning uses a multi-layer perceptron. Although this architecture has been known since the 60s, getting it to work requires powerful hardware and large amounts of training data. Before these became available, improving performance of image processing systems required hand-crafted ad hoc features that were difficult to implement. Deep learning was simpler and more general. Deep learning was applied to dozens of problems over the next few years (such as speech recognition, machine translation, medical diagnosis, and game playing). In every case it showed enormous gains in performance. Investment and interest in AI boomed as a result. The alignment problem It became fashionable in the 2000s to begin talking about the future of AI again and several popular books considered the possibility of superintelligent machines and what they might mean for human society. Some of this was optimistic (such as Ray Kurzweil's The Singularity is Near), but others warned that a sufficiently powerful AI was an existential threat to humanity, such as Nick Bostrom and Eliezer Yudkowsky. The topic became widely covered in the press and many leading intellectuals and politicians commented on the issue. 
AI programs in the 21st century are defined by their goals – the specific measures that they are designed to optimize. Nick Bostrom's influential 2014 book Superintelligence argued that, if one isn't careful about defining these goals, the machine may cause harm to humanity in the process of achieving a goal. Stuart J. Russell used the example of an intelligent robot that kills its owner to prevent itself from being unplugged, reasoning "you can't fetch the coffee if you're dead". (This problem is known by the technical term "instrumental convergence".) The solution is to align the machine's goal function with the goals of its owner and humanity in general. Thus, the problem of mitigating the risks and unintended consequences of AI became known as "the value alignment problem" or AI alignment. At the same time, machine learning systems had begun to have disturbing unintended consequences. Cathy O'Neil explained how statistical algorithms had been among the causes of the 2008 economic crash, Julia Angwin of ProPublica argued that the COMPAS system used by the criminal justice system exhibited racial bias under some measures, others showed that many machine learning systems exhibited some form of racial bias, and there were many other examples of dangerous outcomes that had resulted from machine learning systems. In 2016, the election of Donald Trump and the controversy over the COMPAS system illuminated several problems with the current technological infrastructure, including misinformation, social media algorithms designed to maximize engagement, the misuse of personal data and the trustworthiness of predictive models. Issues of fairness and unintended consequences became significantly more popular at AI conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The value alignment problem became a serious field of academic study. Artificial general intelligence research In the early 2000s, several researchers became concerned that mainstream AI was too focused on "measurable performance in specific applications" (known as "narrow AI") and had abandoned AI's original goal of creating versatile, fully intelligent machines. An early critic was Nils Nilsson in 1995, and similar opinions were published by AI elder statesmen John McCarthy, Marvin Minsky, and Patrick Winston in 2007–2009. Minsky organized a symposium on "human-level AI" in 2004. Ben Goertzel adopted the term "artificial general intelligence" for the new sub-field, founding a journal and holding conferences beginning in 2008. The new field grew rapidly, buoyed by the continuing success of artificial neural networks and the hope that they were the key to AGI. Several competing companies, laboratories and foundations were founded to develop AGI in the 2010s. DeepMind was founded in 2010 by Demis Hassabis, Shane Legg and Mustafa Suleyman, with funding from Peter Thiel and later Elon Musk. The founders and financiers were deeply concerned about AI safety and the existential risk of AI. DeepMind's founders had a personal connection with Yudkowsky and Musk was among those who were actively raising the alarm. Hassabis was both worried about the dangers of AGI and optimistic about its power; he hoped they could "solve AI, then solve everything else." The New York Times wrote in 2023 "At the heart of this competition is a brain-stretching paradox. 
The people who say they are most worried about AI are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep AI from endangering Earth." In 2012, Geoffrey Hinton (who had been leading neural network research since the 80s) was approached by Baidu, which wanted to hire him and all his students for an enormous sum. Hinton decided to hold an auction and, at a Lake Tahoe AI conference, they sold themselves to Google for a price of $44 million. Hassabis took notice and sold DeepMind to Google in 2014, on the condition that it would not accept military contracts and would be overseen by an ethics board. Larry Page of Google, unlike Musk and Hassabis, was an optimist about the future of AI. Musk and Page became embroiled in an argument about the risk of AGI at Musk's 2015 birthday party. They had been friends for decades but stopped speaking to each other shortly afterwards. Musk attended the one and only meeting of DeepMind's ethics board, where it became clear that Google was uninterested in mitigating the harm of AGI. Frustrated by his lack of influence, he founded OpenAI in 2015, enlisting Sam Altman to run it and hiring top scientists. OpenAI began as a non-profit, "free from the economic incentives that were driving Google and other corporations." Musk became frustrated again and left the company in 2018. OpenAI turned to Microsoft for continued financial support and Altman and OpenAI formed a for-profit version of the company with more than $1 billion in financing. In 2021, Dario Amodei and 14 other scientists left OpenAI over concerns that the company was putting profits above safety. They formed Anthropic, which soon had $6 billion in financing from Google and Amazon. Large language models, AI boom (2020–present) The AI boom started with the initial development of key architectures and algorithms such as the transformer architecture in 2017, leading to the scaling and development of large language models exhibiting human-like traits of knowledge, attention and creativity. The new AI era began around 2020–2023, with the public release of scaled large language models (LLMs) such as ChatGPT. Transformer architecture and large language models In 2017, the transformer architecture was proposed by Google researchers. It exploits an attention mechanism and became widely used in large language models. Large language models, based on the transformer, were developed by AGI companies: OpenAI released GPT-3 in 2020, and DeepMind released Gato in 2022. These are foundation models: they are trained on vast quantities of unlabeled data and can be adapted to a wide range of downstream tasks. These models can discuss a huge number of topics and display general knowledge. The question naturally arises: are these models an example of artificial general intelligence? Bill Gates was skeptical of the new technology and the hype that surrounded AGI. However, Altman presented him with a live demo of GPT-4 passing an advanced biology test. Gates was convinced. In 2023, Microsoft Research tested the model with a large variety of tasks, and concluded that "it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system". In 2024, OpenAI o3, a type of advanced reasoning model developed by OpenAI, was announced. 
On the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) benchmark developed by François Chollet in 2019, the model achieved an unofficial score of 87.5% on the semi-private test, surpassing the typical human score of 84%. The benchmark is supposed to be a necessary, but not sufficient test for AGI. Speaking of the benchmark, Chollet has said "You’ll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible." AI boom Investment in AI grew exponentially after 2020, with venture capital funding for generative AI companies increasing dramatically. Total AI investments rose from $18 billion in 2014 to $119 billion in 2021, with generative AI accounting for approximately 30% of investments by 2023. According to metrics from 2017 to 2021, the United States outranked the rest of the world in terms of venture capital funding, number of startups, and AI patents granted. The commercial AI scene became dominated by American Big Tech companies, whose investments in this area surpassed those from U.S.-based venture capitalists. OpenAI's valuation reached $86 billion by early 2024, while NVIDIA's market capitalization surpassed $3.3 trillion by mid-2024, making it the world's largest company by market capitalization as the demand for AI-capable GPUs surged. 15.ai, launched in March 2020 by an anonymous MIT researcher, was one of the earliest examples of generative AI gaining widespread public attention during the initial stages of the AI boom. The free web application demonstrated the ability to clone character voices using neural networks with minimal training data, requiring as little as 15 seconds of audio to reproduce a voice—a capability later corroborated by OpenAI in 2024. The service went viral on social media platforms in early 2021, allowing users to generate speech for characters from popular media franchises, and became particularly notable for its pioneering role in popularizing AI voice synthesis for creative content and memes. ChatGPT was launched on November 30, 2022, marking a pivotal moment in artificial intelligence's public adoption. Within days of its release it went viral, gaining over 100 million users in two months and becoming the fastest-growing consumer software application in history. The chatbot's ability to engage in human-like conversations, write code, and generate creative content captured public imagination and led to rapid adoption across various sectors including education, business, and research. ChatGPT's success prompted unprecedented responses from major technology companies—Google declared a "code red" and rapidly launched Gemini (formerly known as Google Bard), while Microsoft incorporated the technology into Bing Chat. The rapid adoption of these AI technologies sparked intense debate about their implications. Notable AI researchers and industry leaders voiced both optimism and concern about the accelerating pace of development. In March 2023, over 20,000 signatories, including computer scientist Yoshua Bengio, Elon Musk, and Apple co-founder Steve Wozniak, signed an open letter calling for a pause in advanced AI development, citing "profound risks to society and humanity." However, other prominent researchers like Juergen Schmidhuber took a more optimistic view, emphasizing that the majority of AI research aims to make "human lives longer and healthier and easier." 
By mid-2024, however, the financial sector began to scrutinize AI companies more closely, particularly questioning their capacity to produce a return on investment commensurate with their massive valuations. Some prominent investors raised concerns about market expectations becoming disconnected from fundamental business realities. Jeremy Grantham, co-founder of GMO LLC, warned investors to "be quite careful" and drew parallels to previous technology-driven market bubbles. Similarly, Jeffrey Gundlach, CEO of DoubleLine Capital, explicitly compared the AI boom to the dot-com bubble of the late 1990s, suggesting that investor enthusiasm might be outpacing realistic near-term capabilities and revenue potential. These concerns were amplified by the substantial market capitalizations of AI-focused companies, many of which had yet to demonstrate sustainable profitability models. In March 2024, Anthropic released the Claude 3 family of large language models, including Claude 3 Haiku, Sonnet, and Opus. The models demonstrated significant improvements in capabilities across various benchmarks, with Claude 3 Opus notably outperforming leading models from OpenAI and Google. In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated improved performance compared to the larger Claude 3 Opus, particularly in areas such as coding, multistep workflows, and image analysis. 2024 Nobel Prizes In 2024, the Royal Swedish Academy of Sciences awarded Nobel Prizes in recognition of groundbreaking contributions to artificial intelligence. The recipients included: In physics: John Hopfield for his work on physics-inspired Hopfield networks, and Geoffrey Hinton for foundational contributions to Boltzmann machines and deep learning. In chemistry: David Baker, Demis Hassabis, and John Jumper for their advancements in protein folding predictions. See AlphaFold. See also History of artificial neural networks History of knowledge representation and reasoning History of natural language processing Outline of artificial intelligence Progress in artificial intelligence Timeline of artificial intelligence Timeline of machine learning Notes References History of computing
History of artificial intelligence
[ "Technology" ]
14,940
[ "Computers", "History of computing" ]
2,895,304
https://en.wikipedia.org/wiki/Model%20category
In mathematics, particularly in homotopy theory, a model category is a category with distinguished classes of morphisms ('arrows') called 'weak equivalences', 'fibrations' and 'cofibrations' satisfying certain axioms relating them. These abstract from the category of topological spaces or of chain complexes (derived category theory). The concept was introduced by Daniel Quillen (1967). In recent decades, the language of model categories has been used in some parts of algebraic K-theory and algebraic geometry, where homotopy-theoretic approaches led to deep results. Motivation Model categories can provide a natural setting for homotopy theory: the category of topological spaces is a model category, with the homotopy corresponding to the usual theory. Similarly, objects that are thought of as spaces often admit a model category structure, such as the category of simplicial sets. Another model category is the category of chain complexes of R-modules for a commutative ring R. Homotopy theory in this context is homological algebra. Homology can then be viewed as a type of homotopy, allowing generalizations of homology to other objects, such as groups and R-algebras, one of the first major applications of the theory. Because of the above example regarding homology, the study of closed model categories is sometimes thought of as homotopical algebra. Formal definition The definition given initially by Quillen was that of a closed model category, the assumptions of which seemed strong at the time, motivating others to weaken some of the assumptions to define a model category. In practice the distinction has not proven significant and most recent authors (e.g., Mark Hovey and Philip Hirschhorn) work with closed model categories and simply drop the adjective 'closed'. The definition has been separated to that of a model structure on a category and then further categorical conditions on that category, the necessity of which may seem unmotivated at first but becomes important later. The following definition follows that given by Hovey. A model structure on a category C consists of three distinguished classes of morphisms (equivalently subcategories): weak equivalences, fibrations, and cofibrations, and two functorial factorizations (α, β) and (γ, δ) subject to the following axioms. A fibration that is also a weak equivalence is called an acyclic (or trivial) fibration and a cofibration that is also a weak equivalence is called an acyclic (or trivial) cofibration (or sometimes called an anodyne morphism). Axioms Retracts: if g is a morphism belonging to one of the distinguished classes, and f is a retract of g (as objects in the arrow category C², where 2 is the 2-element ordered set), then f belongs to the same distinguished class. Explicitly, the requirement that f is a retract of g means that there exist i, j, r, and s, such that the following diagram commutes: 2 of 3: if f and g are maps in C such that gf is defined and any two of these are weak equivalences then so is the third. Lifting: acyclic cofibrations have the left lifting property with respect to fibrations, and cofibrations have the left lifting property with respect to acyclic fibrations. Explicitly, if the outer square of the following diagram commutes, where i is a cofibration and p is a fibration, and i or p is acyclic, then there exists h completing the diagram. Factorization: every morphism f in C can be written as f = p ∘ i for a fibration p and an acyclic cofibration i; every morphism f in C can be written as f = p ∘ i for an acyclic fibration p and a cofibration i. 
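The lifting axiom refers to a commuting square whose diagram did not survive extraction; a standard way to draw it is sketched below (the object names A, B, X, Y are illustrative, and the snippet assumes the tikz-cd package), with the dashed arrow h the lift whose existence the axiom asserts.

```latex
% Lifting: i a cofibration, p a fibration, and at least one of them acyclic;
% whenever the outer square commutes, a diagonal lift h exists.
\begin{tikzcd}
A \arrow[r, "f"] \arrow[d, "i"'] & X \arrow[d, "p"] \\
B \arrow[r, "g"'] \arrow[ru, dashed, "h" description] & Y
\end{tikzcd}
% Factorization: every morphism f factors both ways,
%   f = p \circ i   with i an acyclic cofibration and p a fibration,
%   f = p' \circ i' with i' a cofibration and p' an acyclic fibration.
```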
A model category is a category that has a model structure and all (small) limits and colimits, i.e., a complete and cocomplete category with a model structure. Definition via weak factorization systems The above definition can be succinctly phrased by the following equivalent definition: a model category is a category C and three classes of (so-called) weak equivalences W, fibrations F and cofibrations C so that the underlying category has all limits and colimits, (C, F ∩ W) is a weak factorization system, (C ∩ W, F) is a weak factorization system, and W satisfies the 2 of 3 property. First consequences of the definition The axioms imply that any two of the three classes of maps determine the third (e.g., cofibrations and weak equivalences determine fibrations). Also, the definition is self-dual: if C is a model category, then its opposite category also admits a model structure so that weak equivalences correspond to their opposites, fibrations opposites of cofibrations and cofibrations opposites of fibrations. Examples Topological spaces The category of topological spaces, Top, admits a standard model category structure with the usual (Serre) fibrations and with weak equivalences as weak homotopy equivalences. The cofibrations are not the usual notion found here, but rather the narrower class of maps that have the left lifting property with respect to the acyclic Serre fibrations. Equivalently, they are the retracts of the relative cell complexes, as explained for example in Hovey's Model Categories. This structure is not unique; in general there can be many model category structures on a given category. For the category of topological spaces, another such structure is given by Hurewicz fibrations and standard cofibrations, and the weak equivalences are the (strong) homotopy equivalences. Chain complexes The category of (nonnegatively graded) chain complexes of R-modules carries at least two model structures, which both feature prominently in homological algebra: weak equivalences are maps that induce isomorphisms in homology; cofibrations are maps that are monomorphisms in each degree with projective cokernel; and fibrations are maps that are epimorphisms in each nonzero degree or weak equivalences are maps that induce isomorphisms in homology; fibrations are maps that are epimorphisms in each degree with injective kernel; and cofibrations are maps that are monomorphisms in each nonzero degree. This explains why Ext-groups of R-modules can be computed by either resolving the source projectively or the target injectively. These are cofibrant or fibrant replacements in the respective model structures. The category of arbitrary chain-complexes of R-modules has a model structure that is defined by weak equivalences are chain homotopy equivalences of chain-complexes; cofibrations are monomorphisms that are split as morphisms of underlying R-modules; and fibrations are epimorphisms that are split as morphisms of underlying R-modules. Further examples Other examples of categories admitting model structures include the category of all small categories, the category of simplicial sets or simplicial presheaves on any small Grothendieck site, the category of topological spectra, and the categories of simplicial spectra or presheaves of simplicial spectra on a small Grothendieck site. Simplicial objects in a category are a frequent source of model categories; for instance, simplicial commutative rings or simplicial R-modules admit natural model structures. 
This follows because there is an adjunction between simplicial sets and simplicial commutative rings (given by the forgetful and free functors), and in nice cases one can lift model structures under an adjunction. A simplicial model category is a simplicial category with a model structure that is compatible with the simplicial structure. Given any category C and a model category M, under certain extra hypotheses the category of functors Fun (C, M) (also called C-diagrams in M) is also a model category. In fact, there are always two candidates for distinct model structures: in one, the so-called projective model structure, fibrations and weak equivalences are those maps of functors which are fibrations and weak equivalences when evaluated at each object of C. Dually, the injective model structure is similar with cofibrations and weak equivalences instead. In both cases the third class of morphisms is given by a lifting condition (see below). In some cases, when the category C is a Reedy category, there is a third model structure lying in between the projective and injective. The process of forcing certain maps to become weak equivalences in a new model category structure on the same underlying category is known as Bousfield localization. For example, the category of simplicial sheaves can be obtained as a Bousfield localization of the model category of simplicial presheaves. Denis-Charles Cisinski has developed a general theory of model structures on presheaf categories (generalizing simplicial sets, which are presheaves on the simplex category). If C is a model category, then so is the category Pro(C) of pro-objects in C. However, a model structure on Pro(C) can also be constructed by imposing a weaker set of axioms on C. Some constructions Every closed model category has a terminal object by completeness and an initial object by cocompleteness, since these objects are the limit and colimit, respectively, of the empty diagram. Given an object X in the model category, if the unique map from the initial object to X is a cofibration, then X is said to be cofibrant. Analogously, if the unique map from X to the terminal object is a fibration then X is said to be fibrant. If Z and X are objects of a model category such that Z is cofibrant and there is a weak equivalence from Z to X then Z is said to be a cofibrant replacement for X. Similarly, if Z is fibrant and there is a weak equivalence from X to Z then Z is said to be a fibrant replacement for X. In general, not all objects are fibrant or cofibrant, though this is sometimes the case. For example, all objects are cofibrant in the standard model category of simplicial sets and all objects are fibrant for the standard model category structure given above for topological spaces. Left homotopy is defined with respect to cylinder objects and right homotopy is defined with respect to path space objects. These notions coincide when the domain is cofibrant and the codomain is fibrant. In that case, homotopy defines an equivalence relation on the hom sets in the model category giving rise to homotopy classes. Characterizations of fibrations and cofibrations by lifting properties Cofibrations can be characterized as the maps which have the left lifting property with respect to acyclic fibrations, and acyclic cofibrations are characterized as the maps which have the left lifting property with respect to fibrations. 
Similarly, fibrations can be characterized as the maps which have the right lifting property with respect to acyclic cofibrations, and acyclic fibrations are characterized as the maps which have the right lifting property with respect to cofibrations. Homotopy and the homotopy category The homotopy category of a model category C is the localization of C with respect to the class of weak equivalences. This definition of homotopy category does not depend on the choice of fibrations and cofibrations. However, the classes of fibrations and cofibrations are useful in describing the homotopy category in a different way and in particular avoiding set-theoretic issues arising in general localizations of categories. More precisely, the "fundamental theorem of model categories" states that the homotopy category of C is equivalent to the category whose objects are the objects of C which are both fibrant and cofibrant, and whose morphisms are left homotopy classes of maps (equivalently, right homotopy classes of maps) as defined above. (See for instance Model Categories by Hovey, Thm 1.2.10) Applying this to the category of topological spaces with the model structure given above, the resulting homotopy category is equivalent to the category of CW complexes and homotopy classes of continuous maps, whence the name. Quillen adjunctions A pair of adjoint functors F: C → D and G: D → C between two model categories C and D is called a Quillen adjunction if F preserves cofibrations and acyclic cofibrations or, equivalently by the closed model axioms, if G preserves fibrations and acyclic fibrations. In this case F and G induce an adjunction between the homotopy categories. There is also an explicit criterion for the latter to be an equivalence (F and G are then called a Quillen equivalence). A typical example is the standard adjunction between simplicial sets and topological spaces, involving the geometric realization of a simplicial set and the singular simplicial set of a topological space. The categories sSet and Top are not equivalent, but their homotopy categories are. Therefore, simplicial sets are often used as models for topological spaces because of this equivalence of homotopy categories. See also (∞,1)-category Cocycle category Stable model category Notes References Denis-Charles Cisinski: Les préfaisceaux comme modèles des types d'homotopie, Astérisque, (308) 2006, xxiv+392 pp. Philip S. Hirschhorn: Model Categories and Their Localizations, 2003, . Mark Hovey: Model Categories, 1999, . Klaus Heiner Kamps and Timothy Porter: Abstract homotopy and simple homotopy theory, 1997, World Scientific, . Georges Maltsiniotis: La théorie de l'homotopie de Grothendieck. Astérisque, (301) 2005, vi+140 pp. Further reading "Do we still need model categories?" "(infinity,1)-categories directly from model categories" Paul Goerss and Kristen Schemmerhorn, Model Categories and Simplicial Methods External links Model category in Joyal's catlab Homotopy theory Category theory
Model category
[ "Mathematics" ]
3,068
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
2,896,282
https://en.wikipedia.org/wiki/Resilin
Resilin is an elastomeric protein found in many insects and other arthropods. It provides soft rubber-elasticity to mechanically active organs and tissue; for example, it enables insects of many species to jump or pivot their wings efficiently. Resilin was first discovered by Torkel Weis-Fogh in locust wing-hinges. Resilin is currently the most efficient elastic protein known (Elvin et al., 2005). The elastic efficiency of the resilin isolated from locust tendon has been reported to be 97% (only 3% of stored energy is lost as heat). It does not have any regular structure but its randomly coiled chains are crosslinked by di- and tri-tyrosine links at the right spacing to confer the elasticity needed to propel some jumping insects distances up to 38 times their length (as found in fleas). Resilin must last for the lifetime of adult insects and must therefore operate for hundreds of millions of extensions and contractions; its elastic efficiency ensures performance during the insect's lifetime. Resilin exhibits unusual elastomeric behavior only when swollen in polar solvents such as water. In 2005, a recombinant form of the resilin protein of the fly Drosophila melanogaster was synthesized by expressing a part of the fly gene in the bacterium Escherichia coli. Active studies are investigating potential application of recombinant resilins in biomedical engineering and medicine. Occurrence After its discovery in elastic tendons in dragonflies and wing hinges in locusts, resilin has been found in many structures and organs in arthropods. Resilin is often found as a composite with chitin in insect cuticle, where chitin serves as the structural component. Resilin provides elasticity and possibly other properties. It has been discovered in the salivary pump of assassin bugs (Rhodnius prolixus), tsetse flies, and honey bees, and in the resistance providing mechanism for the venom-dispensing pump of honey bee stingers. Resilin has also been found in the sound production organs of arthropods, such as cicadas and the moth family Pyralidae, where both high elasticity and high resilience of resilin play important roles due to the rapid stress-release cycles of sound-producing tymbals. Besides these structures, resilin exists most widely in the locomotion systems of arthropods. It was discovered in wing hinges to enable recovery from deformation of wing elements, and to dampen the aerodynamic forces felt by the wing; in ambulatory systems of cockroaches and flies to facilitate rapid joint deformation; in jumping mechanisms, resilin stores elastic energy with great efficiency and releases it upon unloading. It is also abundant in the cuticle surrounding the abdomens of termites, ants, and bees, which expand and swell to a great extent during feeding and reproduction. Composition of resilin Amino acid constituents The amino acid composition of resilin was analyzed in 1961 by Bailey and Torkel Weis-Fogh when they observed samples of prealar arm and wing hinge ligaments of locusts. The results indicated that resilin lacks methionine, hydroxyproline, and cysteine in its amino acid composition. Protein sequence Resilin was identified as a product of the Drosophila melanogaster gene CG15920 due to the similarities between the amino acid compositions of resilin and the gene product. The Drosophila melanogaster gene is composed of 4 exons, which encode 4 functional segments in CG15920: a signal peptide and 3 peptides encoded by exons 1, 2, and 3. 
The signal peptide guides pro-resilin into the extracellular space, where resilin proteins aggregate and cross-link to form a network; the signal peptide is then cleaved off, and nascent resilin becomes mature resilin. From the N-terminus, the segment encoded by exon 1 contains 18 copies of a 15-residue repeating sequence (GGRPSDSYGAPGGGN); the segment corresponding to exon 2 contains 62 amino acids of the chitin-binding Rebers-Riddiford (R-R) consensus sequence; and the peptide encoded by exon 3 is dominated by 11 copies of a 13-residue repeating sequence (GYSGGRPGGQDLG). While the enriched glycine and proline in exons 1 and 3 introduce cyclic structures into the protein, tyrosine residues are able to form di- and tri-tyrosine cross-links between proteins. Secondary structure Resilin is a disordered protein; however, its segments may take on secondary structures under different conditions. The peptide sequence encoded by exon 1 exhibits an unstructured form and cannot be crystallized, which allows this segment to be very soft and highly flexible. The exon 3-encoded peptide takes on an unstructured form before loading, but transforms to an ordered beta-turn structure once stress is applied. Meanwhile, the segment encoded by exon 2 serves as a chitin-binding domain. It is proposed that as stress is applied, or there is energy input, the exon 1-encoded peptide responds immediately owing to its high flexibility. Once this occurs, the energy is passed on to the exon 3-encoded peptide, which transforms from the unstructured form to the beta-turn structure to store energy. Once the stress or energy is removed, the exon 3-encoded segment reverses the structural transformation and passes the energy back to the exon 1-encoded segment. Another secondary structure that the exon 1- and exon 3-encoded peptides may take on is the polyproline helix (PPII), indicated by the high occurrence of proline and glycine in these 2 segments. The PPII structure occurs widely in elastomeric proteins, such as abductin, elastin, and titin. It is believed to contribute to the self-assembly process and the elasticity of the protein. The elastic mechanism of resilin is proposed to be entropy-related. In the relaxed state, the peptide is folded and possesses a large entropy, but once it is stretched out, the entropy decreases as the peptide unfolds. The coexistence of PPII and beta-turn structures plays an important role in increasing entropy as resilin returns to its disordered form. The other function of PPII is to facilitate the self-assembly process: the quasi-extended PPII is able to interact through intermolecular reactions and initiate the formation of fibrillar supramolecular structures. Hierarchical structure While the secondary structures are determined by the energy state and the hydrogen bonds formed between amino acids, hierarchical structures are determined by the hydrophobicity of the peptide. The exon 1-encoded peptide is mainly hydrophilic, and is more extended when immersed in water. In contrast, the exon 3-encoded peptide contains both hydrophobic and hydrophilic blocks, suggesting the formation of micelles, where the hydrophobic blocks cluster on the inside with the hydrophilic portions surrounding them. Thus, a single complete resilin protein, when immersed in water, takes on a structure in which the exon 1-encoded segment extends out from the micelle formed by the exon 3-encoded peptide. Once resilin is transferred to the outside of the cell, its exon 2-encoded peptide, the chitin-binding segment, binds to chitin.
Meanwhile, di- or tri-tyrosine cross-links are formed by oxidative coupling between tyrosine residues, mediated by peroxidase. Like other elastomeric proteins, the degree of cross-linking in resilin is low, which ensures low stiffness and high resilience. Cross-linked peptides encoded by exon 1 have a resilience greater than 93%, while those encoded by exon 3 have a resilience of 86%. In addition, natural resilin has a resilience of 92%, similar to that of exon 1, suggesting again that exon 1 may play the more important role in the elastic properties of resilin. Tyrosine residues in resilin Andersen, in 1996, discovered that the tyrosine residues are involved in covalent cross-links in many forms, such as dityrosine, trityrosine, and tetratyrosine. In resilin, tyrosine and dityrosine primarily serve as the chemical cross-links, in which the R groups of tyrosine and dityrosine add to the backbone of the growing peptide chain. Andersen came to this conclusion based on a study involving these two compounds in which he was able to rule out other forms of cross-linking, such as disulfide bridges, ester groups, and amide bonds. Though the mechanism of tyrosine cross-linking is understood to occur through radical initiation, the cross-linking of resilin itself remains poorly understood. Cross-linking of resilin occurs very quickly, possibly as a result of temperature: at increasing temperature, the rate of cross-linking of the residues increases, leading to a highly cross-linked resilin network. The amino acid composition of resilin indicates that proline and glycine are relatively abundant, and their presence contributes greatly to the elasticity of resilin. Resilin, however, lacks alpha-helices, leading to a randomly coiled, disordered structure. This is primarily due to the significantly high proline content of resilin. Proline is a bulky amino acid that can cause a kink in the peptide chain and, due to its sterically hindered side chain, does not fit into alpha-helices. However, segments of resilin are able to take on secondary structures under different conditions. Properties Like other biomaterials, resilin is a hydrogel, meaning it is swollen with water. The water content of resilin at neutral pH is 50-60%, and the absence of this water makes a large difference in the material's properties: while hydrated resilin behaves like a rubber, dehydrated resilin has the properties of a glassy polymer. However, dehydrated resilin is able to return to its rubbery state if water is available. Water serves as a plasticizer in the resilin network by increasing the amount of hydrogen bonding. The high concentration of proline and glycine, the polyproline helices, and the hydrophilic portions all serve to increase the water content of the resilin protein network. The increase in hydrogen bonding leads to an increase in chain mobility and thus lowers the glass transition temperature. The more water in the resilin network, the less stiff and the more resilient the material. Dehydrated resilin behaves as a glassy polymer with low extensibility and resilience, but a relatively high compressive modulus and glass transition temperature. Rubber-like proteins, such as resilin and elastin, are characterized by their high resilience, low stiffness, and large strain.
A high resilience indicates that a sufficient amount of the energy input can be stored in the material and released afterwards. An example of energy input is stretching the material. Natural (hydrated) resilin has a resilience of 92%, which means it can store 92% of the energy input for release during unloading, indicating a very efficient energy transfer. To better understand the stiffness and strain of resilin, Hooke's law should be taken into consideration. For linear springs, Hooke's law states that the force required to deform the spring is directly proportional to the amount of deformation, by a constant that is characteristic of the spring. A material is viewed as elastic when it can be deformed to a large extent with a limited amount of force. Hydrated resilin has a tensile modulus of 640-2000 kPa, an unconfined compressive modulus of 600-700 kPa, and a strain to break of 300%. Although no actual data have been acquired for the fatigue lifetime of resilin, it can be estimated intuitively. Honey bees, for example, live for around 8 weeks, during which they fly 8 hours a day while flapping their wings at 720,000 cycles per hour; they are therefore likely to flap their wings more than 300 million times. Since resilin functions over the entire lifetime of the insect, its fatigue lifetime should be considerable. However, in live insects resilin molecules may be produced and replaced constantly, which weakens this conclusion. Recombinant resilin Initial studies Due to the remarkable rubber elasticity of resilin, scientists began exploring recombinant versions for a variety of material and medical applications. With the rise of DNA technologies, this field of research has seen a rapid increase in the synthesis of biosynthetic protein polymers that can be tuned to have certain mechanical properties. This field of research is therefore promising and may provide new methods for treating diseases and disorders. Recombinant resilin was first studied in 2005, when it was expressed in Escherichia coli from the first exon of the Drosophila melanogaster CG15920 gene. During this study, pure resilin was synthesized into a 20% protein-mass hydrogel and was cross-linked through ruthenium-catalyzed tyrosine coupling in the presence of ultraviolet light. This reaction yielded the product, recombinant resilin (rec1-Resilin). One of the most important aspects of successful rec1-Resilin synthesis is that its mechanical properties match those of the original (native) resilin. In the study indicated above, scanning probe microscopy (SPM) and atomic force microscopy (AFM) were used to investigate the mechanical properties of rec1-Resilin and native resilin. The results of these tests revealed that the resilience of recombinant and native resilin was relatively similar, though the two can differ in their applications. In this study, rec1-Resilin could be placed into a polymeric scaffold to mimic the extracellular matrix in order to generate cell and tissue responses. Though this field of research is still ongoing, it has generated a great deal of interest in the scientific community and is currently being investigated for a variety of biomedical applications in areas of tissue regeneration and repair. Fluorescence of recombinant resilin One unique property of rec1-Resilin is its ability to be identified by autofluorescence. The fluorescence of resilin stems primarily from dityrosine residues, which are the result of cross-links between tyrosine residues.
When ultraviolet light irradiates a sample of rec1-Resilin, excitation at about 315 nm gives emission at about 409 nm, and the rec1-Resilin shows blue fluorescence. This blue fluorescence of the dityrosine cross-links has been imaged, for example, in the resilin of flea cuticle. Resilience Another unique property of resilin is its high resilience, and recombinant resilin demonstrates mechanical properties similar to those of pure resilin. To compare the resilience of rec1-Resilin with that of other rubbers, Elvin et al. used a scanning probe microscope. Their study compared the resilience of rec1-Resilin to two types of rubber with high resilience: chlorobutyl rubber and polybutadiene rubber. The study concluded that rec1-Resilin was 92% resilient, compared to 56% for chlorobutyl rubber and 80% for polybutadiene rubber. With such high mechanical resilience, the properties of rec1-Resilin can be applied to clinical applications within the fields of materials engineering and medicine. This study on recombinant resilin has led to several years of research on the use of resilin-like proteins, retaining the mechanical properties of resilin, for a range of biomedical applications. Ongoing studies of recombinant resilin may lead to further research into its as yet unexplored mechanical properties and chemical structure. Clinical applications Recombinant resilins have been studied for potential application in the fields of biomedical engineering and medicine. In particular, hydrogels composed of recombinant resilins have been utilized as tissue engineering scaffolds for mechanically active tissues including cardiovascular, cartilage and vocal cord tissues. Early work has focused on optimizing the mechanical properties, chemistry and cytocompatibility of these materials, but some in vivo testing of resilin hydrogels has also been performed. Researchers at the University of Delaware and Purdue University have developed methods for creating elastic hydrogels composed of resilin that were compatible with stem cells and displayed rubber elasticity similar to that of natural resilin. Semi-synthetic resilin-based hydrogels, which incorporate poly(ethylene glycols), have also been reported. See also elastin: a vertebrate protein References External links Summary from University of South Australia Torkel Weis-Fogh: Scientific Papers and Correspondence Insect proteins Elastomers Articles containing video clips
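As a check on the honey bee wing-beat estimate in the Properties section above, here is a short arithmetic sketch; the 7-day week is the only assumption added to the figures quoted in the text:

# Back-of-the-envelope check of the wing-beat estimate: 8 weeks of life,
# 8 flight hours per day, 720,000 wing-beat cycles per hour.
weeks, hours_per_day, cycles_per_hour = 8, 8, 720_000

total_cycles = weeks * 7 * hours_per_day * cycles_per_hour
print(f"lifetime wing beats: {total_cycles:,}")  # 322,560,000 -> "more than 300 million"

# A resilience of 92% means about 8% of the input energy is lost per cycle.
resilience = 0.92
print(f"energy lost per cycle: {(1 - resilience) * 100:.0f}%")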
Resilin
[ "Chemistry" ]
3,698
[ "Synthetic materials", "Elastomers" ]
2,896,399
https://en.wikipedia.org/wiki/Harmonic%20balance
Harmonic balance is a method used to calculate the steady-state response of nonlinear differential equations, and is mostly applied to nonlinear electrical circuits. It is a frequency-domain method for calculating the steady state, as opposed to the various time-domain steady-state methods. The name "harmonic balance" is descriptive of the method, which starts with Kirchhoff's current law written in the frequency domain and a chosen number of harmonics. A sinusoidal signal applied to a nonlinear component in a system will generate harmonics of the fundamental frequency. Effectively the method assumes that the solution can be represented by a linear combination of sinusoids, then balances current and voltage sinusoids to satisfy Kirchhoff's law. The method is commonly used to simulate circuits which include nonlinear elements, and is most applicable to systems with feedback in which limit cycles occur. Microwave circuits were the original application for harmonic balance methods in electrical engineering. They were well-suited because, historically, microwave circuits consist of many linear components which can be directly represented in the frequency domain, plus a few nonlinear components, and system sizes were typically small. For more general circuits, the method was considered impractical for all but these very small circuits until the mid-1990s, when Krylov subspace methods were applied to the problem. The application of preconditioned Krylov subspace methods allowed much larger systems to be solved, both in the size of the circuit and in the number of harmonics. This made practical the present-day use of harmonic balance methods to analyze radio-frequency integrated circuits (RFICs). Example Consider the differential equation

x'' + x^3 = 0.

We use the ansatz solution x(t) = A cos(ωt), and plugging in (using cos^3 θ = (3 cos θ + cos 3θ)/4), we obtain

−Aω^2 cos(ωt) + A^3 ((3/4) cos(ωt) + (1/4) cos(3ωt)) = 0.

Then by matching the cos(ωt) terms, we have ω^2 = (3/4)A^2, which yields the approximate period T = 2π/ω ≈ 7.2552/A. For a more exact approximation, we use the ansatz solution x(t) = A cos(ωt) + B cos(3ωt). Plugging these in and matching the cos(ωt), cos(3ωt) terms, we obtain after routine algebra:

ω^2 = (3/4)A^2 + (3/4)AB + (3/2)B^2,   51(B/A)^3 + 27(B/A)^2 + 21(B/A) − 1 = 0.

The cubic equation for B/A has only one real root, B/A ≈ 0.0448. With that, we obtain an approximate period T ≈ 7.4018/a in terms of the overall amplitude a = A + B. Thus we approach the exact solution x(t) = a cn(at, 1/√2), whose period is T = 4K(1/√2)/a ≈ 7.4163/a. Algorithm The harmonic balance algorithm is a special version of Galerkin's method. It is used for the calculation of periodic solutions of autonomous and non-autonomous differential-algebraic systems of equations. The treatment of non-autonomous systems is slightly simpler than the treatment of autonomous ones. A non-autonomous DAE system has the representation

0 = F(t, x, x')

with a sufficiently smooth function F: R × R^n × R^n → R^n, where n is the number of equations and t, x, x' are placeholders for time, the vector of unknowns, and the vector of time derivatives. The system is non-autonomous if the function F is not constant in t for (some) fixed x and x'. Nevertheless, we require that there is a known excitation period T > 0 such that F(t, x, x') is T-periodic in t. A natural candidate set for the T-periodic solutions of the system equations is the Sobolev space H^1_per of weakly differentiable functions on the interval [0, T] with periodic boundary conditions x(0) = x(T). We assume that the smoothness and the structure of F ensure that F(t, x(t), x'(t)) is square-integrable for all candidate solutions x. The system of harmonic functions cos(kωt), sin(kωt), k = 0, 1, 2, ..., with base frequency ω = 2π/T, is a Schauder basis of H^1_per and forms a Hilbert basis of the Hilbert space L^2 of square-integrable functions. Therefore, each solution candidate can be represented by a Fourier series x(t) = Σ_k x̂_k e^{ikωt} with Fourier coefficients x̂_k, and the system equation is satisfied in the weak sense if for every base function φ the variational equation

∫_0^T φ(t) F(t, x(t), x'(t)) dt = 0

is fulfilled.
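Returning to the worked example above, here is a short numerical cross-check; this is only a sketch: the RK4 integrator, the step size, and the amplitude a = 1 are incidental choices, and the two quoted estimates are the values derived in the example:

# Integrate x'' = -x^3 with x(0) = 1, x'(0) = 0 and estimate the period
# as four times the time taken to fall from x = 1 to x = 0.
def f(state):
    x, v = state
    return (v, -x**3)

def rk4_step(s, h):
    k1 = f(s)
    k2 = f((s[0] + h/2 * k1[0], s[1] + h/2 * k1[1]))
    k3 = f((s[0] + h/2 * k2[0], s[1] + h/2 * k2[1]))
    k4 = f((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            s[1] + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

h, t, state = 1e-5, 0.0, (1.0, 0.0)   # amplitude a = 1
while state[0] > 0.0:                 # quarter period ends at x = 0
    state = rk4_step(state, h)
    t += h

print(f"numerically integrated period: {4*t:.4f}")  # ~7.4163
print("one-harmonic estimate: 7.2552")   # from matching cos(wt) terms
print("two-harmonic estimate: 7.4018")   # from the cubic's real root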
The variational equation above represents an infinite sequence of scalar equations, since it has to be tested for the infinite number of base functions. The Galerkin approach to harmonic balance is to project the candidate set, as well as the test space for the variational equation, onto the finite-dimensional subspace spanned by the finite base of harmonics up to some order N. This gives the finite-dimensional solution x(t) = Σ_{|k|≤N} x̂_k e^{ikωt} and a finite set of equations (one per retained base function), which can be solved numerically. In the special context of electronics, the algorithm starts with Kirchhoff's current law written in the frequency domain. To increase the efficiency of the procedure, the circuit may be partitioned into its linear and nonlinear parts, since the linear part is readily described and calculated using nodal analysis directly in the frequency domain. First, an initial guess is made for the solution, then an iterative process continues: Voltages V are used to calculate the currents of the linear part, I_lin(V), in the frequency domain. Voltages are then used to calculate the currents in the nonlinear part, I_nl(V); since nonlinear devices are described in the time domain, the frequency-domain voltages are transformed into the time domain, typically using inverse fast Fourier transforms, the nonlinear devices are evaluated using the time-domain voltage waveforms to produce their time-domain currents, and the currents are then transformed back into the frequency domain. According to Kirchhoff's circuit laws, the sum of the currents must be zero, so the residual F(V) = I_lin(V) + I_nl(V) must vanish. An iterative process, usually Newton iteration, is used to update the network voltages V such that the current residual is reduced. This step requires formulation of the Jacobian ∂F/∂V. Convergence is reached when the residual F(V) is acceptably small, at which point all voltages and currents of the steady-state solution are known, most often represented as Fourier coefficients. References Electronic design Electronic circuits Electrical engineering
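To make the frequency-domain/time-domain shuttle described in the algorithm above concrete, here is a minimal sketch for a single-node circuit; the element values are invented, and a damped fixed-point substitution stands in for the Newton iteration the text describes:

import numpy as np

# Toy harmonic balance: a sinusoidal current source drives a linear
# conductance G in parallel with a cubic nonlinearity i_nl(v) = g3*v**3.
# KCL per harmonic: G*V_k + Inl_k = Is_k.
G, g3, amp, N = 1.0, 0.2, 1.0, 64      # assumed element values, sample count

Is = np.zeros(N, dtype=complex)        # frequency-domain source, one tone:
Is[1], Is[-1] = amp / 2, amp / 2       # i_s(t) = amp*cos(w*t)

V = Is / G                             # initial guess: linear solution
for it in range(200):
    v_t = np.fft.ifft(V) * N           # frequency -> time-domain voltages
    Inl = np.fft.fft(g3 * v_t**3) / N  # nonlinear currents back to frequency
    residual = G * V + Inl - Is        # KCL residual; zero at the solution
    if np.max(np.abs(residual)) < 1e-12:
        break
    V -= 0.5 * residual / G            # damped voltage update

print(f"stopped after {it + 1} iterations")
print("fundamental amplitude:", abs(2 * V[1]))   # ~0.89 for these values
print("3rd-harmonic amplitude:", abs(2 * V[3]))  # generated by the cubic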
Harmonic balance
[ "Engineering" ]
1,027
[ "Electronic design", "Electronic circuits", "Electronic engineering", "Electrical engineering", "Design" ]
2,896,995
https://en.wikipedia.org/wiki/Annual%20average%20daily%20traffic
Annual average daily traffic (AADT) is a measure used primarily in transportation planning, transportation engineering and retail location selection. Traditionally, it is the total volume of vehicle traffic of a highway or road for a year divided by 365 days. AADT is a simple, but useful, measurement of how busy the road is. AADT is the standard measurement for vehicle traffic load on a section of road, and the basis for some decisions regarding transport planning, as well as for assessing the environmental hazards of pollution related to road transport. Uses One of the most important uses of AADT is for determining funding for the maintenance and improvement of highways. In the United States, the amount of federal funding a state will receive is related to the total traffic measured across its highway network. Each year on June 15, every state's department of transportation (DOT) submits a Highway Performance Monitoring System (HPMS) report. The HPMS report contains various information regarding the road segments in the state, based on a sample (not all) of the road segments. In the report, the AADT is converted to vehicle miles traveled (VMT): the VMT of a segment is the AADT multiplied by the length of the road segment. To determine the amount of traffic a state has, the AADT cannot simply be summed over all road segments, since an AADT is a rate; instead, the VMT is summed and used as an indicator of the amount of traffic a state has. For federal funding, formulas are applied that incorporate the VMT and other highway statistics. In the United Kingdom, AADT is one of a number of measures of traffic used by local highway authorities, National Highways, and the Department for Transport to forecast maintenance needs and expenditure. Data collection To measure AADT on individual road segments, traffic data is collected by an automated traffic counter, by hiring an observer to record traffic, or by licensing estimated counts from GPS data providers. There are two different techniques for measuring AADTs on road segments with automated traffic counters. One technique is called the continuous count data collection method. This method uses sensors that are permanently embedded into a road, and traffic data is measured for the entire 365 days. The AADT is the sum of the total traffic for the entire year divided by 365 days. There can be problems with calculating the AADT by this method: for example, if the continuous count equipment is not operating for the full 365 days due to maintenance or repair, seasonal or day-of-week biases might skew the calculated AADT. In 1992, AASHTO released the AASHTO Guidelines for Traffic Data Programs, which identified a way to produce an AADT without seasonal or day-of-week biases by creating an "average of averages". For every month and day-of-week, a Monthly Average Day of Week (MADW) is calculated (84 values per year). Each day-of-week's MADW is then averaged across months to calculate an Annual Average Day of Week (AADW) (7 per year). Finally, the AADWs are averaged to calculate an AADT (a computational sketch of this averaging is given below). The United States Federal Highway Administration (FHWA) has adopted this method as the preferred method in the FHWA Traffic Monitoring Guide. While providing the most accurate AADT, installing and maintaining continuous count stations is costly, and most public agencies are only able to monitor a very small percentage of the roadway network using this method. Most AADTs are generated using short-term data collection methods, sometimes known as the coverage count data collection method.
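Here is a minimal sketch of the AASHTO "average of averages" computation described above, assuming pandas is available; the constant placeholder counts are hypothetical (real daily totals would vary by season and weekday):

import pandas as pd

# One year of daily totals for a single continuous-count station,
# indexed by calendar date (hypothetical, constant placeholder data).
dates = pd.date_range("2023-01-01", "2023-12-31", freq="D")
daily_totals = pd.Series(12000, index=dates)

df = daily_totals.to_frame("volume")
df["month"] = df.index.month
df["dow"] = df.index.dayofweek

# 1. MADW: mean volume for each (month, day-of-week) cell (up to 84 values).
madw = df.groupby(["month", "dow"])["volume"].mean()

# 2. AADW: average each day-of-week's MADW across the 12 months (7 values).
aadw = madw.groupby(level="dow").mean()

# 3. AADT: average of the seven AADW values.
aadt = aadw.mean()
print(f"AADT = {aadt:.0f}")  # 12000 for the constant placeholder data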
Traffic is collected with portable sensors that are attached to the road and record traffic data, typically for 2–14 days. These are typically pneumatic road tubes, although other, more expensive technologies such as radar, laser, or sonar exist. After the traffic data is recorded, counts on the same road segment are typically taken again about three years later. The FHWA Traffic Monitoring Guide recommends performing a short count on a road segment at a minimum of every three years. There are many methods used to calculate an AADT from a short-term count, but most attempt to remove seasonal and day-of-week biases from the collection period by applying factors created from associated continuous counters. Short counts are taken either by state agencies, local government, or contractors. For the years when a traffic count is not recorded, the AADT is often estimated by applying a factor called the growth factor. Growth factors are statistically determined from historical data of the road segment. If there is no historical data, growth factors from similar road segments are used. Similar measures Annual average weekday traffic (AAWT) is similar to AADT but only includes Monday to Friday data. Public holidays are often excluded from the AAWT calculation. Average summer daily traffic (abbreviated to ASDT) is a similar measure to the annual average daily traffic. Data collection methods for the two are exactly the same; however, ASDT data is collected during summer only. The measure is useful in areas where significant seasonal traffic volumes are carried by a given road. References The 1992 edition of the AASHTO Guidelines is out of date; the current edition is from 2018. The Gary Davis article was published in Transportation Research Record 1593, 1997; the date currently shown in the article is the date of an online posting. External links Florida New York State - Traffic Data Viewer - interactive map program graphically displays traffic data Oklahoma Virginia FHWA Traffic Monitoring Guide New Zealand State Highway AADTs Louisiana AADTs Temporal rates Transportation planning
Annual average daily traffic
[ "Physics" ]
1,117
[ "Temporal quantities", "Temporal rates", "Physical quantities" ]
2,897,576
https://en.wikipedia.org/wiki/Fluent%20%28artificial%20intelligence%29
In artificial intelligence, a fluent is a condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time. For example, the condition "the box is on the table", if it can change over time, cannot be represented by on(box, table); a third argument is necessary to the predicate to specify the time: on(box, table, t) means that the box is on the table at time t. This representation of fluents is modified in the situation calculus by using the sequence of the past actions in place of the current time. A fluent can also be represented by a function, dropping the time argument. For example, that the box is on the table can be represented by on(box, table), where on is a function and not a predicate. In first-order logic, converting predicates to functions is called reification; for this reason, fluents represented by functions are said to be reified. When using reified fluents, a separate predicate is necessary to tell when a fluent is actually true or not. For example, holds(on(box, table), t) means that the box is actually on the table at time t, where the predicate holds is the one that tells when fluents are true. This representation of fluents is used in the event calculus, in the fluent calculus, and in the features and fluents logics. Some fluents can be represented as functions in a different way. For example, the position of a box can be represented by a function location(box, t) whose value is the object the box is standing on at time t. Conditions that can be represented in this way are called functional fluents. Statements about the values of such functions can be given in first-order logic with equality using literals such as location(box, t) = table. Some fluents are represented this way in the situation calculus. Naive physics From a historical point of view, fluents were introduced in the context of qualitative reasoning. The idea is to describe a process model not with mathematical equations but with natural language. That means an action is determined not only by its trajectory, but by a symbolic model, very similar to a text adventure. Naive physics stands in opposition to a numerical physics engine and is expected to predict the outcome of actions. The fluent realizes the common-sense grounding between the robot's motion and the task description in natural language. From a technical perspective, a fluent is equal to a parameter that is parsed by the naive physics engine. The parser converts between natural-language fluents and numerical values measured by sensors. As a consequence, human-machine interaction is improved. See also Event calculus Fluent calculus Frame problem Situation calculus References Logic in computer science
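To illustrate the three representations contrasted above (time-argument predicate, reified fluent, functional fluent), here is a minimal Python sketch; the relation names and the tiny fact stores are illustrative assumptions, not part of any standard library:

# 1. Predicate with an explicit time argument: on(box, table, t).
kb_time_arg = {("on", "box", "table", 0), ("on", "box", "table", 1)}

def on(x, y, t):
    return ("on", x, y, t) in kb_time_arg

# 2. Reified fluent: on(box, table) is a term (here, a tuple), and a
#    separate holds predicate says when it is true.
kb_reified = {(("on", "box", "table"), 0), (("on", "box", "table"), 1)}

def holds(fluent, t):
    return (fluent, t) in kb_reified

# 3. Functional fluent: location(box, t) evaluates to the object that the
#    box is standing on at time t.
kb_functional = {("box", 0): "table", ("box", 1): "table", ("box", 2): "floor"}

def location(obj, t):
    return kb_functional.get((obj, t))

if __name__ == "__main__":
    print(on("box", "table", 1))             # True
    print(holds(("on", "box", "table"), 1))  # True
    print(location("box", 2) == "table")     # False: the box has moved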
Fluent (artificial intelligence)
[ "Mathematics" ]
539
[ "Mathematical logic", "Logic in computer science" ]
2,897,680
https://en.wikipedia.org/wiki/Event%20calculus
The event calculus is a logical theory for representing and reasoning about events and about the way in which they change the state of some real or artificial world. It deals both with action events, which are performed by agents, and with external events, which are outside the control of any agent. The event calculus represents the state of the world at any time by the set of all the facts (called fluents) that hold at the time. Events initiate and terminate fluents: an event initiates a fluent that starts to hold after the occurrence of the event, and terminates a fluent that ceases to hold after the occurrence of the event. The event calculus differs from most other approaches for reasoning about change by reifying time, associating events with the time at which they happen, and associating fluents with the times at which they hold. The original version of the event calculus, introduced by Robert Kowalski and Marek Sergot in 1986, was formulated as a logic program and developed for representing narratives and database updates. Kave Eshghi showed how to use the event calculus for planning, by using abduction to generate hypothetical actions to achieve a desired state of affairs. It was extended by Murray Shanahan and Rob Miller in the 1990s and reformulated in first-order logic with circumscription. These and later extensions have been used to formalize non-deterministic actions, concurrent actions, actions with delayed effects, gradual changes, actions with duration, continuous change, and non-inertial fluents. Van Lambalgen and Hamm showed how a formulation of the event calculus as a constraint logic program can be used to give an algorithmic semantics to tense and aspect in natural language. Fluents and events In the event calculus, fluents are reified. This means that fluents are represented by terms. For example, holdsAt(on(box, table), t) expresses that the box is on the table at time t. Here holdsAt is a predicate, while on(box, table) is a term. In general, the atomic formula holdsAt(fluent, time) expresses that the fluent holds at the time. Events are also reified and represented by terms. For example, happensAt(move(box, table), t) expresses that the box is moved onto the table at time t. In general: happensAt(event, time) expresses that the event happens at the time. The relationships between events and the fluents that they initiate and terminate are also represented by atomic formulae: initiates(event, fluent, time) expresses that if the event happens at the time then the fluent becomes true after the time. terminates(event, fluent, time) expresses that if the event happens at the time then the fluent ceases to be true after the time. Domain-independent axiom The event calculus was developed in part as an alternative to the situation calculus, as a solution to the frame problem, of representing and reasoning about the way in which actions and other events change the state of some world. There are many variants of the event calculus. But the core axiom of one of the simplest and most useful variants can be expressed as a single, domain-independent axiom:

holdsAt(F, T2) if
    happensAt(E, T1) and initiates(E, F, T1) and T1 < T2 and
    there do not exist E1, T such that
        happensAt(E1, T) and terminates(E1, F, T) and T1 ≤ T and T < T2.

The axiom states that a fluent F holds at a time T2 if an event E happens at a time T1 and E initiates F at T1 and T1 is before T2, and it is not the case that there exists an event E1 and a time T such that E1 happens at T and E1 terminates F at T and T1 is before or at the same time as T and T is before T2. The event calculus solves the frame problem by interpreting this axiom in a non-monotonic logic, such as first-order logic with circumscription or, as a logic program, in Horn clause logic with negation as failure. In fact, circumscription is one of the several semantics that can be given to negation as failure, and it is closely related to the completion semantics for logic programs (which interprets if as if and only if). The core event calculus axiom defines the holdsAt predicate in terms of the happensAt, initiates, terminates, and before predicates.
To apply the event calculus to a particular problem, these other predicates also need to be defined. The event calculus is compatible with different definitions of the temporal ordering predicates < and ≤. In most applications, times are represented discretely, by the natural numbers, or continuously, by non-negative real numbers. However, times can also be partially ordered. Domain-dependent axioms To apply the event calculus in a particular problem domain, it is necessary to define the initiates and terminates predicates for that domain. For example, in the blocks world domain, an event of moving an object onto a place initiates the fluent on(Object, Place), which expresses that the object is on the place, and terminates the fluent on(Object, Place1), which expresses that the object is on a different place:

initiates(move(Object, Place), on(Object, Place), Time).
terminates(move(Object, Place), on(Object, Place1), Time).

If we want to represent the fact that a fluent holds in an initial state, say at time 1, then with the simple core axiom above we need an event, say initialise(Fluent), which initiates the fluent at any time:

initiates(initialise(Fluent), Fluent, Time).

Problem-dependent axioms To apply the event calculus, given the definitions of the holdsAt, initiates, terminates, and before predicates, it is necessary to define the happensAt facts that describe the specific context of the problem. For example, in the blocks world domain, we might want to describe an initial state in which there are two blocks, a red block on a green block on a table, like a toy traffic light, followed by moving the red block to the table at time 1 and moving the green block onto the red block at time 3, turning the traffic light upside down:

happensAt(initialise(on(green_block, table)), 0).
happensAt(initialise(on(red_block, green_block)), 0).
happensAt(move(red_block, table), 1).
happensAt(move(green_block, red_block), 3).

A Prolog implementation The event calculus has a natural implementation in pure Prolog (without any features that do not have a logical interpretation). For example, the blocks world scenario above can be implemented (with minor modifications) by the program:

holdsAt(Fluent, Time2) :-
    before(Time1, Time2),
    happensAt(Event, Time1),
    initiates(Event, Fluent, Time1),
    not(clipped(Fluent, Time1, Time2)).

clipped(Fluent, Time1, Time2) :-
    terminates(Event, Fluent, Time),
    happensAt(Event, Time),
    before(Time1, Time),
    before(Time, Time2).

initiates(initialise(Fluent), Fluent, Time).
initiates(move(Object, Place), on(Object, Place), Time).
terminates(move(Object, Place), on(Object, Place1), Time).

happensAt(initialise(on(green_block, table)), 0).
happensAt(initialise(on(red_block, green_block)), 0).
happensAt(move(red_block, table), 1).
happensAt(move(green_block, red_block), 3).

The Prolog program differs from the earlier formalisation in the following ways: The core axiom has been rewritten, using an auxiliary predicate clipped(Fluent, Time1, Time2). This rewriting enables the elimination of existential quantifiers, conforming to the Prolog convention that all variables are universally quantified. The order of the conditions in the body of the core axiom(s) has been changed, to generate answers to queries in temporal order. The equality in the temporal condition T1 ≤ T has been removed; the corresponding Prolog condition is the strict before(Time1, Time). This builds in a simplifying assumption that events do not simultaneously initiate and terminate the same fluent. As a consequence, the definition of the clipped predicate has been simplified. Given an appropriate definition of the predicate before(Time1, Time2), the Prolog program generates all answers to the query what holds when? in temporal order:

?- holdsAt(Fluent, Time).
Fluent = on(green_block,table), Time = 1.
Fluent = on(red_block,green_block), Time = 1.
Fluent = on(green_block,table), Time = 2.
Fluent = on(red_block,table), Time = 2.
Fluent = on(green_block,table), Time = 3.
Fluent = on(red_block,table), Time = 3.
Fluent = on(red_block,table), Time = 4.
Fluent = on(green_block,red_block), Time = 4.
Fluent = on(red_block,table), Time = 5.
Fluent = on(green_block,red_block), Time = 5.

The program can also answer negative queries, such as which fluents do not hold at which times? However, to work correctly, all variables in negative conditions must first be instantiated to terms containing no variables. For example:

timePoint(1).
timePoint(2).
timePoint(3).
timePoint(4).
timePoint(5).

fluent(on(red_block, green_block)).
fluent(on(green_block, red_block)).
fluent(on(red_block, table)).
fluent(on(green_block, table)).

?- timePoint(T), fluent(F), not(holdsAt(F, T)).
F = on(green_block,red_block), T = 1.
F = on(red_block,table), T = 1.
F = on(red_block,green_block), T = 2.
F = on(green_block,red_block), T = 2.
F = on(red_block,green_block), T = 3.
F = on(green_block,red_block), T = 3.
F = on(red_block,green_block), T = 4.
F = on(green_block,table), T = 4.
F = on(red_block,green_block), T = 5.
F = on(green_block,table), T = 5.

Reasoning tools In addition to Prolog and its variants, several other tools for reasoning using the event calculus are also available: Abductive Event Calculus Planners Discrete Event Calculus Reasoner Event Calculus Answer Set Programming Reactive Event Calculus Run-Time Event Calculus (RTEC) Epistemic Probabilistic Event Calculus (EPEC) Extensions Notable extensions of the event calculus include Markov logic network–based, probabilistic, and epistemic variants, and their combinations. See also First-order logic Frame problem Situation calculus References Further reading Brandano, S. (2001) "The Event Calculus Assessed," IEEE TIME Symposium: 7-12. R. Kowalski and F. Sadri (1995) "Variants of the Event Calculus," ICLP: 67-81. Mueller, Erik T. (2015). Commonsense Reasoning: An Event Calculus Based Approach (2nd Ed.). Waltham, MA: Morgan Kaufmann/Elsevier. (Guide to using the event calculus) Shanahan, M. (1997) Solving the frame problem: A mathematical investigation of the common sense law of inertia. MIT Press. Shanahan, M. (1999) "The Event Calculus Explained" Springer Verlag, LNAI (1600): 409-30. Notes 1986 introductions Logic in computer science Logic programming Knowledge representation Logical calculi
Event calculus
[ "Mathematics" ]
2,250
[ "Mathematical logic", "Logic in computer science", "Logical calculi" ]
2,898,337
https://en.wikipedia.org/wiki/Disaccharidase
Disaccharidases are glycoside hydrolases, enzymes that break down certain types of sugars called disaccharides into simpler sugars called monosaccharides. In the human body, disaccharidases are made mostly in an area of the small intestine's wall called the brush border, making them members of the group of "brush border enzymes". A genetic defect in one of these enzymes will cause a disaccharide intolerance, such as lactose intolerance or sucrose intolerance. Examples of disaccharidases Lactase (breaks down lactose into glucose and galactose) Maltase (breaks down maltose into 2 glucoses) Sucrase (breaks down sucrose into glucose and fructose) Trehalase (breaks down trehalose into 2 glucoses) For a thorough scientific overview of small-intestinal disaccharidases, one can consult chapter 75 of OMMBID. For more online resources and references, see inborn error of metabolism. References Further reading EC 3.2.1 Glycobiology
Disaccharidase
[ "Chemistry", "Biology" ]
240
[ "Biochemistry", "Glycobiology" ]
2,898,453
https://en.wikipedia.org/wiki/Calcium%20hypochlorite
Calcium hypochlorite is an inorganic compound with chemical formula Ca(OCl)2, also written as Ca(ClO)2. It is a white solid, although commercial samples appear yellow. It strongly smells of chlorine, owing to its slow decomposition in moist air. This compound is relatively stable as a solid and in solution, and has greater available chlorine than sodium hypochlorite. "Pure" samples have 99.2% active chlorine. Given common industrial purity, an active chlorine content of 65-70% is typical. It is the main active ingredient of commercial products called bleaching powder, used for water treatment and as a bleaching agent. History Charles Tennant and Charles Macintosh developed an industrial process in the late 18th century for the manufacture of chloride of lime, patenting it in 1799. Tennant's process is essentially still used today, and became of military importance during World War I, because calcium hypochlorite was the active ingredient in trench disinfectant. Uses Sanitation Calcium hypochlorite is commonly used to sanitize public swimming pools and disinfect drinking water. Generally the commercial substances are sold with a purity of 65% to 73%, with other chemicals present, such as calcium chloride and calcium carbonate, resulting from the manufacturing process. In solution, calcium hypochlorite could be used as a general-purpose sanitizer, but due to the calcium residue (which makes the water harder), sodium hypochlorite (bleach) is usually preferred. Organic chemistry Calcium hypochlorite is a general oxidizing agent and therefore finds some use in organic chemistry. For instance, the compound is used to cleave glycols, α-hydroxy carboxylic acids and keto acids to yield fragmented aldehydes or carboxylic acids. Calcium hypochlorite can also be used in the haloform reaction to manufacture chloroform. Calcium hypochlorite can be used to oxidize thiol and sulfide byproducts in organic synthesis and thereby reduce their odour and make them safe to dispose of. The reagent used in organic chemistry is similar to the sanitizer, at ~70% purity. Production Calcium hypochlorite is produced industrially by the reaction of moist slaked lime (calcium hydroxide) with chlorine gas. The one-step reaction is shown below:

2 Cl2 + 2 Ca(OH)2 → Ca(OCl)2 + CaCl2 + 2 H2O

Industrial setups allow for the reaction to be conducted in stages to give various compositions, each producing different ratios of calcium hypochlorite, unconverted lime, and calcium chloride. In one process, the chloride-rich first-stage water is discarded, while the solid precipitate is dissolved in a mixture of water and lye for another round of chlorination to reach the target purity. Commercial calcium hypochlorite consists of anhydrous Ca(OCl)2, dibasic calcium hypochlorite (also written as Ca(OCl)2·2Ca(OH)2), and dibasic calcium chloride (also written as CaCl2·2Ca(OH)2). Reactions Calcium hypochlorite reacts rapidly with acids, producing calcium chloride, chlorine gas, and water; with hydrochloric acid:

Ca(OCl)2 + 4 HCl → CaCl2 + 2 Cl2 + 2 H2O

Safety It is a strong oxidizing agent, as it contains the hypochlorite ion, in which chlorine is in the +1 oxidation state (redox state: Cl+1). Calcium hypochlorite should not be stored wet and hot, or near any acid, organic materials, or metals. The unhydrated form is safer to handle. See also Calcium hydroxychloride Sodium hypochlorite Winchlor References External links Chemical Land Antiseptics Bleaches Hypochlorites Calcium compounds Oxidizing agents Household chemicals
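As a check on the 99.2% active chlorine figure quoted above, here is a short, illustrative Python calculation; the molar masses are rounded textbook values, and "available chlorine" is taken in its conventional sense (mass of Cl2 releasable on acidification per unit mass of compound):

# Available chlorine of pure Ca(OCl)2, from the acid reaction
# Ca(OCl)2 + 4 HCl -> CaCl2 + 2 Cl2 + 2 H2O.
M_Ca, M_O, M_Cl = 40.08, 16.00, 35.45    # approximate molar masses, g/mol

m_compound = M_Ca + 2 * (M_O + M_Cl)     # Ca(OCl)2, ~142.98 g/mol
m_cl2_released = 2 * (2 * M_Cl)          # two moles of Cl2 per mole of compound

available_chlorine = 100 * m_cl2_released / m_compound
print(f"available chlorine of pure Ca(OCl)2: {available_chlorine:.1f}%")  # ~99.2%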
Calcium hypochlorite
[ "Chemistry" ]
755
[ "Redox", "Oxidizing agents" ]
2,898,458
https://en.wikipedia.org/wiki/New%20Worlds%20Mission
The New Worlds Mission is a proposed project comprising a large occulter flying in formation with a space telescope, designed to block the light of nearby stars in order to observe their orbiting exoplanets. The observations could be taken with an existing space telescope or a dedicated visible-light optical telescope optimally designed for the task of finding exoplanets. A preliminary research project was funded from 2005 through 2008 by the NASA Institute for Advanced Concepts (NIAC) and headed by Webster Cash of the University of Colorado at Boulder in conjunction with Ball Aerospace & Technologies Corp., Northrop Grumman, Southwest Research Institute and others. Since 2010, the project has been seeking roughly US$3 billion in financing from NASA and other sources, a figure that includes a dedicated four-meter telescope. If financed and launched, it would operate for five years. Purpose Currently, the direct detection of extrasolar planets (or exoplanets) is extremely difficult, primarily for two reasons. First, exoplanets appear extremely close to their host stars when observed at astronomical distances. Even the closest stars are several light years away, which means that while looking for exoplanets one would typically be observing very small angles from the star, on the order of several tens of milli-arcseconds. Angles this small are impossible to resolve from the ground due to astronomical seeing. Second, exoplanets are extremely dim compared to their host stars. Typically, the star will be approximately a billion times brighter than the orbiting planet, making it nearly impossible to see the planet against the star's glare. The difficulty of observing such a dim planet so close to a bright star is the obstacle that has prevented astronomers from directly photographing exoplanets. To date, only a handful of exoplanets have been photographed. The first exoplanet to be photographed, 2M1207b, is in orbit around a star called 2M1207. Astronomers were only able to photograph this planet because it is a very unusual planet that is very far from its host star, approximately 55 astronomical units (about twice the distance of Neptune). Furthermore, the planet is orbiting a very dim star, known as a brown dwarf. To overcome the difficulty of distinguishing more Earth-like planets in the vicinity of a bright star, the New Worlds Mission would block the star's light with an occulter. The occulter would block all of the starlight from reaching the observer, while allowing the planet's light to pass undisturbed. The starshade would be tens of meters across and probably made out of Kapton, a lightweight material similar to Mylar. Methods Traditional methods of exoplanet detection rely on indirect means of inferring the existence of orbiting bodies. These methods include: Astrometry – watching a star move slightly due to the gravitational influence of a nearby planet. Observing Doppler shifts of the star's spectrum due to the star's movement. Observing the light from a star dim as an extrasolar planet transits the star, preventing a portion of the light from reaching the observer. Pulsar timing. Gravitational microlensing. Observing radiation from circumstellar disks in the infrared. All of these methods provide convincing evidence for the existence of extrasolar planets, but none of them provide actual images of the planets. The goal of the New Worlds Mission is to block the light coming from nearby stars with an occulter, allowing the direct observation of orbiting planets.
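Before the occulter geometry described below, here is a quick back-of-the-envelope check of the figures in the Purpose section; this is only a sketch, and the Sun-Earth analog at 10 parsecs is an assumed illustration rather than a stated mission target:

# By the definition of the parsec, 1 AU at 1 pc subtends 1 arcsecond, so an
# Earth-like planet on a 1 AU orbit seen from 10 pc sits ~0.1 arcsec from
# its star.
orbit_au, distance_pc = 1.0, 10.0
sep_arcsec = orbit_au / distance_pc
print(f"star-planet separation: {sep_arcsec:.2f} arcsec "
      f"({sep_arcsec * 1000:.0f} milli-arcseconds)")

# Contrast: with the star ~1e9 times brighter (the article's figure), the
# planet contributes about one part per billion of the combined light.
star_over_planet = 1e9
print(f"planet flux fraction: {1 / star_over_planet:.0e}")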
The occulter would be a large sheet disc flown thousands of kilometers from the telescope along the line of sight. The disc would likely be several tens of meters in diameter; it would fit inside existing expendable launch vehicles and be deployed after launch. One difficulty with this concept is that light incoming from the target star would diffract around the disc and constructively interfere along the central axis, so the starlight would still be easily visible, making planet detection impossible. This effect was first famously theorized by Siméon Poisson in an attempt to disprove the wave theory of light, as he thought the existence of a bright spot at the center of the shadow to be nonsensical; however, Dominique Arago experimentally verified the existence of what is now called the spot of Arago. The effect can be negated by specifically shaping the occulter: by adding specially shaped petals to the outer edge of the disc, the spot of Arago disappears, allowing the suppression of the star's light. This technique would make planetary detection possible for stars within approximately 10 parsecs (about 32 light years) of Earth. It is estimated that there could be several thousand exoplanets within that distance. The starshade is similar to, but should not be confused with, the Aragoscope, which is a proposed imaging device designed to use the diffraction of light around a perfectly circular light-shield to produce an image. The starshade is a proposed sunflower-shaped coronagraph disc designed to block starlight that interferes with telescopic observations of other worlds; the "petals" of its sunflower shape are designed to eliminate the diffraction that is the central feature of an Aragoscope. The starshade is a spacecraft designed by Webster Cash, an astrophysicist at the University of Colorado at Boulder's Center for Astrophysics and Space Astronomy. The proposed spacecraft was designed to work in tandem with space telescopes like the James Webb Space Telescope, which did not use it, or a new 4-meter telescope. It would fly in front of a space telescope (between the telescope and a target star), far away from Earth and outside of Earth's heliocentric orbit. When unfurled, the starshade resembles a sunflower, with pointed protrusions around its circumference. The starshade acts as a very large coronagraph: it blocks the light of a distant star, making it easier to observe associated planets. The unfurled starshade could reduce the collected light from bright stars by as much as 10 billion-fold. Light that "leaks" around the edges would be used by the telescope as it scans the target system for planets. With the reduction of the harsh starlight, astronomers will be able to check exoplanet atmospheres tens of trillions of miles away for the potential chemical signatures of life. Objectives The New Worlds Mission aims to discover and analyze terrestrial extrasolar planets: Detection: First, using the space telescope and 'starshade', or occulter, exoplanetary systems will be directly detected. System mapping: Following detection, system mapping would involve the direct mapping of planetary systems through the detection of the planetary light separate from the parent star. In a sufficiently high-quality image, planets would appear as individual star-like objects. A series of images of the planetary system would allow measurements of planetary orbits, and the brightness and broadband colors of the planets would provide information about their basic nature.
Planet studies: At this stage, detailed study of individual planets would take place. With a low noise level and a modest signal, spectroscopy and photometry can be performed. Spectroscopy allows scientists to perform chemical analysis of atmospheres and surfaces, which might hold clues to the existence of life elsewhere in the universe. Photometry will show variation in color and intensity as surface features rotate in and out of the field of view, allowing for the detection of oceans, continents, polar caps and clouds. Planet imaging: A large increase in capability is needed to achieve true planet imaging. However, techniques of interferometry show that, in principle, this is possible to achieve. Fifty to one hundred percent of a planet's surface could theoretically be mapped, depending on the planet's inclination. Planetary assessment: The final step in extrasolar planet studies would be the ability to study these distant worlds in the same way that Earth-observing systems study the Earth's surface. Such a telescope would need to be extremely large in order to collect enough light to resolve and analyze small details on the planet's surface. However, these kinds of studies do not lie in the foreseeable future, for it would take square kilometers of collecting area to capture the needed signal. In addition to finding and analyzing terrestrial planets, the mission could also discover and analyze gas giants. The New Worlds Mission would also find moons and rings orbiting extrasolar planets. This technique involves direct imaging of planets by blocking the starlight with a starshade. It would study the moons and rings in detail and determine whether moons could also support life when gas giant planets orbit in the habitable zones of their parent stars. Architecture There are many possibilities for various New Worlds Missions, including: New Worlds Discoverer, which proposed using an existing space telescope (such as the James Webb telescope) to find exoplanets; the size of the starshade could be optimized for the observing telescope. New Worlds Observer, which would use two spacecraft, one with a dedicated telescope and one with a starshade, to find exoplanets. The possibility of two starshades is also a consideration: one starshade would point towards the desired target while the other moves into position for the next target. This would eliminate some of the time delay in observing different systems and allow for many more targets to be observed in the same timespan. New Worlds Imager, which would use many spacecraft and starshades. This would allow observers to resolve the planet and obtain true planetary imaging. See also Aragoscope Coronagraph, a telescopic attachment to block out the light from a star so that nearby objects can be resolved References External links New World's official website at University of Colorado, Boulder Pinhole Camera to Image New World Finding planets through a pinhole Biggest Pinhole Camera Ever Alien Planets to Pose for Giant Pinhole Camera in Space Diffraction Exoplanet search projects Proposed spacecraft Space telescopes Articles containing video clips NASA programs
New Worlds Mission
[ "Physics", "Chemistry", "Materials_science", "Astronomy" ]
2,015
[ "Exoplanet search projects", "Spectrum (physical sciences)", "Crystallography", "Diffraction", "Space telescopes", "Astronomy projects", "Spectroscopy" ]
2,898,710
https://en.wikipedia.org/wiki/Hot-filament%20ionization%20gauge
The hot-filament ionization gauge, sometimes called a hot-filament gauge or hot-cathode gauge, is the most widely used low-pressure (vacuum) measuring device for the region from 10−3 to 10−10 Torr. It is a triode, with the filament being the cathode. (The principles are mostly the same for the hot-cathode ion sources used in particle accelerators to create electrons.) Function A regulated electron current (typically 10 mA) is emitted from a heated filament. The electrons are attracted to the helical grid by a DC potential of about +150 V. Most of the electrons pass through the grid and collide with gas molecules in the enclosed volume, causing a fraction of them to be ionized. The gas ions formed by the electron collisions are attracted to the central ion collector wire by the negative voltage on the collector (typically −30 V). Ion currents are on the order of 1 mA/Pa. This current is amplified and displayed by a high-gain differential amplifier/electrometer. The ion current differs for different gases at the same pressure; that is, a hot-filament ionization gauge is composition-dependent. Over a wide range of molecular density, however, the ion current from a gas of constant composition is directly proportional to the molecular density of the gas in the gauge. Construction A hot-cathode ionization gauge is composed mainly of three electrodes acting together as a triode, wherein the cathode is the filament. The three electrodes are a collector or plate, a filament, and a grid. The collector current is measured in picoamperes by an electrometer. The filament voltage to ground is usually at a potential of 30 volts, while the grid voltage is 180–210 volts DC, unless an optional electron-bombardment feature is used to heat the grid, in which case the grid may be at a high potential of approximately 565 volts. The most common ion gauge is the hot-cathode Bayard–Alpert gauge, with a small collector inside the grid. A glass envelope with an opening to the vacuum can surround the electrodes, but usually the nude gauge is inserted in the vacuum chamber directly, the pins being fed through a ceramic plate in the wall of the chamber. Hot-cathode gauges can be damaged or lose their calibration if they are exposed to atmospheric pressure or even low vacuum while hot. Electrons emitted from the filament move back and forth around the grid several times before finally entering the grid. During these movements, some electrons collide with a gas molecule to form an ion–electron pair (electron ionization). The number of these ions is proportional to the gas molecule density multiplied by the electron current emitted from the filament, and these ions flow into the collector to form an ion current. Since the gas molecule density is proportional to the pressure, the pressure is estimated by measuring the ion current. The low-pressure sensitivity of hot-cathode gauges is limited by the photoelectric effect: electrons hitting the grid produce X-rays, which produce photoelectric noise in the ion collector. This limits the range of older hot-cathode gauges to 10−8 Torr and that of Bayard–Alpert gauges to about 10−10 Torr. Additional wires at cathode potential in the line of sight between the ion collector and the grid prevent this effect. In the extraction type, the ions are not attracted to a wire but to an open cone. The ions converging toward the cone pass through its opening and form an ion beam.
This ion beam can be passed on to a Faraday cup; to a quadrupole mass analyzer with a Faraday cup; to a microchannel plate detector with a Faraday cup; or to a quadrupole mass analyzer with a microchannel plate detector and Faraday cup. With an ion lens and an acceleration voltage, the beam can instead be directed at a target to form a sputter gun; in this case a valve lets gas into the grid cage. Types Bayard–Alpert gauge (uses a sealed tube). Nude gauge (uses the vacuum chamber to make a complete seal). See also Electron ionization Ionization gauge Vacuum References External links How does an ion gauge work? What filament material should I use with my ion gauge? Hot filament ionization gauge tubes U.S. Patent 4792763 - Hot cathode ionization pressure gauge U.S. Patent 5373240 - Hot-cathode ionization pressure gauge including a sequence of electrodes arranged at a distance from one another in sequence along an axis Vacuum gauges Pressure gauges
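Returning to the figures in the Function section (collector currents on the order of 1 mA/Pa, read in picoamperes), here is a small, illustrative calculation; the linear sensitivity constant below is an order-of-magnitude assumption taken from the text, not a calibration value for any particular gauge:

# Illustrative ion-gauge arithmetic: collector current I_c = K * P, where
# K (A/Pa) already folds in the regulated emission current. The article
# quotes ion currents on the order of 1 mA/Pa, so assume K ~ 1e-3 A/Pa.
K = 1e-3  # A/Pa, order-of-magnitude sensitivity

for pressure_pa in (1e-3, 1e-6, 1e-8):  # spanning much of the working range
    ion_current = K * pressure_pa
    print(f"P = {pressure_pa:.0e} Pa  ->  I_c = {ion_current:.0e} A")
# At 1e-8 Pa the collector current is ~1e-11 A (tens of picoamperes), which
# is why an electrometer is needed to read the collector.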
Hot-filament ionization gauge
[ "Physics", "Technology", "Engineering" ]
987
[ "Vacuum", "Measuring instruments", "Vacuum gauges", "Vacuum systems", "Pressure gauges", "Matter" ]
2,898,953
https://en.wikipedia.org/wiki/Hot%20cathode
In vacuum tubes and gas-filled tubes, a hot cathode or thermionic cathode is a cathode electrode which is heated to make it emit electrons due to thermionic emission. This is in contrast to a cold cathode, which does not have a heating element. The heating element is usually an electrical filament heated by a separate electric current passing through it. Hot cathodes typically achieve much higher power density than cold cathodes, emitting significantly more electrons from the same surface area. Cold cathodes rely on field electron emission or secondary electron emission from positive ion bombardment, and do not require heating. There are two types of hot cathode. In a directly heated cathode, the filament is the cathode and emits the electrons. In an indirectly heated cathode, the filament or heater heats a separate metal cathode electrode which emits the electrons. From the 1920s to the 1960s, a wide variety of electronic devices used hot-cathode vacuum tubes. Today, hot cathodes are used as the source of electrons in fluorescent lamps, vacuum tubes, and the electron guns used in cathode ray tubes and laboratory equipment such as electron microscopes. Description A cathode electrode in a vacuum tube or other vacuum system is a metal surface which emits electrons into the evacuated space of the tube. Since the negatively charged electrons are attracted to the positive nuclei of the metal atoms, they normally stay inside the metal and require energy to leave it. This energy is called the work function of the metal. In a hot cathode, the cathode surface is induced to emit electrons by heating it with a filament, a thin wire of refractory metal like tungsten with current flowing through it. The cathode is heated to a temperature that causes electrons to be 'boiled off' of its surface into the evacuated space in the tube, a process called thermionic emission. There are two types of hot cathodes: Directly heated cathode In this type, the filament itself is the cathode, emits the electrons directly and is coated in metal oxides. Directly heated cathodes were used in the first vacuum tubes. Today, they are used in fluorescent tubes and most high-power transmitting vacuum tubes. Indirectly heated cathode In this type, the filament is not the cathode but rather heats a separate cathode consisting of a sheet metal cylinder surrounding the filament, and the cylinder emits electrons. Indirectly heated cathodes are used in most low power vacuum tubes. For example, in most vacuum tubes the cathode is a nickel tube, coated with metal oxides. It is heated by a tungsten filament inside it, and the heat from the filament causes the outside surface of the oxide coating to emit electrons. The filament of an indirectly heated cathode is usually called the heater. The main reason for using an indirectly heated cathode is to isolate the rest of the vacuum tube from the electric potential across the filament, allowing vacuum tubes to use alternating current to heat the filament. In a tube in which the filament itself is the cathode, the alternating electric field from the filament surface would affect the movement of the electrons and introduce hum into the tube output. It also allows the filaments in all the tubes in an electronic device to be tied together and supplied from the same current source, even though the cathodes they heat may be at different potentials. To improve electron emission, cathodes are usually treated with chemicals, compounds of metals with a low work function. 
These form a metal layer on the surface which emits more electrons. Treated cathodes require less surface area, lower temperatures and less power to supply the same cathode current. The untreated tungsten filaments used in early vacuum tubes (called "bright emitters") had to be heated to 2500 °F (1400 °C), white-hot, to produce sufficient thermionic emission for use, while modern coated cathodes (called "dull emitters") produce far more electrons at a given temperature, so they only have to be heated to 800–1100 °F (425–600 °C). Types Oxide-coated cathodes The most common type of indirectly heated cathode is the oxide-coated cathode, in which the nickel cathode surface has a coating of alkaline earth metal oxide to increase emission. One of the earliest materials used for this was barium oxide; it forms a monatomic layer of barium with an extremely low work function. More modern formulations utilize a mixture of barium oxide, strontium oxide and calcium oxide. Another standard formulation is barium oxide, calcium oxide, and aluminium oxide in a 5:3:2 ratio. Thorium oxide may be used as well. Oxide-coated cathodes operate at about 800–1000 °C, orange-hot. They are used in most small glass vacuum tubes, but are rarely used in high-power tubes because the coating is degraded by positive ions that bombard the cathode, accelerated by the high voltage on the tube. For manufacturing convenience, the oxide-coated cathodes are usually coated with carbonates, which are then converted to oxides by heating. The activation may be achieved by microwave heating, direct electric current heating, or electron bombardment while the tube is on the exhausting machine, until the production of gases ceases. The purity of cathode materials is crucial for tube lifetime. The Ba content significantly increases on the surface layers of oxide cathodes down to several tens of nanometers in depth, after the cathode activation process. The lifetime of oxide cathodes can be evaluated with a stretched exponential function. The survivability of electron emission sources is significantly improved by high doping with a high-speed activator. Barium oxide reacts with traces of silicon in the underlying metal, forming a barium silicate (Ba2SiO4) layer. This layer has high electrical resistance, especially under discontinuous current load, and acts as a resistor in series with the cathode. This is particularly undesirable for tubes used in computer applications, where they can stay without conducting current for extended periods of time. Barium also sublimates from the heated cathode, and deposits on nearby structures. For electron tubes, where the grid is subjected to high temperatures and barium contamination would facilitate electron emission from the grid itself, a higher proportion of calcium is added to the coating mix (up to 20% of calcium carbonate). Boride cathodes Lanthanum hexaboride (LaB6) and cerium hexaboride (CeB6) are used as the coating of some high-current cathodes. Hexaborides show a low work function, around 2.5 eV. They are also resistant to poisoning. Cerium boride cathodes show a lower evaporation rate at 1700 K than lanthanum boride, but it becomes equal at 1850 K and higher. Cerium boride cathodes have one and a half times the lifetime of lanthanum boride, due to their higher resistance to carbon contamination. Boride cathodes are about ten times as "bright" as the tungsten ones and have 10–15 times longer lifetime. They are used e.g.
in electron microscopes, microwave tubes, electron lithography, electron beam welding, X-ray tubes, and free electron lasers. However, these materials tend to be expensive. Other hexaborides can be employed as well; examples are calcium hexaboride, strontium hexaboride, barium hexaboride, yttrium hexaboride, gadolinium hexaboride, samarium hexaboride, and thorium hexaboride. Thoriated filaments A common type of directly heated cathode, used in most high power transmitting tubes, is the thoriated tungsten filament, discovered in 1914 and made practical by Irving Langmuir in 1923. A small amount of thorium is added to the tungsten of the filament. The filament is heated white-hot, at about 2400 °C, and thorium atoms migrate to the surface of the filament and form the emissive layer. Heating the filament in a hydrocarbon atmosphere carburizes the surface and stabilizes the emissive layer. Thoriated filaments can have very long lifetimes and are resistant to the ion bombardment that occurs at high voltages, because fresh thorium continually diffuses to the surface, renewing the layer. They are used in nearly all high-power vacuum tubes for radio transmitters, and in some tubes for hi-fi amplifiers. Their lifetimes tend to be longer than those of oxide cathodes. Thorium alternatives Due to concerns about thorium radioactivity and toxicity, efforts have been made to find alternatives. One of them is zirconiated tungsten, where zirconium dioxide is used instead of thorium dioxide. Other replacement materials are lanthanum(III) oxide, yttrium(III) oxide, cerium(IV) oxide, and their mixtures. Other materials In addition to the listed oxides and borides, other materials can be used as well. Some examples are carbides and borides of transition metals, e.g. zirconium carbide, hafnium carbide, tantalum carbide, hafnium diboride, and their mixtures. Metals from groups IIIB (scandium, yttrium, and some lanthanides, often gadolinium and samarium) and IVB (hafnium, zirconium, titanium) are usually chosen. In addition to tungsten, other refractory metals and alloys can be used, e.g. tantalum, molybdenum and rhenium and their alloys. A barrier layer of other material can be placed between the base metal and the emission layer, to inhibit chemical reaction between these. The material has to be resistant to high temperatures, have a high melting point and very low vapor pressure, and be electrically conductive. Materials used can be e.g. tantalum diboride, titanium diboride, zirconium diboride, niobium diboride, tantalum carbide, zirconium carbide, tantalum nitride, and zirconium nitride. Cathode heater A cathode heater is a heated wire filament used to heat the cathode in a vacuum tube or cathode ray tube. The cathode element has to achieve the required temperature in order for these tubes to function properly. This is why older electronics often need some time to "warm up" after being powered on; this phenomenon can still be observed in the cathode ray tubes of some modern televisions and computer monitors. The cathode heats to a temperature that causes electrons to be 'boiled out' of its surface into the evacuated space in the tube, a process called thermionic emission. The temperature required for modern oxide-coated cathodes is around 800–1000 °C. The cathode is usually in the form of a long narrow sheet metal cylinder at the center of the tube. The heater consists of a fine wire or ribbon, made of a high resistance metal alloy like nichrome, similar to the heating element in a toaster but finer.
It runs through the center of the cathode, often being coiled on tiny insulating supports or bent into hairpin-like shapes to give enough surface area to produce the required heat. Typical heaters have a ceramic coating on the wire. Where the wire is bent sharply at the ends of the cathode sleeve, it is exposed. The ends of the wire are electrically connected to two of the several pins protruding from the end of the tube. When current passes through the wire it becomes red hot, and the radiated heat strikes the inside surface of the cathode, heating it. The red or orange glow seen coming from operating vacuum tubes is produced by the heater. There is not much room in the cathode, and the cathode is often built with the heater wire touching it. The inside of the cathode is insulated by a coating of alumina (aluminum oxide). This is not a very good insulator at high temperatures, therefore tubes have a rating for maximum voltage between cathode and heater, usually only 200 to 300 V. Heaters require a low voltage, high current source of power. Miniature receiving tubes for line-operated equipment use on the order of 0.5 to 4 watts for heater power; high power tubes such as rectifiers or output tubes use on the order of 10 to 20 watts, and broadcast transmitter tubes might need a kilowatt or more to heat the cathode. The voltage required is usually 5 or 6 volts AC. This is supplied by a separate 'heater winding' on the device's power supply transformer that also supplies the higher voltages required by the tubes' plates and other electrodes. One approach used in transformerless line-operated radio and television receivers such as the All American Five is to connect all the tube heaters in series across the supply line. Since all the heaters are rated at the same current, they would share voltage according to their heater ratings. Battery-operated radio sets used direct-current power for the heaters (commonly known as filaments), and tubes intended for battery sets were designed to use as little filament power as necessary, to economize on battery replacement. The final models of tube-equipped radio receivers were built with subminiature tubes using less than 50 mA for the heaters, but these types were developed at about the same time as transistors which replaced them. Where leakage or stray fields from the heater circuit could potentially be coupled to the cathode, direct current is sometimes used for heater power. This eliminates a source of noise in sensitive audio or instrumentation circuits. The majority of power required to operate low power tube equipment is consumed by the heaters. Transistors have no such power requirement, which is often a great advantage. Failure modes The emissive layers on coated cathodes degrade slowly with time, and much more quickly when the cathode is overloaded with too high current. The result is weakened emission and diminished power of the tubes, or in CRTs diminished brightness. The activated electrodes can be destroyed by contact with oxygen or other chemicals (e.g. aluminium, or silicates), either present as residual gases, entering the tube via leaks, or released by outgassing or migration from the construction elements. This results in diminished emissivity. This process is known as cathode poisoning. High-reliability tubes had to be developed for the early Whirlwind computer, with filaments free of traces of silicon. Slow degradation of the emissive layer and sudden burning and interruption of the filament are two main failure modes of vacuum tubes.
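As a numerical footnote to the emission physics described above, the Richardson–Dushman law J = A·T²·exp(−W/kBT) relates cathode temperature and work function to emission current density. The following sketch is illustrative only: the work-function values and operating temperatures are rough assumptions chosen for the comparison, not data from this article.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K
A_0 = 1.20173e6       # theoretical Richardson constant, A m^-2 K^-2

def emission_current_density(temp_k, work_function_ev, a=A_0):
    """Richardson-Dushman thermionic emission current density in A/m^2."""
    return a * temp_k ** 2 * math.exp(-work_function_ev / (K_B * temp_k))

# Assumed, illustrative values: ~4.5 eV for bare tungsten run hot,
# ~1.5 eV for an oxide-coated surface run much cooler.
for label, temp_k, wf_ev in [("bare tungsten filament", 2600, 4.5),
                             ("oxide-coated cathode", 1100, 1.5)]:
    j = emission_current_density(temp_k, wf_ev)
    print(f"{label} at {temp_k} K: J ~ {j:.2e} A/m^2")
```

Despite running some 1500 K cooler, the low-work-function coated surface emits a comparable or larger current density, which is the point of the "dull emitter" coatings described above.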
Transmitting tube hot cathode characteristics See also Hot filament ionization gauge References External links John Harper (2003) Tubes 201 - How vacuum tubes really work, John Harper's home page Electrodes Gas discharge lamps Vacuum tubes Accelerator physics
Hot cathode
[ "Physics", "Chemistry" ]
3,225
[ "Applied and interdisciplinary physics", "Vacuum tubes", "Electrodes", "Vacuum", "Electrochemistry", "Experimental physics", "Accelerator physics", "Matter" ]
2,898,991
https://en.wikipedia.org/wiki/Planck%20postulate
The Planck postulate (or Planck's postulate), one of the fundamental principles of quantum mechanics, is the postulate that the energy of oscillators in a black body is quantized, and is given by E = nhν, where n is an integer (1, 2, 3, ...), h is the Planck constant, and ν (the Greek letter nu) is the frequency of the oscillator. The postulate was introduced by Max Planck in his derivation of his law of black body radiation in 1900. This assumption allowed Planck to derive a formula for the entire spectrum of the radiation emitted by a black body. Planck was unable to justify this assumption based on classical physics; he considered quantization as being purely a mathematical trick, rather than (as is now known) a fundamental change in the understanding of the world. In other words, Planck then contemplated virtual oscillators. In 1905, Albert Einstein adapted the Planck postulate to explain the photoelectric effect, but Einstein proposed that the energy of photons themselves was quantized (with photon energy given by the Planck–Einstein relation), and that quantization was not merely a feature of microscopic oscillators. Planck's postulate was further applied to understanding the Compton effect, and was applied by Niels Bohr to explain the emission spectrum of the hydrogen atom and derive the correct value of the Rydberg constant. Notes References Tipler, Paul A. (1978). Modern Physics. Worth Publishers, Inc. Planck Postulate—from Eric Weisstein's World of Physics Foundational quantum physics Max Planck
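As a quick numerical illustration of the postulate, the sketch below evaluates E = nhν for the first few integers n; the chosen frequency is an arbitrary visible-light example, not a value from the article.

```python
H = 6.62607015e-34    # Planck constant, J*s
EV = 1.602176634e-19  # joules per electronvolt

nu = 5.5e14  # Hz; an arbitrary green-light frequency chosen for illustration
for n in range(1, 4):
    e_joule = n * H * nu  # allowed oscillator energies E = n*h*nu
    print(f"n = {n}: E = {e_joule:.3e} J ({e_joule / EV:.2f} eV)")
```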
Planck postulate
[ "Physics" ]
325
[ "Foundational quantum physics", "Quantum mechanics" ]
25,914,855
https://en.wikipedia.org/wiki/Why%20Most%20Published%20Research%20Findings%20Are%20False
"Why Most Published Research Findings Are False" is a 2005 essay written by John Ioannidis, a professor at the Stanford School of Medicine, and published in PLOS Medicine. It is considered foundational to the field of metascience. In the paper, Ioannidis argued that a large number, if not the majority, of published medical research papers contain results that cannot be replicated. In simple terms, the essay states that scientists use hypothesis testing to determine whether scientific discoveries are significant. Statistical significance is formalized in terms of probability, with its p-value measure being reported in the scientific literature as a screening mechanism. Ioannidis posited assumptions about the way people perform and report these tests; then he constructed a statistical model which indicates that most published findings are likely false positive results. While the general arguments in the paper recommending reforms in scientific research methodology were well-received, Ionnidis received criticism for the validity of his model and his claim that the majority of scientific findings are false. Responses to the paper suggest lower false positive and false negative rates than what Ionnidis puts forth. Argument Suppose that in a given scientific field there is a known baseline probability that a result is true, denoted by . When a study is conducted, the probability that a positive result is obtained is . Given these two factors, we want to compute the conditional probability , which is known as the positive predictive value (PPV). Bayes' theorem allows us to compute the PPV as:where is the type I error rate (false positives) and is the type II error rate (false negatives); the statistical power is . It is customary in most scientific research to desire and . If we assume for a given scientific field, then we may compute the PPV for different values of and : However, the simple formula for PPV derived from Bayes' theorem does not account for bias in study design or reporting. Some published findings would not have been presented as research findings if not for researcher bias. Let be the probability that an analysis was only published due to researcher bias. Then the PPV is given by the more general expression:The introduction of bias will tend to depress the PPV; in the extreme case when the bias of a study is maximized, . Even if a study meets the benchmark requirements for and , and is free of bias, there is still a 36% probability that a paper reporting a positive result will be incorrect; if the base probability of a true result is lower, then this will push the PPV lower too. Furthermore, there is strong evidence that the average statistical power of a study in many scientific fields is well below the benchmark level of 0.8. Given the realities of bias, low statistical power, and a small number of true hypotheses, Ioannidis concludes that the majority of studies in a variety of scientific fields are likely to report results that are false. Corollaries In addition to the main result, Ioannidis lists six corollaries for factors that can influence the reliability of published research. Research findings in a scientific field are less likely to be true, the smaller the studies conducted. the smaller the effect sizes. the greater the number and the lesser the selection of tested relationships. the greater the flexibility in designs, definitions, outcomes, and analytical modes. the greater the financial and other interests and prejudices. 
the hotter the scientific field (with more scientific teams involved). Ioannidis has added to this work by contributing to a meta-epidemiological study which found that only 1 in 20 interventions tested in Cochrane Reviews have benefits that are supported by high-quality evidence. He also contributed to research suggesting that the quality of this evidence does not seem to improve over time. Reception Despite skepticism about extreme statements made in the paper, Ioannidis's broader argument and warnings have been accepted by a large number of researchers. The growth of metascience and the recognition of a scientific replication crisis have bolstered the paper's credibility, and led to calls for methodological reforms in scientific research. In commentaries and technical responses, statisticians Goodman and Greenland identified several weaknesses in Ioannidis's model. They rejected his dramatic and exaggerated claims to have "proved" that most research findings' claims are false and that "most research findings are false for most research designs and for most fields" [italics added], and yet they agreed with his paper's conclusions and recommendations. Biostatisticians Jager and Leek criticized the model as being based on justifiable but arbitrary assumptions rather than empirical data, and did an investigation of their own which estimated the false positive rate in biomedical studies to be around 14%, not over 50% as Ioannidis asserted. Their paper was published in a 2014 special edition of the journal Biostatistics along with extended, supporting critiques from other statisticians. Leek summarized the key points of agreement as: when talking about the science-wise false discovery rate one has to bring data; there are different frameworks for estimating the science-wise false discovery rate; and "it is pretty unlikely that most published research is false", but that probably varies by one's definition of "most" and "false". Statistician Ulrich Schimmack reinforced the importance of the empirical basis for models by noting that the reported false discovery rate in some scientific fields is not the actual false discovery rate because non-significant results are rarely reported. Ioannidis's theoretical model fails to account for that, but when a statistical method ("z-curve") to estimate the number of unpublished non-significant results is applied to two examples, the false positive rate is between 8% and 17%, not greater than 50%. Causes of high false positive rate Despite these weaknesses there is nonetheless general agreement with the problem and recommendations Ioannidis discusses, yet his tone has been described as "dramatic" and "alarmingly misleading", which runs the risk of making people unnecessarily skeptical or cynical about science. A lasting impact of this work has been awareness of the underlying drivers of the high false positive rate in clinical medicine and biomedical research, and efforts by journals and scientists to mitigate them. Ioannidis restated these drivers in 2016 as being: Solo, siloed investigator limited to small sample sizes No preregistration of hypotheses being tested Post-hoc cherry picking of hypotheses with best P values Only requiring P < .05 No replication No data sharing References Further reading Carnegie Mellon University, Statistics Journal Club: Summary and discussion of: "Why Most Published Research Findings Are False" Applications to Economics: De Long, J. Bradford; Lang, Kevin. "Are all Economic Hypotheses False?"
Journal of Political Economy. 100 (6): 1257–1272, 1992 Applications to Social Sciences: Hardwicke, Tom E.; Wallach, Joshua D.; Kidwell, Mallory C.; Bendixen, Theiss; Crüwell Sophia and Ioannidis, John P. A. "An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017)." Royal Society Open Science. 7: 190806, 2020. External links YouTube video(s) from the Berkeley Initiative for Transparency in the Social Sciences, 2016, "Why Most Published Research Findings are False" (Part I, Part II, Part III) YouTube video of John Ioannidis at Talks at Google, 2014 "Reproducible Research: True or False?" 2005 essays Academic journal articles Applied probability Criticism of academia Metascience Scientific method
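To make the PPV expressions in the Argument section concrete, the sketch below evaluates both the bias-free and the bias-corrected formulas; the benchmark numbers (α = 0.05, β = 0.2, P(T) = 0.1) reproduce the 36% false-finding probability mentioned in the text, while the bias values u are arbitrary illustrations.

```python
def ppv(prior, alpha=0.05, beta=0.2, u=0.0):
    """Positive predictive value P(T|+) with optional researcher bias u."""
    true_pos = (1 - beta) * prior + u * beta * prior
    false_pos = alpha * (1 - prior) + u * (1 - alpha) * (1 - prior)
    return true_pos / (true_pos + false_pos)

print(f"benchmark, no bias: PPV = {ppv(0.1):.2f}")  # 0.64, i.e. 36% false
for u in (0.1, 0.3, 1.0):  # increasing bias drags the PPV down toward P(T)
    print(f"bias u = {u}: PPV = {ppv(0.1, u=u):.2f}")
```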
Why Most Published Research Findings Are False
[ "Mathematics" ]
1,585
[ "Applied mathematics", "Applied probability" ]
25,916,521
https://en.wikipedia.org/wiki/Plasma%20%28physics%29
Plasma is one of four fundamental states of matter (the other three being solid, liquid, and gas) characterized by the presence of a significant portion of charged particles in any combination of ions or electrons. It is the most abundant form of ordinary matter in the universe, mostly in stars (including the Sun), but also dominating the rarefied intracluster medium and intergalactic medium. Plasma can be artificially generated, for example, by heating a neutral gas or subjecting it to a strong electromagnetic field. The presence of charged particles makes plasma electrically conductive, with the dynamics of individual particles and macroscopic plasma motion governed by collective electromagnetic fields and very sensitive to externally applied fields. The response of plasma to electromagnetic fields is used in many modern devices and technologies, such as plasma televisions or plasma etching. Depending on temperature and density, a certain number of neutral particles may also be present, in which case plasma is called partially ionized. Neon signs and lightning are examples of partially ionized plasmas. Unlike the phase transitions between the other three states of matter, the transition to plasma is not well defined and is a matter of interpretation and context. Whether a given degree of ionization suffices to call a substance "plasma" depends on the specific phenomenon being considered. Early history Plasma was first identified in the laboratory by Sir William Crookes. Crookes presented a lecture on what he called "radiant matter" to the British Association for the Advancement of Science, in Sheffield, on Friday, 22 August 1879. Systematic studies of plasma began with the research of Irving Langmuir and his colleagues in the 1920s. Langmuir also introduced the term "plasma" as a description of ionized gas in 1928. Lewi Tonks and Harold Mott-Smith, both of whom worked with Langmuir in the 1920s, recall that Langmuir first used the term by analogy with the blood plasma. Mott-Smith recalls, in particular, that the transport of electrons from thermionic filaments reminded Langmuir of "the way blood plasma carries red and white corpuscles and germs." Definitions The fourth state of matter Plasma is called the fourth state of matter after solid, liquid, and gas. It is a state of matter in which an ionized substance becomes highly electrically conductive to the point that long-range electric and magnetic fields dominate its behaviour. Plasma is typically an electrically quasineutral medium of unbound positive and negative particles (i.e., the overall charge of a plasma is roughly zero). Although these particles are unbound, they are not "free" in the sense of not experiencing forces. Moving charged particles generate electric currents, and any movement of a charged plasma particle affects and is affected by the fields created by the other charges. In turn, this governs collective behaviour with many degrees of variation. Plasma is distinct from the other states of matter. In particular, describing a low-density plasma as merely an "ionized gas" is wrong and misleading, even though it is similar to the gas phase in that both assume no definite shape or volume. Ideal plasma Three factors define an ideal plasma: The plasma approximation: The plasma approximation applies when the plasma parameter Λ, representing the number of charge carriers within the Debye sphere, is much higher than unity.
It can be readily shown that this criterion is equivalent to smallness of the ratio of the plasma electrostatic and thermal energy densities. Such plasmas are called weakly coupled. Bulk interactions: The Debye length is much smaller than the physical size of the plasma. This criterion means that interactions in the bulk of the plasma are more important than those at its edges, where boundary effects may take place. When this criterion is satisfied, the plasma is quasineutral. Collisionlessness: The electron plasma frequency (measuring plasma oscillations of the electrons) is much larger than the electron–neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. Such plasmas are called collisionless. Non-neutral plasma The strength and range of the electric force and the good conductivity of plasmas usually ensure that the densities of positive and negative charges in any sizeable region are equal ("quasineutrality"). A plasma with a significant excess of charge density, or that, in the extreme case, is composed of a single species, is called a non-neutral plasma. In such a plasma, electric fields play a dominant role. Examples are charged particle beams, an electron cloud in a Penning trap and positron plasmas. Dusty plasma A dusty plasma contains tiny charged particles of dust (typically found in space). The dust particles acquire high charges and interact with each other. A plasma that contains larger particles is called grain plasma. Under laboratory conditions, dusty plasmas are also called complex plasmas. Properties and parameters Density and ionization degree For plasma to exist, ionization is necessary. The term "plasma density" by itself usually refers to the electron density ne, that is, the number of charge-contributing electrons per unit volume. The degree of ionization is defined as the fraction of neutral particles that are ionized: α = ni/(ni + nn), where ni is the ion density and nn the neutral density (in number of particles per unit volume). In the case of fully ionized matter, α = 1. Because of the quasineutrality of plasma, the electron and ion densities are related by ne = ⟨Z⟩ni, where ⟨Z⟩ is the average ion charge (in units of the elementary charge). Temperature Plasma temperature, commonly measured in kelvin or electronvolts, is a measure of the thermal kinetic energy per particle. High temperatures are usually needed to sustain ionization, which is a defining feature of a plasma. The degree of plasma ionization is determined by the electron temperature relative to the ionization energy (and more weakly by the density). In thermal equilibrium, the relationship is given by the Saha equation. At low temperatures, ions and electrons tend to recombine into bound states—atoms—and the plasma will eventually become a gas. In most cases, the electrons and heavy plasma particles (ions and neutral atoms) separately have a relatively well-defined temperature; that is, their energy distribution function is close to a Maxwellian even in the presence of strong electric or magnetic fields. However, because of the large difference in mass between electrons and ions, their temperatures may be different, sometimes significantly so. This is especially common in weakly ionized technological plasmas, where the ions are often near the ambient temperature while electrons reach thousands of kelvin. The opposite case is the z-pinch plasma where the ion temperature may exceed that of electrons.
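A short numerical sketch of the quantities defined in this section, using assumed laboratory-like values for the densities and electron temperature (they are illustrations, not figures from the article):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
Q_E = 1.602176634e-19    # elementary charge, C
K_B = 1.380649e-23       # Boltzmann constant, J/K

def ionization_degree(n_i, n_n):
    """Degree of ionization, alpha = ni / (ni + nn)."""
    return n_i / (n_i + n_n)

def debye_length(t_e_kelvin, n_e):
    """Electron Debye length in metres."""
    return math.sqrt(EPS0 * K_B * t_e_kelvin / (n_e * Q_E ** 2))

# Assumed values typical of a weakly ionized technological plasma:
n_i, n_n = 1e16, 1e22  # ion and neutral densities, m^-3
t_e = 2.0 * 11604.5    # electron temperature: 2 eV expressed in kelvin

print(f"alpha = {ionization_degree(n_i, n_n):.0e}")      # weakly ionized
print(f"Debye length = {debye_length(t_e, n_i):.2e} m")  # sub-millimetre
```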
Plasma potential Since plasmas are very good electrical conductors, electric potentials play an important role. The average potential in the space between charged particles, independent of how it can be measured, is called the "plasma potential", or the "space potential". If an electrode is inserted into a plasma, its potential will generally lie considerably below the plasma potential due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of "quasineutrality", which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma (ne ≈ ⟨Z⟩ni), but on the scale of the Debye length, there can be charge imbalance. In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths. The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation: ne ∝ exp(eΦ/kBTe). Differentiating this relation provides a means to calculate the electric field from the density: E = −(kBTe/e)(∇ne/ne). It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma must be very small, otherwise it will be dissipated by the repulsive electrostatic force. Magnetization The existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. Plasma with a magnetic field strong enough to influence the motion of the charged particles is said to be magnetized. A common quantitative criterion is that a particle on average completes at least one gyration around the magnetic-field line before making a collision, i.e., ωce/νcoll > 1, where ωce is the electron gyrofrequency and νcoll is the electron collision rate. It is often the case that the electrons are magnetized while the ions are not. Magnetized plasmas are anisotropic, meaning that their properties in the direction parallel to the magnetic field are different from those perpendicular to it. While electric fields in plasmas are usually small due to the plasma high conductivity, the electric field associated with a plasma moving with velocity v in the magnetic field B is given by the usual Lorentz formula E = −v × B, and is not affected by Debye shielding. Mathematical descriptions To completely describe the state of a plasma, all of the particle locations and velocities that describe the electromagnetic field in the plasma region would need to be written down. However, it is generally not practical or necessary to keep track of all the particles in a plasma. Therefore, plasma physicists commonly use less detailed descriptions, of which there are two main types: Fluid model Fluid models describe plasmas in terms of smoothed quantities, like density and averaged velocity around each position (see Plasma parameters). One simple fluid model, magnetohydrodynamics, treats the plasma as a single fluid governed by a combination of Maxwell's equations and the Navier–Stokes equations. A more general description is the two-fluid plasma, where the ions and electrons are described separately. Fluid models are often accurate when collisionality is sufficiently high to keep the plasma velocity distribution close to a Maxwell–Boltzmann distribution.
Because fluid models usually describe the plasma in terms of a single flow at a certain temperature at each spatial location, they can neither capture velocity space structures like beams or double layers, nor resolve wave-particle effects. Kinetic model Kinetic models describe the particle velocity distribution function at each point in the plasma and therefore do not need to assume a Maxwell–Boltzmann distribution. A kinetic description is often necessary for collisionless plasmas. There are two common approaches to kinetic description of a plasma. One is based on representing the smoothed distribution function on a grid in velocity and position. The other, known as the particle-in-cell (PIC) technique, includes kinetic information by following the trajectories of a large number of individual particles. Kinetic models are generally more computationally intensive than fluid models. The Vlasov equation may be used to describe the dynamics of a system of charged particles interacting with an electromagnetic field. In magnetized plasmas, a gyrokinetic approach can substantially reduce the computational expense of a fully kinetic simulation. Plasma science and technology Plasmas are studied by the vast academic field of plasma science or plasma physics, including several sub-disciplines such as space plasma physics. Plasmas can appear in nature in various forms and locations. Space and astrophysics Plasmas are by far the most common phase of ordinary matter in the universe, both by mass and by volume. Above the Earth's surface, the ionosphere is a plasma, and the magnetosphere contains plasma. Within our Solar System, interplanetary space is filled with the plasma expelled via the solar wind, extending from the Sun's surface out to the heliopause. Furthermore, all the distant stars, and much of interstellar space or intergalactic space is also filled with plasma, albeit at very low densities. Astrophysical plasmas are also observed in accretion disks around stars or compact objects like white dwarfs, neutron stars, or black holes in close binary star systems. Plasma is associated with ejection of material in astrophysical jets, which have been observed with accreting black holes or in active galaxies like M87's jet that possibly extends out to 5,000 light-years. Artificial plasmas Most artificial plasmas are generated by the application of electric and/or magnetic fields through a gas. Plasma generated in a laboratory setting and for industrial use can be generally categorized by: The type of power source used to generate the plasma—DC, AC (typically with radio frequency (RF)) and microwave The pressure they operate at—vacuum pressure (< 10 mTorr or 1 Pa), moderate pressure (≈1 Torr or 100 Pa), atmospheric pressure (760 Torr or 100 kPa) The degree of ionization within the plasma—fully, partially, or weakly ionized The temperature relationships within the plasma—thermal plasma (Te = Tion = Tgas), non-thermal or "cold" plasma (Te ≫ Tion = Tgas) The electrode configuration used to generate the plasma The magnetization of the particles within the plasma—magnetized (both ion and electrons are trapped in Larmor orbits by the magnetic field), partially magnetized (the electrons but not the ions are trapped by the magnetic field), non-magnetized (the magnetic field is too weak to trap the particles in orbits but may generate Lorentz forces) Generation of artificial plasma Just like the many uses of plasma, there are several means for its generation.
However, one principle is common to all of them: there must be energy input to produce and sustain it. For this case, plasma is generated when an electric current is applied across a dielectric gas or fluid (an electrically non-conducting material); a discharge tube is a simple example (DC used for simplicity). The potential difference and subsequent electric field pull the bound electrons (negative) toward the anode (positive electrode) while the cathode (negative electrode) pulls the nucleus. As the voltage increases, the current stresses the material (by electric polarization) beyond its dielectric limit (termed its dielectric strength) into a stage of electrical breakdown, marked by an electric spark, where the material transforms from being an insulator into a conductor (as it becomes increasingly ionized). The underlying process is the Townsend avalanche, where collisions between electrons and neutral gas atoms create more ions and electrons. The first impact of an electron on an atom results in one ion and two electrons. Therefore, the number of charged particles increases rapidly (in the millions) only "after about 20 successive sets of collisions", mainly due to a small mean free path (average distance travelled between collisions). Electric arc An electric arc is a continuous electric discharge between two electrodes, similar to lightning. With ample current density, the discharge forms a luminous arc, where the inter-electrode material (usually, a gas) undergoes various stages: saturation, breakdown, glow, transition, and thermal arc. The voltage rises to its maximum in the saturation stage, and thereafter it undergoes fluctuations of the various stages, while the current progressively increases throughout. Electrical resistance along the arc creates heat, which dissociates more gas molecules and ionizes the resulting atoms. Therefore, the electrical energy is given to electrons, which, due to their great mobility and large numbers, are able to disperse it rapidly by elastic collisions to the heavy particles. Examples of industrial plasma Plasmas find applications in many fields of research, technology and industry, for example, in industrial and extractive metallurgy, surface treatments such as plasma spraying (coating), etching in microelectronics, metal cutting and welding; as well as in everyday vehicle exhaust cleanup and fluorescent/luminescent lamps, fuel ignition, and even in supersonic combustion engines for aerospace engineering. Low-pressure discharges Glow discharge plasmas: non-thermal plasmas generated by the application of DC or low frequency RF (<100 kHz) electric field to the gap between two metal electrodes. Probably the most common plasma; this is the type of plasma generated within fluorescent light tubes. Capacitively coupled plasma (CCP): similar to glow discharge plasmas, but generated with high frequency RF electric fields, typically 13.56 MHz. These differ from glow discharges in that the sheaths are much less intense. These are widely used in the microfabrication and integrated circuit manufacturing industries for plasma etching and plasma enhanced chemical vapor deposition. Cascaded arc plasma source: a device to produce low temperature (≈1 eV) high density plasmas (HDP). Inductively coupled plasma (ICP): similar to a CCP and with similar applications but the electrode consists of a coil wrapped around the chamber where plasma is formed.
Wave heated plasma: similar to CCP and ICP in that it is typically RF (or microwave). Examples include helicon discharge and electron cyclotron resonance (ECR). Atmospheric pressure Arc discharge: this is a high power thermal discharge of very high temperature (≈10,000 K). It can be generated using various power supplies. It is commonly used in metallurgical processes. For example, it is used to smelt minerals containing Al2O3 to produce aluminium. Corona discharge: this is a non-thermal discharge generated by the application of high voltage to sharp electrode tips. It is commonly used in ozone generators and particle precipitators. Dielectric barrier discharge (DBD): this is a non-thermal discharge generated by the application of high voltages across small gaps wherein a non-conducting coating prevents the transition of the plasma discharge into an arc. It is often mislabeled "Corona" discharge in industry and has similar application to corona discharges. A common usage of this discharge is in a plasma actuator for vehicle drag reduction. It is also widely used in the web treatment of fabrics. The application of the discharge to synthetic fabrics and plastics functionalizes the surface and allows for paints, glues and similar materials to adhere. The dielectric barrier discharge was used in the mid-1990s to show that low temperature atmospheric pressure plasma is effective in inactivating bacterial cells. This work and later experiments using mammalian cells led to the establishment of a new field of research known as plasma medicine. The dielectric barrier discharge configuration was also used in the design of low temperature plasma jets. These plasma jets are produced by fast propagating guided ionization waves known as plasma bullets. Capacitive discharge: this is a nonthermal plasma generated by the application of RF power (e.g., 13.56 MHz) to one powered electrode, with a grounded electrode held at a small separation distance on the order of 1 cm. Such discharges are commonly stabilized using a noble gas such as helium or argon. Piezoelectric direct discharge plasma: this is a nonthermal plasma generated at the high side of a piezoelectric transformer (PT). This generation variant is particularly suited for highly efficient and compact devices where a separate high voltage power supply is not desired. MHD converters A world effort was triggered in the 1960s to study magnetohydrodynamic converters in order to bring MHD power conversion to market with commercial power plants of a new kind, converting the kinetic energy of a high velocity plasma into electricity with no moving parts at a high efficiency. Research was also conducted in the field of supersonic and hypersonic aerodynamics to study plasma interaction with magnetic fields to eventually achieve passive and even active flow control around vehicles or projectiles, in order to soften and mitigate shock waves, lower thermal transfer and reduce drag. Such ionized gases used in "plasma technology" ("technological" or "engineered" plasmas) are usually weakly ionized gases in the sense that only a tiny fraction of the gas molecules are ionized. These kinds of weakly ionized gases are also nonthermal "cold" plasmas. In the presence of magnetic fields, the study of such magnetized nonthermal weakly ionized gases involves resistive magnetohydrodynamics with low magnetic Reynolds number, a challenging field of plasma physics where calculations require dyadic tensors in a 7-dimensional phase space.
When used in combination with a high Hall parameter, a critical value triggers the problematic electrothermal instability, which limited these technological developments. Complex plasma phenomena Although the underlying equations governing plasmas are relatively simple, plasma behaviour is extraordinarily varied and subtle: the emergence of unexpected behaviour from a simple model is a typical feature of a complex system. Such systems lie in some sense on the boundary between ordered and disordered behaviour and cannot typically be described either by simple, smooth, mathematical functions, or by pure randomness. The spontaneous formation of interesting spatial features on a wide range of length scales is one manifestation of plasma complexity. The features are interesting, for example, because they are very sharp, spatially intermittent (the distance between features is much larger than the features themselves), or have a fractal form. Many of these features were first studied in the laboratory, and have subsequently been recognized throughout the universe. Examples of complexity and complex structures in plasmas include: Filamentation Striations or string-like structures are seen in many plasmas, like the plasma ball, the aurora, lightning, electric arcs, solar flares, and supernova remnants. They are sometimes associated with larger current densities, and the interaction with the magnetic field can form a magnetic rope structure. (See also Plasma pinch) Filamentation also refers to the self-focusing of a high power laser pulse. At high powers, the nonlinear part of the index of refraction becomes important and causes a higher index of refraction in the center of the laser beam, where the laser is brighter than at the edges, causing a feedback that focuses the laser even more. The tighter focused laser has a higher peak brightness (irradiance) that forms a plasma. The plasma has an index of refraction lower than one, and causes a defocusing of the laser beam. The interplay of the focusing index of refraction and the defocusing plasma results in the formation of a long filament of plasma that can be micrometers to kilometers in length. One interesting aspect of the filamentation generated plasma is the relatively low ion density due to defocusing effects of the ionized electrons. (See also Filament propagation) Impermeable plasma Impermeable plasma is a type of thermal plasma which acts like an impermeable solid with respect to gas or cold plasma and can be physically pushed. Interaction of cold gas and thermal plasma was briefly studied by a group led by Hannes Alfvén in the 1960s and 1970s for its possible applications in insulation of fusion plasma from the reactor walls. However, later it was found that the external magnetic fields in this configuration could induce kink instabilities in the plasma and subsequently lead to an unexpectedly high heat loss to the walls. In 2013, a group of materials scientists reported that they had successfully generated stable impermeable plasma with no magnetic confinement using only an ultrahigh-pressure blanket of cold gas. While spectroscopic data on the characteristics of plasma were claimed to be difficult to obtain due to the high pressure, the passive effect of plasma on synthesis of different nanostructures clearly suggested the effective confinement.
They also showed that upon maintaining the impermeability for a few tens of seconds, screening of ions at the plasma-gas interface could give rise to a strong secondary mode of heating (known as viscous heating) leading to different kinetics of reactions and formation of complex nanomaterials. Gallery See also Quark-gluon plasma References External links Plasmas: the Fourth State of Matter Introduction to Plasma Physics: Graduate course given by Richard Fitzpatrick; M.I.T. introduction by I. H. Hutchinson Plasma Material Interaction How to make a glowing ball of plasma in your microwave with a grape (video) OpenPIC3D – 3D Hybrid Particle-In-Cell simulation of plasma dynamics Plasma Formulary Interactive Plasma Electromagnetism Astrophysics Electrical conductors Gases Articles containing video clips
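As a closing numerical note, the magnetization criterion from the Properties section (ωce/νcoll > 1) can be checked directly; the magnetic field and collision rate below are assumed illustrative values, not data from the article.

```python
Q_E = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31  # electron mass, kg

def electron_gyrofrequency(b_tesla):
    """Electron gyrofrequency omega_ce = e*B/m_e in rad/s."""
    return Q_E * b_tesla / M_E

b_field = 0.01  # tesla (100 gauss), assumed
nu_coll = 1e8   # electron collision rate in 1/s, assumed

ratio = electron_gyrofrequency(b_field) / nu_coll
print(f"omega_ce / nu_coll = {ratio:.1f} ->",
      "electrons magnetized" if ratio > 1 else "electrons unmagnetized")
```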
Plasma (physics)
[ "Physics", "Chemistry", "Astronomy" ]
4,937
[ "Matter", "Physical phenomena", "Electromagnetism", "Astronomical sub-disciplines", "Plasma (physics)", "Phases of matter", "Astrophysics", "Materials", "Fundamental interactions", "Electrical conductors", "Statistical mechanics", "Gases" ]
25,916,835
https://en.wikipedia.org/wiki/Pitzer%20equations
Pitzer equations are important for the understanding of the behaviour of ions dissolved in natural waters such as rivers, lakes and sea-water. They were first described by physical chemist Kenneth Pitzer. The parameters of the Pitzer equations are linear combinations of parameters, of a virial expansion of the excess Gibbs free energy, which characterise interactions amongst ions and solvent. The derivation is thermodynamically rigorous at a given level of expansion. The parameters may be derived from various experimental data such as the osmotic coefficient, mixed ion activity coefficients, and salt solubility. They can be used to calculate mixed ion activity coefficients and water activities in solutions of high ionic strength for which the Debye–Hückel theory is no longer adequate. They are more rigorous than the equations of specific ion interaction theory (SIT theory), but Pitzer parameters are more difficult to determine experimentally than SIT parameters. Historical development A starting point for the development can be taken as the virial equation of state for a gas, PV = RT(1 + B/V + C/V² + ...), where P is the pressure, V is the volume, T is the temperature and B, C, D, ... are known as virial coefficients. The first term on the right-hand side is for an ideal gas. The remaining terms quantify the departure from the ideal gas law with changing pressure, P. It can be shown by statistical mechanics that the second virial coefficient arises from the intermolecular forces between pairs of molecules, the third virial coefficient involves interactions between three molecules, etc. This theory was developed by McMillan and Mayer. Solutions of uncharged molecules can be treated by a modification of the McMillan-Mayer theory. However, when a solution contains electrolytes, electrostatic interactions must also be taken into account. The Debye–Hückel theory was based on the assumption that each ion was surrounded by a spherical "cloud" or ionic atmosphere made up of ions of the opposite charge. Expressions were derived for the variation of single-ion activity coefficients as a function of ionic strength. This theory was very successful for dilute solutions of 1:1 electrolytes and, as discussed below, the Debye–Hückel expressions are still valid at sufficiently low concentrations. The values calculated with Debye–Hückel theory diverge more and more from observed values as the concentrations and/or ionic charges increase. Moreover, Debye–Hückel theory takes no account of the specific properties of ions such as size or shape. Brønsted had independently proposed an empirical equation, ln γ = −α√m − 2βm; 1 − φ = (α/3)√m + βm, in which the activity coefficient depended not only on ionic strength, but also on the concentration, m, of the specific ion through the parameter β. This is the basis of SIT theory. It was further developed by Guggenheim. Scatchard extended the theory to allow the interaction coefficients to vary with ionic strength. Note that the second form of Brønsted's equation is an expression for the osmotic coefficient. Measurement of osmotic coefficients provides one means for determining mean activity coefficients. The Pitzer parameters The exposition begins with a virial expansion of the excess Gibbs free energy, Gex/(Ww RT) = f(I) + Σij λij(I) bi bj + Σijk μijk bi bj bk, where Ww is the mass of the water in kilograms, bi, bj, ... are the molalities of the ions and I is the ionic strength. The first term, f(I), represents the Debye–Hückel limiting law. The quantities λij(I) represent the short-range interactions in the presence of solvent between solute particles i and j.
This binary interaction parameter or second virial coefficient depends on ionic strength, on the particular species i and j and the temperature and pressure. The quantities μijk represent the interactions between three particles. Higher terms may also be included in the virial expansion. Next, the free energy is expressed as the sum of chemical potentials, or partial molal free energy, and an expression for the activity coefficient is obtained by differentiating the virial expansion with respect to a molality b. For a simple electrolyte MpXq, at a concentration m, made up of ions Mz+ and Xz−, the parameters fφ, B and C are defined as combinations of the λ and μ coefficients. The term fφ is essentially the Debye–Hückel term. Terms involving μMMM and μXXX are not included as interactions between three ions of the same charge are unlikely to occur except in very concentrated solutions. The B parameter was found empirically to show an ionic strength dependence (in the absence of ion-pairing) which could be expressed as B = β(0) + β(1) exp(−α√I). With these definitions, the osmotic coefficient can be expressed in terms of fφ, B and C. A similar expression is obtained for the mean activity coefficient. These equations were applied to an extensive range of experimental data at 25 °C with excellent agreement to about 6 mol kg−1 for various types of electrolyte. The treatment can be extended to mixed electrolytes and to include association equilibria. Values for the parameters β(0), β(1) and C for inorganic and organic acids, bases and salts have been tabulated. Temperature and pressure variation is also discussed. One area of application of Pitzer parameters is to describe the ionic strength variation of equilibrium constants measured as concentration quotients. Both SIT and Pitzer parameters have been used in this context. For example, both sets of parameters were calculated for some uranium complexes and were found to account equally well for the ionic strength dependence of the stability constants. Pitzer parameters and SIT theory have been extensively compared. There are more parameters in the Pitzer equations than in the SIT equations. Because of this the Pitzer equations provide for more precise modelling of mean activity coefficient data and equilibrium constants. However, the greater number of Pitzer parameters means that they are more difficult to determine. Compilation of Pitzer parameters Besides the set of parameters obtained by Pitzer et al. in the 1970s mentioned in the previous section, Kim and Frederick published the Pitzer parameters for 304 single salts in aqueous solutions at 298.15 K, and extended the model to the concentration range up to the saturation point. Those parameters are widely used; however, many complex electrolytes including ones with organic anions or cations, which are very significant in some related fields, were not summarized in their paper. For some complex electrolytes, Ge et al. obtained the new set of Pitzer parameters using up-to-date measured or critically reviewed osmotic coefficient or activity coefficient data. Comparable activity coefficient models Besides the well-known Pitzer-like equations, there is a simple and easy-to-use semi-empirical model, which is called the three-characteristic-parameter correlation (TCPC) model. It was first proposed by Lin et al. It is a combination of the Pitzer long-range interaction and short-range solvation effect: ln γ = ln γPDH + ln γSV. Ge et al. modified this model, and obtained the TCPC parameters for a larger number of single salt aqueous solutions.
This model was also extended for a number of electrolytes dissolved in methanol, ethanol, 2-propanol, and so on. Temperature-dependent parameters for a number of common single salts were also compiled and are available in the literature. The performance of the TCPC model in correlation with the measured activity coefficient or osmotic coefficients is found to be comparable with Pitzer-like models. Due to its empirical aspects, the Pitzer modelling framework has a number of well-known limitations. Most importantly, to improve the fits to experimental data, different variations of the equations have been described. Extrapolations, especially in the temperature and pressure domain, are generally problematic. One alternative modelling approach has been specifically designed to address this extrapolation issue by reducing the number of equation parameters while maintaining similar predictive precision and accuracy. See also Bromley equation Davies equation Osmotic coefficient References Chapter 3: Pitzer, K.S. Ion interaction approach: theory and data correlation, pp. 75–153. Thermodynamic equations Chemical thermodynamics Equilibrium chemistry Electrochemical equations
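To illustrate the ionic-strength dependence B = β(0) + β(1)·exp(−α√I) quoted in the Pitzer parameters section, the sketch below evaluates it over a range of ionic strengths. The value α = 2.0 is the one conventionally used for simple 1:1 electrolytes, while the β values here are order-of-magnitude placeholders rather than tabulated parameters.

```python
import math

def pitzer_b(ionic_strength, beta0, beta1, alpha=2.0):
    """Second virial coefficient B(I) = beta0 + beta1 * exp(-alpha * sqrt(I))."""
    return beta0 + beta1 * math.exp(-alpha * math.sqrt(ionic_strength))

beta0, beta1 = 0.08, 0.27  # placeholder magnitudes, not tabulated values
for i in (0.01, 0.1, 1.0, 6.0):  # ionic strength, mol/kg
    print(f"I = {i:5.2f} mol/kg: B = {pitzer_b(i, beta0, beta1):.4f}")
```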
Pitzer equations
[ "Physics", "Chemistry", "Mathematics" ]
1,633
[ "Thermodynamic equations", "Equations of physics", "Mathematical objects", "Equations", "Equilibrium chemistry", "Electrochemistry", "Thermodynamics", "Chemical thermodynamics", "Electrochemical equations" ]
25,920,056
https://en.wikipedia.org/wiki/Rating%20curve
In hydrology, a rating curve is a graph of discharge versus stage for a given point on a stream, usually at gauging stations, where the stream discharge is measured across the stream channel with a flow meter. Numerous measurements of stream discharge are made over a range of stream stages. The rating curve is usually plotted as discharge on the x-axis versus stage (surface elevation) on the y-axis. The development of a rating curve involves two steps. In the first step, the relationship between stage and discharge is established by measuring the stage and corresponding discharge in the river. In the second step, the stage of the river is measured and the discharge is calculated by using the relationship established in the first step. Stage is measured by reading a gauge installed in the river. If the stage-discharge relationship does not change with time, it is called permanent control. If the relationship does change, it is called shifting control. Shifting control is usually due to erosion or deposition of sediment at the stage measurement site. Bedrock-bottomed parts of rivers or concrete/metal weirs or structures are often, though not always, permanent controls. If G represents stage for discharge Q, then the relationship between G and Q can possibly be approximated with an equation of the form Q = Cr(G − a)^β, where Cr and β are rating curve constants, and a is a constant which represents the gauge reading corresponding to zero discharge. The constant a can be measured when a stream is flowing under "section control" as the surveyed gauge height of the lowest point of the section control feature. When a stream is flowing under "channel control" conditions, the parameter a does not have a physical analogue and must be estimated by following standard methods given in literature. The parameter β is typically in the range of 2.0 to 3.0 when a stream is flowing under section control, and in the range of 1.0 to 2.0 when a stream is flowing under channel control. A stream will typically transition from section control at lower gauge heights to channel control at higher gauge heights. The transition from section control to channel control can often be inferred by a change in the slope of a rating curve when plotted on log-log graph paper. References Hydrology
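A minimal sketch of the second step described above: once the rating-curve constants have been established, discharge follows directly from a gauge reading. The constants below are arbitrary illustrations, not values from any real gauging station.

```python
def discharge(gauge, c_r, beta, a):
    """Rating-curve discharge Q = C_r * (G - a)**beta for gauge reading G."""
    if gauge <= a:
        return 0.0  # at or below the gauge height of zero flow
    return c_r * (gauge - a) ** beta

# Arbitrary constants: beta chosen in the 2.0-3.0 "section control" range.
c_r, beta, a = 15.0, 2.2, 0.30

for g in (0.5, 1.0, 2.0, 4.0):  # gauge readings in metres
    print(f"stage {g:.1f} m -> Q = {discharge(g, c_r, beta, a):8.1f} m^3/s")
```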
Rating curve
[ "Chemistry", "Engineering", "Environmental_science" ]
434
[ "Hydrology", "Hydrology stubs", "Environmental engineering" ]
34,622,141
https://en.wikipedia.org/wiki/Tseytin%20transformation
The Tseytin transformation, alternatively written Tseitin transformation, takes as input an arbitrary combinatorial logic circuit and produces an equisatisfiable boolean formula in conjunctive normal form (CNF). The length of the formula is linear in the size of the circuit. Input vectors that make the circuit output "true" are in 1-to-1 correspondence with assignments that satisfy the formula. This reduces the problem of circuit satisfiability on any circuit (including any formula) to the satisfiability problem on 3-CNF formulas. It was discovered by the Russian scientist Grigori Tseitin. Motivation The naive approach is to write the circuit as a Boolean expression, and use De Morgan's law and the distributive property to convert it to CNF. However, this can result in an exponential increase in equation size. The Tseytin transformation outputs a formula whose size grows linearly relative to the input circuit's. Approach The output equation is the constant 1 set equal to an expression. This expression is a conjunction of sub-expressions, where the satisfaction of each sub-expression enforces the proper operation of a single gate in the input circuit. The satisfaction of the entire output expression thus enforces that the entire input circuit is operating properly. For each gate, a new variable representing its output is introduced. A small pre-calculated CNF expression that relates the inputs and outputs is appended (via the "and" operation) to the output expression. Note that inputs to these gates can be either the original literals or the introduced variables representing outputs of sub-gates. Though the output expression contains more variables than the input, it remains equisatisfiable, meaning that it is satisfiable if, and only if, the original input equation is satisfiable. When a satisfying assignment of variables is found, those assignments for the introduced variables can simply be discarded. A final clause is appended with a single literal: the final gate's output variable. If this literal is complemented, then the satisfaction of this clause forces the output expression to false; otherwise the expression is forced true. Examples Consider the following formula . Consider all subformulas (excluding simple variables): Introduce a new variable for each subformula: Conjunct all substitutions and the substitution for : All substitutions can be transformed into CNF, e.g. Gate sub-expressions Listed are some of the possible sub-expressions that can be created for various logic gates. In an operation expression, C acts as an output; in a CNF sub-expression, C acts as a new Boolean variable. For each operation, the CNF sub-expression is true if and only if C adheres to the contract of the Boolean operation for all possible input values. Simple combinatorial logic The following circuit returns true when at least some of its inputs are true, but not more than two at a time. It implements the equation . A variable is introduced for each gate's output; here each is marked in red: Notice that the output of the inverter with x as an input has two variables introduced. While this is redundant, it does not affect the equisatisfiability of the resulting equation. Now substitute each gate with its appropriate CNF sub-expression: The final output variable is gate8 so to enforce that the output of this circuit be true, one final simple clause is appended: (gate8). 
Combining these equations results in the final instance of SAT: (gate1 ∨ x1) ∧ ( ∨ ) ∧ ( ∨ gate1) ∧ ( ∨ x2) ∧ ( ∨ gate2 ∨ ) ∧ (gate3 ∨ x2) ∧ ( ∨ ) ∧ ( ∨ x1) ∧ ( ∨ gate3) ∧ ( ∨ gate4 ∨ ) ∧ (gate5 ∨ x2) ∧ ( ∨ ) ∧ ( ∨ gate5) ∧ ( ∨ x3) ∧ ( ∨ gate6 ∨ ) ∧ (gate7 ∨ ) ∧ (gate7 ∨ ) ∧ (gate2 ∨ ∨ gate4) ∧ (gate8 ∨ ) ∧ (gate8 ∨ ) ∧ (gate6 ∨ ∨ gate7) ∧ (gate8) = 1 One possible satisfying assignment of these variables is: The values of the introduced variables are usually discarded, but they can be used to trace the logic path in the original circuit. Here, indeed meets the criteria for the original circuit to output true. To find a different answer, the clause (x1 ∨ x2 ∨ ) can be appended and the SAT solver executed again. Derivation Presented is one possible derivation of the CNF sub-expression for some chosen gates: OR Gate An OR gate with two inputs A and B and one output C satisfies the following conditions: if the output C is true, then at least one of its inputs A or B is true, if the output C is false, then both its inputs A and B are false. We can express these two conditions as the conjunction of two implications: Replacing the implications with equivalent expressions involving only conjunctions, disjunctions, and negations yields which is nearly in conjunctive normal form already. Distributing the rightmost clause twice yields and applying the associativity of conjunction gives the CNF formula NOT Gate The NOT gate is operating properly when its input and output oppose each other. That is: if the output C is true, the input A is false if the output C is false, the input A is true Express these conditions as an expression that must be satisfied: NOR Gate The NOR gate is operating properly when the following conditions hold: if the output C is true, then neither A nor B is true if the output C is false, then at least one of A and B is true Express these conditions as an expression that must be satisfied: References G.S. Tseytin: On the complexity of derivation in propositional calculus. In: Slisenko, A.O. (ed.) Studies in Constructive Mathematics and Mathematical Logic, Part II, Seminars in Mathematics, pp. 115–125. Steklov Mathematical Institute (1970). Translated from Russian: Zapiski Nauchnykh Seminarov LOMI 8 (1968), pp. 234–259. G.S. Tseytin: On the complexity of derivation in propositional calculus. Presented at the Leningrad Seminar on Mathematical Logic held in September 1966. Logic gates Logic in computer science
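The gate encodings above translate directly into a small clause generator. The following Python sketch (not from the article; the variable numbering scheme and the tiny example circuit are illustrative choices) emits DIMACS-style integer clauses, allocating one fresh variable per gate exactly as the transformation prescribes:

from itertools import count

_fresh = count(1)

def new_var():
    # Allocate the next unused variable number.
    return next(_fresh)

def gate_and(a, b, clauses):
    # Encode c <-> (a AND b); negative integers denote negated literals.
    c = new_var()
    clauses += [[-a, -b, c], [a, -c], [b, -c]]
    return c

def gate_or(a, b, clauses):
    # Encode c <-> (a OR b).
    c = new_var()
    clauses += [[a, b, -c], [-a, c], [-b, c]]
    return c

def gate_not(a, clauses):
    # Encode c <-> (NOT a).
    c = new_var()
    clauses += [[-a, -c], [a, c]]
    return c

clauses = []
x1, x2, x3 = new_var(), new_var(), new_var()
# Tseytin-transform the circuit y = (x1 AND x2) OR (NOT x3).
y = gate_or(gate_and(x1, x2, clauses), gate_not(x3, clauses), clauses)
clauses.append([y])  # final unit clause: assert the circuit output is true
print(clauses)       # ready to hand to any CNF SAT solver

The clause count here is three per AND/OR gate and two per NOT gate, which is the linear growth the transformation is designed to achieve.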
Tseytin transformation
[ "Mathematics" ]
1,331
[ "Mathematical logic", "Logic in computer science" ]
34,623,859
https://en.wikipedia.org/wiki/BuildingSMART
buildingSMART, formerly the International Alliance for Interoperability (IAI), is an international organisation which aims to improve the exchange of information between software applications used in the construction industry. It has developed Industry Foundation Classes (IFCs) as a neutral and open specification for Building Information Models (BIM) as well as Information Delivery Specification (IDS). History The IAI started in 1994 as an industry consortium of 12 US companies invited by Autodesk to advise on developing a set of C++ classes to support integrated application development. The other founding members were AT&T; Archibus; Carrier Corporation; Hellmuth, Obata & Kassabaum (HOK); Honeywell; Jaros, Baum & Bolles (JB&B); Lawrence Berkeley Laboratory; Primavera Systems; Softdesk; Timberline Software Corp; and Tishman Research Corp (part of Tishman Realty & Construction). The new technology was first demonstrated in June 1995 in Atlanta at A/E/C SYSTEMS '95. This Industry Alliance for Interoperability opened membership to all interested parties in September 1995 and in May 1996 was renamed the International Alliance for Interoperability as Autodesk users insisted that the IFCs should be non-proprietary and urged development of the IFC standard. The first version of IFC was published in June 1996 at which point 26 companies, including Autodesk, Bentley, Nemetschek and IEZ, committed to making their software IFC-compliant. The IAI was reconstituted as a not-for-profit industry-led organisation, promoting the Industry Foundation Class (IFC) as a neutral product model supporting the building lifecycle. In 2005, partly because its members felt the IAI name was too long and complex for people to understand, it was renamed buildingSMART. It has regional chapters in Europe, North America, Australia, Asia and the Middle East. Activities BuildingSMART says it develops and maintains international standards for openBIM, combining: buildingSMART Processes - information delivery manuals buildingSMART Data Dictionary (bsDD) buildingSMART Data model - the organisation manages the software-neutral Industry Foundation Classes (IFC) data model buildingSMART also maintains the BIM Collaboration Format (BCF), a structured file format used for issue tracking in relation to building information models. Chapters BuildingSMART has several chapters around the world. Australasia Benelux Canada China Finland France German speaking Hong Kong Italy India Japan Korea Malaysia Nordic North America Norway Poland Portugal Russia Singapore Spain Switzerland UK & Ireland (in January 2018, the UK chapter merged with the UK BIM Alliance) External links References Industrial computing 1994 establishments in Germany Building information modeling
BuildingSMART
[ "Technology", "Engineering" ]
547
[ "Building engineering", "Industrial engineering", "Automation", "Building information modeling", "Industrial computing" ]
34,624,069
https://en.wikipedia.org/wiki/Siconos
SICONOS is an open-source scientific software package primarily targeted at modeling and simulating non-smooth dynamical systems (NSDS): Mechanical systems (Rigid body or solid) with Unilateral contact and Coulomb friction as found in Non-smooth mechanics, Contact dynamics or Granular material. Switched Electrical Circuit such as Power converter, Rectifier, Phase-locked loop (PLL) or Analog-to-digital converter Sliding mode control systems Other applications are found in Systems and Control (hybrid systems, differential inclusions, optimal control with state constraints), Optimization (Complementarity problem and Variational inequality), Biology (Gene regulatory network), Fluid Mechanics and Computer graphics, etc. Components The software is based on 3 main components Siconos/Numerics (C API). Collection of low-level algorithms for solving basic algebra and optimization problems arising in the simulation of nonsmooth dynamical systems: Linear complementarity problem (LCP) Mixed linear complementarity problem (MLCP) Nonlinear complementarity problem (NCP) Quadratic programming problems (QP) Friction-contact problems (2D or 3D) (Second-order cone programming (SOCP)) Primal or Dual Relay problems Siconos/Kernel. A C++ API that allows one to model and simulate nonsmooth dynamical systems. It contains Dynamical system classes: first-order systems, Lagrangian systems, Newton-Euler systems Nonsmooth laws: complementarity, Relay, Friction, Contact, impact Siconos/Front-end (Python API). Mainly an auto-generated SWIG interface of the C++ API, with special support for data structures. Performance According to peer-reviewed studies published by its developers, Siconos was approximately five times faster than Ngspice or ELDO (a commercial SPICE by Mentor Graphics) and 250 times faster than PLECS when solving a buck converter. See also (an extension of the notion of differential equation) on which much of the NSDS theory relies, which affects ODEs/DAEs for functions with "sharp turns" and which affects numerical convergence References External links The official Siconos site other related publications Free science software Free software programmed in C Free software programmed in C++ Software using the Apache license Cross-platform free software Free software for Linux Free software for Windows Free software for macOS Dynamical systems Scientific simulation software
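To make the LCP entry in the list above concrete: a linear complementarity problem asks for z >= 0 with w = Mz + q >= 0 and z . w = 0. The sketch below is a generic projected Gauss-Seidel iteration in Python; it is not the Siconos/Numerics API (whose function names differ) and is only meant to show the shape of the problem class the library solves:

import numpy as np

def pgs_lcp(M, q, iters=500, tol=1e-10):
    # Projected Gauss-Seidel for the LCP: find z >= 0 with
    # w = M z + q >= 0 and z . w = 0 (complementarity).
    z = np.zeros(len(q))
    for _ in range(iters):
        z_prev = z.copy()
        for i in range(len(q)):
            # Row residual excluding the diagonal contribution.
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
        if np.linalg.norm(z - z_prev) < tol:
            break
    return z, M @ z + q

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z, w = pgs_lcp(M, q)  # here z ~ [4/3, 7/3] and w ~ 0, satisfying z . w = 0

Projected Gauss-Seidel is a standard baseline for such problems when M is symmetric positive definite; production solvers such as those in Siconos offer a wider range of methods.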
Siconos
[ "Physics", "Mathematics" ]
493
[ "Mechanics", "Dynamical systems" ]
34,626,610
https://en.wikipedia.org/wiki/LisH%20domain
In molecular biology, the LisH domain (lis homology domain) is a protein domain found in a large number of eukaryotic proteins, from metazoa, fungi and plants, that have a wide range of functions. The recently solved structure of the LisH domain in the N-terminal region of LIS1 revealed it to be a novel dimerisation motif, and suggested that other structural elements are likely to play an important role in dimerisation. The LisH domain is found in the Saccharomyces cerevisiae SIF2 protein, a component of the SET3 complex, which is responsible for repressing meiotic genes. In SIF2 the LisH domain has been shown to mediate dimer and tetramer formation, and to help mediate interaction with components of the SET3 complex. References Protein domains
LisH domain
[ "Biology" ]
180
[ "Protein domains", "Protein classification" ]
34,627,159
https://en.wikipedia.org/wiki/Lon%20protease%20family
In molecular biology, the Lon protease family is a family of enzymes that break peptide bonds in proteins, resulting in smaller peptides or amino acids. They are found in archaea, bacteria and eukaryotes. Lon proteases are ATP-dependent serine peptidases belonging to the MEROPS peptidase family S16 (Lon protease family, clan SJ). In eukaryotes, the majority of the Lon proteases are located in the mitochondrial matrix. In yeast, the Lon protease PIM1 is located in the mitochondrial matrix. It is required for mitochondrial function; it is constitutively expressed, but its expression increases after thermal stress, suggesting that PIM1 may play a role in the heat shock response. Lon proteases comprise two subfamilies, LonA and LonB, differentiated by the number of AAA+ domains found in the protein. See also LONP1 References External links MEROPS family S16 Protein families
Lon protease family
[ "Biology" ]
211
[ "Protein families", "Protein classification" ]
34,629,138
https://en.wikipedia.org/wiki/Conformal%20geometric%20algebra
Conformal geometric algebra (CGA) is the geometric algebra constructed over the resultant space of a map from points in an n-dimensional base space Rp,q to null vectors in Rp+1,q+1. This allows operations on the base space, including reflections, rotations and translations, to be represented using versors of the geometric algebra; and it is found that points, lines, planes, circles and spheres gain particularly natural and computationally amenable representations. The effect of the mapping is that generalized (i.e. including zero curvature) k-spheres in the base space map onto (k + 2)-blades, and so that the effect of a translation (or any conformal mapping) of the base space corresponds to a rotation in the higher-dimensional space. In the algebra of this space, based on the geometric product of vectors, such transformations correspond to the algebra's characteristic sandwich operations, similar to the use of quaternions for spatial rotation in 3D, which combine very efficiently. A consequence of rotors representing transformations is that the representations of spheres, planes, circles and other geometrical objects, and equations connecting them, all transform covariantly. A geometric object (a k-sphere) can be synthesized as the wedge product of k + 2 linearly independent vectors representing points on the object; conversely, the object can be decomposed as the repeated wedge product of vectors representing distinct points in its surface. Some intersection operations also acquire a tidy algebraic form: for example, for the Euclidean base space R3, applying the wedge product to the dual of the tetravectors representing two spheres produces the dual of the trivector representation of their circle of intersection. As this algebraic structure lends itself directly to effective computation, it facilitates exploration of the classical methods of projective geometry and inversive geometry in a concrete, easy-to-manipulate setting. It has also been used as an efficient structure to represent and facilitate calculations in screw theory. CGA has particularly been applied in connection with the projective mapping of the everyday Euclidean space R3 into a five-dimensional vector space R4,1, which has been investigated for applications in robotics and computer vision. It can be applied generally to any pseudo-Euclidean space; for example, Minkowski space R3,1 maps to the space R4,2. Construction of CGA Notation and terminology In this article, the focus is on the algebra of R4,1, as it is this particular algebra that has been the subject of most attention over time; other cases are briefly covered in a separate section. The space containing the objects being modelled is referred to here as the base space, and the algebraic space used to model these objects as the representation or conformal space. A homogeneous subspace refers to a linear subspace of the algebraic space. The terms for objects: point, line, circle, sphere, quasi-sphere etc. are used to mean either the geometric object in the base space, or the homogeneous subspace of the representation space that represents that object, with the latter generally being intended unless indicated otherwise. Algebraically, any nonzero null element of the homogeneous subspace will be used, with one element being referred to as normalized by some criterion. Boldface lowercase Latin letters are used to represent position vectors from the origin to a point in the base space. Italic symbols are used for other elements of the representation space. 
Base and representation spaces The base space Rp,q is represented by extending a basis for the displacements from a chosen origin and adding two basis vectors e− and e+ orthogonal to the base space and to each other, with e−^2 = −1 and e+^2 = +1, creating the representation space Rp+1,q+1. It is convenient to use two null vectors no and n∞ as basis vectors in place of e+ and e−, where no = (e− − e+)/2 and n∞ = e− + e+. It can be verified, where x is in the base space, that: no^2 = 0, n∞^2 = 0, no . n∞ = −1, no . x = 0, n∞ . x = 0. These properties lead to the following formulas for the basis vector coefficients of a general vector r in the representation space for a basis with elements orthogonal to every other basis element: The coefficient of no for r is −n∞ . r The coefficient of n∞ for r is −no . r The coefficient of each base-space basis vector ei for r is ei^(−1) . r. Mapping between the base space and the representation space The mapping from a vector x in the base space (being from the origin to a point in the affine space represented) is given by the formula: g(x) = no + x + (1/2) x^2 n∞. Points and other objects that differ only by a nonzero scalar factor all map to the same object in the base space. When normalisation is desired, as for generating a simple reverse map of a point from the representation space to the base space or determining distances, the condition g(x) . n∞ = −1 may be used. The forward mapping is equivalent to: first conformally projecting from onto a unit 3-sphere in the space (in 5-D this is in the subspace ); then lift this into a projective space, by adjoining , and identifying all points on the same ray from the origin (in 5-D this is in the subspace ); then change the normalisation, so the plane for the homogeneous projection is given by the co-ordinate having a value , i.e. . Inverse mapping An inverse mapping for X on the null cone is given (Perwass eqn 4.37) by x = P(X / (−X . n∞)), where P denotes the rejection of the no and n∞ components. This first gives a stereographic projection from the light-cone onto the plane r . n∞ = −1, and then throws away the no and n∞ parts, so that the overall result is to map all of the equivalent points αX to x. Origin and point at infinity The point x = 0 in Rp,q maps to no in Rp+1,q+1, so no is identified as the (representation) vector of the point at the origin. A vector in Rp+1,q+1 with a nonzero n∞ coefficient, but a zero no coefficient, must (considering the inverse map) be the image of an infinite vector in Rp,q. The direction n∞ therefore represents the (conformal) point at infinity. This motivates the subscripts o and ∞ for identifying the null basis vectors. The choice of the origin is arbitrary: any other point may be chosen, as the representation is of an affine space. The origin merely represents a reference point, and is algebraically equivalent to any other point. As with any translation, changing the origin corresponds to a rotation in the representation space. Geometrical objects Basis Together with no and n∞, these are the 32 basis blades of the algebra. The Flat Point Origin is written as an outer product because the geometric product is of mixed grade. As the solution of a pair of equations Given any nonzero blade A of the representing space, the set of vectors that are solutions to a pair of homogeneous equations of the form X^2 = 0 and X ∧ A = 0 is the union of homogeneous 1-d subspaces of null vectors, and is thus a representation of a set of points in the base space. This leads to the choice of a blade A as being a useful way to represent a particular class of geometric objects. Specific cases for the blade A (independent of the number of dimensions of the space) when the base space is Euclidean space are: a scalar: the empty set a vector: a single point a bivector: a pair of points a trivector: a generalized circle a 4-vector: a generalized sphere etc. 
These each may split into three cases according to whether A^2 is positive, zero or negative, corresponding (in reversed order in some cases) to the object as listed, a degenerate case of a single point, or no points (where the nonzero solutions of X ∧ A = 0 exclude null vectors). The listed geometric objects (generalized k-spheres) become quasi-spheres in the more general case of the base space being pseudo-Euclidean. Flat objects may be identified by the point at infinity being included in the solutions. Thus, if n∞ ∧ A = 0, the object will be a line, plane, etc., for the blade A respectively being of grade 3, 4, etc. As derived from points of the object A blade A representing one of this class of objects may be found as the outer product of linearly independent vectors representing points on the object. In the base space, this linear independence manifests as each point lying outside the object defined by the other points. So, for example, a fourth point lying on the generalized circle defined by three distinct points cannot be used as a fourth point to define a sphere. odds Points in e123 map onto the null cone—the null parabola if we set r . n∞ = −1. We can consider the locus of points x in e123 s.t., in conformal space, g(x) . A = 0, for various types of geometrical object A. We start by observing that, compare: x . a = 0 => x perp a; x . (a∧b) = 0 => x perp a and x perp b x∧a = 0 => x parallel to a; x∧(a∧b) = 0 => x parallel to a or to b (or to some linear combination) the inner product and outer product representations are related by dualisation x∧A = 0 <=> x . A* = 0 (check—works if x is 1-dim, A is n-1 dim) g(x) . A = 0 A point: the locus of x in R3 is a point if A in R4,1 is a vector on the null cone. (N.B. that because it's a homogeneous projective space, vectors of any length on a ray through the origin are equivalent, so g(x).A =0 is equivalent to g(x).g(a) = 0). A sphere: the locus of x is a sphere if A = S, a vector off the null cone. If then S.X = 0 => these are the points corresponding to a sphere for a vector S off the null-cone, which directions are hyperbolically orthogonal? (cf Lorentz transformation pix) in 2+1 D, if S is (1,a,b), (using co-ords e-, {e+, ei}), the points hyperbolically orthogonal to S are those euclideanly orthogonal to (-1,a,b)—i.e., a plane; or in n dimensions, a hyperplane through the origin. This would cut another plane not through the origin in a line (a hypersurface in an n-2 surface), and then the cone in two points (resp. some sort of n-3 conic surface). So it's going to probably look like some kind of conic. This is the surface that is the image of a sphere under g. A plane: the locus of x is a plane if A = P, a vector with a zero no component. In a homogeneous projective space such a vector P represents a vector on the plane no=1 that would be infinitely far from the origin (ie infinitely far outside the null cone), so g(x).P =0 corresponds to x on a sphere of infinite radius, a plane. In particular: corresponds to x on a plane with normal an orthogonal distance α from the origin. corresponds to a plane halfway between a and b, with normal a - b circles tangent planes lines lines at infinity point pairs Transformations reflections It can be verified that forming P g(x) P gives a new direction on the null-cone, g(x' ), where x' corresponds to a reflection in the plane of points p in R3 that satisfy g(p) . P = 0. g(x) . A = 0 => P g(x) . A P = 0 => P g(x) P . 
P A P = 0 (and similarly for the wedge product), so the effect of applying P sandwich-fashion to any of the quantities A in the section above is similarly to reflect the corresponding locus of points x, so the corresponding circles, spheres, lines and planes corresponding to particular types of A are reflected in exactly the same way that applying P to g(x) reflects a point x. This reflection operation can be used to build up general translations and rotations: translations Reflection in two parallel planes gives a translation, If and then rotations corresponds to an x' that is rotated about the origin by an angle 2 θ where θ is the angle between a and b, the same effect that this rotor would have if applied directly to x. general rotations rotations about a general point can be achieved by first translating the point to the origin, then rotating around the origin, then translating the point back to its original position, i.e. a sandwiching by the operator so screws the effect of a screw, or motor, (a rotation about a general point, followed by a translation parallel to the axis of rotation) can be achieved by sandwiching g(x) by the operator M. M can also be parametrised (Chasles' theorem) inversions an inversion is a reflection in a sphere – various operations that can be achieved using such inversions are discussed at inversive geometry. In particular, the combination of inversion together with the Euclidean transformations translation and rotation is sufficient to express any conformal mapping – i.e. any mapping that universally preserves angles. (Liouville's theorem). dilations two inversions with the same centre produce a dilation. Generalizations History Conferences and journals There is a vibrant and interdisciplinary community around Clifford and Geometric Algebras with a wide range of applications. The main conferences in this subject include the International Conference on Clifford Algebras and their Applications in Mathematical Physics (ICCA) and Applications of Geometric Algebra in Computer Science and Engineering (AGACSE) series. A main publication outlet is the Springer journal Advances in Applied Clifford Algebras. Notes References Bibliography Books Hestenes et al (2000), in G. Sommer (ed.), Geometric Computing with Clifford Algebra. Springer Verlag. (Google books) (https://davidhestenes.net/geocalc/html/UAFCG.html Hestenes website) Ch. 1: New algebraic tools for classical geometry Ch. 2: Generalized Homogeneous Coordinates for Computational Geometry Ch. 3: Spherical Conformal Geometry with Geometric Algebra Ch. 4: A Universal Model for Conformal Geometries of Euclidean, Spherical and Double-Hyperbolic Spaces Hestenes (2001), in E. Bayro-Corrochano & G. Sobczyk (eds.), Advances in Geometric Algebra with Applications in Science and Engineering, Springer Verlag. Google books Old Wine in New Bottles (pp. 1–14) Hestenes (2010), in E. Bayro-Corrochano and G. Scheuermann (2010), Geometric Algebra Computing in Engineering and Computer Science. Springer Verlag. (Google books). New Tools for Computational Geometry and rejuvenation of Screw Theory Doran, C. and Lasenby, A. (2003), Geometric algebra for physicists, Cambridge University Press. §10.2; p. 351 et seq Dorst, L. et al (2007), Geometric Algebra for Computer Science, Morgan-Kaufmann. Chapter 13; p. 355 et seq Vince, J. (2008), Geometric Algebra for Computer Graphics, Springer Verlag. Chapter 11; p. 199 et seq Perwass, C. (2009), Geometric Algebra with Applications in Engineering, Springer Verlag. §4.3: p. 
145 et seq Bayro-Corrochano, E. and Scheuermann G. (2010, eds.), Geometric Algebra Computing in Engineering and Computer Science. Springer Verlag. pp. 3–90 Bayro-Corrochano (2010), Geometric Computing for Wavelet Transforms, Robot Vision, Learning, Control and Action. Springer Verlag. Chapter 6; pp. 149–183 Dorst, L. and Lasenby, J. (2011, eds.), Guide to Geometric Algebra in Practice. Springer Verlag, pp. 3–252. . Online resources Wareham, R. (2006), Computer Graphics using Conformal Geometric Algebra, PhD thesis, University of Cambridge, pp. 14–26, 31—67 Bromborsky, A. (2008), Conformal Geometry via Geometric Algebra (Online slides) Dell’Acqua, A. et al (2008), 3D Motion from structures of points, lines and planes, Image and Vision Computing, 26 529–549 Dorst, L. (2010), Tutorial: Structure-Preserving Representation of Euclidean Motions through Conformal Geometric Algebra, in E. Bayro-Corrochano, G. Scheuermann (eds.), Geometric Algebra Computing, Springer Verlag. Colapinto, P. (2011), VERSOR Spatial Computing with Conformal Geometric Algebra, MSc thesis, University of California Santa Barbara Macdonald, A. (2013), A Survey of Geometric Algebra and Geometric Calculus. (Online notes) §4.2: p. 26 et seq. on the motor algebra over Rn+1: Eduardo Bayro Corrochano (2001), Geometric computing for perception action systems: Concepts, algorithms and scientific applications. (Google books) Geometric algebra Conformal geometry Inversive geometry Computational geometry
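As a concrete check on the mapping g(x) = no + x + (1/2) x^2 n∞ described in the construction above, here is a small numpy sketch (illustrative, not taken from any of the referenced texts; the coordinate layout is an assumption) that builds the null basis for R4,1 and verifies the standard identities, including g(x) . g(y) = −|x − y|^2 / 2:

import numpy as np

# Representation space R^{4,1}: basis (e1, e2, e3, e+, e-) with
# metric signature (+, +, +, +, -); vectors are 5-component arrays.
METRIC = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])

def ip(u, v):
    # Inner product of two R^{4,1} vectors under the (4,1) metric.
    return u @ METRIC @ v

E_PLUS = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
E_MINUS = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# Null basis: no represents the origin, n_inf the point at infinity.
N_O = 0.5 * (E_MINUS - E_PLUS)
N_INF = E_MINUS + E_PLUS

def embed(x):
    # Map a Euclidean point x in R^3 to a null vector in R^{4,1}.
    x = np.asarray(x, dtype=float)
    up = np.zeros(5)
    up[:3] = x
    return N_O + up + 0.5 * x.dot(x) * N_INF

p, q = embed([1.0, 2.0, 3.0]), embed([4.0, 6.0, 3.0])
assert abs(ip(N_O, N_INF) + 1.0) < 1e-12   # no . n_inf = -1
assert abs(ip(p, p)) < 1e-12               # images of points are null
assert abs(ip(p, N_INF) + 1.0) < 1e-12     # normalisation g(x) . n_inf = -1
assert abs(ip(p, q) + 0.5 * 25.0) < 1e-12  # g(x) . g(y) = -|x - y|^2 / 2

The last assertion is what makes the representation useful in practice: squared Euclidean distances can be read off directly from inner products of conformal points.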
Conformal geometric algebra
[ "Mathematics" ]
3,442
[ "Computational geometry", "Computational mathematics" ]
34,630,441
https://en.wikipedia.org/wiki/Li-Fi%20Consortium
The Li-Fi Consortium is an international organization focusing on optical wireless technologies. It was founded by four technology-based organizations in October 2011. The goal of the Li-Fi Consortium is to foster the development and distribution of (Li-Fi) optical wireless technologies such as communication, navigation, natural user interfaces and others. Status the Li-Fi Consortium outlined a roadmap for different types of optical communication, such as gigabit-class communication, as well as a full-featured Li-Fi cloud, which includes many more technologies besides wireless infrared and visible light communication. References External links Official website Wireless Wireless network organizations Computer networking Optical communications
Li-Fi Consortium
[ "Technology", "Engineering" ]
129
[ "Optical communications", "Computer networking", "Telecommunications engineering", "Computer engineering", "Wireless networking", "Wireless", "Computer science", "Wireless network organizations" ]
34,631,143
https://en.wikipedia.org/wiki/Tcov
Tcov is a source code coverage analysis and statement-by-statement profiling tool for software written in Fortran, C and C++. Tcov generates exact counts of the number of times each statement in a program is executed and annotates source code with these execution counts. It is a standard utility, provided free of cost with Sun Studio software. The tcov utility gives information on how often a program executes segments of code. It produces a copy of the source file, annotated with execution frequencies. The code can be annotated at the basic block level or the source line level. As the statements in a basic block are executed the same number of times, a count of basic block executions equals the number of times each statement in the block is executed. The tcov utility does not produce any time-based data. Description tcov produces a test coverage analysis of a compiled program. tcov takes source files as arguments and produces an annotated source listing. Each basic block of code (or each line if the particular option to tcov is specified) is prefixed with the number of times it has been executed; lines that have not been executed are prefixed with "#####". The tcov utility also places a summary at the end of the annotated program listing. The statistics for the most frequently executed basic blocks are listed in order of execution frequency. The line number is the number of the first line in the block. There are two implementations of tcov: Old Style coverage analysis: In this implementation, also known as tcov original, the compiler creates a coverage data file with the suffix .d for each object file. When the program completes, the coverage data files are updated. New Style coverage analysis: In this implementation, also known as tcov enhanced, no additional files are created at compile time. Instead, a directory is created to store the profile data, and a single coverage data file called tcovd is created in that directory. Enhanced coverage analysis overcomes some of the shortcomings of the original analysis tool, such as: Provides more complete support for C++. Supports code found in #include header files and corrects a flaw that obscured coverage numbers for template classes and functions. More efficient runtime than the original tcov runtime. Supported for all the platforms that the compilers support. Implementation To generate annotated source code, the following three steps are required: Code compilation with the appropriate compiler option Program execution to accumulate profile data tcov command execution to generate annotated files Each subsequent run accumulates more coverage data into the profile data file. Data for each object file is zeroed out the first time the program is executed after recompilation. Data for the entire program is zeroed by removing the tcovd file. The above steps are explained for both original and enhanced tcov below: Old Style coverage analysis Source code is compiled with the -xa option for C programs and the -a option for Fortran and C++ programs. The compiler creates a coverage data file with the suffix .d for each object file. The coverage data file is created in the directory specified by the environment variable TCOVDIR. If TCOVDIR is not set, the coverage data file is created in the current directory. The above instrumented build is run and at program completion, the .d files are updated. Finally, the tcov command is run to generate the annotated source files. 
The syntax of the tcov command is as follows:

tcov options source-file-list

Here, source-file-list is a list of the source code filenames. For a list of options, see the Command line options section below. The default output of tcov is a set of files, each with the suffix .tcov, which can be changed with the -o filename option. A program compiled for code coverage analysis can be run multiple times (with potentially varying input); tcov can be used on the program after each run to compare behavior. New Style coverage analysis Source code is compiled with the -xprofile=tcov option. Unlike the original mode, enhanced tcov does not generate any files at compile time. The above instrumented build is run, and at program completion a directory is created to store the profile data, with a single coverage data file called tcovd created in that directory. tcovd holds the information about the line numbers and the execution counts; it is a plain text file. By default, the directory is created in the location where the program is run, and it is named after the executable, suffixed by .profile. The directory is also known as the profile bucket. The location of the profile bucket can be overridden by setting the SUN_PROFDATA_DIR or SUN_PROFDATA environment variables. Finally, the tcov command is run to generate the annotated source files. The syntax of the tcov command is the same as for the original command, except for the mandatory -x option:

tcov options -x profilebucket source-file-list

The only difference in the command from original tcov is the mandatory addition of the -x dir option to denote enhanced tcov. Example The following program, written in the C programming language, loops over the integers 1 to 9 and tests their divisibility with the modulus (%) operator.

#include <stdio.h>

int main (void)
{
    int i;

    for (i = 1; i < 10; i++) {
        if (i % 3 == 0)
            printf ("%d is divisible by 3\n", i);
        if (i % 11 == 0)
            printf ("%d is divisible by 11\n", i);
    }
    return 0;
}

To enable coverage testing the program must be compiled with the following options: for old style code coverage,

cc -xa cov.c

and for new style code coverage,

cc -xprofile=tcov -o cov cov.c

where cov.c is the name of the program file. This creates an instrumented executable which contains additional instructions that record the number of times each line of the program is executed. The -o option is used to set the name of the executable. The executable must then be run to create the coverage data. The creation and location of this file is different for old- and new-style code analysis. In old style analysis, this file with extension .d, created after compilation, either in the TCOVDIR directory or the current one, is updated with coverage data. In new style analysis, the coverage data file, with name tcovd, is created in the <executable name>.profile directory. This data can be analyzed using the tcov command and the name of a source file: for old style code coverage,

tcov cov.c

and for new style code coverage,

tcov -x cov.profile cov.c

the additional argument in new style analysis being the profile bucket. The tcov command produces an annotated version of the original source file, with the file extension .tcov, containing counts of the number of times each line was executed:

        #include <stdio.h>

        int main (void)
        {
      1     int i;

     10     for (i = 1; i < 10; i++) {
      9         if (i % 3 == 0)
      3             printf ("%d is divisible by 3\n", i);
      9         if (i % 11 == 0)
  #####             printf ("%d is divisible by 11\n", i);
      9     }
      1     return 0;
      1 }

The tcov utility also places a summary at the end of the annotated program listing. 
The statistics for the most frequently executed basic blocks are listed in order of execution frequency. The line number is the number of the first line in the block. Command line options The tcov command line utility supports the following options while generating annotated files from profile data: -a: Display an execution count for each statement. If this option is not specified, then the execution count is shown only for the leader of a code block. -n: Display a table of the line numbers of the n most frequently executed statements and their execution counts. -o filename: Direct the output to filename instead of file.tcov. This option can be utilized to direct output to standard output by specifying -. -x dir: This is supported in new style coverage analysis. If this option is not specified, old style tcov coverage is assumed. See also Sun Studio, compiler suite that provides Tcov Common Development and Distribution License Code coverage Gcov, code coverage tool provided by GCC References Software metrics Software testing tools
Tcov
[ "Mathematics", "Engineering" ]
1,806
[ "Software engineering", "Quantity", "Metrics", "Software metrics" ]
34,633,252
https://en.wikipedia.org/wiki/Ruina%20montium
Ruina montium (Latin, "wrecking of mountains") was an ancient Roman mining technique described by Pliny the Elder (Natural History 33.21), who served as procurator in Spain. It is thought to draw on the principle of Pascal's barrel. Miners would excavate narrow cavities down into a mountain, whereby filling the cavities with water would cause pressures large enough to fragment thick rock walls. See also Hushing Hydraulic mining Las Médulas Mountaintop removal mining References Hydrostatics Industry in ancient Rome Mining techniques
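The pressures involved are easy to estimate from the hydrostatic relation p = ρgh, which is all the Pascal's-barrel principle requires. A small Python sketch follows; the 100 m shaft depth is a hypothetical figure for illustration, not a value reported by Pliny:

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydrostatic_pressure(depth_m):
    # Gauge pressure at the bottom of a water-filled cavity: p = rho * g * h.
    # By Pascal's principle this depends only on depth, not on how
    # narrow the excavated shaft is.
    return RHO_WATER * G * depth_m

print(hydrostatic_pressure(100.0) / 1e6, "MPa")  # ~0.98 MPa, about 10 atmospheres

This depth-only dependence is why even narrow cavities, once filled, could exert rock-fracturing pressures at their base.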
Ruina montium
[ "Chemistry" ]
116
[ "Fluid dynamics stubs", "Fluid dynamics" ]
6,870,554
https://en.wikipedia.org/wiki/Luhn%20mod%20N%20algorithm
The Luhn mod N algorithm is an extension to the Luhn algorithm (also known as mod 10 algorithm) that allows it to work with sequences of values in any even-numbered base. This can be useful when a check digit is required to validate an identification string composed of letters, a combination of letters and digits, or any arbitrary set of N characters where N is divisible by 2. Informal explanation The Luhn mod N algorithm generates a check digit (more precisely, a check character) within the same range of valid characters as the input string. For example, if the algorithm is applied to a string of lower-case letters (a to z), the check character will also be a lower-case letter. Apart from this distinction, it resembles very closely the original algorithm. The main idea behind the extension is that the full set of valid input characters is mapped to a list of code-points (i.e., sequential integers beginning with zero). The algorithm processes the input string by converting each character to its associated code-point and then performing the computations in mod N (where N is the number of valid input characters). Finally, the resulting check code-point is mapped back to obtain its corresponding check character. Limitation The Luhn mod N algorithm only works where N is divisible by 2. This is because there is an operation to correct the value of a position after doubling its value which does not work where N is not divisible by 2. For applications using the English alphabet this is not a problem, since a string of lower-case letters has 26 code-points, and adding decimal characters adds a further 10, maintaining an N divisible by 2. Explanation The second step in the Luhn algorithm re-packs the doubled value of a position into the original digit's base by adding together the individual digits in the doubled value when written in base N. This step results in even numbers if the doubled value is less than or equal to N − 1, and odd numbers if the doubled value is greater than N − 1. For example, in decimal applications where N is 10, original values between 0 and 4 result in even numbers and original values between 5 and 9 result in odd numbers, effectively re-packing the doubled values between 0 and 18 into a single distinct result between 0 and 9. Where an N is used that is not divisible by 2, this step returns even numbers for doubled values greater than N − 1, which cannot be distinguished from doubled values less than or equal to N − 1. Outcome The algorithm will neither detect all single-digit errors nor all transpositions of adjacent digits if an N is used that is not divisible by 2. As these detection capabilities are the algorithm's primary strengths, the algorithm is weakened almost entirely by this limitation. The Luhn mod N algorithm odd variation enables applications where N is not divisible by 2 by replacing the doubled value at each position with the remainder after dividing the position's value by which gives odd number remainders consistent with the original algorithm design. Mapping characters to code-points Initially, a mapping between valid input characters and code-points must be created. For example, consider that the valid characters are the lower-case letters from a to f. Therefore, a suitable mapping would be:

a → 0, b → 1, c → 2, d → 3, e → 4, f → 5

Note that the order of the characters is completely irrelevant. This other mapping would also be acceptable (although possibly more cumbersome to implement): It is also possible to intermix letters and digits (and possibly even other characters). 
For example, this mapping would be appropriate for lower-case hexadecimal digits:

0 → 0, 1 → 1, ..., 9 → 9, a → 10, b → 11, c → 12, d → 13, e → 14, f → 15

Algorithm in C# Assuming the following functions are defined:

/// <summary>
/// This can be any string of characters.
/// </summary>
private const string CodePoints = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

private int NumberOfValidInputCharacters() => CodePoints.Length;

private int CodePointFromCharacter(char character) => CodePoints.IndexOf(character);

private char CharacterFromCodePoint(int codePoint) => CodePoints[codePoint];

The function to generate a check character is:

char GenerateCheckCharacter(string input)
{
    int factor = 2;
    int sum = 0;
    int n = NumberOfValidInputCharacters();

    // Starting from the right and working leftwards is easier since
    // the initial "factor" will always be "2".
    for (int i = input.Length - 1; i >= 0; i--)
    {
        int codePoint = CodePointFromCharacter(input[i]);
        int addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n";
        // integer division truncates, giving the high-order digit.
        addend = (addend / n) + (addend % n);
        sum += addend;
    }

    // Calculate the number that must be added to the "sum"
    // to make it divisible by "n".
    int remainder = sum % n;
    int checkCodePoint = (n - remainder) % n;

    return CharacterFromCodePoint(checkCodePoint);
}

And the function to validate a string (with the check character as the last character) is:

bool ValidateCheckCharacter(string input)
{
    int factor = 1;
    int sum = 0;
    int n = NumberOfValidInputCharacters();

    // Starting from the right, work leftwards.
    // Now, the initial "factor" will always be "1"
    // since the last character is the check character.
    for (int i = input.Length - 1; i >= 0; i--)
    {
        int codePoint = CodePointFromCharacter(input[i]);
        int addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (addend / n) + (addend % n);
        sum += addend;
    }

    int remainder = sum % n;

    return (remainder == 0);
}

Algorithm in Java Assuming the following functions are defined:

int codePointFromCharacter(char character) {...}

char characterFromCodePoint(int codePoint) {...}

int numberOfValidInputCharacters() {...}

The function to generate a check character is:

char generateCheckCharacter(String input)
{
    int factor = 2;
    int sum = 0;
    int n = numberOfValidInputCharacters();

    // Starting from the right and working leftwards is easier since
    // the initial "factor" will always be "2".
    for (int i = input.length() - 1; i >= 0; i--)
    {
        int codePoint = codePointFromCharacter(input.charAt(i));
        int addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (addend / n) + (addend % n);
        sum += addend;
    }

    // Calculate the number that must be added to the "sum"
    // to make it divisible by "n".
    int remainder = sum % n;
    int checkCodePoint = (n - remainder) % n;

    return characterFromCodePoint(checkCodePoint);
}

And the function to validate a string (with the check character as the last character) is:

boolean validateCheckCharacter(String input)
{
    int factor = 1;
    int sum = 0;
    int n = numberOfValidInputCharacters();

    // Starting from the right, work leftwards.
    // Now, the initial "factor" will always be "1"
    // since the last character is the check character.
    for (int i = input.length() - 1; i >= 0; i--)
    {
        int codePoint = codePointFromCharacter(input.charAt(i));
        int addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (addend / n) + (addend % n);
        sum += addend;
    }

    int remainder = sum % n;

    return (remainder == 0);
}

Algorithm in JavaScript Assuming the following functions are defined:

const codePoints = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"; // This can be any string of permitted characters

function numberOfValidInputCharacters() {
    return codePoints.length;
}

function codePointFromCharacter(character) {
    return codePoints.indexOf(character);
}

function characterFromCodePoint(codePoint) {
    return codePoints.charAt(codePoint);
}

The function to generate a check character is:

function generateCheckCharacter(input) {
    let factor = 2;
    let sum = 0;
    let n = numberOfValidInputCharacters();

    // Starting from the right and working leftwards is easier since
    // the initial "factor" will always be "2".
    for (let i = input.length - 1; i >= 0; i--) {
        let codePoint = codePointFromCharacter(input.charAt(i));
        let addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (Math.floor(addend / n)) + (addend % n);
        sum += addend;
    }

    // Calculate the number that must be added to the "sum"
    // to make it divisible by "n".
    let remainder = sum % n;
    let checkCodePoint = (n - remainder) % n;

    return characterFromCodePoint(checkCodePoint);
}

And the function to validate a string (with the check character as the last character) is:

function validateCheckCharacter(input) {
    let factor = 1;
    let sum = 0;
    let n = numberOfValidInputCharacters();

    // Starting from the right, work leftwards.
    // Now, the initial "factor" will always be "1"
    // since the last character is the check character.
    for (let i = input.length - 1; i >= 0; i--) {
        let codePoint = codePointFromCharacter(input.charAt(i));
        let addend = factor * codePoint;

        // Alternate the "factor" that each "codePoint" is multiplied by
        factor = (factor == 2) ? 1 : 2;

        // Sum the digits of the "addend" as expressed in base "n"
        addend = (Math.floor(addend / n)) + (addend % n);
        sum += addend;
    }

    let remainder = sum % n;

    return (remainder == 0);
}

Example Generation Consider the above set of valid input characters (a to f, so N = 6) and the example input string abcdef. To generate the check character, start with the last character in the string and move left doubling every other code-point. The "digits" of the code-points as written in base 6 (since there are 6 valid input characters) should then be summed up:

Character        a   b   c   d   e   f
Code-point       0   1   2   3   4   5
Factor           1   2   1   2   1   2
Weighted value   0   2   2   6   4   10
In base 6        0   2   2   10  4   14
Digit sum        0   2   2   1   4   5

The total sum of digits is 14 (0 + 2 + 2 + 1 + 4 + 5). The number that must be added to obtain the next multiple of 6 (in this case, 18) is 4. This is the resulting check code-point. The associated check character is e. Validation The resulting string abcdefe can then be validated by using a similar procedure:

Character        a   b   c   d   e   f   e
Code-point       0   1   2   3   4   5   4
Factor           1   2   1   2   1   2   1
Weighted value   0   2   2   6   4   10  4
In base 6        0   2   2   10  4   14  4
Digit sum        0   2   2   1   4   5   4

The total sum of digits is 18. Since it is divisible by 6, the check character is valid. Implementation The mapping of characters to code-points and back can be implemented in a number of ways. The simplest approach (akin to the original Luhn algorithm) is to use ASCII code arithmetic. For example, given an input set of 0 to 9, the code-point can be calculated by subtracting the ASCII code for '0' from the ASCII code of the desired character. The reverse operation will provide the reverse mapping. 
Additional ranges of characters can be dealt with by using conditional statements. Non-sequential sets can be mapped both ways using a hard-coded switch/case statement. A more flexible approach is to use something similar to an associative array. For this to work, a pair of arrays is required to provide the two-way mapping. An additional possibility is to use an array of characters where the array indexes are the code-points associated with each character. The mapping from character to code-point can then be performed with a linear or binary search. In this case, the reverse mapping is just a simple array lookup. Weakness This extension shares the same weakness as the original algorithm, namely, it cannot detect the transposition of the sequence <first-valid-character><last-valid-character> to <last-valid-character><first-valid-character> (or vice versa). This is equivalent to the transposition of 09 to 90 (assuming a set of valid input characters from 0 to 9 in order). On a positive note, the larger the set of valid input characters, the smaller the impact of the weakness. See also International Securities Identification Number (ISIN) Modular arithmetic Checksum algorithms Articles with example C Sharp code Articles with example Java code Articles with example JavaScript code
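For comparison with the C#, Java and JavaScript versions above, the same algorithm is compact in Python. This sketch is not part of the article's original set of implementations; it simply reproduces the worked base-6 example, generating e for abcdef:

CODE_POINTS = "abcdef"  # the base-6 example alphabet; any even-sized set works
N = len(CODE_POINTS)

def _sum_of_digits(s, initial_factor):
    # Weight code-points alternately 2, 1, 2, ... starting from the right,
    # re-packing each product into base N by summing its base-N digits.
    factor, total = initial_factor, 0
    for ch in reversed(s):
        addend = factor * CODE_POINTS.index(ch)
        factor = 1 if factor == 2 else 2
        total += addend // N + addend % N
    return total

def generate_check_character(s):
    return CODE_POINTS[(N - _sum_of_digits(s, 2) % N) % N]

def validate(s):
    return _sum_of_digits(s, 1) % N == 0

assert generate_check_character("abcdef") == "e"  # digit sum 14, check point 4
assert validate("abcdefe")                        # digit sum 18, divisible by 6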
Luhn mod N algorithm
[ "Mathematics" ]
3,034
[ "Arithmetic", "Modular arithmetic", "Number theory" ]
6,870,773
https://en.wikipedia.org/wiki/Hexamethyldisiloxane
Hexamethyldisiloxane (HMDSO or MM) is an organosilicon compound with the formula O[Si(CH3)3]2. This volatile colourless liquid is used as a solvent and as a reagent in organic synthesis. It is prepared by the hydrolysis of trimethylsilyl chloride. The molecule is the prototypical disiloxane and resembles a subunit of polydimethylsiloxane. Synthesis and reactions Hexamethyldisiloxane can be produced by addition of trimethylsilyl chloride to purified water: 2 Me3SiCl + H2O → 2 HCl + O[Si(CH3)3]2 It also results from the hydrolysis of silyl ethers and other silyl-protected functional groups. HMDSO can be converted back to the chloride by reaction with Me2SiCl2. Hexamethyldisiloxane is mainly used as a source of the trimethylsilyl functional group (-Si(CH3)3) in organic synthesis. For example, in the presence of an acid catalyst, it converts alcohols and carboxylic acids into the silyl ethers and silyl esters, respectively. It reacts with rhenium(VII) oxide to give a siloxide: Re2O7 + O[Si(CH3)3]2 → 2 O3ReOSi(CH3)3 Niche uses HMDSO is used as an internal standard for calibrating chemical shift in 1H NMR spectroscopy. It is more easily handled since it is less volatile than the usual standard tetramethylsilane, but still displays only a singlet near 0 ppm. HMDSO has even poorer solvating power than alkanes. It is therefore sometimes employed to crystallize highly lipophilic compounds. It is used in liquid bandages (spray-on plasters) such as Cavilon spray, to protect damaged skin from irritation from other bodily fluids. It is also used to soften and remove adhesive residues left by medical tape and bandages, without causing further skin irritation. HMDSO is being studied for making low-k dielectric materials for the semiconductor industry by plasma-enhanced chemical vapor deposition (PECVD). HMDSO has been used as a reporter molecule to measure tissue oxygen tension (pO2). HMDSO is highly hydrophobic and exhibits high gas solubility, and hence a strong nuclear magnetic resonance spin-lattice relaxation rate (R1) response to changes in pO2. Molecular symmetry provides a single NMR signal. Following direct injection into tissues it has been used to generate maps of tumor and muscle oxygenation dynamics with respect to hyperoxic gas breathing challenge. References Siloxanes Trimethylsilyl compounds
Hexamethyldisiloxane
[ "Chemistry" ]
582
[ "Functional groups", "Trimethylsilyl compounds" ]
6,871,841
https://en.wikipedia.org/wiki/Piping%20and%20plumbing%20fitting
A fitting or adapter is used in pipe systems to connect sections of pipe (designated by nominal size, with greater tolerances of variance) or tube (designated by actual size, with lower tolerance for variance), adapt to different sizes or shapes, and for other purposes such as regulating (or measuring) fluid flow. These fittings are used in plumbing to manipulate the conveyance of fluids such as water (for potable, irrigational, sanitary, and refrigerative purposes), gas, petroleum, liquid waste, or any other liquid or gaseous substances required in domestic or commercial environments, within a system of pipes or tubes. They are connected by various methods, as dictated by the material of which they are made, the material being conveyed, and the particular environmental context in which they will be used, such as soldering, mortaring, caulking, plastic welding, welding, friction fittings, threaded fittings, and compression fittings. Fittings allow multiple pipes to be connected to cover longer distances, increase or decrease the size of the pipe or tube, or extend a network by branching, and make possible more complex systems than could be achieved with only individual pipes. Valves are specialized fittings that permit regulating the flow of fluid within a plumbing system. Standards Standard codes are followed when designing (or manufacturing) a piping system. Organizations which promulgate piping standards include: ASME: American Society of Mechanical Engineers A112.19.1 Enameled cast-iron and steel plumbing fixtures standards A112.19.2 Ceramic plumbing fixtures standard ASTM International: American Society for Testing and Materials API: American Petroleum Institute AWS: American Welding Society AWWA: American Water Works Association MSS: Manufacturers Standardization Society ANSI: American National Standards Institute NFPA: National Fire Protection Association EJMA: Expansion Joint Manufacturers Association CGA: Compressed Gas Association PCA: Plumbing Code of Australia Pipes must conform to the dimensional requirements of: ASME B36.10M: Welded and seamless wrought-steel pipe ASME B36.19M: Stainless-steel pipe ASME B31.3 2008: Process piping ASME B31.4 XXXX: Power piping Materials The material with which a pipe is manufactured is often the basis for choosing a pipe. Materials used for manufacturing pipes include: Carbon (CS) and galvanized steel Impact-tested carbon steel (ITCS) Low-temperature carbon steel (LTCS) Stainless steel (SS) Malleable iron Chrome-molybdenum (alloy) steel (generally used for high-temperature service) Non-ferrous metals (includes copper, inconel, incoloy, and cupronickel) Non-metallic (includes acrylonitrile butadiene styrene (ABS), fibre-reinforced plastic (FRP), polyvinyl chloride (PVC), chlorinated polyvinyl chloride (CPVC), high-density polyethylene (HDPE), cross-linked polyethylene (PEX), and toughened glass; polybutylene has also been used, but is now banned in North America because of poor reliability) The bodies of fittings for pipe and tubing are often the same base material as the pipe or tubing connected: copper, steel, PVC, CPVC, or ABS. Any material permitted by the plumbing, health, or building code (as applicable) may be used, but it must be compatible with the other materials in the system, the fluids being transported, and the temperature and pressure inside (and outside) the system. Brass or bronze fittings are common in copper piping and plumbing systems. 
Fire resistance, earthquake resistance, mechanical ruggedness, theft resistance, and other factors also influence the choice of pipe and fitting materials. Gaskets Gaskets are mechanical seals, usually ring-shaped, which seal flange joints. Gaskets vary by construction, materials and features. Commonly used gaskets are non-metallic (ASME B 16.21), spiral-wound (ASME B 16.20) and ring-joint (ASME B 16.20). Non-metallic gaskets are used with flat- or raised-face flanges. Spiral-wound gaskets are used with raised-face flanges, and ring-joint gaskets are used with ring-type joint (RTJ) flanges. Stress develops between an RTJ gasket and the flange groove when the gasket is bolted to a flange, leading to plastic deformation of the gasket. Gender Piping or tubing is usually inserted into fittings to make connections. Connectors are assigned a gender, abbreviated M or F. An example of this is a "-inch female adapter NPT", which would have a corresponding male connection of the same size and thread standard (in this case also NPT). Common piping and plumbing fittings This section discusses fittings primarily used in pressurized piping systems, though there is some overlap with fittings for low-pressure or non-pressurized systems. Specialized fittings for the latter setups are discussed in the next major subsection. Adapter In plumbing, an adapter is generally a fitting that interfaces two different parts. The term commonly refers to: any fitting that connects pipes of different materials, including: expansion adapters which have a flexible section to absorb expansion or contraction from two dissimilar pipe materials mechanical joint (MJ) adapters for joining polyethylene pipe to another material bell adapters which are like mechanical joint adapters but contain a stainless steel backup ring to maintain a positive seal against the mating flange flange adapters which attach to a polyethylene pipe with butt fusion to stiffen a junction and allow another flanged pipe or fitting to be bolted on a fitting that connects pipes of different diameters, genders, or threads adapter spools (also called crossover spools), used on oilfields and pressure control, have different diameters, pressure ratings or designs at each end adapters to convert NPT to BSP pipe threads are available a fitting that connects threaded and non-threaded pipe Elbow An elbow is installed between two lengths of pipe (or tubing) to allow a change of direction, usually a 90° or 45° angle; 22.5° elbows are also available. The ends may be machined for butt welding, threaded (usually female), or socketed. When the ends differ in size, it is known as a reducing (or reducer) elbow. Clarity on the difference between plumbing terminologies and geometric angles: In plumbing, the term "45-degree elbow" for example, refers to the angle of bend from the original straight pipe position (0 degrees) to the new position (45 degrees), not the actual angle formed by the joint. On a protractor, the actual angle of the above joint is 135 degrees, an obtuse angle. This naming convention applies to other plumbing elbows, such as: - "88 degree elbow" = 92 degrees on a protractor. Visualise bending the left end of the pipe up 88 degrees. Now turn the piece of pipe around so the horizontal piece of pipe is in line with the zero degrees line on the protractor. The protractor will read 92 degrees. The key point is that the plumbing term focuses on the degree of bend from the original straight pipe, not the resulting angle. Elbows are also categorized by length. 
The radius of curvature of a long-radius (LR) elbow is 1.5 times the pipe diameter, but a short-radius (SR) elbow has a radius equal to the pipe diameter. Short elbows, which are widely available, are typically used in pressurized systems and in physically tight locations. Long elbows are used in low-pressure gravity-fed systems and other applications where low turbulence and minimum deposition of entrained solids are of concern. They are available in acrylonitrile butadiene styrene (ABS plastic), polyvinyl chloride (PVC), chlorinated polyvinyl chloride (CPVC), and copper, and are used in DWV systems, sewage, and central vacuum systems. Coupling A coupling connects two pipes. The fitting is known as a reducing coupling, reducer, or adapter if the pipe sizes differ. There are two types of couplings: "regular" and "slip". A regular coupling has a small ridge or stop internally to prevent the over-insertion of one pipe and, thus, under-insertion of the other pipe segment (which would result in an unreliable connection). A slip coupling (sometimes also called a repair coupling) is deliberately made without this internal stop to allow it to be slipped into place in tight locations, such as the repair of a pipe that has a small leak due to corrosion or freeze bursting, or which had to be cut temporarily for some reason. Since the alignment stop is missing, it is up to the installer to carefully measure the final location of the slip coupling to ensure that it is located correctly. Union A union also connects two pipes but is quite different from a coupling, as it allows future disconnection of the pipes for maintenance. In contrast to a coupling requiring solvent welding, soldering, or rotation (for threaded couplings), a union allows easy connection and disconnection multiple times if needed. It consists of three parts: a nut, a female end, and a male end. When the female and male ends are joined, the nut seals the joint by pressing the two ends tightly together. Unions are a type of very compact flange connector. Dielectric unions, with dielectric insulation, separate dissimilar metals (such as copper and galvanized steel) to prevent galvanic corrosion. When two dissimilar metals are in contact with an electrically conductive solution (ordinary tap water is conductive), they form an electrochemical couple which generates a voltage. When the metals are in direct contact with each other, the electric current from one to the other also moves metallic ions from one to the other; this dissolves one metal, depositing it on the other. A dielectric union breaks the electrical path with a plastic liner between its halves, limiting galvanic corrosion. Rotary unions allow mechanical rotation of one of the joined parts while resisting leakage. Nipple A nipple is a short stub of pipe, usually male-threaded steel, brass, chlorinated polyvinyl chloride (CPVC), or copper (occasionally unthreaded copper), which connects two other fittings. A nipple with continuous uninterrupted threading is known as a close nipple. Nipples are commonly used with plumbing and hoses. Reducer A reducer reduces the pipe size from a larger to a smaller bore (inner diameter). Alternatively, reducer may refer to any fitting which causes a change in pipe diameter. This change may be intended to meet hydraulic flow requirements of the system or to adapt to existing piping of a different size. The reduction length is usually equal to the average of the larger and smaller pipe diameters, as illustrated in the sketch below.
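As a rough illustration of the dimensional rules of thumb above (long-radius elbow radius of 1.5 times the pipe diameter, short-radius elbow radius equal to the diameter, and reduction length equal to the average of the two bores), the following minimal Python sketch encodes them as functions. The numeric diameters and example values are illustrative assumptions only, not catalogue data.

```python
# Minimal sketch of the rules of thumb stated above; pipe sizes are
# treated as plain numeric diameters for illustration only.

def elbow_centerline_radius(pipe_diameter: float, long_radius: bool = True) -> float:
    """Nominal centerline bend radius: 1.5x diameter for LR, 1.0x for SR."""
    return (1.5 if long_radius else 1.0) * pipe_diameter

def reducer_length(larger_diameter: float, smaller_diameter: float) -> float:
    """Approximate reduction length as the average of the two bores."""
    return (larger_diameter + smaller_diameter) / 2.0

if __name__ == "__main__":
    print(elbow_centerline_radius(4.0))         # LR elbow on a 4 in pipe -> 6.0
    print(elbow_centerline_radius(4.0, False))  # SR elbow -> 4.0
    print(reducer_length(4.0, 2.0))             # reducer 4 in to 2 in -> 3.0
```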
Although reducers are usually concentric, eccentric reducers are used as needed to maintain the top- or bottom-of-pipe level. A reducer can also be used either as a nozzle or a diffuser, depending on the Mach number of the flow. Bushing A double-tapped bushing, commonly shortened to bushing, is a fitting which serves as a reducer. It is a sleeve similar to a close nipple, but is threaded on both its inner and outer circumferences. Like a reducer, a double-tapped bushing has two threads of different sizes. A common type of this style of fitting is a "hex bushing", with a hex head for installation with a pipe wrench. A double-tapped bushing is more compact than a reducer but not as flexible. While a double-tapped bushing has a smaller female thread concentric to a larger male thread (and thus couples a smaller male end to a larger female), a reducer may have large and small ends of either gender. If both ends are the same gender, it is a gender-changing reducer. There are similar fittings for both sweat and solvent joinery. Since they are not "tapped" (threaded), they are simply called reducing bushings. Tee A tee combines or divides fluid flow. Tees can connect pipes of different diameters, change the direction of a pipe run, or both. Available in various materials, sizes and finishes, they may also be used to transport two-fluid mixtures. Tees may be equal or unequal in the size of their three connections, with equal tees the most common. Diverter tee This specialized type of tee fitting is used primarily in pressurized hydronic heating systems to divert a portion of the flow from the main line into a side branch connected to a radiator or heat exchanger. The diverter tee allows flow to continue in the main line even when the side branch is shut down and not calling for heat. Diverter tees carry directional markings which must be heeded; a tee installed backwards will function very poorly. Cross Crosses, also known as four-way fittings or cross branch lines, have one inlet and three outlets (or vice versa), and often have solvent-welded sockets or female-threaded ends. Cross fittings may stress pipes as temperatures change because they are at the center of four connection points. A tee is steadier than a cross; it behaves like a three-legged stool, while a cross behaves like a four-legged stool. Geometrically, any three non-collinear points self-consistently define a plane; three legs are inherently stable, whereas four points overdetermine a plane and can be inconsistent, resulting in physical stress on the fitting. Crosses are common in fire sprinkler systems (where stress caused by thermal expansion is not generally an issue), but are not common in plumbing. Cap Caps, usually liquid- or gas-tight, cover the otherwise open end of a pipe. The exterior of an industrial cap may be round, square, rectangular, U- or I-shaped, or may have a handgrip. Plug A plug is a short barbed fitting with a blank end that can only be used with PEX piping, either to end the continuation of a water line that is no longer in use because it ties in elsewhere within the system, or to seal the end of a water line kept for possible future use. All plugs are sealed watertight with a PEX crimp. Barb A barb (or hose barb), which connects flexible hose or tubing to pipes, typically has a male-threaded end which mates with female threads. The other end of the fitting has a single- or multi-barbed tube, a long tapered cone with ridges, which is inserted into a flexible hose.
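As a small illustration of the gender, size, and thread-standard matching described in the "Gender" section above (a male and a female end mate only if their nominal size and thread standard agree), the following sketch models that check. The data model, field names, and example sizes are illustrative assumptions only, not part of any plumbing standard.

```python
# Simplified model of fitting-end compatibility: opposite genders,
# same nominal size, same thread standard (e.g. NPT vs BSP).
# Direct mating only; thread adapters are not modeled here.

from dataclasses import dataclass

@dataclass(frozen=True)
class ThreadedEnd:
    gender: str          # "M" or "F"
    nominal_size: str    # e.g. "1/2 in"
    standard: str        # e.g. "NPT", "BSP"

def can_mate(a: ThreadedEnd, b: ThreadedEnd) -> bool:
    """True if the two ends form a male/female pair of matching size and standard."""
    return (
        {a.gender, b.gender} == {"M", "F"}
        and a.nominal_size == b.nominal_size
        and a.standard == b.standard
    )

print(can_mate(ThreadedEnd("M", "1/2 in", "NPT"), ThreadedEnd("F", "1/2 in", "NPT")))  # True
print(can_mate(ThreadedEnd("M", "1/2 in", "NPT"), ThreadedEnd("F", "1/2 in", "BSP")))  # False
```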
Valve Valves stop (or regulate) the flow of liquids or gases. They are categorized by application, such as isolation, throttling, and non-return. Isolation valves temporarily disconnect part of a piping system to allow maintenance or repair, for example. Isolation valves are typically left in either a fully open or closed position. A given isolation valve may be in place for many years without being operated but must be designed to be readily operable whenever needed, including for emergency use. Throttling valves control the amount or pressure of a fluid allowed to pass through and are designed to withstand the stress and wear caused by this operation. Because they may wear out in this usage, they are often installed alongside isolation valves which can temporarily disconnect a failing throttling valve from the rest of the system, so it can be refurbished or replaced. Non-return or check valves allow the free flow of a fluid in one direction but prevent its flow in a reverse direction. They are often seen in drainage or sewage systems but may also be used in pressurized systems. Valves are available in several types, based on design and purpose: Gate, plug, or ball valves – Isolation Globe valve – Throttling Needle valve – Throttling, usually with high precision but low flow Butterfly or diaphragm valves – Isolation and throttling Check valve – Preventing reverse flow (non-return) Drain-waste-vent (DWV) and related fittings Because they operate at low pressure and rely on gravity to move fluids (and entrained solids), drain-waste-vent systems use fittings whose interior surfaces are as smooth as possible. The fittings may be "belled" (expanded slightly in diameter) or otherwise shaped to accommodate the insertion of pipe or tubing without forming a sharp interior ridge that might catch debris or accumulate material, and cause a clog or blockage. Freshly cut ends of pipe segments are carefully deburred to remove projecting slivers of material which may snag debris (such as hair or fibers) which can build up to cause blockages. This internal smoothness also makes it easier to "snake out" or "rod out" a clogged pipe with a plumber's snake. Underground piping systems for landscaping drainage or the disposal of stormwater or groundwater also use low-pressure gravity flow, so fittings for these systems resemble larger-scale DWV fittings. With high peak-flow volumes, the design and construction of these systems may resemble those of storm sewers. Fittings for central vacuum systems are similar to DWV fittings but usually have thinner and lighter construction because the weight of the materials conveyed is less. Vacuum-system designs share with DWV designs the need to eliminate internal ridges, burrs, sharp turns, or other obstructions which might create clogs. Slip-joint fitting Slip-joint fittings are frequently used in kitchen, bathroom and tub drainage systems. They include a detached (movable) slip nut and slip-joint washer; the washer is made of rubber or nylon. An advantage of this type of fitting is that the pipe it is connecting to does not need to be cut to a precise length; the slip joint can attach within a range of the end of the inserting pipe. Many slip fittings may be tightened or loosened by hand for easier access to residential drainpipe systems (for example, to clean out a trap or access a drain line past a trap). Sweep elbow DWV elbows are usually long-radius ("sweep") types. 
To reduce flow resistance and solid deposits when the direction of flow is changed, they use a shallow curve with a large radius of curvature. In addition, a well-designed system will often use two 45° elbows instead of one 90° elbow (even a sweep 90° elbow) to reduce flow disruption as much as possible. Central vacuum system inlet fittings are intentionally designed with a tighter curvature radius than other bends in the system. If vacuumed debris becomes stuck, it will jam at the inlet, where it is easy to find and remove. Closet flange A closet flange (the drainpipe flange to which a flush toilet is attached) is a specialized flange designed to be flush with the floor, allowing a toilet to be installed above it. The flange must be mechanically strong to accommodate slight misalignments or movements and resist corrosion. Clean-out Clean-outs are fittings with removable elements, allowing access to drains without removing plumbing fixtures. They are used to allow an auger (or plumber's snake) to clean out a plugged drain. Since clean-out augers are limited in length, clean-outs should be placed in accessible locations at regular intervals throughout a drainage system (including outside the building). Minimum requirements are typically at the end of each branch in piping, just ahead of each water closet, at the base of each vertical stack and inside and outside the building in the main drain or sewer. Clean-outs usually have screw-on caps or screw-in plugs. They are also known as "rodding eyes", because of the eye-shaped cover plates often used on external versions. Trap primer A trap primer automatically injects water into a trap, maintaining a water seal to keep sewer gas out of buildings. It must be installed in an easily accessible place for adjustment, replacement, and repair. A trap primer, a specialized valve, is usually connected to a clean-water supply in addition to a DWV system. Because of the dual connection, it must be designed to resist the accidental backflow of contaminated water. Combo tee A combination tee (combo tee, combo wye, tee-wye, long-sweep wye, or combi) is a tee with a gradually curving central connecting joint: a wye plus an additional 1/8 bend (45°), combined in one 90° unit. It is used in drains for a smooth, gradually curving path to reduce the likelihood of clogs, to ease the pushing of a plumber's snake through a drain system and to encourage water flow in the direction of the drain. Sanitary tee A sanitary tee has a curved center section. In drainage systems, it is primarily used to connect horizontal drains (including fixture trap arms) to vertical drains. The center connection is generally to the pipe leading to a trap (the trap arm). It must not connect a vertical drain to a horizontal drain because of the likelihood that solids will accumulate at the bottom of the junction and cause a blockage. Baffle tee Also called a tee with a diverter baffle, a waste tee or an end-outlet tee, it typically connects waste lines before they enter the trap and has a baffle to keep water from one waste pipe from entering the other at the connection. Double sanitary tee (sanitary cross) This fitting differs from a standard cross in that two ports have curved inlets. 
Although it has been used in the past for connecting the drains of back-to-back fixtures (such as back-to-back sinks), some current codes—including the 2006 Uniform Plumbing Code in the United States—prohibit the use of this fitting for that purpose and require a double-fixture fitting (double combination wye) to minimize wastewater from one side flowing into the other. Wye (Y) or tee-wye (TY) Tee-wyes are similar to tees, except for angling the branch line to reduce friction and turbulence. They are commonly used to attach a vertical drainpipe to a horizontal one, reducing the deposition of entrained solids at the junction. Wyes and combo wyes follow a long-sweep pattern relative to sanitary tees and other short-sweep bends, which have a smaller radius and require less space. Wyes also have industrial applications. Although low-priced wyes are often spot-welded, industrial-strength wyes are flash-welded at each seam. In long-distance pipeline applications, a specialized wye is used to allow insertion of pigging to keep pipes clear and flowing. Side inlet tee-wye (TY) This fitting (also known as a "bungalow fitting" or a "cottage fitting") is a sanitary tee that allows two trap arms to be connected at the same level. A toilet is the main connection, with the option of a right or left-hand outlet to the 3" inlet with a choice of 1-1/2" or 2" in size. It is used to keep stack-vented fixtures high to the joist space and thus conserves the headroom in a basement. As the water closet must be the lowest fixture, the smaller side outlet (usually used to connect the bathtub trap arm) enters slightly above the larger connection. Hydraulic fittings Hydraulic systems use high fluid pressure, such as the hydraulic actuators for bulldozers and backhoes. Their hydraulic fittings are designed and rated for much greater pressure than that experienced in general piping systems, and they are generally not compatible with those used in plumbing. Hydraulic fittings are designed and constructed to resist high-pressure leakage and sudden failure. Connection methods Much of the work of installing a piping or plumbing system involves making leakproof, reliable connections, and most piping requires mechanical support against gravity and other forces (such as wind loads and earthquakes) which might disrupt an installation. Depending on the connection technology and application, basic skills may be sufficient, or specialized skills and professional licensure may be legally required. Fasteners and supports Fasteners join, or affix, two or more objects. Although they are usually used to attach pipe and fittings to mechanical supports in buildings, they do not connect the pipes. Fasteners commonly used with piping are a stud bolt with nuts (usually fully threaded, with two heavy, hexagonal nuts); a machine bolt and nut; or a powder-actuated tool (PAT) fastener (usually a nail or threaded stud, driven into concrete or masonry). Threaded pipe A threaded pipe has a screw thread at one or both ends for assembly. Steel pipe is often joined with threaded connections; tapered threads are cut into the end of the pipe, and sealant is applied in the form of thread-sealing compound or thread seal tape (also known as PTFE or Teflon tape) and the pipe is screwed into a threaded fitting with a pipe wrench. Threaded steel pipe is widely used in buildings to convey natural gas or propane fuel and is also a popular choice in fire sprinkler systems due to its resistance to mechanical damage and high heat (including the threaded joints). 
Threaded steel pipe may still be used in high-security or exposed locations because it is more resistant to vandalism, more difficult to remove, and its scrap value is much lower than copper or brass. A galvanized coating of metallic zinc was often used to protect steel water pipes from corrosion, but this protective coating eventually would dissolve away, exposing the iron to deterioration. Pipes used to convey fuel gas are often made of "black iron", which has been chemically treated to reduce corrosion, but this treatment does not resist erosion from flowing water. Despite its ruggedness, steel pipe is no longer preferred for conveying drinking water because corrosion can eventually cause leakage (especially at threaded joints), deposits on internal surfaces will eventually restrict flow, and corrosion will shed black or rusty residues into the flowing water. These disadvantages are less problematic for fire sprinkler installations because standing water in the steel pipes does not flow, except during occasional tests or activation by a fire. Introducing oxygen dissolved in freshwater supplies will cause some corrosion, but this soon stops without any source of additional water-borne oxygen. In older installations, the threaded brass pipe was similarly used and was considered superior to steel for drinking water because it was more resistant to corrosion and shed much fewer residues into the flowing water. Assembling threaded pipe is labor-intensive, and requires skill and planning to allow lengths of pipe to be screwed together in sequence. Most threaded-pipe systems require strategically located pipe-union fittings in final assembly. The threaded pipe is heavy and requires adequate attachment to support its weight. To ensure a comprehensive pressure test, it is crucial to explicitly request a 3.1 certificate in accordance with EN HFF 10204:2004. This certificate attests that the 'metallic products' meet the stipulated order requirements and provides detailed test results. Typically, each fitting is associated with a unique heat number, which corresponds to the information documented in the 3.1 certificate datasheet. Solvent welding A solvent is applied to PVC, CPVC, ABS or other plastic piping to partially dissolve and fuse the adjacent surfaces of piping and fitting. Solvent welding is usually used with a sleeve-type joint to connect pipe and fittings made of the same (or compatible) material. Unlike metal welding, solvent welding is relatively easy to perform (although care is needed to make reliable joints). Solvents typically used for plastics are usually toxic and may be carcinogenic and flammable, requiring adequate ventilation. Soldering To make a solder connection, a chemical flux is applied to the inner sleeve of a joint and the pipe is inserted. The joint is then heated, typically by using a propane or MAPP gas torch, although electrically heated soldering tools are sometimes used. Once the fitting and pipe have reached sufficient temperature, solder is applied to the heated joint, and the molten solder is drawn into the joint by capillary action as the flux vaporizes. "Sweating" is a term sometimes used to describe the soldering of pipe joints. Where many connections must be made in a short period (such as plumbing of a new building), soldering is quicker and less expensive joinery than compression or flare fittings. A degree of skill is needed to make several reliable soldered joints quickly. 
If flux residue is thoroughly cleaned, soldering can produce a long-lasting connection at a low cost. However, using an open flame for heating joints can present fire and health hazards to building occupants and requires adequate ventilation. Welding The welding of metals differs from soldering and brazing in that the joint is made without adding a lower-melting-point material (e.g. solder); instead, the pipe or tubing material is partially melted, and the fitting and piping are directly fused. This generally requires piping and fitting to be the same (or compatible) material. Skill is required to melt the joint sufficiently to ensure good fusion while not deforming or damaging the joined pieces. Properly welded joints are considered reliable and durable. Pipe welding is often performed by specially licensed workers whose skills are retested periodically. For critical applications, every joint is tested with nondestructive methods. Because of the skills required, welded pipe joints are usually restricted to high-performance applications such as shipbuilding, chemical plants, and nuclear reactors. Adequate ventilation is essential to remove metal fumes from welding operations, and personal protective equipment must be worn. Because the high temperatures during welding can generate intense ultraviolet light, dark goggles or full face shields must be used to protect the eyes. Precautions must also be taken to avoid fires caused by stray sparks and hot welding debris. Compression fittings Compression fittings (sometimes called "lock-bush fittings") consist of a tapered, concave conical seat; a hollow, barrel-shaped compression ring (sometimes called a ferrule); and a compression nut which is threaded onto the body of the fitting and tightened to make a leakproof connection. They are typically brass or plastic, but stainless steel or other materials may be used. Although compression connections are less durable than soldered (also called sweated) connections, they are easy to install with simple tools. However, they take longer to install than soldered joints and sometimes require re-tightening to stop slow leaks which may develop over time. Because of this possible leakage, they are generally restricted to accessible locations (such as under a kitchen or bathroom sink) and are prohibited in concealed locations such as the interiors of walls. Push-to-pull compression fittings Push-to-pull fittings are easily removed compression fittings that allow pipes to be connected with minimal tools. These fittings are similar to regular compression fittings but use an O-ring for sealing and a grip ring to hold the pipe. The main advantages are that the fitting can easily be removed and re-used, it is easy to assemble, and the joints remain rotatable even after assembly. The pipe end should be cut square, so that it sits against the stop in the fitting and does not create turbulence, and the cut must be clean to avoid damaging the O-ring during insertion. Flare fittings Flared connectors should not be confused with compression connectors, and the two are generally not interchangeable. Lacking a compression ring, flared connectors use a tapered, cone-shaped connection instead. A specialized flaring tool is used to enlarge tubing into a 45° tapered bell shape matching the projecting shape of the flare fitting. The flare nut, which has previously been installed over the tubing, is then tightened over the fitting to force the tapered surfaces tightly together.
Flare connectors are typically brass or plastic, but stainless steel or other materials may be used. Although flare connections are labor-intensive, they are durable and reliable. Considered more secure against leaks and sudden failure, they are used in hydraulic brake systems and in other high-pressure, high-reliability applications. Flange fittings Flange fittings are generally used to connect valves, inline instruments or equipment nozzles. Two surfaces are joined tightly together with threaded bolts, wedges, clamps, or other means of applying high compressive force. Although a gasket, packing, or O-ring may be installed between the flanges to prevent leakage, it is sometimes possible to use only a special grease or nothing at all (if the mating surfaces are sufficiently precisely formed). Although flange fittings are bulky, they perform well in demanding applications such as large water supply networks and hydroelectric systems. Flanges are rated at 150, 300, 400, 600, 900, 1500, and 2500 psi; or 10, 15, 25, 40, 64, 100, and 150 bars of pressure. Various types of flanges are available, depending on construction. Flanges used in piping (orifice, threaded, slip-on, blind, weld neck, socket weld, lap-joint, and reducing) are available with a variety of facings, such as raised, flat, and ring-joint. Flange connections tend to be expensive because they require the precision forming of metal. Factory-installed flanges must meet carefully measured dimensional specifications, and pipe segments cut to length on-site require skilled precision welding to attach flanges under more-difficult field conditions. Mechanical fittings Manufacturers such as Victaulic and Grinnell produce sleeve-clamp fittings, which replace many flange connections. They attach to the end of a pipe segment via circumferential grooves pressed (or cut) around the end of the pipe to be joined. They are widely used on larger steel pipes and can also be used with other materials. The chief advantage of these connectors is that they can be installed after cutting the pipe to length in the field. This can save time and considerable expense compared to flange connections, which must be factory- or field-welded to pipe segments. However, mechanically fastened joints are sensitive to residual and thickness stresses caused by dissimilar metals and temperature changes. A grooved fitting, also known as a grooved coupling, has four elements: grooved pipe, gasket, coupling housing, and nuts and bolts. The groove is made by cold-forming (or machining) a groove at the end of a pipe. A gasket encompassed by coupling housing is wrapped around the two pipe ends, with the coupling engaging the groove; the bolts and nuts are tightened with a socket or impact wrench. The installed coupling housing encases the gasket and engages the grooves around the pipe to create a leakproof seal in a self-restrained pipe joint. There are two types of grooved coupling; a flexible coupling allows a limited amount of angular movement, and a rigid coupling does not allow movement and may be used where joint immobility is required (similar to a flange or welded joint). Pressed or crimped fittings Crimped or pressed connections to use special fittings permanently attached to tubing with a powered crimper. The fittings, manufactured with a pre-installed sealant or O-ring, slide over the tubing to be connected. High pressure is used to deform the fitting and compress the sealant against the inner tubing, creating a leakproof seal. 
The advantages of this method are durability, speed, neatness, and safety. The connection can be made even when the tubing is wet. Crimped fittings are suitable for drinking water pipes and other hot-and-cold systems (including central heating). They are more expensive than sweated fittings. Press fittings with either a V or an M profile (or contour), in stainless steel, carbon steel, and copper, are popular in Europe, and several manufacturers, such as Viega, Geberit, Swiss Fittings, and ISOTUBI, distribute proprietary systems of press fittings. Compared to other connection types, press fittings have the advantages of installation speed and safety. Pressing a stainless steel fitting can be completed within five seconds with the correct equipment. Pressing of fittings onto pipes or other fittings is primarily performed using electrically powered press equipment, but mechanically driven press equipment is also available. Swiss Fittings has legally protected the German brand "Pressfittings aus Edelstahl" in the USA. Press fittings of some major brands carry a plastic slip around the sleeve on each end of the fitting which falls off when the fitting has been compressed. This allows simple identification of whether a press fitting has been securely installed. Press fittings with appropriate and region-specific certification may be used for gas lines. Stainless steel and carbon steel press fittings can withstand up to 16 bars of pressure. A disadvantage of press fittings is the dead space between the pipe and the fitting, which can rule out use in beverage and food applications. Leaded hub fittings Cast iron piping was traditionally made with one "spigot" end (plain, which was cut to length as needed) and one "socket" or "hub" end (cup-shaped). The larger-diameter hub was also called a "bell" because of its shape. In use, the spigot of one segment was placed into the socket of the preceding one, and a ring of oakum was forced down into the joint with a caulking iron. Then the remainder of the space in the hub was filled. Ideally, this would be done by pouring molten lead, allowing it to set, and hammering it tightly with a caulking tool. If this was not possible due to position or some other constraint, the joint could be filled with lead wool or rope, which was forcibly compacted one layer at a time. This labor-intensive technique was durable if properly done, but required time, skill, and patience for each joint to be made up. Quicker and lower-cost methods, such as rubber sleeve joints, have largely replaced leaded hub connections of cast-iron piping in most new installations, but the older technology may still be used for some repairs. In addition, some conservative plumbing codes still require leaded hub joints for final connections where the sewer main leaves a building. Rubber sleeve fittings Cast iron DWV pipe and fittings are still used in premium construction because they muffle the sound of wastewater rushing through them, but today they are rarely joined with traditional lead joints. Instead, pipe and fittings with plain (non-belled) connections are butted against each other, and clamped with special rubber sleeve (or "no-hub") fittings. The rubber sleeves are typically secured with stainless steel worm drive clamping bands, which compress the rubber to make a tight seal around the pipes and fittings. These pipe clamps are similar to hose clamps, but are heavier-duty and ideally are made completely of stainless steel (including the screw) to provide maximum service life.
Optionally, the entire rubber sleeve may be jacketed with thin sheet metal, to provide extra stiffness, durability, and resistance to accidental penetration by a misplaced nail or screw. Although the fittings are not cheap, they are reasonably durable (the rubber is typically neoprene or flexible PVC). An alternative design also allows the selective use of belled fittings made entirely of flexible rubber, including more-complex shapes such as wyes or tee-wyes. They are secured to cast iron pipe segments by use of stainless steel worm drive clamps. Because these fittings are not as stiff as traditional cast-iron fittings, the heavy pipe segments may need better anchoring and support to prevent unwanted movement. The lighter rubber fittings may not muffle sound as well as the heavy cast-iron fittings. An advantage of flexible rubber fittings is that they can accommodate small misalignments and can be flexed slightly for installation in tight locations. A flexible fitting may be preferred to connect a shower or heavy tub to the drainage system without transmitting slight movements or stresses, which could eventually cause cracking. Flexible fittings may also be used to reduce the transmission of vibration into the DWV system. If necessary, clamped joints can be disassembled later, and the fittings and pipe may be reconfigured. However, it is usually not advisable to re-use the clamps and rubber sleeves, which may have been deformed by their previous installation and may not seal well after rearrangement. Clamped fittings may occasionally need to be disassembled to provide access for "snaking" or "rodding-out" with a special tool to clear blockages or clogs. This is also an indication that a clean-out fitting could be installed to provide easier future access. See also Cutting ring fitting Drain (plumbing) Driving cap Flange Gladhand connector Pipe Pipefitter Pipe support Plumber Rainwater, surface, and subsurface water drainage Septic systems Traps, Drains, and Vents Victaulic Water cooling Water supply systems References External links International Association of Plumbing and Mechanical Officials International Code Council American Society for Testing and Materials Bathrooms Building engineering Piping Plumbing Water industry
Piping and plumbing fitting
[ "Chemistry", "Engineering", "Environmental_science" ]
8,570
[ "Hydrology", "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Civil engineering", "Water industry", "Mechanical engineering", "Piping", "Architecture" ]
6,874,997
https://en.wikipedia.org/wiki/Geomembrane
A geomembrane is a very low permeability synthetic membrane liner or barrier used with any geotechnical engineering related material to control fluid (liquid or gas) migration in a human-made project, structure, or system. Geomembranes are made from relatively thin continuous polymeric sheets, but they can also be made from the impregnation of geotextiles with asphalt, elastomer or polymer sprays, or as multilayered bitumen geocomposites. Continuous polymer sheet geomembranes are, by far, the most common. Manufacturing The manufacturing of geomembranes begins with the production of the raw materials, which include the polymer resin and various additives such as antioxidants, plasticizers, fillers, carbon black, and lubricants (as a processing aid). These raw materials (i.e., the "formulation") are then processed into sheets of various widths and thicknesses by extrusion, calendering, and/or spread coating. A 2010 estimate cited geomembranes as the largest geosynthetic material in dollar terms at US$1.8 billion per year worldwide, which is 35% of the market. The US market is currently divided between HDPE, LLDPE, fPP, PVC, CSPE-R, EPDM-R and others (such as EIA-R and BGMs), and can be summarized as follows: (Note that Mm2 refers to millions of square meters.) high-density polyethylene (HDPE) ~ 35% or 105 Mm2 linear low-density polyethylene (LLDPE) ~ 25% or 75 Mm2 polyvinyl chloride (PVC) ~ 25% or 75 Mm2 flexible polypropylene (fPP) ~ 10% or 30 Mm2 chlorosulfonated polyethylene (CSPE) ~ 2% or 6 Mm2 ethylene propylene diene terpolymer (EPDM) ~ 3% or 9 Mm2 The above represents approximately $1.8 billion in worldwide sales. Projections for future geomembrane usage are strongly dependent on the application and geographical location. Landfill liners and covers in North America and Europe will probably see modest growth (~ 5%), while in other parts of the world growth could be dramatic (10–15%). Perhaps the greatest increases will be seen in the containment of coal ash and in heap leach mining for precious metal capture. Properties The majority of generic geomembrane test methods referenced worldwide are those of ASTM International (formerly the American Society for Testing and Materials), due to its long history in this activity. More recent are test methods developed by the International Organization for Standardization (ISO). Lastly, the Geosynthetic Research Institute (GRI) has developed test methods only for properties not addressed by ASTM or ISO. Of course, individual countries and manufacturers often have specific (and sometimes proprietary) test methods. Physical properties The main physical properties of geomembranes in the as-manufactured state are: Thickness (smooth sheet, textured, asperity height) Density Melt flow index Mass per unit area (weight) Vapor transmission (water and solvent). Mechanical properties There are a number of mechanical tests that have been developed to determine the strength of polymeric sheet materials. Many have been adopted for use in evaluating geomembranes. They represent both quality control and design, i.e., index versus performance tests. tensile strength and elongation (index, wide width, axisymmetric, and seams) tear resistance impact resistance puncture resistance interface shear strength anchorage strength stress cracking (constant load and single point).
Endurance Any phenomenon that causes polymeric chain scission, bond breaking, additive depletion, or extraction within the geomembrane must be considered as compromising its long-term performance. There are a number of potential concerns in this regard. While each is material-specific, the general trend is for the geomembrane to become brittle in its stress-strain behavior over time. There are several mechanical properties to track in monitoring such long-term degradation: the decrease in elongation at failure, the increase in modulus of elasticity, the increase (then decrease) in stress at failure (i.e., strength), and the general loss of ductility. Obviously, many of the physical and mechanical properties could be used to monitor the polymeric degradation process. ultraviolet light exposure (laboratory or field) radioactive degradation biological degradation (animals, fungi or bacteria) chemical degradation thermal behavior (hot or cold) oxidative degradation. Lifetime Geomembranes degrade slowly enough that their lifetime behavior is as yet uncharted. Thus, accelerated testing, either by high stress, elevated temperatures and/or aggressive liquids, is the only way to determine how the material will behave long-term. Lifetime prediction methods use the following means of interpreting the data: Stress limit testing: A method used by the HDPE pipe industry in the United States for determining the value of the hydrostatic design basis stress. Rate process method: Used in Europe for pipes and geomembranes, the method yields results similar to stress limit testing. Hoechst multiparameter approach: A method that utilizes biaxial stresses and stress relaxation for lifetime prediction and can include seams as well. Arrhenius modeling: A method for testing geomembranes (and other geosynthetics) described in Koerner for both buried and exposed conditions. Seaming The fundamental mechanism of seaming polymeric geomembrane sheets together is to temporarily reorganize the polymer structure (by melting or softening) of the two opposing surfaces to be joined in a controlled manner that, after the application of pressure, results in the two sheets being bonded together. This reorganization results from an input of energy that originates from either thermal or chemical processes. These processes may involve the addition of extra polymer in the area to be bonded. Ideally, seaming two geomembrane sheets should result in no net loss of tensile strength across the two sheets, and the joined sheets should perform as a single geomembrane sheet. However, due to stress concentrations resulting from the seam geometry, current seaming techniques may result in minor tensile strength and/or elongation loss relative to the parent sheet. The characteristics of the seamed area are a function of the type of geomembrane and the seaming technique used.
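Since Arrhenius modeling is named above as one way to extrapolate accelerated, elevated-temperature aging data to service temperatures, a minimal sketch of that idea follows. The activation energy, test temperatures, and times-to-failure below are invented placeholder values, not measured geomembrane data.

```python
# Arrhenius-style lifetime extrapolation: fit ln(lifetime) vs 1/T from
# accelerated (hot) incubation results, then extrapolate to the cooler
# service temperature. All numbers are placeholders for illustration.

import math

R = 8.314  # gas constant, J/(mol*K)

# (temperature in K, measured time to property loss in hours) -- invented data
aging_data = [(358.0, 800.0), (348.0, 1900.0), (338.0, 4700.0)]

# Least-squares fit of ln(t) = intercept + (Ea/R) * (1/T)
xs = [1.0 / T for T, _ in aging_data]
ys = [math.log(t) for _, t in aging_data]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar
activation_energy = slope * R  # J/mol implied by the fitted slope

def predicted_lifetime_hours(T_kelvin: float) -> float:
    """Extrapolated time to the same degree of degradation at temperature T."""
    return math.exp(intercept + slope / T_kelvin)

print(activation_energy / 1000.0)                # kJ/mol from the placeholder data
print(predicted_lifetime_hours(293.0) / 8760.0)  # extrapolated lifetime in years at 20 C
```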
Applications Geomembranes have been used in the following environmental, geotechnical, hydraulic, transportation, and private development applications: As liners for potable water As liners for reserve water (e.g., safe shutdown of nuclear facilities) As liners for waste liquids (e.g., sewage sludge) Liners for radioactive or hazardous waste liquid As liners for secondary containment of underground storage tanks As liners for solar ponds As liners for brine solutions As liners for the agriculture industry As liners for the aquiculture industry, such as fish/shrimp pond As liners for golf course water holes and sand bunkers As liners for all types of decorative and architectural ponds As liners for water conveyance canals As liners for various waste conveyance canals As liners for primary, secondary, and/or tertiary solid-waste landfills and waste piles As liners for heap leach pads As covers (caps) for solid-waste landfills As covers for aerobic and anaerobic manure digesters in the agriculture industry As covers for power plant coal ash As liners for vertical walls: single or double with leak detection As cutoffs within zoned earth dams for seepage control As linings for emergency spillways As waterproofing liners within tunnels and pipelines As waterproof facing of earth and rockfill dams As waterproof facing for roller compacted concrete dams As waterproof facing for masonry and concrete dams Within cofferdams for seepage control As floating reservoirs for seepage control As floating reservoir covers for preventing pollution To contain and transport liquids in trucks To contain and transport potable water and other liquids in the ocean As a barrier to odors from landfills As a barrier to vapors (radon, hydrocarbons, etc.) beneath buildings To control expansive soils To control frost-susceptible soils To shield sinkhole-susceptible areas from flowing water To prevent infiltration of water in sensitive areas To form barrier tubes as dams To face structural supports as temporary cofferdams To conduct water flow into preferred paths Beneath highways to prevent pollution from deicing salts Beneath and adjacent to highways to capture hazardous liquid spills As containment structures for temporary surcharges To aid in establishing uniformity of subsurface compressibility and subsidence Beneath asphalt overlays as a waterproofing layer To contain seepage losses in existing above-ground tanks As flexible forms where loss of material cannot be allowed. See also Electrical liner integrity survey References Further reading ICOLD Bulletin 135, Geomembrane Sealing Systems for Dams, 2010, Paris, France, 464 pgs. August, H., Holzlöhne, U. and Meggys, T. (1997), Advanced Landfill Liner Systems, Thomas Telford Publ., London, 389 pgs. Kays, W. B. (1987), Construction of Linings for Reservoirs, Tanks and Pollution Control Foundation, J. Wiley and Sons, New York, NY, 379 pgs. Rollin, A. and Rigo, J. M. (1991), Geomembranes: Identification and Performance Testing, Chapman and Hall Publ., London, 355 pgs. Müller, W. (2007), HDPE Geomembranes in Geotechnics, Springer-Verlag Publ., Berlin, 485 pgs. Sharma, H. D. and Lewis, S. P. (1994), Waste Containment Systems, Waste Stabilization and Landfills, J. Wiley and Sons, New York, NY, 586 pgs. Geosynthetics Building materials Landfill
Geomembrane
[ "Physics", "Engineering" ]
2,103
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
31,612,752
https://en.wikipedia.org/wiki/Warm%20inflation
In physical cosmology, warm inflation is one of two dynamical realizations of cosmological inflation. The other is the standard scenario, sometimes called cold inflation. In warm inflation radiation production occurs concurrently with inflationary expansion. This is consistent with the conditions necessary for inflation as given by the Friedmann equations of general relativity, which simply require that the vacuum energy density dominates the energy content of the universe at time of inflation, and so does not prohibit some radiation to be present. As such the most general picture of inflation would include a radiation energy density component. The presence of radiation during inflation implies the inflationary phase could smoothly end into a radiation-dominated era without a distinctively separate reheating phase, thus providing a solution to the graceful exit problem of inflation. References Particle physics Inflation (cosmology)
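As a schematic illustration of the condition described above, the flat-space Friedmann equation H^2 = (8*pi*G/3)(rho_vacuum + rho_radiation) permits a subdominant radiation component while the vacuum energy density dominates. The sketch below evaluates this in arbitrary ("natural") units; the density values are invented for illustration and are not tied to any specific inflation model.

```python
# Schematic only: vacuum energy dominates while a small radiation
# component (which redshifts as a^-4) is present, so H stays nearly
# constant during the inflationary phase. Units and values are arbitrary.

import math

G = 1.0                # Newton's constant in arbitrary units
rho_vacuum = 1.0       # dominant, roughly constant during inflation
rho_radiation0 = 1e-3  # small radiation density at a = 1 (assumed)

def hubble_rate(a: float) -> float:
    rho_radiation = rho_radiation0 / a**4
    rho_total = rho_vacuum + rho_radiation
    return math.sqrt(8.0 * math.pi * G * rho_total / 3.0)

for a in (1.0, 10.0, 100.0):
    print(a, hubble_rate(a))  # nearly constant H while vacuum energy dominates
```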
Warm inflation
[ "Physics" ]
166
[ "Particle physics" ]
31,614,570
https://en.wikipedia.org/wiki/The%20X-Rays
The X-Rays (also known as The X-Ray Fiend) is an 1897 British silent comic trick film directed by George Albert Smith, featuring a courting couple exposed to X-rays. The 44-second trick film, according to Michael Brooke of BFI Screenonline, "contains one of the first British examples of special effects created by means of jump cuts". Smith employs the jump cut twice: first to transform his courting couple via "X-rays", dramatized by means of the actors donning black bodysuits decorated with skeletons and with the woman holding only the metal support work of her umbrella, and then to return them and the umbrella to normal. The couple in question were played by Smith's wife Laura Bayley and Tom Green, a Brighton comedian. References External links 1897 films 1897 horror films 1890s science fiction comedy films 1890s British films British black-and-white films British silent short films Articles containing video clips X-rays Films directed by George Albert Smith British comedy horror films British science fiction comedy films 1890s romance films 1897 comedy films Fiction about skeletons 1897 short films Silent British comedy films Silent horror films Trick films
The X-Rays
[ "Physics" ]
232
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
31,615,192
https://en.wikipedia.org/wiki/Biomimetic%20antifouling%20coating
A biomimetic antifouling coating is a treatment that prevents the accumulation of marine organisms on a surface. Typical antifouling coatings are not biomimetic but are based on synthetic chemical compounds that can have deleterious effects on the environment. Prime examples are tributyltin compounds, which are components in paints to prevent biofouling of ship hulls. Although highly effective at combatting the accumulation of barnacles and other problematic organisms, organotin-containing paints are damaging to many organisms and have been shown to interrupt marine food chains. Biomimetic antifouling coatings are highly lucrative because of their low environmental impact and demonstrated success. Some properties of a biomimetic antifouling coating can be predicted from the contact angles obtained from the Wenzel equation, and the calculated ERI. Natural materials such as shark skin continue to provide inspiration for scientists to improve the coatings currently on the market. Chemical methods Most antifouling coatings are based upon chemical compounds that inhibit fouling. When incorporated into marine coatings, these biocides leach into the immediate surroundings and minimize fouling. The classic synthetic antifouling agent is tributyltin (TBT). Natural biocides typically show lower environmental impact but variable effectiveness. Natural biocides are found in a variety of sources, including sponges, algae, corals, sea urchins, bacteria, and sea-squirts, and include toxins, anaesthetics, and growth/attachment/metamorphosis-inhibiting molecules. As a group, marine microalgae alone produce over 3600 secondary metabolites that play complex ecological roles including defense from predators, as well as antifouling protection, increasing scientific interest in the screening of marine natural products as natural biocides. Natural biocides are typically divided into two categories: terpenes (often containing unsaturated ligand groups and electronegative oxygen functional groups) and nonterpenes. The most effective natural biocide is 3,4-dihydroxybufa-20,22 dienolide, or bufalin (a steroid of toad poison from Bufo vulgaris), which is over 100 times more effective than TBT at preventing biofouling. Bufalin is however expensive. A few natural compounds with simpler synthetic routes, such as nicotinamide or 2,5,6-tribromo-1-methylgramine (from Zoobotryon pellucidum), have been incorporated into patented antifouling paints. A significant drawback to biomimetic chemical agents is their modest service life. Since the natural biocides must leach out of the coating to be effective, the rate of leaching is a key parameter. Where La is the fraction of the biocide actually released (typically around 0.7), a is the weight fraction of the active ingredient in the biocide, DFT is the dry film thickness, Wa is the concentration of the natural biocide in the wet paint, SPG is the specific gravity of the wet paint, and SVR is the percentage of dry paint to wet paint by volume. Shark skin mimetics One class of biomimetic antifouling coatings is inspired by the surface of shark skin, which consists of nanoscale overlapping placoid scales that exhibit parallel ridges that effectively prevent sharks from becoming fouled even when moving at slow speeds. The antifouling qualities of the shark skin-inspired designs appear highly dependent upon the engineered roughness index (ERI). 
Where r is the Wenzel roughness ratio, n is the number of distinct surface features in the design of the surface, and φ is the area fraction of the tops of the distinct surface features. A completely smooth surface would have an ERI = 0. Using this equation, the amount of microfouling spores per mm2 can be modeled. Similar to actual shark skin, the patterned nature of Sharklet AF shows microstructural differences in three dimensions with a corresponding ERI of 9.5. This three-dimensional patterned difference imparts a 77% reduction in microfouling settlement. Other artificial nonpatterned nanoscale rough surfaces such as 2-μm-diameter circular pillars (ERI = 5.0) or 2-μm-wide ridges (ERI = 6.1) reduce fouling settlement by 36% and 31%, respectively, while a more patterned surface composed of 2-μm-diameter circular pillars and 10-μm equilateral triangles (ERI = 8.7) reduces spore settlement by 58%. The contact angles obtained for hydrophobic surfaces are directly related to surface roughnesses by the Wenzel equation. See also Biofouling Fouling Anti-fouling Biomimicry Bionics Tributyltin (TBT) Sharklet References Bionics Fouling Paints Coatings
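The Wenzel equation mentioned above relates the apparent contact angle on a rough surface to the intrinsic (Young) contact angle through the roughness ratio r, the ratio of actual to projected surface area. A small sketch of that relation follows; the example angles and roughness values are chosen for illustration only and are not taken from the studies cited in this article.

```python
# Wenzel relation: cos(theta_apparent) = r * cos(theta_young), with r >= 1.
# Roughness amplifies hydrophobicity for an already-hydrophobic surface.

import math

def wenzel_apparent_angle(theta_young_deg: float, roughness_ratio: float) -> float:
    cos_apparent = roughness_ratio * math.cos(math.radians(theta_young_deg))
    cos_apparent = max(-1.0, min(1.0, cos_apparent))  # clamp to a physical value
    return math.degrees(math.acos(cos_apparent))

print(wenzel_apparent_angle(110.0, 1.0))  # ~110 degrees on a smooth surface
print(wenzel_apparent_angle(110.0, 1.5))  # larger apparent angle on a rougher surface
```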
Biomimetic antifouling coating
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,032
[ "Paints", "Bionics", "Coatings", "Materials degradation", "Fouling" ]
30,608,537
https://en.wikipedia.org/wiki/Spt%20function
The spt function (smallest parts function) is a function in number theory that gives, for a positive integer n, the sum over all integer partitions of n of the number of smallest parts in each partition. It is related to the partition function. The first few values of spt(n) are: 1, 3, 5, 10, 14, 26, 35, 57, 80, 119, 161, 238, 315, 440, 589 ... Example For example, there are five partitions of 4: 4, 3 + 1, 2 + 2, 2 + 1 + 1, and 1 + 1 + 1 + 1. These partitions have 1, 1, 2, 2, and 4 smallest parts, respectively. So spt(4) = 1 + 1 + 2 + 2 + 4 = 10. Properties Like the partition function, spt(n) has a generating function. It is given by
$$\sum_{n=1}^{\infty}\operatorname{spt}(n)\,q^{n}=\sum_{n=1}^{\infty}\frac{q^{n}}{(1-q^{n})^{2}}\prod_{m=n+1}^{\infty}\frac{1}{1-q^{m}},$$
where $|q|<1$. The generating function is related to a mock modular form. Let $E_2(\tau)$ denote the weight 2 quasi-modular Eisenstein series and let $\eta(\tau)$ denote the Dedekind eta function. Then, writing $q=e^{2\pi i\tau}$, a suitable completion of the spt generating function involving $E_2(\tau)/\eta(\tau)$ is a mock modular form of weight 3/2 on the full modular group with multiplier system $\chi^{-1}$, where $\chi$ is the multiplier system for $\eta(\tau)$. While a closed formula is not known for spt(n), there are Ramanujan-like congruences, including
$$\operatorname{spt}(5n+4)\equiv 0 \pmod 5,$$
$$\operatorname{spt}(7n+5)\equiv 0 \pmod 7,$$
$$\operatorname{spt}(13n+6)\equiv 0 \pmod{13}.$$
References Combinatorics Integer sequences
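The definition above can be checked directly by brute force. The following sketch enumerates partitions and sums the multiplicity of each partition's smallest part; it is for illustration only and is not an efficient way to compute spt(n) for large n.

```python
# Brute-force check of the definition of spt(n): sum, over all partitions
# of n, the number of times the smallest part occurs.

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def spt(n):
    total = 0
    for p in partitions(n):
        smallest = p[-1]            # parts are listed in non-increasing order
        total += p.count(smallest)  # multiplicity of the smallest part
    return total

print([spt(n) for n in range(1, 11)])
# [1, 3, 5, 10, 14, 26, 35, 57, 80, 119] -- matches the values listed above
```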
Spt function
[ "Mathematics" ]
276
[ "Sequences and series", "Discrete mathematics", "Integer sequences", "Mathematical structures", "Number theory stubs", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Combinatorics stubs", "Numbers", "Number theory" ]
30,610,933
https://en.wikipedia.org/wiki/Disulfurous%20acid
Disulfurous acid, metabisulfurous acid or pyrosulfurous acid is an oxoacid of sulfur with the formula H2S2O5. Its structure is HO-S(=O)2-S(=O)-OH, often abbreviated HO3S-SO2H. The salts of disulfurous acid are called disulfites or metabisulfites. Disulfurous acid is, like sulfurous acid (H2SO3), a phantom acid, which does not exist in the free state. In contrast to the disulfate ion (S2O72−), the disulfite ion (S2O52−) has two directly connected sulfur atoms. The oxidation state of the sulfur atom bonded to three oxygen atoms is +5 and its valence is 6, while the oxidation state and valence of the other sulfur atom are +3 and 4, respectively. References Sulfur oxoacids Metabisulfites Hypothetical chemical compounds
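As a quick bookkeeping check of the oxidation states quoted above, assigning +1 to hydrogen and -2 to oxygen in the neutral molecule H2S2O5 forces the two sulfur atoms to carry a combined oxidation state of +8, consistent with the stated values of +5 and +3. The short sketch below just performs that arithmetic.

```python
# Oxidation-state bookkeeping for H2S2O5: H = +1, O = -2, neutral molecule.
n_H, n_S, n_O = 2, 2, 5
charge = 0
sulfur_total = charge - (n_H * (+1) + n_O * (-2))
print(sulfur_total)           # 8
print(5 + 3 == sulfur_total)  # True, matching the +5 and +3 values above
```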
Disulfurous acid
[ "Chemistry" ]
154
[ "Theoretical chemistry", "Hypotheses in chemistry", "Hypothetical chemical compounds", "Theoretical chemistry stubs" ]
2,082,919
https://en.wikipedia.org/wiki/Nuclear%20Power%202010%20Program
The "Nuclear Power 2010 Program" was launched in 2002 by President George W. Bush in 2002, 13 months after the beginning of his presidency, in order to restart orders for nuclear power reactors in the U.S. by providing subsidies for a handful of Generation III+ demonstration plants. The expectation was that these plants would come online by 2010, but it was not met. In March 2017, the leading nuclear-plant maker, Westinghouse Electric Company, filed for bankruptcy due to losing over $9 billion in construction losses from working on two nuclear plants. This loss was partly caused by safety concerns due to the Fukushima disaster, Germany's Energiewende, the growth of solar and wind power, and low natural gas prices. Overview The "Nuclear Power 2010 Program" was unveiled by the U.S. Secretary of Energy Spencer Abraham on February 14, 2002, as one means towards addressing the expected need for new power plants. The program was a joint government/industry cost-shared effort to identify sites for new nuclear power plants, to develop and bring to market advanced nuclear plant technologies, evaluate the business case for building new nuclear power plants, and demonstrate untested regulatory processes leading to an industry decision in the next few years to seek Nuclear Regulatory Commission (NRC) approval to build and operate at least one new advanced nuclear power plant in the United States. Three consortia responded in 2004 to the U.S. Department of Energy's solicitation under the Nuclear Power 2010 initiative and were awarded matching funds. The Dominion-led consortium includes General Electric (GE) Energy, Hitachi America, and Bechtel Corporation, and has selected General Electric's Economic Simplified Boiling Water Reactor (ESBWR, a passively safe version of the BWR). The NuStart Energy Development, LLC consortium consists of DTE Energy, Duke Energy, EDF International North America, Entergy Nuclear, Exelon Generation, Florida Power & Light Co., Progress Energy, SCANA Corporation, Southern Company, GE Energy, Tennessee Valley Authority (TVA), and Westinghouse Electric Company and has chosen the General Electric Economic Simplified Boiling Water Reactor (ESBWR) and the Westinghouse Advanced Passive 1000 (AP1000, a PWR) reactor as candidates. The NuStart consortium was disbanded on 30 June 2012. The third consortium, led by TVA, includes General Electric, Toshiba, USEC Inc., Global Fuel-Americas, and Bechtel Power Corp., and will develop a feasibility study for a TVA site based on the General Electric Advanced Boiling Water Reactor (ABWR). On September 22, 2005, NuStart selected Port Gibson (the Grand Gulf site) and Scottsboro (the Bellefonte site) for new nuclear units. Port Gibson should host an ESBWR (a passively safe version of the BWR) and Scottsboro an AP1000 (a passively safe version of the PWR). Entergy announced to prepare its own proposal for the River Bend Station in St. Francisville. Also, Constellation Energy of Baltimore had withdrawn its Lusby and Oswego sites from the NuStart finalist list after on September 15 announcing a new joint venture, UniStar Nuclear, with Areva to offer EPR (European Pressurized Reactors) in the U.S.A. Finally, in October 2005, Progress Energy Inc announced it was considering constructing a new nuclear plant and had begun evaluating potential sites in central Florida. South Carolina Electric & Gas announced on February 10, 2006, that it chose Westinghouse for a plant to be built at the V.C. Summer plant in Jenkinsville, South Carolina. 
NRG Energy announced in June 2006 that it would explore building two ABWRs at the South Texas Project. Four ABWRs were already operating in Japan at that time. The original goal of bringing two new reactors online by 2010 was missed, and "of more than two dozen projects that were considered, only two showed signs of progress and even this progress was uncertain". As of March 2017, the Plant Vogtle and Virgil C. Summer plants (a total of four reactors) in the southeastern U.S., under construction since the late 2000s, faced an uncertain fate. Energy Policy Act of 2005 The Energy Policy Act of 2005, signed by President George W. Bush on August 8, 2005, contains a number of provisions related to nuclear power, and three specifically related to the 2010 Program. First, the Price-Anderson Nuclear Industries Indemnity Act was extended to cover private and Department of Energy plants and activities licensed through 2025. Also, the government would cover cost overruns due to regulatory delays, up to $500 million each for the first two new nuclear reactors, and half of the overruns due to such delays (up to $250 million each) for the next four reactors. Delays in construction due to vastly increased regulation were a primary cause of the high cost of some earlier plants. Finally, "A production tax credit of 1.8 cents per kilowatt-hour for the first 6,000 megawatt-hours from new nuclear power plants for the first eight years of their operation, subject to a $125 million annual limit. The production tax credit places nuclear energy on an equal footing with other sources of emission-free power, including wind and closed-loop biomass." The Act also funds a Next Generation Nuclear Plant project at INEEL to produce both electricity and hydrogen. This plant would be a DOE project and does not fall under the 2010 Program. Recent developments Between 2007 and 2009, 13 companies applied to the Nuclear Regulatory Commission for construction and operating licenses to build 25 new nuclear power reactors in the United States. However, the case for widespread nuclear plant construction was eroded by abundant natural gas supplies, slow electricity demand growth in a weak U.S. economy (Financial crisis of 2007–2008), lack of financing, and uncertainty following the Fukushima nuclear disaster of 2011 in Japan after a tsunami. Many license applications for proposed new reactors were suspended or cancelled. Only a few new reactors were expected to enter service by 2020. These would not produce electricity more cheaply than coal or natural gas, but they were an attractive investment for utilities because the government mandated that taxpayers pay for construction in advance. In 2013, four aging reactors were permanently closed due to the stringent requirements of the NRC and actions by local politicians. See also Nuclear power plant Boiling water reactors Pressurized water reactors Generation III reactor List of Prospective Nuclear Reactors Global Nuclear Energy Partnership Nuclear power in the United States Nuclear safety Nuclear safety in the United States Nuclear whistleblowers Notes External links DOE Program Page BusinessWeek article of June 29, 2006 Nuclear technology Nuclear power in the United States 2002 in the United States
Nuclear Power 2010 Program
[ "Physics" ]
1,380
[ "Nuclear technology", "Nuclear physics" ]
2,083,415
https://en.wikipedia.org/wiki/Navigation%20mesh
A navigation mesh, or navmesh, is an abstract data structure used in artificial intelligence applications to aid agents in pathfinding through complicated spaces. This approach has been known since at least the mid-1980s in robotics, where it has been called a meadow map, and was popularized in video game AI in 2000. Description A navigation mesh is a collection of two-dimensional convex polygons (a polygon mesh) that define which areas of an environment are traversable by agents. In other words, a character in a game could freely walk around within these areas unobstructed by trees, lava, or other barriers that are part of the environment. Adjacent polygons are connected to each other in a graph. Pathfinding within one of these polygons can be done trivially in a straight line because the polygon is convex and traversable. Pathfinding between polygons in the mesh can be done with any of a large number of graph search algorithms, such as A*. Agents on a navmesh can thus avoid computationally expensive collision detection checks with obstacles that are part of the environment. Representing traversable areas in a 2D-like form simplifies calculations that would otherwise need to be done in the "true" 3D environment, yet unlike a 2D grid it allows traversable areas that overlap above and below at different heights. The polygons of various sizes and shapes in navigation meshes can represent arbitrary environments with greater accuracy than regular grids can. Creation Navigation meshes can be created manually, automatically, or by some combination of the two. In video games, a level designer might manually define the polygons of the navmesh in a level editor. This approach can be quite labor intensive. Alternatively, an application could be created that takes the level geometry as input and automatically outputs a navmesh. It is commonly assumed that the environment represented by a navmesh is static – it does not change over time – and thus the navmesh can be created offline and be immutable. However, there has been some investigation of online updating of navmeshes for dynamic environments. History In robotics, using linked convex polygons in this manner has been called "meadow mapping", coined in a 1986 technical report by Ronald C. Arkin. Navigation meshes in video game artificial intelligence are usually credited to Greg Snook's 2000 article "Simplified 3D Movement and Pathfinding Using Navigation Meshes" in Game Programming Gems. In 2001, J.M.P. van Waveren described a similar structure with convex and connected 3D polygons, dubbed the "Area Awareness System", used for bots in Quake III Arena. Notes References External links UDK: Navigation Mesh Reference Unity: Navigation Overview Source Engine: Navigation Meshes Urho3D: Navigation Godot Engine Navigation Cry Engine Navigation And AI Graph data structures Video game development Computational physics Robotics engineering
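As an illustration of graph search over a navmesh's polygon-adjacency graph as described above, here is a minimal sketch of A* between polygons. All mesh data (polygon ids, centre points, adjacency) are hypothetical illustration values, not taken from any particular engine.

```python
# Minimal sketch: A* search over a navmesh's polygon-adjacency graph.
# The polygon centres and adjacency below are invented illustration data.
import heapq
import math

centers = {0: (0.0, 0.0), 1: (2.0, 0.0), 2: (2.0, 2.0), 3: (4.0, 2.0)}
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # polygons sharing an edge

def heuristic(a, b):
    # Straight-line distance between polygon centres (admissible estimate).
    (ax, ay), (bx, by) = centers[a], centers[b]
    return math.hypot(bx - ax, by - ay)

def astar(start, goal):
    """Return a list of polygon ids from start to goal, or None."""
    open_heap = [(heuristic(start, goal), start)]
    g_cost = {start: 0.0}
    came_from = {}
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for nb in adjacency[current]:
            tentative = g_cost[current] + heuristic(current, nb)
            if tentative < g_cost.get(nb, float("inf")):
                g_cost[nb] = tentative
                came_from[nb] = current
                heapq.heappush(open_heap, (tentative + heuristic(nb, goal), nb))
    return None

print(astar(0, 3))  # [0, 1, 2, 3]
```

In practice the coarse polygon-level path returned here would then be refined into a smooth point path, for example with the funnel ("string pulling") algorithm.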
Navigation mesh
[ "Physics", "Technology", "Engineering" ]
601
[ "Computer engineering", "Robotics engineering", "Computational physics" ]
2,083,906
https://en.wikipedia.org/wiki/Specific%20volume
In thermodynamics, the specific volume of a substance (symbol: $\nu$, nu) is the quotient of the substance's volume ($V$) to its mass ($m$): $\nu = V/m$. It is a mass-specific intrinsic property of the substance. It is the reciprocal of density ($\rho$, rho) and it is also related to the molar volume ($V_m$) and molar mass ($M$): $\nu = V_m/M$. The standard unit of specific volume is cubic meters per kilogram (m3/kg), but other units include ft3/lb, ft3/slug, or mL/g. Specific volume for an ideal gas is related to the molar gas constant ($R$) and the gas's temperature ($T$), pressure ($P$), and molar mass ($M$): $\nu = RT/(PM)$. It is based on the ideal gas law, $PV = nRT$, and the amount of substance, $n = m/M$. Applications Specific volume is commonly applied to: Molar volume Volume (thermodynamics) Partial molar volume Imagine a variable-volume, airtight chamber containing a certain number of atoms of oxygen gas. Consider the following four examples: If the chamber is made smaller without allowing gas in or out, the density increases and the specific volume decreases. If the chamber expands without letting gas in or out, the density decreases and the specific volume increases. If the size of the chamber remains constant and new atoms of gas are injected, the density increases and the specific volume decreases. If the size of the chamber remains constant and some atoms are removed, the density decreases and the specific volume increases. Specific volume is a property of materials, defined as the number of cubic meters occupied by one kilogram of a particular substance. The standard unit is the meter cubed per kilogram (m3/kg or m3·kg−1). Sometimes specific volume is expressed in terms of the number of cubic centimeters occupied by one gram of a substance. In this case, the unit is the centimeter cubed per gram (cm3/g or cm3·g−1). To convert m3/kg to cm3/g, multiply by 1000; conversely, multiply by 0.001. Specific volume is inversely proportional to density. If the density of a substance doubles, its specific volume, as expressed in the same base units, is cut in half. If the density drops to 1/10 its former value, the specific volume, as expressed in the same base units, increases by a factor of 10. The density of gases changes with even slight variations in temperature, while densities of liquids and solids, which are generally thought of as incompressible, change very little. Because specific volume is the inverse of density, careful consideration must be taken when dealing with situations that involve gases: small changes in temperature have a noticeable effect on specific volumes. The average density of human blood is 1060 kg/m3. The specific volume that corresponds to that density is 0.00094 m3/kg. Notice that the average specific volume of blood is almost identical to that of water: 0.00100 m3/kg. Application examples If one sets out to determine the specific volume of an ideal gas, such as superheated steam, using the equation $\nu = RT/P$ (with $R$ here the specific gas constant of the gas), where pressure is 2500 lbf/in2, $R$ is 0.596 (in psia·ft3 per lb·°R for steam), and temperature is 1960 °R, then the specific volume equals 0.4672 ft3/lb. However, if the temperature is changed to 1160 °R, the specific volume of the superheated steam falls to 0.2765 ft3/lb, about 59% of its original value, a decrease of roughly 41%. Knowing the specific volumes of two or more substances allows one to find useful information for certain applications. 
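Before moving on, here is a quick numerical check of the ideal-gas steam example above. This is a sketch: the two temperatures (1960 °R and 1160 °R) are inferred from the stated results rather than given in the text, and $R$ = 0.596 is assumed to be in psia·ft3/(lb·°R), the specific gas constant of steam in those units.

```python
# Ideal-gas specific volume, v = R*T / P.
# Assumed units: R in psia*ft^3/(lb*degR), T in degR, P in psia -> v in ft^3/lb.
def specific_volume(R, T, P):
    return R * T / P

R_STEAM = 0.596  # ~1545/(18.015*144), specific gas constant of steam

print(specific_volume(R_STEAM, 1960.0, 2500.0))  # ~0.4672 ft^3/lb
print(specific_volume(R_STEAM, 1160.0, 2500.0))  # ~0.2765 ft^3/lb
print(1 - 0.2765 / 0.4672)                       # ~0.41, i.e. a ~41% decrease
```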
For a substance X with a specific volume of 0.657 cm3/g and a substance Y with a specific volume of 0.374 cm3/g, the density of each substance can be found by taking the inverse of the specific volume; therefore, substance X has a density of 1.522 g/cm3 and substance Y has a density of 2.673 g/cm3. With this information, the specific gravities of each substance relative to one another can be found. The specific gravity of substance X with respect to Y is 0.569, while the specific gravity of Y with respect to X is 1.756. Therefore, substance X will not sink if placed on Y. Specific volume of solutions The specific volume of a non-ideal solution is the weighted sum of the partial specific volumes of the components: $\nu = \frac{1}{M}\sum_i x_i M_i \bar{\nu}_i$, where $x_i$ and $M_i$ are the mole fraction and molar mass of component $i$, $\bar{\nu}_i$ is its partial specific volume, and $M = \sum_i x_i M_i$ is the molar mass of the mixture. Specific volume can be used in place of volume because, unlike volume, it is an intensive property of the system. Table of common specific volumes [Table of densities and specific volumes for various common substances, with values recorded at standard temperature and pressure, defined as air at 0 °C (273.15 K, 32 °F) and 1 atm (101.325 kN/m2, 101.325 kPa, 14.7 psia, 0 psig, 30 in Hg, 760 torr).] References Thermodynamic properties Volume Mechanical quantities
Specific volume
[ "Physics", "Chemistry", "Mathematics" ]
1,053
[ "Scalar physical quantities", "Thermodynamic properties", "Mechanical quantities", "Physical quantities", "Quantity", "Mass", "Intensive quantities", "Size", "Extensive quantities", "Mechanics", "Thermodynamics", "Volume", "Wikipedia categories named after physical quantities", "Mass-speci...
2,085,068
https://en.wikipedia.org/wiki/G%C3%B6del%20metric
The Gödel metric, also known as the Gödel solution or Gödel universe, is an exact solution, found in 1949 by Kurt Gödel, of the Einstein field equations in which the stress–energy tensor contains two terms: the first representing the matter density of a homogeneous distribution of swirling dust particles (see dust solution), and the second associated with a negative cosmological constant (see Lambdavacuum solution). This solution has many unusual properties—in particular, the existence of closed time-like curves that would allow time travel in a universe described by the solution. Its definition is somewhat artificial, since the value of the cosmological constant must be carefully chosen to correspond to the density of the dust grains, but this spacetime is an important pedagogical example. Definition Like any other Lorentzian spacetime, the Gödel solution represents the metric tensor in terms of a local coordinate chart. It may be easiest to understand the Gödel universe using the cylindrical coordinate system (see below), but this article uses the chart originally used by Gödel. In this chart, the metric (or, equivalently, the line element) is $ds^2 = \frac{1}{2\omega^2}\left[-(\mathrm{d}t + e^x\,\mathrm{d}z)^2 + \mathrm{d}x^2 + \mathrm{d}y^2 + \tfrac{1}{2}e^{2x}\,\mathrm{d}z^2\right]$, where $\omega$ is a non-zero real constant that gives the angular velocity of the surrounding dust grains about the y-axis, measured by a "non-spinning" observer riding on one of the dust grains. "Non-spinning" means that the observer does not feel centrifugal forces, but in this coordinate system, it would rotate about an axis parallel to the y-axis. In this rotating frame, the dust grains remain at constant values of x, y, and z. Their density in this coordinate chart increases with x, but their density in their own frames of reference is the same everywhere. Properties To investigate the properties of the Gödel solution, a frame field can be assumed (dual to the co-frame read off from the metric as given above). This frame defines a family of inertial observers that are comoving with the dust grains. The computation of the Fermi–Walker derivatives with respect to the timelike unit vector $\vec{e}_0$ shows that the spatial frame vectors are spinning about the y-axis with angular velocity $\omega$. It follows that the "non-spinning inertial frame" comoving with the dust particles is obtained by counter-rotating the spatial frame vectors at this rate. Einstein tensor The Einstein tensor (with respect to either frame above) is the sum of two terms: the first, proportional to the metric tensor, is characteristic of a Lambdavacuum solution, and the second, proportional to $u^a u^b$ (where $\vec{u}$ is the timelike unit vector tangent to the dust world lines), is characteristic of a pressureless perfect fluid or dust solution. The cosmological constant is carefully chosen to partially cancel the matter density of the dust. Topology The Gödel spacetime is a rare example of a regular (singularity-free) solution of the Einstein field equations. Gödel's original chart is geodesically complete and free of singularities. Therefore, it is a global chart, and the spacetime is homeomorphic to R4, and therefore, simply connected. Curvature invariants In any Lorentzian spacetime, the fourth-rank Riemann tensor is a multilinear operator on the four-dimensional space of tangent vectors (at some event), but a linear operator on the six-dimensional space of bivectors at that event. Accordingly, it has a characteristic polynomial, whose roots are the eigenvalues. In the Gödel spacetime, these eigenvalues are very simple: a triple eigenvalue zero, a double eigenvalue, and a single eigenvalue. 
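The algebraic form of the Einstein tensor claimed above can be checked symbolically. The sketch below assumes the line element as reconstructed above, in the chart (t, x, y, z) with signature (−+++); it computes the Einstein tensor directly from the metric using the standard textbook formulas, and the printed result can be compared with the lambdavacuum-plus-dust decomposition described in the text.

```python
# Sketch: Einstein tensor of the Goedel metric, computed from scratch with sympy.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
w = sp.symbols('omega', positive=True)   # angular velocity of the dust
a2 = 1 / (2 * w**2)                      # overall scale a^2 = 1/(2 omega^2)

# ds^2 = a^2 [ -(dt + e^x dz)^2 + dx^2 + dy^2 + (1/2) e^{2x} dz^2 ]
g = sp.Matrix([
    [-a2,              0,  0, -a2 * sp.exp(x)],
    [0,               a2,  0,  0],
    [0,                0, a2,  0],
    [-a2 * sp.exp(x),  0,  0, -a2 * sp.exp(2 * x) / 2],
])
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sp.Rational(1, 2) * sum(
    ginv[a, d] * (sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                  - sp.diff(g[b, c], coords[d])) for d in range(n)))
    for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab}
#                       + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ab}
Ric = sp.zeros(n, n)
for b in range(n):
    for c in range(n):
        expr = 0
        for a in range(n):
            expr += sp.diff(Gam[a][b][c], coords[a]) - sp.diff(Gam[a][a][b], coords[c])
            for d in range(n):
                expr += Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][a][b]
        Ric[b, c] = sp.simplify(expr)

Rscal = sp.simplify(sum(ginv[i, j] * Ric[i, j] for i in range(n) for j in range(n)))
Einstein = (Ric - sp.Rational(1, 2) * Rscal * g).applyfunc(sp.simplify)
# The output is a combination of a piece proportional to g (the cosmological-
# constant term) and a piece proportional to u_a u_b for the dust 4-velocity,
# matching the lambdavacuum + dust decomposition described above.
sp.pprint(Einstein)
```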
Killing vectors This spacetime admits a five-dimensional Lie algebra of Killing vectors, which can be generated by the "time translation" $\partial_t$, two "spatial translations" $\partial_y$ and $\partial_z$, a fourth Killing vector field $\partial_x - z\,\partial_z$, plus one further Killing vector field. The isometry group acts transitively (the translations move us in $t$, $y$, and $z$, and with the fourth vector we can move along $x$), so the spacetime is homogeneous. However, it is not isotropic, as can be seen. The Killing vector fields given above show that the slices of constant $x$ admit a transitive abelian three-dimensional transformation group, so that a quotient of the solution can be reinterpreted as a stationary cylindrically symmetric solution. The slices of constant $y$ allow for an SL(2,R) action, and the slices of constant $t$ admit a Bianchi III action (cf. the fourth Killing vector field). The symmetry group thus contains three-dimensional subgroups with examples of Bianchi types I, III, and VIII. Four of the five Killing vectors, as well as the curvature tensor, do not depend on the coordinate y. The Gödel solution is the Cartesian product of a factor R with a three-dimensional Lorentzian manifold (signature −++). It can be shown that, up to local isometry, the Gödel solution is the only perfect fluid solution of the Einstein field equation which admits a five-dimensional Lie algebra of Killing vectors. Petrov type and Bel decomposition The Weyl tensor of the Gödel solution has Petrov type D. This means that for an appropriately chosen observer, the tidal forces are very close to those that would be felt from a point mass in Newtonian gravity. To study the tidal forces in more detail, the Bel decomposition of the Riemann tensor can be computed into three pieces, the tidal or electrogravitic tensor (which represents tidal forces), the magnetogravitic tensor (which represents spin-spin forces on spinning test particles and other gravitational effects analogous to magnetism), and the topogravitic tensor (which represents the spatial sectional curvatures). Observers comoving with the dust particles find that the tidal tensor (with respect to $\vec{u}$, with components evaluated in our frame) represents an isotropic tidal tension orthogonal to the distinguished direction, the y-axis. The gravitomagnetic tensor vanishes identically. This is an artifact of the unusual symmetries of this spacetime, and implies that the putative "rotation" of the dust does not have the gravitomagnetic effects usually associated with the gravitational field produced by rotating matter. Of the two principal Lorentz invariants of the Riemann tensor, the second vanishes and the first (the Kretschmann invariant) is constant. The vanishing of the second invariant means that some observers measure no gravitomagnetism, which is consistent with what was just said. The fact that the first invariant is constant reflects the homogeneity of the Gödel spacetime. Rigid rotation The frame fields given above are both inertial ($\nabla_{\vec{e}_0}\vec{e}_0 = 0$), but the vorticity vector of the timelike geodesic congruence defined by the timelike unit vectors is nonzero and directed along the y-axis, with magnitude $\omega$. This means that the world lines of nearby dust particles are twisting about one another. Furthermore, the shear tensor of the congruence vanishes, so the dust particles exhibit rigid rotation. Optical effects If the past light cone of a given observer is studied, it can be found that null geodesics moving orthogonally to the y-axis spiral inwards toward the observer, so that if one looks radially, one sees the other dust grains in progressively time-lagged positions. 
However, the solution is stationary, so it might seem that an observer riding on a dust grain will not see the other grains rotating about himself. However, recall that while the first frame given above appears static in the chart, the Fermi–Walker derivatives show that it is in fact spinning with respect to gyroscopes. The second frame appears to be spinning in the chart, but it is gyrostabilized, and a non-spinning inertial observer riding on a dust grain will indeed see the other dust grains rotating clockwise about his axis of symmetry with angular velocity $\omega$. It turns out that in addition, optical images are expanded and sheared in the direction of rotation. If a non-spinning inertial observer looks along his axis of symmetry, he sees his coaxial non-spinning inertial peers apparently non-spinning with respect to himself, as would be expected. Shape of absolute future According to Hawking and Ellis, another remarkable feature of this spacetime is the fact that, if the inessential y coordinate is suppressed, light emitted from an event on the world line of a given dust particle spirals outwards, forms a circular cusp, then spirals inward and reconverges at a subsequent event on the world line of the original dust particle. This means that observers looking orthogonally to the $y$ direction can see only finitely far out, and also see themselves at an earlier time. The cusp is a non-geodesic closed null curve. (See the more detailed discussion below using an alternative coordinate chart.) Closed timelike curves Because of the homogeneity of the spacetime and the mutual twisting of our family of timelike geodesics, it is more or less inevitable that the Gödel spacetime should have closed timelike curves (CTCs). Indeed, there are CTCs through every event in the Gödel spacetime. This causal anomaly seems to have been regarded as the whole point of the model by Gödel himself, who was apparently striving to prove that Einstein's equations of spacetime are not consistent with what we intuitively understand time to be (i. e. that it passes and the past no longer exists, the position philosophers call presentism, whereas Gödel seems to have been arguing for something more like the philosophy of eternalism). Einstein was aware of Gödel's solution and commented in Albert Einstein: Philosopher-Scientist that if there are a series of causally-connected events in which "the series is closed in itself" (in other words, a closed timelike curve), then this suggests that there is no good physical way to define whether a given event in the series happened "earlier" or "later" than another event in the series: In that case the distinction "earlier-later" is abandoned for world-points which lie far apart in a cosmological sense, and those paradoxes, regarding the direction of the causal connection, arise, of which Mr. Gödel has spoken. Such cosmological solutions of the gravitation-equations (with not vanishing Λ-constant) have been found by Mr. Gödel. It will be interesting to weigh whether these are not to be excluded on physical grounds. Globally nonhyperbolic If the Gödel spacetime admitted any boundary-less temporal hyperslices (e.g. a Cauchy surface), any such CTC would have to intersect it an odd number of times, contradicting the fact that the spacetime is simply connected. Therefore, this spacetime is not globally hyperbolic. A cylindrical chart In this section, we introduce another coordinate chart for the Gödel solution, in which some of the features mentioned above are easier to see. 
Derivation Gödel did not explain how he found his solution, but there are in fact many possible derivations. We will sketch one here, and at the same time verify some of the claims made above. Start with a simple frame in a cylindrical-type chart, featuring two undetermined functions of the radial coordinate. Here, we think of the timelike unit vector field $\vec{e}_0$ as tangent to the world lines of the dust particles, and their world lines will in general exhibit nonzero vorticity but vanishing expansion and shear. Let us demand that the Einstein tensor match a dust term plus a vacuum energy term. This is equivalent to requiring that it match a perfect fluid; i.e., we require that the components of the Einstein tensor, computed with respect to our frame, take the diagonal form of a perfect fluid. This gives conditions on the two undetermined functions; plugging these into the Einstein tensor, we see that it indeed takes the required form. The simplest nontrivial spacetime we can construct in this way evidently would have the remaining coefficient be some nonzero but constant function of the radial coordinate, and with a bit of foresight we choose it accordingly. Finally, demanding that this frame yield a dust term of constant density fixes the last free constant, and our frame becomes the cylindrical form of the Gödel solution. Appearance of the light cones From the metric tensor we find that the vector field $\partial_\phi$, which is spacelike for small radii, becomes null at a critical radius $r_c$, where its norm vanishes. The circle $r = r_c$ at a given t is a closed null curve, but not a null geodesic. Examining the frame above, we can see that the coordinate $y$ is inessential; our spacetime is the direct product of a factor R with a signature −++ three-manifold. Suppressing $y$ in order to focus our attention on this three-manifold, let us examine how the appearance of the light cones changes as we travel out from the axis of symmetry. When we get to the critical radius, the cones become tangent to the closed null curve. A congruence of closed timelike curves At the critical radius $r_c$, the vector field $\partial_\phi$ becomes null. For larger radii, it is timelike. Thus, corresponding to our symmetry axis we have a timelike congruence made up of circles, corresponding to certain observers. This congruence is, however, only defined outside the cylinder $r = r_c$. This is not a geodesic congruence; rather, each observer in this family must maintain a constant acceleration in order to hold his course. Observers with smaller radii must accelerate harder; as $r \to r_c$, the magnitude of the acceleration diverges, which is just what is expected, given that $r = r_c$ is a null curve. Null geodesics If we examine the past light cone of an event on the axis of symmetry, we find the following behavior. Recall that vertical coordinate lines in our chart represent the world lines of the dust particles, but despite their straight appearance in our chart, the congruence formed by these curves has nonzero vorticity, so the world lines are actually twisting about each other. The fact that the null geodesics spiral inwards in the manner just described means that when our observer looks radially outwards, he sees nearby dust particles not at their current locations, but at their earlier locations. This is what we would expect if the dust particles are in fact rotating about one another. The null geodesics are geometrically straight; in the chart, they appear to be spirals only because the coordinates are "rotating" in order to permit the dust particles to appear stationary. 
The absolute future According to Hawking and Ellis (see the monograph cited below), all light rays emitted from an event on the symmetry axis reconverge at a later event on the axis, with the null geodesics forming a circular cusp (which is a null curve, but not a null geodesic). This implies that in the Gödel lambda-dust solution, the absolute future of each event has a character very different from what we might naively expect. Cosmological interpretation Following Gödel, we can interpret the dust particles as galaxies, so that the Gödel solution becomes a cosmological model of a rotating universe. Besides rotating, this model exhibits no Hubble expansion, so it is not a realistic model of the universe in which we live, but it can be taken as illustrating an alternative universe, which would in principle be allowed by general relativity (if one admits the legitimacy of a negative cosmological constant). Less well known solutions of Gödel's exhibit both rotation and Hubble expansion and have other qualities of his first model, but traveling into the past is not possible in them. According to Stephen Hawking, these models could well be a reasonable description of the universe that we observe; however, observational data are compatible only with a very low rate of rotation. The quality of these observations improved continually up until Gödel's death, and he would always ask "Is the universe rotating yet?" and be told "No, it isn't". We have seen that observers lying on the y-axis (in the original chart) see the rest of the universe rotating clockwise about that axis. However, the homogeneity of the spacetime shows that the direction, but not the position, of this "axis" is distinguished. Some have interpreted the Gödel universe as a counterexample to Einstein's hopes that general relativity should exhibit some kind of Mach's principle, citing the fact that the matter is rotating (world lines twisting about each other) in a manner sufficient to pick out a preferred direction, although with no distinguished axis of rotation. Others take Mach's principle to mean some physical law tying the definition of non-spinning inertial frames at each event to the global distribution and motion of matter everywhere in the universe, and say that because the non-spinning inertial frames are precisely tied to the rotation of the dust in just the way such a Mach principle would suggest, this model does accord with Mach's ideas. Many other exact solutions that can be interpreted as cosmological models of rotating universes are known. See also van Stockum dust, for another rotating dust solution with (true) cylindrical symmetry, Dust solution, an article about dust solutions in general relativity. References Notes See section 12.4 for the uniqueness theorem. See section 5.7 for a classic discussion of CTCs in the Gödel spacetime. Warning: in Fig. 31, the light cones do indeed tip over, but they also widen, so that vertical coordinate lines are always timelike; indeed, these represent the world lines of the dust particles, so they are timelike geodesics. Vukovic R. (2014): Tensor Model of the Rotating Universe, Exercise in Special Relativity. Exact solutions in general relativity Metric tensors Metric
Gödel metric
[ "Mathematics", "Engineering" ]
3,556
[ "Exact solutions in general relativity", "Tensors", "Mathematical objects", "Equations", "Metric tensors" ]
2,085,185
https://en.wikipedia.org/wiki/Artin%27s%20conjecture%20on%20primitive%20roots
In number theory, Artin's conjecture on primitive roots states that a given integer a that is neither a square number nor −1 is a primitive root modulo infinitely many primes p. The conjecture also ascribes an asymptotic density to these primes. This conjectural density equals Artin's constant or a rational multiple thereof. The conjecture was made by Emil Artin to Helmut Hasse on September 27, 1927, according to the latter's diary. The conjecture is still unresolved as of 2024. In fact, there is no single value of a for which Artin's conjecture is proved. Formulation Let a be an integer that is not a square number and not −1. Write a = a0b2 with a0 square-free. Denote by S(a) the set of prime numbers p such that a is a primitive root modulo p. Then the conjecture states S(a) has a positive asymptotic density inside the set of primes. In particular, S(a) is infinite. Under the conditions that a is not a perfect power and a0 is not congruent to 1 modulo 4, this density is independent of a and equals Artin's constant, which can be expressed as an infinite product: $C_{\mathrm{Artin}} = \prod_{p\ \mathrm{prime}} \left(1 - \frac{1}{p(p-1)}\right) \approx 0.3739558136\ldots$ The positive integers satisfying these conditions are: 2, 3, 6, 7, 10, 11, 12, 14, 15, 18, 19, 22, 23, 24, 26, 28, 30, 31, 34, 35, 38, 39, 40, 42, 43, 44, 46, 47, 48, 50, 51, 54, 55, 56, 58, 59, 60, 62, 63, … The negative integers satisfying these conditions are: 2, 4, 5, 6, 9, 10, 13, 14, 16, 17, 18, 20, 21, 22, 24, 25, 26, 29, 30, 33, 34, 36, 37, 38, 40, 41, 42, 45, 46, 49, 50, 52, 53, 54, 56, 57, 58, 61, 62, … Similar conjectural product formulas exist for the density when a does not satisfy the above conditions. In these cases, the conjectural density is always a rational multiple of CArtin. If a is a square number or a = −1, then the density is 0. If a is a perfect pth power for a prime p, then the number needs to be multiplied by $\frac{p(p-2)}{p^2-p-1}$ (if a is a perfect pth power for more than one prime p, this factor is included for all such primes p), and if a0 is congruent to 1 mod 4, then the number needs to be multiplied by $1 - \mu(|a_0|)\prod_{p \mid a_0} \frac{1}{p^2-p-1}$, where μ is the Möbius function and the product runs over the prime factors p of a0; e.g. for a = 8 = 23, the density is $\tfrac{3}{5}C_{\mathrm{Artin}}$, and for a = 5 (which is congruent to 1 mod 4), the density is $\tfrac{20}{19}C_{\mathrm{Artin}}$. Example For example, take a = 2. The conjecture claims that the set of primes p for which 2 is a primitive root has the above density CArtin. The set of such primes is S(2) = {3, 5, 11, 13, 19, 29, 37, 53, 59, 61, 67, 83, 101, 107, 131, 139, 149, 163, 173, 179, 181, 197, 211, 227, 269, 293, 317, 347, 349, 373, 379, 389, 419, 421, 443, 461, 467, 491, ...}. It has 38 elements smaller than 500 and there are 95 primes smaller than 500. The ratio (which conjecturally tends to CArtin) is 38/95 = 2/5 = 0.4. Partial results In 1967, Christopher Hooley published a conditional proof for the conjecture, assuming certain cases of the generalized Riemann hypothesis. Without the generalized Riemann hypothesis, there is no single value of a for which Artin's conjecture is proved. D. R. Heath-Brown proved in 1986 (Corollary 1) that at least one of 2, 3, or 5 is a primitive root modulo infinitely many primes p. He also proved (Corollary 2) that there are at most two primes for which Artin's conjecture fails. Some variations of Artin's problem Elliptic curve For an elliptic curve E, Lang and Trotter gave a conjecture for rational points on E analogous to Artin's primitive root conjecture. 
Specifically, they said that for a given point P of infinite order in the set of rational points E(Q), there exists a constant $C_{E,P}$ such that the number of primes p (p ≤ x) for which the reduction of the point, denoted by $\bar{P}$, generates the whole set of points of the reduced curve over $\mathbb{F}_p$, denoted by $\bar{E}(\mathbb{F}_p)$, is given by $C_{E,P}\,x/\log x$. Here we exclude the primes which divide the denominators of the coordinates of P. Gupta and Murty proved the Lang and Trotter conjecture for curves with complex multiplication under the Generalized Riemann Hypothesis, for primes splitting in the relevant imaginary quadratic field. Even order Krishnamurty proposed the question of how often the period of the decimal expansion of the reciprocal of a prime is even. The period of the expansion of 1/p in base b equals the multiplicative order of b modulo p, so the question is how often this order is even; Hasse proved in 1966 that the set of primes p for which the period is even (in base 10) has density 2/3. See also Stephens' constant, a number that plays the same role in a generalization of Artin's conjecture as Artin's constant plays here Brown–Zassenhaus conjecture Full reptend prime Cyclic number (group theory) References Analytic number theory Algebraic number theory Conjectures about prime numbers Unsolved problems in number theory
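As a quick numerical check of the example above (a sketch using naive trial division and a direct order computation, chosen for clarity rather than speed):

```python
# Count the primes p < 500 for which 2 is a primitive root modulo p,
# i.e. the multiplicative order of 2 mod p equals p - 1.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def multiplicative_order(a, p):
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

primes = [p for p in range(2, 500) if is_prime(p)]
S2 = [p for p in primes if p > 2 and multiplicative_order(2, p) == p - 1]
print(len(S2), len(primes))   # 38 95
print(len(S2) / len(primes))  # 0.4, to be compared with C_Artin ~ 0.3739558
```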
Artin's conjecture on primitive roots
[ "Mathematics" ]
1,184
[ "Analytic number theory", "Unsolved problems in mathematics", "Unsolved problems in number theory", "Algebraic number theory", "Mathematical problems", "Number theory" ]
2,086,689
https://en.wikipedia.org/wiki/Sulfur%E2%80%93iodine%20cycle
The sulfur–iodine cycle (S–I cycle) is a three-step thermochemical cycle used to produce hydrogen. The S–I cycle consists of three chemical reactions whose net reactant is water and whose net products are hydrogen and oxygen. All other chemicals are recycled. The S–I process requires an efficient source of heat. Process description The three reactions combined to produce hydrogen are the following: I2 + SO2 + 2 H2O → 2 HI + H2SO4 (120 °C) (Bunsen reaction) The HI is then separated by distillation or liquid/liquid gravitic separation. 2 H2SO4 → 2 SO2 + 2 H2O + O2 (830 °C) The water, SO2 and residual H2SO4 must be separated from the oxygen byproduct by condensation. 2 HI → I2 + H2 (450 °C) Iodine and any accompanying water or SO2 are separated by condensation, and the hydrogen product remains as a gas. Net reaction: 2 H2O → 2 H2 + O2 The sulfur and iodine compounds are recovered and reused, hence the consideration of the process as a cycle. This S–I process is a chemical heat engine. Heat enters the cycle in the high-temperature endothermic chemical reactions 2 and 3, and heat exits the cycle in the low-temperature exothermic reaction 1. The difference between the heat entering and leaving the cycle exits the cycle in the form of the heat of combustion of the hydrogen produced. Characteristics Advantages All-fluid (liquids, gases) process, therefore well suited for continuous production High thermal efficiency predicted (about 50%) Completely closed system without byproducts or effluents (besides hydrogen and oxygen) Suitable for application with solar, nuclear, and hybrid (e.g., solar-fossil) sources of heat – if high enough temperatures can be achieved More developed than competing thermochemical processes Scalable from relatively small scale to huge applications No need for expensive or toxic catalysts or additives More efficient than electrolysis of water (~70-80% efficiency) using electricity derived from a thermal power plant (~30-60% efficiency), combining to ~21-48% efficiency Waste heat suitable for district heating if cogeneration is desired Disadvantages Very high temperatures required (at least 850 °C) – unachievable or difficult to achieve with current pressurized water reactors or concentrated solar power Corrosive reagents used as intermediaries (iodine, sulfur dioxide, hydriodic acid, sulfuric acid); therefore, advanced materials needed for construction of process apparatus Significant further development required to be feasible on large scale At the proposed temperature range advanced thermal power plants can achieve efficiencies (electric output per heat input) in excess of 50%, somewhat negating the efficiency advantage In case of leakage, corrosive and somewhat toxic substances are released to the environment – among them volatile iodine and hydroiodic acid If hydrogen is to be used for process heat, the required high temperatures make the benefits compared to direct utilization of heat questionable Unable to use non-thermal or low-grade thermal energy sources such as hydropower, wind power or most currently available geothermal power Research The S–I cycle was invented at General Atomics in the 1970s. The Japan Atomic Energy Agency (JAEA) has conducted successful experiments with the S–I cycle in the helium-cooled High Temperature Test Reactor, a reactor which reached first criticality in 1998, and JAEA aspires to use further nuclear very-high-temperature generation IV reactors (VHTR) to produce industrial-scale quantities of hydrogen. (The Japanese refer to the cycle as the IS cycle.) 
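As a quick consistency check of the three reaction steps listed in the process description above, the following sketch verifies that each step is element-balanced and that, with the recycled species cancelled, the cycle reduces to net water splitting. Species are modelled only as element-count dictionaries.

```python
# Element-balance check of the S-I cycle reaction steps.
from collections import Counter

H2O = Counter(H=2, O=1)
I2 = Counter(I=2)
SO2 = Counter(S=1, O=2)
HI = Counter(H=1, I=1)
H2SO4 = Counter(H=2, S=1, O=4)
H2 = Counter(H=2)
O2 = Counter(O=2)

def total(*terms):
    """Sum the element counts of (coefficient, species) terms."""
    out = Counter()
    for coeff, species in terms:
        for elem, count in species.items():
            out[elem] += coeff * count
    return out

# Step 1 (Bunsen): I2 + SO2 + 2 H2O -> 2 HI + H2SO4
assert total((1, I2), (1, SO2), (2, H2O)) == total((2, HI), (1, H2SO4))
# Step 2: 2 H2SO4 -> 2 SO2 + 2 H2O + O2
assert total((2, H2SO4)) == total((2, SO2), (2, H2O), (1, O2))
# Step 3: 2 HI -> I2 + H2
assert total((2, HI)) == total((1, I2), (1, H2))
# Net reaction (2 x step 1 + step 2 + 2 x step 3, recycled species cancelled):
assert total((2, H2O)) == total((2, H2), (1, O2))
print("all steps balance; net reaction is 2 H2O -> 2 H2 + O2")
```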
Plans have been made to test larger-scale automated systems for hydrogen production. Under an International Nuclear Energy Research Initiative (INERI) agreement, the French CEA, General Atomics and Sandia National Laboratories are jointly developing the sulfur–iodine process. Additional research is taking place at the Idaho National Laboratory, and in Canada, Korea and Italy. Material challenge The S–I cycle involves operations with corrosive chemicals at temperatures up to about 850 °C. The selection of materials with sufficient corrosion resistance under the process conditions is of key importance to the economic viability of this process. The materials suggested include the following classes: refractory metals, reactive metals, superalloys, ceramics, polymers, and coatings. Some materials suggested include tantalum alloys, niobium alloys, noble metals, high-silicon steels, several nickel-based superalloys, mullite, silicon carbide (SiC), glass, silicon nitride (Si3N4), and others. Recent research on scaled prototyping suggests that new tantalum surface technologies may be a technically and economically feasible way to make larger-scale installations. Hydrogen economy The sulfur–iodine cycle has been proposed as a way to supply hydrogen for a hydrogen-based economy. It does not require hydrocarbons like current methods of steam reforming, but it requires heat from combustion, nuclear reactions, or solar heat concentrators. See also Cerium(IV) oxide–cerium(III) oxide cycle Copper–chlorine cycle Hybrid sulfur cycle High-temperature electrolysis Iron oxide cycle Zinc–zinc oxide cycle Footnotes References Paul M. Mathias and Lloyd C. Brown, "Thermodynamics of the Sulfur-Iodine Cycle for Thermochemical Hydrogen Production", presented at the 68th Annual Meeting of the Society of Chemical Engineers, Japan, 23 March 2003. (PDF). Atsuhiko Terada, Jin Iwatsuki, Shuichi Ishikura, Hiroki Noguchi, Shinji Kubo, Hiroyuki Okuda, Seiji Kasahara, Nobuyuki Tanaka, Hiroyuki Ota, Kaoru Onuki and Ryutaro Hino, "Development of Hydrogen Production Technology by Thermochemical Water Splitting IS Process Pilot Test Plan", Journal of Nuclear Science and Technology, Vol. 44, No. 3, pp. 477–482 (2007). (PDF). External links Hydrogen: Our Future made with Nuclear (in MPR Profile issue 9) Use of the modular helium reactor for hydrogen production (World Nuclear Association Symposium 2003) Inorganic reactions Hydrogen production
Sulfur–iodine cycle
[ "Chemistry" ]
1,288
[ "Inorganic reactions" ]
37,179,691
https://en.wikipedia.org/wiki/Canalizations%20of%20Zenobia
The Canalizations of Zenobia or El Kanat are canals that, according to tradition, were built by Queen Zenobia to channel water from the Orontes river in the Anti-Lebanon mountains to Palmyra. Remains of the canals can be seen in places around Lebanon. History Some of the canals were built in the second century during Hadrian's time, when the region was fully under Roman rule. Bridges over the canals, still standing from Roman times, also survive. It has been suggested that one of the canals originated from a mountain near Labweh, extending to Qusayr. The other extends from the village of Chawaghir, north of Hermel. The canals were cut out of solid limestone bedrock to a considerable depth, with wells at regular intervals. Queen Zenobia probably extended the original canals in order to bring water to the nearly 200,000 inhabitants of the city and its surroundings when she ruled her kingdom. The canals were used until the Arab conquest, when they were destroyed. Data One of the canals is suggested to originate from a mountain near Labweh, extending to Qusayr. Labweh has several archaeological sites of interest, including three old caves with Roman-Byzantine sarcophagi and the remains of a temple. There are also remains of a Byzantine bastion and a Roman dam suggested to date to the reign of Queen Zenobia. Legend suggests that channels were carved through the rock to send water to her lands in Palmyra, Syria. Another canal extends from the village of Chawaghir, north of Hermel. The canals, which ran to the outskirts of Palmyra, were cut out of solid limestone bedrock to a considerable depth, with wells at regular intervals. Archaeologist Diana Kirkbride also wrote about the canals. The traditional suggestion that the canals were originally constructed during the brief reign of Zenobia has been treated as probable, but not certain, by Michael Alouf, who notes the existence of canal traces in the desert. References External links Lebanese Ministry of Tourism brochure - Hermel Assi Rafting Club - Al-Assi (Orontes) River historical tour Palmira, with archaeological detailed information Macro-engineering Geography of Lebanon Archaeological sites in Lebanon Water transport in Lebanon Cuts (earthmoving) Roman sites in Lebanon Tourist attractions in Lebanon Zenobia
Canalizations of Zenobia
[ "Engineering" ]
461
[ "Macro-engineering" ]
37,181,976
https://en.wikipedia.org/wiki/Multicanonical%20ensemble
In statistics and physics, the multicanonical ensemble (also called multicanonical sampling or flat histogram) is a Markov chain Monte Carlo sampling technique that uses the Metropolis–Hastings algorithm to compute integrals where the integrand has a rough landscape with multiple local minima. It samples states according to the inverse of the density of states, which has to be known a priori or be computed using other techniques like the Wang and Landau algorithm. Multicanonical sampling is an important technique for spin systems like the Ising model or spin glasses. Motivation In systems with a large number of degrees of freedom, like spin systems, Monte Carlo integration is required. In this integration, importance sampling, and in particular the Metropolis algorithm, is a very important technique. However, the Metropolis algorithm samples states according to the Boltzmann factor $e^{-\beta E}$, where $\beta$ is the inverse of the temperature. This means that an energy barrier of height $\Delta E$ on the energy spectrum is exponentially difficult to overcome: the expected time to cross it grows as $e^{\beta \Delta E}$. Systems with multiple local energy minima like the Potts model become hard to sample as the algorithm gets stuck in the system's local minima. This motivates other approaches, namely, other sampling distributions. Overview The multicanonical ensemble uses the Metropolis–Hastings algorithm with a sampling distribution given by the inverse of the density of states of the system, contrary to the Boltzmann sampling distribution of the Metropolis algorithm. With this choice, on average, the number of states sampled at each energy is constant, i.e. it is a simulation with a "flat histogram" on energy. This leads to an algorithm for which the energy barriers are no longer difficult to overcome. Another advantage over the Metropolis algorithm is that the sampling is independent of the temperature of the system, which means that one simulation allows the estimation of thermodynamical variables for all temperatures (thus the name "multicanonical": several temperatures). This is a great improvement in the study of first-order phase transitions. The biggest problem in performing a multicanonical ensemble is that the density of states has to be known a priori. One important contribution to multicanonical sampling was the Wang and Landau algorithm, which asymptotically converges to a multicanonical ensemble while calculating the density of states during the convergence. The multicanonical ensemble is not restricted to physical systems. It can be employed on abstract systems which have a cost function F. By using the density of states with respect to F, the method becomes general for computing higher-dimensional integrals or finding local minima. Motivation Consider a system and its phase space $\Omega$, characterized by a configuration $\boldsymbol{r} \in \Omega$ and a "cost" function F from the system's phase space to a one-dimensional space $\Gamma$, the spectrum of F. The computation of an average quantity $\langle Q \rangle$ over the phase space requires the evaluation of an integral: $\langle Q \rangle = \int_\Omega Q(\boldsymbol{r})\,P(\boldsymbol{r})\,d\boldsymbol{r}$, where $P(\boldsymbol{r})$ is the weight of each state (e.g. a constant $P(\boldsymbol{r})$ corresponds to uniformly distributed states). When Q does not depend on the particular state but only on the particular F's value of the state, $Q(\boldsymbol{r}) = Q(F(\boldsymbol{r}))$, the formula for $\langle Q \rangle$ can be integrated over f by adding a Dirac delta function and be written as $\langle Q \rangle = \int_\Gamma Q(f)\,P(f)\,df$, where $P(f) = \int_\Omega \delta(F(\boldsymbol{r}) - f)\,P(\boldsymbol{r})\,d\boldsymbol{r}$ is the marginal distribution of F. When the system has a large number of degrees of freedom, an analytical expression for $P(f)$ is often hard to obtain, and Monte Carlo integration is typically employed in the computation of $\langle Q \rangle$. 
On the simplest formulation, the method chooses N uniformly distributed states $\boldsymbol{r}_i \in \Omega$, and uses the estimator $\overline{Q}_N = \frac{1}{N}\sum_{i=1}^N Q(\boldsymbol{r}_i)$ for computing $\langle Q \rangle$, because $\overline{Q}_N$ converges almost surely to $\langle Q \rangle$ by the strong law of large numbers. One typical problem of this convergence is that the variance of Q can be very high, which leads to a high computational effort to achieve reasonable results. To improve this convergence, the Metropolis–Hastings algorithm was proposed. Generally, Monte Carlo methods' idea is to use importance sampling to improve the convergence of the estimator by sampling states according to an arbitrary distribution $\pi(\boldsymbol{r})$, and use the appropriate estimator $\overline{Q}_N = \frac{1}{N}\sum_{i=1}^N \frac{P(\boldsymbol{r}_i)}{\pi(\boldsymbol{r}_i)}\,Q(\boldsymbol{r}_i)$. This estimator generalizes the estimator of the mean for samples drawn from an arbitrary distribution. Therefore, when $\pi$ is a uniform distribution, it corresponds to the one used on a uniform sampling above. When the system is a physical system in contact with a heat bath, each state $\boldsymbol{r}$ is weighted according to the Boltzmann factor, $P(\boldsymbol{r}) \propto e^{-\beta E(\boldsymbol{r})}$. In Monte Carlo, the canonical ensemble is defined by choosing $\pi$ to be proportional to the Boltzmann factor. In this situation, the estimator corresponds to a simple arithmetic average: $\overline{Q}_N = \frac{1}{N}\sum_{i=1}^N Q(\boldsymbol{r}_i)$. Historically, this occurred because the original idea was to use the Metropolis–Hastings algorithm to compute averages on a system in contact with a heat bath where the weight is given by the Boltzmann factor. While it is often the case that the sampling distribution $\pi$ is chosen to be the weight distribution P, this does not need to be the case. One situation where the canonical ensemble is not an efficient choice is when it takes an arbitrarily long time to converge. One situation where this happens is when the function F has multiple local minima. The computational cost for the algorithm to leave a specific region with a local minimum exponentially increases with the cost function's value of the minimum. That is, the deeper the minimum, the more time the algorithm spends there, and the harder it will be to leave (exponentially growing with the depth of the local minimum). One way to avoid becoming stuck in local minima of the cost function is to make the sampling technique "invisible" to local minima. This is the basis of the multicanonical ensemble. Multicanonical ensemble The multicanonical ensemble is defined by choosing the sampling distribution to be $\pi(\boldsymbol{r}) \propto \frac{1}{P(F(\boldsymbol{r}))}$, where $P(f)$ is the marginal distribution of F defined above. The consequence of this choice is that the average number of samples with a given value of f, $m(f) = \int_\Omega \delta(F(\boldsymbol{r}) - f)\,\pi(\boldsymbol{r})\,d\boldsymbol{r}$, is constant; that is, the average number of samples does not depend on f: all costs f are equally sampled regardless of whether they are more or less probable. This motivates the name "flat-histogram". For systems in contact with a heat bath, the sampling is independent of the temperature and one simulation allows the study of all temperatures. Tunneling time and critical slowing down Like in any other Monte Carlo method, there are correlations between the samples drawn from $\pi$. A typical measurement of the correlation is the tunneling time. The tunneling time is defined by the number of Markov steps (of the Markov chain) the simulation needs to perform a round-trip between the minimum and maximum of the spectrum of F. One motivation to use the tunneling time is that when the walk crosses the spectrum, it passes through the region of the maximum of the density of states, thus de-correlating the process. On the other hand, using round-trips ensures that the system visits the whole spectrum. Because the histogram is flat on the variable F, a multicanonical ensemble can be seen as a diffusion process (i.e. 
a random walk) on the one-dimensional line of F values. Detailed balance of the process dictates that there is no drift on the process. This implies that the tunneling time, in local dynamics, should scale as a diffusion process, and thus the tunneling time should scale quadratically with the size of the spectrum, N: $\tau_{tt} \propto N^2$. However, in some systems (the Ising model being the most paradigmatic), the scaling suffers from critical slowing down: it is $\tau_{tt} \propto N^{2+z}$, where $z > 0$ depends on the particular system. Non-local dynamics were developed to improve the scaling to a quadratic scaling (see the Wolff algorithm), beating the critical slowing down. However, it is still an open question whether there is a local dynamics that does not suffer from critical slowing down in spin systems like the Ising model. References Monte Carlo methods Computational physics
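A minimal sketch of flat-histogram sampling on a toy system follows. The double-well cost function, grid discretization, and bin count are all invented illustration choices; the marginal distribution P(f) is obtained here by brute-force enumeration, standing in for what the Wang–Landau algorithm would estimate in a real application.

```python
# Multicanonical (flat-histogram) Metropolis sampling of a toy 1D system
# with the double-well cost F(x) = (x^2 - 1)^2 on a discrete grid.
import random

random.seed(0)
xs = [i / 100.0 for i in range(-200, 201)]   # discrete states on [-2, 2]
F = [(x * x - 1.0) ** 2 for x in xs]

# Binned histogram of F over all states ~ density of states.
NBINS = 30
FMAX = max(F)
def bin_of(f):
    return min(int(NBINS * f / FMAX), NBINS - 1)

dos = [0] * NBINS
for f in F:
    dos[bin_of(f)] += 1

# Multicanonical weight: w(state) = 1 / P(F(state)).
def weight(i):
    return 1.0 / dos[bin_of(F[i])]

# Metropolis-Hastings with symmetric nearest-neighbour proposals:
# accept with probability min(1, w(new) / w(old)).
i = 0
visits = [0] * NBINS
for _ in range(200000):
    j = i + random.choice((-1, 1))
    if 0 <= j < len(xs) and random.random() < min(1.0, weight(j) / weight(i)):
        i = j
    visits[bin_of(F[i])] += 1

print(visits)  # roughly flat across the F-bins, up to sampling noise
```

Because the stationary distribution over states is proportional to 1/P(F(x)), the marginal histogram over F-bins comes out approximately constant, which is exactly the "flat histogram" property, and the energy barrier between the two wells no longer suppresses transitions exponentially.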
Multicanonical ensemble
[ "Physics" ]
1,553
[ "Monte Carlo methods", "Computational physics" ]
37,183,390
https://en.wikipedia.org/wiki/Langmuir%E2%80%93Taylor%20detector
A Langmuir–Taylor detector, also called surface ionization detector or hot wire detector, is a kind of ionization detector used in mass spectrometry, developed by John Taylor based on the work of Irving Langmuir and K. H. Kingdon. Construction This detector usually consists of a heated thin filament or ribbon of a metal with a high work function (typically tungsten or rhenium). Neutral atoms or molecules that strike the filament can boil off as positive ions in a process known as surface ionization, and these may be either measured as a current or detected, individually, using an electron multiplier and particle counting electronics. Applications This detector is mostly used with alkali atoms, having a low ionization potential, with applications in mass spectrometry and atomic clocks. References Mass spectrometry Particle detectors
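The surface ionization process described above is commonly quantified by the Saha–Langmuir equation, which gives the ratio of ions to neutrals leaving the hot filament. The sketch below evaluates it for a caesium beam on tungsten; the work function, ionization energy, and statistical-weight ratio used are representative textbook values, not measured data for any specific detector.

```python
# Saha-Langmuir equation for the ion-to-neutral ratio from a hot surface:
#   n_plus / n_0 = (g_plus / g_0) * exp((W - IE) / (k_B * T))
# Illustrative values: tungsten filament (W ~ 4.5 eV), caesium (IE = 3.89 eV),
# g_plus/g_0 = 1/2 for an alkali atom.
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def saha_langmuir(work_function_ev, ionization_energy_ev, T_kelvin, g_ratio=0.5):
    return g_ratio * math.exp((work_function_ev - ionization_energy_ev)
                              / (K_B_EV * T_kelvin))

ratio = saha_langmuir(4.5, 3.89, 1500.0)
print(ratio)                # ion-to-neutral ratio
print(ratio / (1 + ratio))  # ionization probability, close to 1 for Cs on W
```

Because the work function of the filament exceeds the ionization energy of the alkali atom, the exponent is positive and the ionization probability is high, which is why this detector works so well for low-ionization-potential species.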
Langmuir–Taylor detector
[ "Physics", "Chemistry", "Technology", "Engineering" ]
177
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Measuring instruments", "Particle detectors", "Mass spectrometry", "Analytical chemistry stubs", "Matter" ]
37,183,702
https://en.wikipedia.org/wiki/L.%20M.%20Narducci
Lorenzo M. Narducci (25 May 1942 – 21 July 2006) was an Italian-American physicist known for his contributions to quantum optics and the study of laser instabilities, in particular. He was the author of more than 200 scientific papers and several books including Laser Physics and Laser Instabilities. In addition to his research on the theory of laser instabilities he also contributed to the physics of emission and absorption in three-level systems, and frequency locking. Narducci received his PhD from the University of Milan for research on optical coherence in quantum electrodynamics. He served as editor of Optics Communications (1987-2006) and was member of the editorial board at Physical Review A. Narducci was a Fellow of the Optical Society and among various awards, he received the 1991 Einstein Prize for Laser Science and the 1999 Willis E. Lamb Award for Laser Science and Quantum Optics. References External links In Memory at Physical Review A, July 21, 2006. Obituary at Physics Today, August 13, 2006. In Memoriam at The Optical Society, July 21, 2006. 20th-century Italian physicists Quantum physicists Optical physicists Theoretical physicists 20th-century American physicists Fellows of Optica (society) University of Milan alumni Italian emigrants to the United States 1942 births 2006 deaths
L. M. Narducci
[ "Physics" ]
266
[ "Theoretical physics", "Quantum physicists", "Theoretical physicists", "Quantum mechanics" ]
37,186,842
https://en.wikipedia.org/wiki/Saffman%E2%80%93Taylor%20instability
The Saffman–Taylor instability, also known as viscous fingering, is the formation of patterns in a morphologically unstable interface between two fluids in a porous medium or in a Hele-Shaw cell, described mathematically by Philip Saffman and G. I. Taylor in a paper of 1958. This situation is most often encountered during drainage processes through media such as soils. It occurs when a less viscous fluid is injected, displacing a more viscous fluid; in the inverse situation, with the more viscous fluid displacing the other, the interface is stable and no instability is seen. Essentially the same effect occurs driven by gravity (without injection) if the interface is horizontal and separates two fluids of different densities, the heavier one being above the other: this is known as the Rayleigh–Taylor instability. In the rectangular configuration the system evolves until a single finger (the Saffman–Taylor finger) forms, whilst in the radial configuration the pattern grows forming fingers by successive tip-splitting. Most experimental research on viscous fingering has been performed on Hele-Shaw cells, which consist of two closely spaced, parallel sheets of glass containing a viscous fluid. The two most common set-ups are the channel configuration, in which the less viscous fluid is injected at one end of the channel, and the radial configuration, in which the less viscous fluid is injected at the centre of the cell. Instabilities analogous to viscous fingering can also be self-generated in biological systems. Derivation for a planar interface The simplest case of the instability arises at a planar interface within a porous medium or Hele-Shaw cell, and was treated by Saffman and Taylor but also earlier by other authors. A fluid of viscosity $\mu_1$ is driven in the $x$-direction into another fluid of viscosity $\mu_2$ at some velocity $V$. Denoting the permeability of the porous medium as a constant, isotropic $\Pi$, Darcy's law $\boldsymbol{u} = -(\Pi/\mu)\nabla p$ gives the unperturbed pressure fields in the two fluids to be $p_1 = p_0 - \frac{\mu_1 V}{\Pi}(x - Vt)$ and $p_2 = p_0 - \frac{\mu_2 V}{\Pi}(x - Vt)$, where $p_0$ is the pressure at the planar interface, working in a frame where this interface is instantaneously given by $x = Vt$. Perturbing this interface to $x = Vt + \zeta$ with $\zeta = \epsilon\, e^{iky + \sigma t}$ (decomposing into normal modes in the plane, and taking $k > 0$), the pressure fields become $p_1 = p_0 - \frac{\mu_1 V}{\Pi}(x - Vt) + A_1 e^{k(x - Vt)} e^{iky + \sigma t}$ and $p_2 = p_0 - \frac{\mu_2 V}{\Pi}(x - Vt) + A_2 e^{-k(x - Vt)} e^{iky + \sigma t}$. As a consequence of the incompressibility of the flow and Darcy's law, the pressure fields must be harmonic, which, coupled with the requirement that the perturbation decay away from the interface, fixes the exponential dependence on $x$ shown above, with the constants $A_1$ and $A_2$ to be determined by the boundary conditions at the interface. Upon linearization, the kinematic boundary condition at the interface (that fluid velocity in the $x$ direction must match the velocity of the fluid interface), coupled with Darcy's law, gives $\sigma\epsilon = -\frac{\Pi}{\mu_1} k A_1 = \frac{\Pi}{\mu_2} k A_2$, and thus $A_1 = -\sigma\epsilon\mu_1/(\Pi k)$ and $A_2 = \sigma\epsilon\mu_2/(\Pi k)$. Matching the pressure fields at the interface gives $\sigma = \frac{k V (\mu_2 - \mu_1)}{\mu_1 + \mu_2}$, and so leads to growth of the perturbation when $\mu_1 < \mu_2$, i.e. when the injected fluid is less viscous than the ambient fluid. There are problems with this basic case: namely that the most unstable mode has infinite wavenumber and grows at an infinitely fast rate, which can be rectified by the introduction of surface tension (which provides a jump condition in pressures across the fluid interface through the Young–Laplace equation), which has the effect of modifying the growth rate to $\sigma = \frac{k\left[V(\mu_2 - \mu_1) - \gamma\,\Pi\,k^2\right]}{\mu_1 + \mu_2}$, with $\gamma$ the surface tension and $\kappa \approx k^2\zeta$ the mean curvature of the perturbed interface. This suppresses small-wavelength (high-wavenumber) disturbances, and we would expect to see instabilities with wavenumber close to the value $k_{\max} = \sqrt{V(\mu_2 - \mu_1)/(3\gamma\Pi)}$ which results in the maximal value of $\sigma$; in this case with surface tension, there is a unique maximal value. 
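A small numerical sketch of the dispersion relation reconstructed above follows; all parameter values are illustrative (a water-like fluid injected into an oil-like fluid in a generic porous medium), not taken from any specific experiment.

```python
# Growth rate sigma(k) = k * (V*(mu2 - mu1) - gamma*Pi*k**2) / (mu1 + mu2)
# for the planar Saffman-Taylor problem, with its most unstable wavenumber
# k_max = sqrt(V*(mu2 - mu1) / (3*gamma*Pi)).
import math

V = 1e-3      # injection velocity, m/s          (illustrative)
mu1 = 1e-3    # viscosity of injected fluid, Pa*s (water-like)
mu2 = 1e-1    # viscosity of displaced fluid, Pa*s (oil-like)
gamma = 0.03  # interfacial tension, N/m          (illustrative)
Pi = 1e-11    # permeability, m^2                 (illustrative)

def sigma(k):
    return k * (V * (mu2 - mu1) - gamma * Pi * k**2) / (mu1 + mu2)

k_max = math.sqrt(V * (mu2 - mu1) / (3 * gamma * Pi))
print(k_max, sigma(k_max))   # fastest-growing wavenumber and its growth rate
print(sigma(2 * k_max) < 0)  # True: surface tension stabilizes short waves
```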
In radial geometry The Saffman–Taylor instability is usually seen in an axisymmetric context as opposed to the simple planar case derived above. The mechanisms for the instability remain the same in this case, and the selection of the most unstable wavenumber in this case corresponds to a given number of fingers (an integer). See also Kelvin–Helmholtz instability Darcy's law Darrieus–Landau instability References Fluid dynamics Fluid dynamic instabilities
Saffman–Taylor instability
[ "Chemistry", "Engineering" ]
836
[ "Piping", "Chemical engineering", "Fluid dynamic instabilities", "Fluid dynamics" ]
29,068,806
https://en.wikipedia.org/wiki/Romeo%20Model%20Checker
Roméo is an integrated tool environment for modeling, validation and verification of real-time systems modeled as time Petri Nets or stopwatch Petri Nets, extended with parameters. The tool has been developed by the Real-Time Systems group at LS2N lab (École centrale de Nantes, University of Nantes, CNRS) in Nantes, France. References External links Web page of Roméo Web page of LS2N lab Model checkers
Romeo Model Checker
[ "Mathematics" ]
91
[ "Model checkers", "Mathematical software" ]
29,072,525
https://en.wikipedia.org/wiki/Mackey%E2%80%93Arens%20theorem
The Mackey–Arens theorem is an important theorem in functional analysis that characterizes those locally convex vector topologies that have some given space of linear functionals as their continuous dual space. According to Narici (2011), this profound result is central to duality theory; a theory that is "the central part of the modern theory of topological vector spaces." Prerequisites Let X be a vector space and let X′ be a vector subspace of the algebraic dual of X that separates points on X. If τ is any other locally convex Hausdorff topological vector space topology on X, then we say that τ is compatible with duality between X and X′ if, when X is equipped with τ, it has X′ as its continuous dual space. If we give X the weak topology σ(X, X′) then (X, σ(X, X′)) is a Hausdorff locally convex topological vector space (TVS) and σ(X, X′) is compatible with duality between X and X′ (i.e. (X, σ(X, X′))′ = X′). We can now ask the question: what are all of the locally convex Hausdorff TVS topologies that we can place on X that are compatible with duality between X and X′? The answer to this question is called the Mackey–Arens theorem. Mackey–Arens theorem See also Dual system Mackey topology Polar topology References Sources Theorems in functional analysis Lemmas Topological vector spaces Linear functionals
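For reference, a standard formulation of the theorem itself follows; this is a common textbook statement and may differ in wording from the boxed statement the article originally displayed.

```latex
\textbf{Mackey--Arens theorem.}
Let $X$ be a vector space and let $X'$ be a vector subspace of the algebraic dual
of $X$ that separates points on $X$. A locally convex Hausdorff topology $\tau$
on $X$ is compatible with the duality between $X$ and $X'$ if and only if
\[
    \sigma(X, X') \;\subseteq\; \tau \;\subseteq\; \tau(X, X'),
\]
i.e.\ $\tau$ is finer than the weak topology $\sigma(X, X')$ and coarser than the
Mackey topology $\tau(X, X')$, the topology of uniform convergence on the
absolutely convex, $\sigma(X', X)$-compact subsets of $X'$.
```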
Mackey–Arens theorem
[ "Mathematics" ]
261
[ "Theorems in mathematical analysis", "Mathematical theorems", "Vector spaces", "Topological vector spaces", "Space (mathematics)", "Theorems in functional analysis", "Mathematical problems", "Lemmas" ]
29,078,203
https://en.wikipedia.org/wiki/Oracle%20Fusion%20Architecture
Oracle Fusion Architecture is a technology reference architecture or blueprint from Oracle Corporation for building applications. Oracle Fusion Applications is built on top of the Oracle Fusion Middleware technology stack using Oracle's Fusion Architecture as blueprint. Oracle Fusion Architecture is not a product, and can be used without licensing it from Oracle. Details Oracle Fusion Architecture provides an open architecture ecosystem, which is service- and event-enabled. Many enterprises use this open, pluggable architecture ecosystem to write Oracle Fusion Applications, or even third-party applications on top of Oracle Fusion Middleware. Oracle Fusion Architecture is based on the following core principles: Model Driven: For applications, business processes and business information Service & Event- enabled: For extensible, modular, flexible applications and processes Information Centric: For complete and consistent, actionable, real-time intelligence Grid-Ready: Must be scalable, available, secure, manageable on low-cost hardware Standards-based: Must be open, pluggable in a heterogeneous environment Oracle Fusion Applications that can be written on Oracle Fusion Middleware using the Oracle Fusion Architecture ecosystem, were released in September, 2010. See also Oracle Fusion Middleware Oracle Fusion Applications Interface (computing) Enterprise service bus Fusion CRM References External links Oracle Fusion website Fusion Service-oriented architecture-related products
Oracle Fusion Architecture
[ "Engineering" ]
265
[ "Software engineering", "Software engineering stubs" ]
29,078,317
https://en.wikipedia.org/wiki/Soap%20scum
Soap scum or lime soap is the white solid composed of calcium stearate, magnesium stearate, and similar alkaline earth metal derivatives of fatty acids. These materials result from the addition of soap and other anionic surfactants to hard water. Hard water contains calcium and magnesium ions, which react with the surfactant anion to give these metallic or lime soaps. 2 C17H35COO−Na+ + Ca2+ → (C17H35COO)2Ca + 2 Na+ In this reaction, the sodium cation in soap is replaced by calcium to form calcium stearate. Lime soaps build deposits on fibres, washing machines, and sinks. Synthetic surfactants are less susceptible to the effects of hard water. Most detergents contain builders that prevent the formation of lime soaps. See also Water softening Tadelakt, a form of lime-soap-based waterproof plaster Qadad, another form of lime-soap waterproof plaster References Water treatment Soaps Calcium compounds Magnesium compounds
Soap scum
[ "Chemistry", "Engineering", "Environmental_science" ]
218
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
29,078,639
https://en.wikipedia.org/wiki/Positive%20material%20identification
Positive material identification (PMI) is the analysis of a material to establish its composition by measuring the percentage of each of its constituent elements; in principle any material can be analysed, but PMI is generally applied to metallic alloys. Typical methods for PMI include X-ray fluorescence (XRF) and optical emission spectrometry (OES). PMI is a portable method of analysis and can be used in the field on components. X-ray fluorescence (XRF) PMI cannot detect light elements such as carbon. This means that when analysing stainless steels such as grades 304 and 316, the low-carbon 'L' variant cannot be determined. It can, however, be identified with optical emission spectrometry (OES). References Elemental analysis Chemical tests Quality control
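As a rough illustration of how a PMI reading is turned into an alloy identification, the sketch below compares measured weight percentages against approximate specification windows. The element ranges and grades shown are illustrative assumptions rather than authoritative grade definitions, and carbon is deliberately omitted since XRF cannot measure it.

```python
# Compare an XRF reading (weight %) against rough alloy specification windows.
# The element ranges below are illustrative approximations, not authoritative
# grade definitions, and carbon is omitted because XRF cannot measure it (so
# 316 and 316L cannot be told apart here).
SPECS = {
    "304": {"Cr": (17.5, 19.5), "Ni": (8.0, 10.5), "Mo": (0.0, 0.6)},
    "316": {"Cr": (16.0, 18.0), "Ni": (10.0, 14.0), "Mo": (2.0, 3.0)},
}

def match_grade(reading: dict) -> list:
    """Return the grades whose (approximate) windows contain every listed element."""
    return [
        grade for grade, windows in SPECS.items()
        if all(lo <= reading.get(el, 0.0) <= hi for el, (lo, hi) in windows.items())
    ]

print(match_grade({"Cr": 16.8, "Ni": 10.2, "Mo": 2.1}))   # ['316']
```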
Positive material identification
[ "Chemistry" ]
163
[ "Elemental analysis", "Chemical tests" ]
35,819,612
https://en.wikipedia.org/wiki/Osculant
In mathematical invariant theory, the osculant or tacinvariant or tact invariant is an invariant of a hypersurface that vanishes if the hypersurface touches itself, or an invariant of several hypersurfaces that osculate, meaning that they have a common point where they meet to unusually high order. References Invariant theory
Osculant
[ "Physics" ]
72
[ "Invariant theory", "Group actions", "Symmetry" ]
35,822,400
https://en.wikipedia.org/wiki/7-simplex%20honeycomb
In seven-dimensional Euclidean geometry, the 7-simplex honeycomb is a space-filling tessellation (or honeycomb). The tessellation fills space by 7-simplex, rectified 7-simplex, birectified 7-simplex, and trirectified 7-simplex facets. These facet types occur in proportions of 2:2:2:1 respectively in the whole honeycomb. A7 lattice This vertex arrangement is called the A7 lattice or 7-simplex lattice. The 56 vertices of the expanded 7-simplex vertex figure represent the 56 roots of the Coxeter group. It is the 7-dimensional case of a simplectic honeycomb. Around each vertex figure are 254 facets: 8+8 7-simplex, 28+28 rectified 7-simplex, 56+56 birectified 7-simplex, 70 trirectified 7-simplex, with the count distribution from the 9th row of Pascal's triangle. contains as a subgroup of index 144. Both and can be seen as affine extensions from from different nodes: The A lattice can be constructed as the union of two A7 lattices, and is identical to the E7 lattice. ∪ = . The A lattice is the union of four A7 lattices, which is identical to the E7* lattice (or E). ∪ ∪ ∪ = + = dual of . The A lattice (also called A) is the union of eight A7 lattices, and has the vertex arrangement to the dual honeycomb of the omnitruncated 7-simplex honeycomb, and therefore the Voronoi cell of this lattice is an omnitruncated 7-simplex. ∪ ∪ ∪ ∪ ∪ ∪ ∪ = dual of . Related polytopes and honeycombs Projection by folding The 7-simplex honeycomb can be projected into the 4-dimensional tesseractic honeycomb by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same vertex arrangement: See also Regular and uniform honeycombs in 7-space: 7-cubic honeycomb 7-demicubic honeycomb Truncated 7-simplex honeycomb Omnitruncated 7-simplex honeycomb E7 honeycomb Notes References Norman Johnson Uniform Polytopes, Manuscript (1991) Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley–Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3–45] Honeycombs (geometry) 8-polytopes
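The facet counts quoted around each vertex (8 + 8, 28 + 28, 56 + 56 and 70, totalling 254) are the binomial coefficients of the 9th row of Pascal's triangle; a short check of that arithmetic:

```python
from math import comb

# Row 8 of Pascal's triangle, C(8, k) for k = 1..7: 8, 28, 56, 70, 56, 28, 8.
# These pair up as the 8+8, 28+28, 56+56 and 70 facets listed above.
counts = [comb(8, k) for k in range(1, 8)]
print(counts, "->", sum(counts), "facets around each vertex")   # 254
```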
7-simplex honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
634
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
35,823,031
https://en.wikipedia.org/wiki/Function-spacer-lipid%20Kode%20construct
Function-Spacer-Lipid (FSL) Kode constructs (Kode Technology) are amphiphatic, water dispersible biosurface engineering constructs that can be used to engineer the surface of cells, viruses and organisms, or to modify solutions and non-biological surfaces with bioactives. FSL Kode constructs spontaneously and stably incorporate into cell membranes. FSL Kode constructs with all these aforementioned features are also known as Kode Constructs. The process of modifying surfaces with FSL Kode constructs is known as "koding" and the resultant "koded" cells, viruses and liposomes are respectively known as kodecytes, and kodevirions. Technology description All living surfaces are decorated with a diverse range of complex molecules, which are key modulators of chemical communications and other functions such as protection, adhesion, infectivity, apoptosis, etc. Functional-Spacer-Lipid (FSL) Kode constructs can be synthesized to mimic the bioactive components present on biological surfaces, and then re-present them in novel ways. The architecture of an FSL Kode construct, as implicit in the name, consists of three components - a functional head group, a spacer, and a lipid tail. This structure is analogous to a Lego minifigure in that, they have three structural components, with each component having a separate purpose. In the examples shown in all the figures, a Lego 'minifig' has been used for the analogy. However, it should be appreciated that this is merely a representation and the true structural similarity is significantly varied between Lego minifigures and FSL Kode constructs (fig 1). The functional group of an FSL is equivalent to a Lego minifigure head, with both being at the extremity and carrying the character functional components. The spacer of the FSL is equivalent to the body of the Lego minifigure and the arms on the minifigure are representative of substitutions which may be engineered into the chemical makeup of the spacer. The lipid of the FSL anchors it to lipid membranes and gives the FSL construct its amphiphatic nature which can cause it to self-assemble. Because the lipid tail can act directly as an anchor it is analogous to the legs of a Lego minifigure. Flexible design The functional group, the spacer and the lipid tail components of the FSL Kode construct can each be individually designed resulting in FSL Kode constructs with specific biological functions. The functional head group is usually the bioactive component of the construct and the various spacers and lipids influence and effect its presentation, orientation and location on a surface. Critical to the definition of an FSL Kode construct is the requirement to be dispersible in water, and spontaneously and stably incorporate into cell membranes. Other lipid bioconjugates that include components similar to FSLs but do not have these features are not termed as Function-Spacer-Lipid Kode constructs. Functional groups Source: A large range of functional groups have already been made into FSL Kode constructs. These include: Carbohydrates – ranging from monosaccharides to polysaccharides and including blood group antigens, hyaluronic acid oligomers and sialic acid residues Peptide/protein – ranging from single amino acids to proteins as large as antibodies Labels – including fluorophores, radioisotopes, biotin, etc. Other – chemical moieties such as maleimide, click residues, PEG, charged compounds Note 1: Multimeric – the presentation of the F residue can be as multimers with controlled spacing and be variable. 
Note 2: Mass – the mass that can be anchored by an FSL Kode constructs can range from 200 to >1x106 Da Spacers Source: The spacer is an integral part of the FSL Kode construct and gives it several important characteristics including water dispersibility. Length – the spacer can be varied in length, for example 1.9 nm (Ad), 7.2 nm (CMG2), 11.5 nm (CMG4), allowing for enhanced presentation of Functional groups at the biosurface. Optimizes 'F' presentation – The presentation of the bioactive (functional group) on a spacer reduces steric hindrance and increases the bioactive surfaces exposed and available for interactions Rigidity – the spacer can be modified to be either flexible or rigid depending upon desired characteristics Substitutions (represented by the leaves on the stalk) – the spacer can be modified both in charge, and polarity. Branches – usually the spacer is linear, but it can also be branched including specific spacing of the branches to optimize presentation and interaction of the F group. Inert – important to the design of FSL Kode constructs is the biologically inert nature of the spacer. Importantly this feature means the S-L components of the constructs are unreactive with undiluted serum. Consequently, the constructs are compatible in vivo use, and can improve diagnostic assay sensitivity by allowing for the use of undiluted serum. Lipids Source: The lipid tail is essential for enabling lipid membrane insertion and retention but also for giving the construct amphiphilic characteristics that enable hydrophilic surface coating (due to formation of bilipid layers). Different membrane lipids that can be used to create FSLs have different membrane physiochemical characteristics and thus can affect biological function of the FSL. Lipids in FSL Kode constructs include: Diacyl/diakyl e.g. DOPE Sterols e.g. cholesterol Ceramides Optimising functional group (F) presentation One of the important functions of an FSL construct is that it can optimise the presentation of antigens, both on cell surfaces and solid-phase membranes. This optimisation is achieved primarily by the spacer, and secondarily by the lipid tail. In a typical immunoassay, the antigen is deposited directly onto the microplate surface and binds to the surface either in a random fashion, or in a preferred orientation depending on the residues present on the surface of this antigen. Usually this deposition process is uncontrolled. In contrast, the FSL Kode construct bound to a microplate presents the antigen away from the surface in an orientation with a high level of exposure to the environment. Furthermore, typical immunoassays use recombinant peptides rather than discrete peptide antigens. As the recombinant peptide is many times bigger than the epitope of interest, a lot of undesired and unwanted peptide sequences are also represented on the microplate. These additional sequences may include unwanted microbial related sequences (as determined by a BLAST analysis) that can cause issues of low level cross-reactivity. Often the mechanism by which an immunoassay is able to overcome this low level activity is to dilute the serum so that the low level microbial reactive antibodies are not seen, and only high-level specific antibodies result in an interpretable result. In contrast, FSL Kode constructs usually use specifically selected peptide fragments (up to 40 amino acids), thereby overcoming cross-reactivity with microbial sequences, and allowing for the use of undiluted serum (which increases sensitivity). 
The F component can be further enhanced by presentation of it in multimeric formats and with specific spacing. The four types of multimeric format include linear repeating units, linear repeating units with spacing, clusters, and branching (Fig. 4). Mechanisms of interaction Amphiphilic FSL Kode construct The FSL Kode construct by nature of its composition in possessing both hydrophobic and hydrophilic regions are amphiphilic (or amphipathic). This characteristic determines the way in which the construct will interact with surfaces. When present in a solution they may form simple micelles or adopt more complex bilayer structures with two simplistic examples shown in Fig. 5a. More complex structures are expected. The actual nature of FSL micelles has not been determined. However, based on normal structural function of micelles, it is expected that it will be determined in part by the combination of functional group, spacer and lipid together with temperature, concentration, size and hydrophobicity/hydrophilicity for each FSL Kode construct type. Surface coatings will occur via two theoretical mechanisms, the first being direct hydrophobic interaction of the lipid tail with a hydrophobic surface resulting in a monolayer of FSL at the surface (Fig. 5b). Hydrophobic binding of the FSL will be via its hydrophobic lipid tail interacting directly with the hydrophobic (lipophilic) surface. The second surface coating will be through the formation of bilayers as the lipid tail is unable to react with the hydrophilic surface. In this case the lipids will induce the formation of a bilayer, the surface of which will be hydrophilic. This hydrophilic membrane will then interact directly with the hydrophilic surface and will probably encapsulate fibres. This hydrophilic bilayer binding is the expected mechanism by which FSLs are able to bind to fibrous membranes such as paper and glass fibres (Fig. 5c) and (Fig. 9). Lipid membrane modification After labeling of the surface with the selected F bioactives, the constructs will be present and oriented at the membrane surface. It is expected that the FSL will be highly mobile within the membrane and the choice of lipid tail will effect is relative partitioning within the membrane. The construct unless it has flip-flop behavior is expected to remain surface presented. However, the modification is not permanent in living cells and constructs will be lost (consumed) at a rate proportional to the activity at the membrane and division rate of the cell (with dead cells remaining highly labeled). Additionally, when present in vivo with serum lipids FSLs will elute from the membrane into the plasma at a rate of about 1% per hour. In fixed cells or inactive cells (e.g. red cells) stored in serum free media the constructs are retained normally. Liposomes are easy koded by simply adding FSL Kode constructs into the preparation. Contacting koded liposomes with microplates or other surfaces can cause the labeling of the microplate surface. Non-biologic surface interaction Non-biologic surface coatings will occur via two mechanisms, the first being direct hydrophobic interaction of the lipid tail with a hydrophobic surface resulting in a monolayer of FSL at the surface. The second surface coating will be through the formation of bilayers, which probably either encapsulate fibres or being via the hydrophilic F group. This is the expected mechanism by which FSLs bind to fibrous membranes such as paper and glass fibres. 
A recent study has found that when FSL Kode constructs are optimised, could in a few seconds glycosylate almost any non-biological surface including metals, glass, plastics, rubbers, and other polymers. Technology features The technological features of FSL Kode constructs and the koding process can be summarized as follows: Rapid and simple – simple contact for 10–120 minutes and constructs spontaneously and stably incorporate – no washing required. Replicable – same variables (time, temperature, concentration) equals the same result. Toxicity – FSL constructs are biocompatible, disperse into biological solutions without solvents, detergents. They label non-covalently and are non-genetic. Normal vitality and functionality is maintained in modified cells/virions/organisms. Toxicity/vitality experiments in small laboratory animals, zebrafish, cell cultures, spermatozoa and embryos find no toxic effects within physiological ranges. Amphiphilic – the amphiphilic nature of the FSL Kode construct makes them water dispersible (clear solution of micelles), yet once interacted with a membrane they insert/coat and become water resistant Variable design – a single F can be presented in more than 100 ways by varying the spacer and lipid. High biovisibility – as the spacer holds the F moiety away for the membrane it is able to achieve increased sensitivity, specificity and reactivity can be optimized by use of multiple and variable biomarker presentations on the same surface. Additive – FSL modification is compatible with other technologies allowing users to add additional features to cells/viruses/organisms/surfaces already modified by more traditional methods. Multiple FSL constructs may be added to a surface simultaneously by simply creating a mix of FSL Kode constructs. Constructs insert into living or fixed cell (glutaraldehyde) membranes. Simple FSL peptide synthesis – there is a reactive-functional-group FSL Kode construct with maleimide as its functional group which can be used for preparation of FSLs from cysteine-containing peptides, proteins or any other thiols of biological interest. The effective synthetic approach is based on the well-known Michael nucleophilic addition to maleimides (Fig. 7). Synthetic "Gylcolipids" – one family of the FSL constructs are synthetic glycolipids with well-defined hydrophobic tails and carbohydrate head groups Koded membranes surfaces and solutions FSL constructs have a wide range of uses and they have been used to modify the following: Cells – blood cells, culture lines, embryos, spermatozoa Viruses – influenza, measles, varicella Organisms – parasites, microbes, zebrafish Liposomes – also micelles, lipid particles Surfaces/fibres – hydrophobic or hydrophilic membranes/fibres, paper, nitrocellulose, cotton, silk, glass, Teflon, silica, magnetic beads (microspheres) etc. Solutions – saline, plasma/serum, culture media Methodology for FSL use (koding) FSL constructs, when in solution (saline) and in contact, will spontaneously incorporate into cell and virus membranes. The methodology involves simply preparing a solution of FSL constructs in the range of 1–1000 μg/mL. The actual concentration will depend on the construct and the quantity of construct required in the membrane. One part of FSL solution is added to one part of cells (up to 100% suspension) and they are incubated at a set temperature within the range of depending on temperature compatibility of the cells being modified. 
The higher the temperature, the faster the rate of FSL insertion into the membrane. For red blood cells, at 37 °C incubation for 2 hours achieves >95% insertion with at least 50% insertion being achieved within 20 minutes. In general, FSL insertion time of 4 hours at room temperature or 20 hours at 4 °C gives results similar to 1 hour at 37 °C for carbohydrate based FSLs inserting into red blood cells. The resultant kodecytes or kodevirions do not required to be washed, however this option should be considered if an excess of FSL construct is used in the koding process. Applications FSL Kode constructs have been used for research and development, diagnostic products, and are currently being investigated as potential therapeutic agents. Kodecytes FSL have been used to create human red cell kodecytes that have been used to detect and identify blood group allo-antibodies as ABO sub-group mimics, ABO quality control systems, serologic teaching kits and a syphilis diagnostic. Kodecytes have also demonstrated that FSL-FLRO4 is a suitable reagent for labelling packed red blood cells (PRBC) at any point during routine storage and look to facilitate the development of immunoassays and transfusion models focused on addressing the mechanisms involved in tansfusion-related immunomodulation (TRIM). Murine kodecytes have been experimentally used to determine in vivo cell survival, and create model transfusion reactions. Zebrafish kodecytes have been used to determine real time in vivo cell migration. Kodecytes have been used to create influenza diagnostics. Kodecytes which have been modified with FSL-GB3 were unable to be infected with the HIV virus. Kodevirions Kodevirions are FSL modified viruses. Several FSL Kode constructs have been used to label viruses to assist in their flow-cytometric visualisation and to track them real time distribution in animal models. They have also been used to modify the surface of viruses with the intention of targeting them to be used to attach tumors (oncolytic). Kodesomes Kodesomes are liposomes that have been decorated with FSL Kode constructs. These have been used to deposit FSL constructs onto microplates to create diagnostic assays. They also have the potential for therapeutic use. Koded solutions These are solutions containing FSL Kode constructs where the construct will exist as a clear micellular dispersion. FSL-GB3 as a solution/gel has been used to inhibit HIV infection and to neutralise Shiga toxin. FSL blood group A as a solution has been used to neutralise circulating antibodies in a mouse model and allow incompatible blood group A (murine kodecytes) transfusion. This model experiment was used to demonstrate the potential of FSLs to neutralise circulating antibody and allow for incompatible blood transfusion or organ transplantation. Koded surfaces All FSL Kode constructs disperse in water and are therefore compatible with inkjet printers. FSL constructs can be printed with a standard desktop inkjet printer directly onto paper to create immunoassays. An empty ink cartridge is filled with an FSL construct and words, barcodes, or graphics are printed. A Perspex template is adhered to the surface to create reaction wells. The method is then a standard EIA procedure, but blocking of serum is not required and undiluted serum can be used. A typical procedure is as follows: add serum, incubate, wash by immersion, add secondary EIA conjugate, incubate, wash, add NBT/BCIP precipitating substrate and stop the reaction when developed by washing (Fig. 9). 
The result is stable for years. See also Kodevirion Kodecyte External links FSL Constructs: A Simple Method for Modifying Cell/Virion Surfaces with a Range of Biological Markers Without Affecting their Viability – Journal of Visualised Experiments (JOVE) free video article kodecyte.org - the academic resource for Kode Technology References Biochemistry Biotechnology Laboratory techniques Molecular biology techniques Protein methods
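Referring back to the incubation guidance quoted in the methodology section (1 hour at 37 °C roughly equivalent to 4 hours at room temperature or 20 hours at 4 °C for carbohydrate-based FSLs inserting into red blood cells), a small helper encoding that rule of thumb might look as follows; this is a convenience sketch of the stated equivalences, not a validated laboratory protocol.

```python
# Convenience sketch of the rule of thumb quoted in the methodology above for
# carbohydrate-based FSLs inserting into red blood cells: 1 h at 37 °C is
# roughly equivalent to 4 h at room temperature (taken here as 22 °C, an
# assumption) or 20 h at 4 °C. Not a validated laboratory protocol.
EQUIVALENT_FACTOR = {37: 1, 22: 4, 4: 20}   # temperature (°C) -> time multiplier

def suggested_incubation_hours(temp_c: int, hours_at_37: float = 1.0) -> float:
    """Scale a 37 °C reference incubation time to another listed temperature."""
    return hours_at_37 * EQUIVALENT_FACTOR[temp_c]

print(suggested_incubation_hours(4, hours_at_37=2.0))   # 40.0 h for a 2 h @ 37 °C koding
```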
Function-spacer-lipid Kode construct
[ "Chemistry", "Biology" ]
3,926
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Biotechnology", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry" ]
35,827,110
https://en.wikipedia.org/wiki/Luminous%20flame
A luminous flame is a burning flame which is brightly visible. Much of its output is in the form of visible light, as well as heat or light in the non-visible wavelengths. An early study of flame luminosity was conducted by Michael Faraday and became part of his series of Royal Institution Christmas Lectures, The Chemical History of a Candle. Luminosity In the simplest case, the yellow flame is luminous due to small soot particles in the flame which are heated to incandescence. Producing a deliberately luminous flame requires either a shortage of combustion air (as in a Bunsen burner) or a local excess of fuel (as for a kerosene torch). Because of this dependency upon relatively inefficient combustion, luminosity is associated with diffusion flames and is lessened with premixed flames. The flame is yellow because of its temperature. To produce enough soot to be luminous, the flame is operated at a lower temperature than its efficient heating flame (see Bunsen burner). The colour of simple incandescence is due to black-body radiation. By Planck's law, as the temperature decreases, the peak of the black-body radiation curve moves to longer wavelengths, i.e. from the blue to the yellow. However, the blue light from a gas burner's premixed flame is primarily a product of molecular emission (Swan bands) rather than black-body radiation. Other factors, particularly the fuel chemistry and its propensity for forming soot, have an influence on luminosity. Bunsen burner One of the most familiar instances of a luminous flame is produced by a Bunsen burner. This burner has a controllable air supply and a constant gas jet: when the air supply is reduced, a highly luminous, and thus visible, orange 'safety flame' is produced. For heating work, the air inlet is opened and the burner produces a much hotter blue flame. Combustion efficiency Efficient combustion relies on the complete combustion of the fuel. Production of soot and/or carbon monoxide represents a waste of fuel (further burning was possible) the potential problem of soot build-up in burners. Heating burners are thus usually designed to produce a non-luminous flame. Oil lamps Lamps for illumination rather than heat may use a deliberately luminous flame. A more efficient method overall uses a mantle instead. Like the incandescent soot in a luminous flame, the mantle is heated and then glows. The flame does not provide much light itself, and so a more heat-efficient non-luminous flame is preferred. Unlike simple soot, a mantle uses rare-earth elements to provide a bright white glow; the colour of the glow comes from the spectral lines of these elements, not from simple black-body radiation. Flame testing When performing a flame test, the colour of a flame is affected by external materials added to it. A non-luminous flame is used, to avoid masking the test colour by the flame's colour. References Combustion engineering Fire Light
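The shift of the black-body peak with temperature mentioned above can be illustrated with Wien's displacement law, λ_peak = b/T. The temperatures below are assumed, typical-order values rather than figures from the source; note that at soot temperatures the peak itself lies in the infrared, and the visible yellow comes from the short-wavelength tail of the curve.

```python
# Wien's displacement law, lambda_peak = b / T, illustrating how the black-body
# peak shifts to longer wavelengths as temperature falls. Temperatures are
# assumed, typical-order values, not figures from the source.
WIEN_B = 2.898e-3  # m*K

for name, temperature in [("sooty candle flame", 1800),
                          ("hotter premixed flame", 2200),
                          ("sun", 5800)]:
    peak_nm = WIEN_B / temperature * 1e9
    print(f"{name:>22}: T = {temperature} K, black-body peak at {peak_nm:.0f} nm")
```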
Luminous flame
[ "Physics", "Chemistry", "Engineering" ]
624
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Industrial engineering", "Combustion engineering", "Waves", "Combustion", "Light", "Fire" ]
35,827,608
https://en.wikipedia.org/wiki/Nanotechnology%20Industries%20Association
The Nanotechnology Industries Association (NIA) is the sector-independent expert, membership and advocacy organisation providing a responsible voice for the industrial nanotechnologies supply chains. The NIA works with regulators and stakeholders on the national, European and international levels so as to secure a supportive environment for the continuing advancement and establishment of nanotechnologies. Members of the NIA are represented on globally influential fora, such the OECD Working Party on Manufactured Nanomaterials, and the OECD Working Party on Nanotechnology, International Organization for Standardization (ISO), European Committee for Standardization (CEN) as well as national and international advisory groups. The NIA was originally founded in London in 2005 as a limited company with initial funding from the UK Government's MNT (micro and nanotechnology) funding scheme with the aim of creating an industrial association that could support companies working in the area of nanotechnology. The NIA's board was drawn from some of the founding companies, including Oxonica and QinetiQ Nanomaterials Limited (QNL) and its founding Director General was Dr Steffi Friedrichs. In 2009 the NIA moved to Brussels, the centre of chemical regulations in Europe and established its new Belgium registered aisbl. The NIA stands for The Nanotechnology Industries Association, NIA, promotes a responsible and sustainable innovation in nanotechnology following an approach based on scientific and technical evidence. The NIA stands for: a framework of shared principles for the safe, sustainable and socially supportive development and use of nanotechnologies, a publicly and regulatory supportive environment for the continuing advancement and establishment of nanotechnology innovation. How the NIA operates The NIA's Corporate Membership is open to all companies or organisations involved in, or seeking to be involved in the research, development, manufacturing, marketing or sale of nanotechnology products and processes as well as those who provide the equipment or the technical and engineering services required for their production (including all forms of information technology). The activities of the NIA are driven and conducted by its collective membership, supported by broad-based Associate Membership and Affiliate Membership with expertises in matters such as scientific research, IP protection, regulation, legislation, insurance, and finance. External links Nanotechnology Industries Association (NIA) aisbl Nanotechnology institutions
Nanotechnology Industries Association
[ "Materials_science" ]
479
[ "Nanotechnology", "Nanotechnology institutions" ]
35,827,897
https://en.wikipedia.org/wiki/Spudcan
Spudcans are the base cones of a mobile-drilling jack-up platform. These inverted cones are mounted at the base of the jack-up legs and, when deployed into the ocean bed, provide stability against lateral forces on the jack-up rig. Important calculations for spudcan design include the response to vertical, horizontal and torsional forces on the jack-up leg. References Vlahos, Cassidy and Byrne, The behaviour of spudcan footings on clay subjected to combined cyclic loading. Report OUEL 2286/05 Oil platforms
Spudcan
[ "Chemistry", "Engineering" ]
111
[ "Oil platforms", "Structural engineering", "Petroleum stubs", "Petroleum technology", "Petroleum", "Natural gas technology" ]
23,003,272
https://en.wikipedia.org/wiki/Grauert%E2%80%93Riemenschneider%20vanishing%20theorem
In mathematics, the Grauert–Riemenschneider vanishing theorem is an extension of the Kodaira vanishing theorem on the vanishing of higher cohomology groups of coherent sheaves on a compact complex manifold, due to Hans Grauert and Oswald Riemenschneider. Grauert–Riemenschneider conjecture The Grauert–Riemenschneider conjecture is a conjecture related to the Grauert–Riemenschneider vanishing theorem. This conjecture was proved by Siu using a Riemann–Roch type theorem (the Hirzebruch–Riemann–Roch theorem) and by Demailly using Morse theory. Note References Theorems in algebraic geometry
Grauert–Riemenschneider vanishing theorem
[ "Mathematics" ]
133
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
23,003,504
https://en.wikipedia.org/wiki/Transport%20length
The transport length in a strongly diffusing medium (noted l*) is the length over which the direction of propagation of the photon is randomized. It is related to the mean free path l by the relation l* = l/(1 − g), with g the asymmetry coefficient, defined as the average of the cosine of the scattering angle θ over a high number of scattering events, g = ⟨cos θ⟩. g can be evaluated with Mie theory. If g = 0, then l = l*: a single scattering event is already isotropic. If g → 1, then l* → infinity: a single scattering event barely deviates the photons, so the direction is never randomized by one scattering alone. This length is useful for renormalizing a non-isotropic scattering problem into an isotropic one in order to use classical diffusion laws (Fick's law and Brownian motion). The transport length can be measured by transmission experiments and backscattering experiments. References External links Illustrated description (movies) of multiple light scattering and application to colloid stability Scattering, absorption and radiative transfer (optics) Colloids
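A minimal numerical sketch of the relation l* = l/(1 − g); the example values are assumed, not taken from the source.

```python
def transport_length(mean_free_path: float, g: float) -> float:
    """l* = l / (1 - g), the relation quoted above."""
    return mean_free_path / (1.0 - g)

# Assumed, illustrative values: strongly forward-peaked scattering (g = 0.9)
# makes the transport length ten times the mean free path.
print(transport_length(0.1e-3, 0.9))   # 0.001 m: direction randomized over ~1 mm
```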
Transport length
[ "Physics", "Chemistry", "Materials_science" ]
219
[ " absorption and radiative transfer (optics)", "Scattering stubs", "Colloids", "Scattering", "Chemical mixtures", "Condensed matter physics" ]
24,500,545
https://en.wikipedia.org/wiki/Smart%20polymer
Smart polymers, stimuli-responsive polymers or functional polymers are high-performance polymers that change according to the environment they are in. Such materials can be sensitive to a number of factors, such as temperature, humidity, pH, chemical compounds, the wavelength or intensity of light or an electrical or magnetic field and can respond in various ways, such as altering color or transparency, becoming conductive or permeable to water or changing shape (shape memory polymers). Usually, slight changes in the environment are sufficient to induce large changes in the polymer's properties. Applications Smart polymers appear in highly specialized applications and everyday products alike. They are used for sensors and actuators such as artificial muscles, the production of hydrogels, biodegradable packaging, and to a great extent in biomedical engineering . One example is a polymer that undergoes conformational change in response to pH change, which can be used in drug delivery. Another is a humidity-sensitive polymer used in self-adaptive wound dressings that automatically regulate moisture balance in and around the wound. The nonlinear response of smart polymers is what makes them so unique and effective. A significant change in structure and properties can be induced by a very small stimulus. Once that change occurs, there is no further change, meaning a predictable all-or-nothing response occurs, with complete uniformity throughout the polymer. Smart polymers may change conformation, adhesiveness or water retention properties, due to slight changes in pH, ionic strength, temperature, ultrasound, or other triggers. For example, Kubota et al designed and loaded ultrasound-responsive hydrogel microbeads with silica nanoparticles that were released under ultrasonic stimulation. Another factor in the effectiveness of smart polymers lies in the inherent nature of polymers in general. The strength of each molecule's response to changes in stimuli is the composite of changes of individual monomer units which, alone, would be weak. However, these weak responses, compounded hundreds or thousands of times, create a considerable force for driving biological processes. The pharmacy industry has been directly related to the polymer’s advances. In this field, polymers are playing a significant role, and their advances are helping entire populations around the world. The human body is a machine with a complex system and works as a response to chemical signals. Polymers play the role of drug delivery technology that can control the release of therapeutic agents in periodic doses. Polymers are capable of molecular recognition and directing intracellular delivery. Smart polymers get into the field to play and take advantage of molecular recognition and finally produced awareness systems and polymer carriers to facilitate drug delivery in the body system. Stimuli Several polymer systems respond to temperature, undergoing a lower critical solution temperature phase transition. One of the better-studied such polymers is poly(N-isopropylacryamide), with a transition temperature of approximately 33 °C. Several homologous N-alkyl acrylamides also show LCST behavior, with the transition temperature depending on the length of the hydrophobic side chain. Above their transition temperature, these polymers become insoluble in water. This behavior is believed to be entropy driven. 
Classification and chemistry Currently, the most prevalent use for smart polymers in biomedicine is for specifically targeted drug delivery. Since the advent of timed-release pharmaceuticals, scientists have been faced with the problem of finding ways to deliver drugs to a particular site in the body without having them first degrade in the highly acidic stomach environment. Prevention of adverse effects on healthy bone and tissue is also an important consideration. Researchers have devised ways to use smart polymers to control the release of drugs until the delivery system has reached the desired target. This release is controlled by either a chemical or physiological trigger. Linear and matrix smart polymers exist with a variety of properties depending on reactive functional groups and side chains. These groups might be responsive to pH, temperature, ionic strength, electric or magnetic fields, and light. Some polymers are reversibly cross-linked by noncovalent bonds that can break and reform depending on external conditions. Nanotechnology has been fundamental in the development of certain nanoparticle polymers such as dendrimers and fullerenes, that have been applied for drug delivery. Traditional drug encapsulation has been done using lactic acid polymers. More recent developments have seen the formation of lattice-like matrices that hold the drug of interest integrated or entrapped between the polymer strands. Smart polymer matrices release drugs by a chemical or physiological structure-altering reaction, often a hydrolysis reaction resulting in cleavage of bonds and release of drug as the matrix breaks down into biodegradable components. The use of natural polymers has given way to artificially synthesized polymers such as polyanhydrides, polyesters, polyacrylic acids, poly(methyl methacrylates), poly(phthalaldehyde), and polyurethanes. Hydrophilic, amorphous, low-molecular-weight polymers containing heteroatoms (i.e., atoms other than carbon) have been found to degrade fastest. Scientists control the rate of drug delivery by varying these properties thus adjusting the rate of degradation. A graft-and-block copolymer is two different polymers grafted together. A number of patents already exist for various combinations of polymers with different reactive groups. The product exhibits properties of both individual components which adds a new dimension to an intelligent polymer structure and may be useful for certain applications. Cross-linking hydrophobic and hydrophilic polymers result in the formation of micelle-like structures that can protectively assist drug delivery through aqueous medium until conditions at the target location cause the simultaneous breakdown of both polymers. A graft-and-block approach might be useful for solving problems encountered by the use of a common bioadhesive polymer, polyacrylic acid (PAA). PAA adheres to mucosal surfaces but will swell and degrade rapidly at pH 7.4, resulting in the rapid release of drugs entrapped in its matrix. A combination of PAAc with another polymer that is less sensitive to changes at neutral pH might increase the residence time and slow the release of the drug, thus improving bioavailability and effectiveness. Hydrogels are polymer networks that do not dissolve in water but swell or collapse in changing aqueous environments. They are useful in biotechnology for phase separation because they are reusable or recyclable. New ways to control the flow, or catch and release of target compounds, in hydrogels, are being investigated. 
Highly specialized hydrogels have been developed to deliver and release drugs into specific tissues. Hydrogels made from PAAc are especially common because of their bioadhesive properties and tremendous absorbency. Enzyme immobilization in hydrogels is a fairly well-established process. Reversibly cross-linked polymer networks and hydrogels can be similarly applied to a biological system where the response and release of a drug are triggered by the target molecule itself. Alternatively, the response might be turned on or off by the product of an enzyme reaction. This is often done by incorporating an enzyme, receptor or antibody, that binds to the molecule of interest, into the hydrogel. Once bound, a chemical reaction takes place that triggers a reaction from the hydrogel. The trigger can be oxygen, sensed using oxidoreductase enzymes or a pH-sensing response. An example of the latter is the combined entrapment of glucose oxidase and insulin in a pH-responsive hydrogel. In the presence of glucose, the formation of gluconic acid by the enzyme triggers the release of insulin from the hydrogel. Two criteria for this technology to work effectively are enzyme stability and rapid kinetics (quick response to the trigger and recovery after removal of the trigger). Several strategies have been tested in type 1 diabetes research, involving the use of similar types of smart polymers that can detect changes in blood glucose levels and trigger the production or release of insulin. Likewise, there are many possible applications of similar hydrogels as drug delivery agents for other conditions and diseases. Other Applications Smart polymers are not just for drug delivery. Their properties make them especially suited for bioseparations. The time and costs involved in purifying proteins might be reduced significantly by using smart polymers that undergo rapid reversible changes in response to a change in medium properties. Conjugated systems have been used for many years in physical and affinity separations and immunoassays. Microscopic changes in the polymer structure are manifested as precipitate formation, which may be used to aid the separation of trapped proteins from solution. These systems work when a protein or other molecule that is to be separated from a mix, forms a bioconjugate with the polymer and precipitates with the polymer when its environment undergoes a change. The precipitate is removed from the media, thus separating the desired component of the conjugate from the rest of the mixture. Removal of this component from the conjugate depends on the recovery of the polymer and a return to its original state, thus hydrogels are very useful for such processes. Another approach to controlling biological reactions using smart polymers is to prepare recombinant proteins with built-in polymer binding sites close to ligand or cell binding sites. This technique has been used to control ligand and cell binding activity, based on a variety of triggers including temperature and light. Smart polymers play an essential part in the technology of self-adaptive wound dressings. The dressing design presents proprietary super-absorbent synthetic smart polymers immobilized in the 3-dimensional fiber matrix with added hydration functionality achieved by embedding hydrogel into the core of the material. 
The dressing's mode of action relies on the ability of the polymers to sense and adapt to the changing humidity and fluid content in all areas of the wound simultaneously and to automatically and reversibly switch from absorption to hydration. The smart polymer action ensures the active synchronized response of the dressing material to changes in and around the wound to support the optimal moist healing environment at all times. Future applications It has been suggested that polymers might be developed that can learn and self-correct behavior over time. Although this might be a far-distant possibility, there are other more feasible applications that appear to be coming in the near future. One of these is the idea of smart toilets that analyze urine and help identify health problems. In environmental biotechnology, smart irrigation systems have been also proposed. It would be incredibly useful to have a system that turns on and off, and controls fertilizer concentrations, based on soil moisture, pH, and nutrient levels. Many creative approaches to targeted drug delivery systems that self-regulate based on their unique cellular surroundings, are also under investigation. There are obvious possible problems associated with the use of smart polymers in biomedicine. The most worrisome is the possibility of toxicity or incompatibility of artificial substances in the body, including degradation products and byproducts. However, smart polymers have enormous potential in biotechnology and biomedical applications if these obstacles can be overcome. See also Programmable matter Smart material Covalent adaptable networks / Vitrimers References Polymer material properties Smart materials
Smart polymer
[ "Chemistry", "Materials_science", "Engineering" ]
2,280
[ "Polymer material properties", "Smart materials", "Materials science", "Polymer chemistry" ]
24,500,855
https://en.wikipedia.org/wiki/SENSE%20lab
The full name of the SENSE Lab is the Sensory Encoding and Neuro-Sensory Engineering Lab, in Halifax, Nova Scotia, Canada. It integrates engineering and physiologic sciences in hearing (communication) and balance. The lab has a clinical focus on disorders of the ear, audition and balance, and a physiologic and engineering focus on all aspects of communication, balance regulation, and auditory and vestibular perception. It is particularly interested in multisensory integration, environmental-sensory interactions, and cognitive-sensory interactions. Its research encompasses novel transducers, novel measurement devices and novel complex probes of sensory and cognitive function. It also develops commercial and general consumer applications in these areas. The research facility is located in the Division of Otolaryngology, Department of Surgery, Capital District Health Authority, Halifax, and is affiliated with the Department of Biomed Engineering, School of Human Communication Disorders, the Department of Psychology, School of Physiotherapy, and the Department of Anatomy & Neurobiology, Dalhousie University. References External links Cognitive science research institutes Biomedical engineering Dalhousie University
SENSE lab
[ "Engineering", "Biology" ]
232
[ "Biological engineering", "Bioengineering stubs", "Biomedical engineering", "Biotechnology stubs", "Medical technology stubs", "Medical technology" ]
24,503,316
https://en.wikipedia.org/wiki/Eurocode%207%3A%20Geotechnical%20design
In the Eurocode series of European standards (EN) related to construction, Eurocode 7: Geotechnical design (abbreviated EN 1997 or, informally, EC 7) describes how to design geotechnical structures, using the limit state design philosophy. It is published in two parts; "General rules" and "Ground investigation and testing". It was approved by the European Committee for Standardization (CEN) on 12 June 2006. Like other Eurocodes, it became mandatory in member states in March 2010. Eurocode 7 is intended to: be used in conjunction with EN 1990, which establishes the principles and requirements for safety and serviceability, describes the basis of design and verification and gives guidelines for related aspects of structural reliability, be applied to the geotechnical aspects of the design of buildings and civil engineering works and it is concerned with the requirements for strength, stability, serviceability and durability of structures. Eurocode 7 is composed of the following parts Part 1: General rules EN 1997-1 is intended to be used as a general basis for the geotechnical aspects of the design of buildings and civil engineering works. Contents General Basis of design Geotechnical data Supervision of construction, monitoring and maintenance Fill, dewatering, ground improvement and reinforcement Spread foundations Deep foundation (pile foundations) Anchorages Retaining structures Hydraulic failure Overall stability Embankments EN 1997-1 is accompanied by Annexes A to J, which provide: Annex A Recommended partial safety factor values; different values of the partial factors may be set by the National annex. Annexes B to J Supplementary informative guidance such as internationally applied calculation methods. Part 2: Ground investigation and testing EN 1997-2 is intended to be used in conjunction with EN 1997-1 and provides rules supplementary to EN 1997-1 related to planning and reporting of ground investigations, general requirements for a range of commonly used laboratory and field tests, interpretation and evaluation of test results and derivation of values of geotechnical parameters and coefficients. Part 3: Design assisted by field testing There is no longer a Part 3. It was amalgamated into EN 1997-2 References External links Eurocodes: Building the Future The European Commission Website on the EN Eurocodes Eurocodes Expert UK construction industry website with comprehensive information and support resources for implementation of the BS EN Eurocodes. A Designers' Simple Guide to BS EN 1997 UK design guide with several worked examples using EN 1997. EN 1997: Geotechnical design EN 1997: Geotechnical design - "Eurocodes: Background and applications" workshop Geology Geotechnical engineering Civil engineering 01997 Reinforced concrete 7 2010 in the European Union
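As a rough sketch of the limit-state verification format that Eurocode 7 (together with EN 1990) is built around, the snippet below checks that a design action effect does not exceed the design resistance once partial factors are applied; the factor and load values are placeholders for illustration, not the recommendations of Annex A or any National Annex.

```python
# Illustrative limit-state check in the Eurocode format E_d <= R_d.
# Partial factor values below are placeholders for illustration only; the
# recommended values are given in EN 1997-1 Annex A and the National Annex.
def design_action_effect(characteristic_effect: float, gamma_f: float) -> float:
    return characteristic_effect * gamma_f      # actions are factored up

def design_resistance(characteristic_resistance: float, gamma_m: float) -> float:
    return characteristic_resistance / gamma_m  # resistances are factored down

E_d = design_action_effect(350.0, gamma_f=1.35)   # kN, assumed value
R_d = design_resistance(700.0, gamma_m=1.4)       # kN, assumed value
print(f"E_d = {E_d:.0f} kN, R_d = {R_d:.0f} kN, verified: {E_d <= R_d}")
```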
Eurocode 7: Geotechnical design
[ "Engineering" ]
526
[ "Construction", "Civil engineering", "Geotechnical engineering" ]
24,505,830
https://en.wikipedia.org/wiki/Society%20of%20Petroleum%20Evaluation%20Engineers
The Society of Petroleum Evaluation Engineers (SPEE) is a non-profit professional organization with the objectives to promote the profession of petroleum evaluation engineering, to foster the spirit of scientific research among its Members, and to disseminate facts pertaining to petroleum evaluation engineering among its Members and the public. External links www.spee.org Journal of SPEE, 2008 Volume 2, has the history of the Society Engineering societies based in the United States Petroleum engineering Organizations based in Houston
Society of Petroleum Evaluation Engineers
[ "Engineering" ]
96
[ "Petroleum engineering", "Energy engineering" ]
24,508,639
https://en.wikipedia.org/wiki/Characteristic%20property
A characteristic property is a chemical or physical property that helps identify and classify substances. The characteristic properties of a substance are always the same whether the sample being observed is large or small. Conversely, if a property of a substance changes as the sample size changes, that property is not a characteristic property. Examples of physical properties that are not characteristic properties are mass and volume. Examples of characteristic properties include melting point, boiling point, density, viscosity, solubility, crystal structure and crystal shape. Substances in a mixture can be separated by exploiting their characteristic properties. For example, in fractional distillation, liquids are separated using their boiling points; the boiling point of water is 212 degrees Fahrenheit (100 degrees Celsius). Identifying a substance Every characteristic property is unique to one given substance. Scientists use characteristic properties to identify unknown substances. However, characteristic properties are most useful for distinguishing between two or more substances, not for identifying a single substance. For example, isopropanol and water can be distinguished by the characteristic property of odor. Characteristic properties are useful because the sample size and the shape of the substance do not matter. For example, 1 gram of lead is the same color as 100 tons of lead. See also Intensive and extensive properties References Physical quantities
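A minimal sketch of identification by characteristic properties: a measured density and boiling point are matched against a small reference table. The reference values are approximate textbook figures included only for illustration.

```python
# Match a measured density and boiling point against a small reference table of
# characteristic properties. Reference values are approximate textbook figures.
REFERENCE = {
    "water":       {"density_g_ml": 1.000, "boiling_point_c": 100.0},
    "ethanol":     {"density_g_ml": 0.789, "boiling_point_c": 78.4},
    "isopropanol": {"density_g_ml": 0.786, "boiling_point_c": 82.6},
}

def identify(density, boiling_point, tol_density=0.02, tol_bp=2.0):
    """Return the candidate substances whose reference values match the sample."""
    return [
        name for name, props in REFERENCE.items()
        if abs(props["density_g_ml"] - density) <= tol_density
        and abs(props["boiling_point_c"] - boiling_point) <= tol_bp
    ]

print(identify(0.79, 78.0))   # ['ethanol']
```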
Characteristic property
[ "Physics", "Mathematics" ]
249
[ "Physical phenomena", "Quantity", "Physical quantities", "Physical properties" ]
5,234,576
https://en.wikipedia.org/wiki/Environmental%20radioactivity
Environmental radioactivity is part of the overall background radiation and is produced by radioactive materials in the human environment. While some radioisotopes, such as strontium-90 (90Sr) and technetium-99 (99Tc), are only found on Earth as a result of human activity, and some, like potassium-40 (40K), are only present due to natural processes, a few isotopes, such as tritium (3H), result from both natural processes and human activities. The concentration and location of some natural isotopes, particularly uranium-238 (238U), can be affected by human activity, such as nuclear weapons testing, which caused a global fallout, with up to 2.4 million deaths by 2020. Background level in soils Radioactivity is present everywhere, and has been since the formation of the Earth. Natural radioactivity detected in soil is predominantly due to the following four natural radioisotopes: 40K, 226Ra, 238U, and 232Th. In one kilogram of soil, the potassium-40 amounts to an average 370 Bq of radiation, with a typical range of 100–700 Bq; the others each contribute some 25 Bq, with typical ranges of 10–50 Bq (7–50 Bq for the 232Th). Some soils may vary greatly from these norms. Sea and river silt A recent report on the Sava river in Serbia suggests that many of the river silts contain about 100 Bq kg−1 of natural radioisotopes (226Ra, 232Th, and 238U). According to the United Nations the normal concentration of uranium in soil ranges between 300 μg kg−1 and 11.7 mg kg−1. It is well known that some plants, called hyperaccumulators, are able to absorb and concentrate metals within their tissues; iodine was first isolated from seaweed in France, which suggests that seaweed is an iodine hyperaccumulator. Synthetic radioisotopes also can be detected in silt. Busby quotes a report on the plutonium activity in Welsh intertidal sediments by Garland et al. (1989), which suggests that the closer a site is to Sellafield, the higher is the concentration of plutonium in the silt. Some relationship between distance and activity can be seen in their data, when fitted to an exponential curve, but the scatter of the points is large (R2 = 0.3683). Man-made The additional radioactivity in the biosphere caused by human activity due to the releases of man-made radioactivity and of Naturally Occurring Radioactive Materials (NORM) can be divided into several classes. Normal licensed releases which occur during the regular operation of a plant or process handling man-made radioactive materials. For instance the release of 99Tc from a nuclear medicine department of a hospital which occurs when a person given a Tc imaging agent expels the agent. Releases of man-made radioactive materials which occur during an industrial or research accident. For instance the Chernobyl accident. Releases which occur as a result of military activity. For example, a nuclear weapons test, which have caused a global fallout, peaking in 1963 (the Bomb pulse), and up to 2.4 million deaths by 2020. Releases which occur as a result of a crime. For example, the Goiânia accident where thieves, unaware of its radioactive content, stole some medical equipment and as a result a number of people were exposed to radiation. Releases of naturally occurring radioactive materials (NORM) as a result of mining etc. For example, the release of the trace quantities of uranium and thorium in coal, when it is burned in power stations. 
Farming and the transfer to humans of deposited radioactivity Just because a radioisotope lands on the surface of the soil, does not mean it will enter the human food chain. After release into the environment, radioactive materials can reach humans in a range of different routes, and the chemistry of the element usually dictates the most likely route. Cows Jiří Hála claims in his textbook "Radioactivity, Ionizing Radiation and Nuclear Energy" that cattle only pass a minority of the strontium, caesium, plutonium and americium they ingest to the humans who consume milk and meat. Using milk as an example, if the cow has a daily intake of 1000 Bq of the preceding isotopes then the milk will have the following activities. 90Sr, 2 Bq/L 137Cs, 5 Bq/L 239Pu, 0.001 Bq/L 241Am, 0.001 Bq/L Soil Jiří Hála's textbook states that soils vary greatly in their ability to bind radioisotopes, the clay particles and humic acids can alter the distribution of the isotopes between the soil water and the soil. The distribution coefficient Kd is the ratio of the soil's radioactivity (Bq g−1) to that of the soil water (Bq ml−1). If the radioactivity is tightly bonded to by the minerals in the soil then less radioactivity can be absorbed by crops and grass growing in the soil. Cs-137 Kd = 1000 Pu-239 Kd = 10000 to 100000 Sr-90 Kd = 80 to 150 I-131 Kd = 0.007 to 50 The Trinity test One dramatic source of man-made radioactivity is a nuclear weapons test. The glassy trinitite created by the first atom bomb contains radioisotopes formed by neutron activation and nuclear fission. In addition some natural radioisotopes are present. A recent paper reports the levels of long-lived radioisotopes in the trinitite. The trinitite was formed from feldspar and quartz which were melted by the heat. Two samples of trinitite were used, the first (left-hand-side bars in the graph) was taken from between 40 and 65 meters of ground zero while the other sample was taken from further away from the ground zero point. The 152Eu (half life 13.54 year) and 154Eu (half life 8.59 year) were mainly formed by the neutron activation of the europium in the soil, it is clear that the level of radioactivity for these isotopes is highest where the neutron dose to the soil was larger. Some of the 60Co (half life 5.27 year) is generated by activation of the cobalt in the soil, but some was also generated by the activation of the cobalt in the steel (100 foot) tower. This 60Co from the tower would have been scattered over the site reducing the difference in the soil levels. The 133Ba (half life 10.5 year) and 241Am (half life 432.6 year) are due to the neutron activation of barium and plutonium inside the bomb. The barium was present in the form of the nitrate in the chemical explosives used while the plutonium was the fissile fuel used. The 137Cs level is higher in the sample that was further away from the ground zero point – this is thought to be because the precursors to the 137Cs (137I and 137Xe) and, to a lesser degree, the caesium itself are volatile. The natural radioisotopes in the glass are about the same in both locations. Activation products The action of neutrons on stable isotopes can form radioisotopes, for instance the neutron bombardment (neutron activation) of nitrogen-14 forms carbon-14. This radioisotope can be released from the nuclear fuel cycle; this is the radioisotope responsible for the majority of the dose experienced by the population as a result of the activities of the nuclear power industry. 
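The specific activity of carbon mentioned in the next sentence can be made concrete with a short calculation. This is an order-of-magnitude sketch: the modern 14C/C abundance of roughly 1.2 × 10−12 and the 5,730-year half-life are commonly quoted values, not figures taken from this article.

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

half_life_c14_years = 5730.0
c14_to_c_ratio = 1.2e-12      # assumed modern (pre-bomb) isotopic ratio, not from the article
molar_mass_c = 12.0           # g/mol

atoms_c_per_gram = AVOGADRO / molar_mass_c
atoms_c14_per_gram = atoms_c_per_gram * c14_to_c_ratio

decay_constant = math.log(2) / (half_life_c14_years * SECONDS_PER_YEAR)
specific_activity = decay_constant * atoms_c14_per_gram
print(f"{specific_activity:.2f} Bq per gram of carbon")   # ~0.23 Bq/g
```

Nuclear bomb tests pushed this figure up, while 14C-free fossil-fuel carbon dilutes it, which is the contrast drawn in the following paragraph.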
Nuclear bomb tests have increased the specific activity of carbon, whereas the use of fossil fuels has decreased it. See the article on radiocarbon dating for further details. Fission products Discharges from nuclear plants within the nuclear fuel cycle introduce fission products to the environment. The releases from nuclear reprocessing plants tend to be medium to long-lived radioisotopes; this is because the nuclear fuel is allowed to cool for several years before being dissolved in the nitric acid. The releases from nuclear reactor accidents and bomb detonations will contain a greater amount of the short-lived radioisotopes (when the amounts are expressed in activity Bq)). Short lived An example of a short-lived fission product is iodine-131, this can also be formed as an activation product by the neutron activation of tellurium. In both bomb fallout and a release from a power reactor accident, the short-lived isotopes cause the dose rate on day one to be much higher than that which will be experienced at the same site many days later. This holds true even if no attempts at decontamination are made. In the graphs below, the total gamma dose rate and the share of the dose due to each main isotope released by the Chernobyl accident are shown. Medium lived An example of a medium lived is 137Cs, which has a half-life of 30 years. Caesium is released in bomb fallout and from the nuclear fuel cycle. A paper has been written on the radioactivity in oysters found in the Irish Sea, these were found by gamma spectroscopy to contain 141Ce, 144Ce, 103Ru, 106Ru, 137Cs, 95Zr and 95Nb. In addition, a zinc activation product (65Zn) was found, this is thought to be due to the corrosion of magnox fuel cladding in cooling ponds. The concentration of all these isotopes in the Irish Sea attributable to nuclear facilities such as Sellafield has significantly decreased in recent decades. An important part of the Chernobyl release was the caesium-137, this isotope is responsible for much of the long term (at least one year after the fire) external exposure which has occurred at the site. The caesium isotopes in the fallout have had an effect on farming. A large amount of caesium was released during the Goiânia accident where a radioactive source (made for medical use) was stolen and then smashed open during an attempt to convert it into scrap metal. The accident could have been stopped at several stages; first, the last legal owners of the source failed to make arrangements for the source to be stored in a safe and secure place; and second, the scrap metal workers who took it did not recognise the markings which indicated that it was a radioactive object. Soudek et al. reported in 2006 details of the uptake of 90Sr and 137Cs into sunflowers grown under hydroponic conditions. The caesium was found in the leaf veins, in the stem and in the apical leaves. It was found that 12% of the caesium entered the plant, and 20% of the strontium. This paper also reports details of the effect of potassium, ammonium and calcium ions on the uptake of the radioisotopes. Caesium binds tightly to clay minerals such as illite and montmorillonite; hence it remains in the upper layers of soil where it can be accessed by plants with shallow roots (such as grass). Hence grass and mushrooms can carry a considerable amount of 137Cs which can be transferred to humans through the food chain. One of the best countermeasures in dairy farming against 137Cs is to mix up the soil by deeply ploughing the soil. 
This has the effect of putting the 137Cs out of reach of the shallow roots of the grass, hence the level of radioactivity in the grass will be lowered. Also, after a nuclear war or serious accident, the removal of top few cm of soil and its burial in a shallow trench will reduce the long term gamma dose to humans due to 137Cs as the gamma photons will be attenuated by their passage through the soil. The more remote the trench is from humans and the deeper the trench is the better the degree of protection which will be afforded to the human population. In livestock farming, an important countermeasure against 137Cs is to feed to animals a little prussian blue. This iron potassium cyanide compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to eat several grams of prussian blue per day. The prussian blue reduces the biological half-life (not to be confused with the nuclear half-life) of the caesium). The physical or nuclear half-life of 137Cs is about 30 years, which is a constant and can not be changed; however, the biological half-life will change according to the nature and habits of the organism for which it is expressed. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of the prussian blue is that the caesium which is stripped from the animal in the droppings is in a form which is not available to plants. Hence, it prevents the caesium from being recycled. The form of prussian blue required for the treatment of humans or animals is a special grade. Attempts to use the pigment grade used in paints have not been successful. Long lived Examples of long-lived isotopes include iodine-129 and Tc-99, which have nuclear half-lives of 15 million and 200,000 years, respectively. Plutonium and the other actinides In popular culture, plutonium is credited with being the ultimate threat to life and limb which is wrong; while ingesting plutonium is not likely to be good for one's health, other radioisotopes such as radium are more toxic to humans. Regardless, the introduction of the transuranium elements such as plutonium into the environment should be avoided wherever possible. Currently, the activities of the nuclear reprocessing industry have been subject to great debate as one of the fears of those opposed to the industry is that large amounts of plutonium will be either mismanaged or released into the environment. In the past, one of the largest releases of plutonium into the environment has been nuclear bomb testing. Those tests in the air scattered some plutonium over the entire globe; this great dilution of the plutonium has resulted in the threat to each exposed person being very small as each person is only exposed to a very small amount. The underground tests tend to form molten rock, which rapidly cools and seals the actinides into the rock, so rendering them unable to move; again the threat to humans is small unless the site of the test is dug up. The safety trials where bombs were subject to simulated accidents pose the greatest threat to people; some areas of land used for such experiments (conducted in the open air) have not been fully released for general use despite in one case an extensive decontamination. Natural Activation products from cosmic rays Cosmogenic isotopes (or cosmogenic nuclides) are rare isotopes created when a high-energy cosmic ray interacts with the nucleus of an in situ atom. 
These isotopes are produced within earth materials such as rocks or soil, in Earth's atmosphere, and in extraterrestrial items such as meteorites. By measuring cosmogenic isotopes, scientists are able to gain insight into a range of geological and astronomical processes. There are both radioactive and stable cosmogenic isotopes. Some of these radioisotopes are tritium, carbon-14 and phosphorus-32. Production modes Here is a list of radioisotopes formed by the action of cosmic rays on the atmosphere; the list also contains the production mode of the isotope. These data were obtained from the SCOPE50 report, see table 1.9 of chapter 1. Transfer to ground The level of beryllium-7 in the air is related to the Sun spot cycle, as radiation from the Sun forms this radioisotope in the atmosphere. The rate at which it is transferred from the air to the ground is controlled in part by the weather. Applications in geology listed by isotope Applications of dating Because cosmogenic isotopes have long half-lives (anywhere from thousands to millions of years), scientists find them useful for geologic dating. Cosmogenic isotopes are produced at or near the surface of the Earth, and thus are commonly applied to problems of measuring ages and rates of geomorphic and sedimentary events and processes. Specific applications of cosmogenic isotopes include: exposure dating of earth surfaces, including glacially scoured bedrock, fault scarps, landslide debris burial dating of sediment, bedrock, ice measurement of steady-state erosion rates absolute dating of organic matter (radiocarbon dating) absolute dating of water masses, measurement of groundwater transport rates absolute dating of meteorites, lunar surfaces Methods of measurement for the long-lived isotopes To measure cosmogenic isotopes produced within solid earth materials, such as rock, samples are generally first put through a process of mechanical separation. The sample is crushed and desirable material, such as a particular mineral (quartz in the case of Be-10), is separated from non-desirable material by using a density separation in a heavy liquid medium such as lithium sodium tungstate (LST). The sample is then dissolved, a common isotope carrier added (Be-9 carrier in the case of Be-10), and the aqueous solution is purified down to an oxide or other pure solid. Finally, the ratio of the rare cosmogenic isotope to the common isotope is measured using accelerator mass spectrometry. The original concentration of cosmogenic isotope in the sample is then calculated using the measured isotopic ratio, the mass of the sample, and the mass of carrier added to the sample. Radium and radon from the decay of long-lived actinides Radium and radon are in the environment because they are decay products of uranium and thorium. The radon (222Rn) released into the air decays to 210Pb and other radioisotopes, and the levels of 210Pb can be measured. The rate of deposition of this radioisotope is dependent on the weather. Below is a graph of the deposition rate observed in Japan. Uranium-lead dating Uranium-lead dating is usually performed on the mineral zircon (ZrSiO4), though other materials can be used. Zircon incorporates uranium atoms into its crystalline structure as substitutes for zirconium, but strongly rejects lead. It has a high blocking temperature, is resistant to mechanical weathering and is chemically inert. Zircon also forms multiple crystal layers during metamorphic events, which each may record an isotopic age of the event. 
These can be dated by a SHRIMP ion microprobe. One of the advantages of this method is that any sample provides two clocks, one based on uranium-235's decay to lead-207 with a half-life of about 703 million years, and one based on uranium-238's decay to lead-206 with a half-life of about 4.5 billion years, providing a built-in crosscheck that allows accurate determination of the age of the sample even if some of the lead has been lost. See also Journal of Environmental Radioactivity Naturally occurring radioactive material Radioecology Radium in the environment Uranium in the environment References References about cosmogenic isotope dating Gosse, John C., and Phillips, Fred M. (2001). "Terrestrial in situ cosmogenic nuclides: Theory and application". Quaternary Science Reviews 20, 1475–1560. Granger, Darryl E., Fabel, Derek, and Palmer, Arthur N. (2001). "Pliocene-Pleistocene incision of the Green River, Kentucky, determined from radioactive decay of cosmogenic 26Al and 10Be in Mammoth Cave sediments". Geological Society of America Bulletin 113 (7), 825–836. Further reading Radioactivity, Ionizing Radiation and Nuclear Energy, by J. Hala and J.D. Navratil A review of the subject has been published by Scientific Committee on Problems of the Environment (SCOPE) in the report SCOPE 50 Radioecology after chernobyl. External links Purdue University Prime Lab, "Cosmogenic nuclides" "Cosmogenic Exposure Dating and the Age of the Earth" Cosmogenic Isotope Laboratory, University of Washington Environmental isotopes Radioactivity
Environmental radioactivity
[ "Physics", "Chemistry" ]
4,194
[ "Environmental isotopes", "Isotopes", "Radioactivity", "Nuclear physics" ]
5,234,734
https://en.wikipedia.org/wiki/Logic%20Trunked%20Radio
Logic Trunked Radio (LTR) is a radio system developed in the late 1970s by the E. F. Johnson Company. LTR is distinguished from some other common trunked radio systems in that it does not have a dedicated control channel. LTR systems are limited to 20 channels (repeaters) per site and each site stands alone (not linked). Each repeater has its own controller and all of these controllers are coordinated together. Even though each controller monitors its own channel, one of the channel controllers is assigned to be a master and all the other controllers report to it. Typically on LTR systems, each of these controllers periodically sends out a data burst (approximately every 10 seconds on LTR Standard systems) so that the subscriber units know that the system is there and which channels are in use or available. The idle data burst can be turned off if desired by the system operator. Some systems will broadcast idle data bursts only on channels used as home channels and not on those used for "overflow" conversations. To a listener, the idle data burst sounds like a short blip of static, as if someone keyed up and unkeyed a radio within about 1/4 second. These data bursts are not sent at the same time by all the channels but happen randomly throughout all the system channels. References External links Logic Trunked System article from 'Monitoring Times' E.F. Johnson Company website LTR description page at the MRA Company Website Radio electronics Radio resource management Radio networks
Logic Trunked Radio
[ "Engineering" ]
306
[ "Radio electronics" ]
5,235,067
https://en.wikipedia.org/wiki/Coherent%20risk%20measure
In the fields of actuarial science and financial economics there are a number of ways that risk can be defined; to clarify the concept theoreticians have described a number of properties that a risk measure might or might not have. A coherent risk measure is a function ρ that satisfies properties of monotonicity, sub-additivity, homogeneity, and translational invariance. Properties Consider a random outcome Z viewed as an element of a linear space L of measurable functions, defined on an appropriate probability space. A functional ρ : L → R ∪ {+∞} is said to be a coherent risk measure for L if it satisfies the following properties: Normalized ρ(0) = 0. That is, the risk when holding no assets is zero. Monotonicity If Z1, Z2 ∈ L and Z1 ≤ Z2 almost surely, then ρ(Z1) ≥ ρ(Z2). That is, if portfolio Z2 always has better values than portfolio Z1 under almost all scenarios then the risk of Z2 should be less than the risk of Z1. E.g. if Z1 is an in-the-money call option on a stock and Z2 is an otherwise identical in-the-money call option with a lower strike price, then Z2 pays at least as much as Z1 in every scenario. In financial risk management, monotonicity implies a portfolio with greater future returns has less risk. Sub-additivity ρ(Z1 + Z2) ≤ ρ(Z1) + ρ(Z2). Indeed, the risk of two portfolios together cannot get any worse than adding the two risks separately: this is the diversification principle. In financial risk management, sub-additivity implies diversification is beneficial. The sub-additivity principle is sometimes also seen as problematic. Positive homogeneity ρ(αZ) = αρ(Z) for every α ≥ 0. Loosely speaking, if you double your portfolio then you double your risk. In financial risk management, positive homogeneity implies the risk of a position is proportional to its size. Translation invariance If A is a deterministic portfolio with guaranteed return a and Z ∈ L then ρ(Z + A) = ρ(Z) − a. The portfolio A is just adding cash a to your portfolio Z. In particular, if a = ρ(Z) then ρ(Z + A) = 0. In financial risk management, translation invariance implies that the addition of a sure amount of capital reduces the risk by the same amount. Convex risk measures The notion of coherence has been subsequently relaxed. Indeed, the notions of Sub-additivity and Positive Homogeneity can be replaced by the notion of convexity: Convexity ρ(λZ1 + (1 − λ)Z2) ≤ λρ(Z1) + (1 − λ)ρ(Z2) for every λ ∈ [0, 1]. Examples of risk measure Value at risk It is well known that value at risk is not a coherent risk measure as it does not respect the sub-additivity property. An immediate consequence is that value at risk might discourage diversification. Value at risk is, however, coherent under the assumption of elliptically distributed losses (e.g. normally distributed) when the portfolio value is a linear function of the asset prices. However, in this case the value at risk becomes equivalent to a mean-variance approach where the risk of a portfolio is measured by the variance of the portfolio's return. The Wang transform function (distortion function) for the value at risk is the step function g(x) = 0 for 0 ≤ x < 1 − α and g(x) = 1 for 1 − α ≤ x ≤ 1. The non-concavity of g proves the non-coherence of this risk measure. Illustration As a simple example to demonstrate the non-coherence of value-at-risk consider looking at the VaR of a portfolio at 95% confidence over the next year of two default-able zero coupon bonds that mature in one year's time, denominated in our numeraire currency. Assume the following: The current yield on the two bonds is 0% The two bonds are from different issuers Each bond has a 4% probability of defaulting over the next year The event of default in either bond is independent of the other Upon default the bonds have a recovery rate of 30% Under these conditions the 95% VaR for holding either of the bonds is 0 since the probability of default is less than 5%. A short numerical sketch of this two-bond example is given below.
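The sketch below works through the two-bond example numerically. The helper function and its name are illustrative, not part of any standard library; the figures are exactly those assumed above (4% independent default probabilities, 30% recovery, zero yield).

```python
from itertools import product

def value_at_risk(outcomes, alpha=0.95):
    """Smallest loss level L such that P(loss <= L) >= alpha."""
    outcomes = sorted(outcomes)          # list of (loss, probability) pairs, sorted by loss
    cumulative = 0.0
    for loss, prob in outcomes:
        cumulative += prob
        if cumulative >= alpha:
            return loss
    return outcomes[-1][0]

p_default = 0.04            # per-bond default probability over one year
loss_given_default = 0.70   # 1 minus the 30% recovery rate

# Single bond: loss 0 with prob 0.96, loss 0.70 with prob 0.04
single_bond = [(0.0, 1 - p_default), (loss_given_default, p_default)]
print(value_at_risk(single_bond))   # 0.0 -> 95% VaR of either bond alone is zero

# 50/50 portfolio: enumerate the four independent default scenarios
portfolio = []
for a_defaults, b_defaults in product([False, True], repeat=2):
    prob = (p_default if a_defaults else 1 - p_default) * \
           (p_default if b_defaults else 1 - p_default)
    loss = 0.5 * loss_given_default * (a_defaults + b_defaults)
    portfolio.append((loss, prob))
print(value_at_risk(portfolio))     # 0.35 -> exceeds 0 + 0, so sub-additivity fails
```

Either bond on its own has a 95% VaR of zero, while the 50/50 portfolio's 95% VaR is 0.35 of face value, matching the figure derived in the next paragraph.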
However if we held a portfolio that consisted of 50% of each bond by value then the 95% VaR is 35% (= 0.5*0.7 + 0.5*0) since the probability of at least one of the bonds defaulting is 7.84% (= 1 - 0.96*0.96) which exceeds 5%. This violates the sub-additivity property showing that VaR is not a coherent risk measure. Average value at risk The average value at risk (sometimes called expected shortfall or conditional value-at-risk) is a coherent risk measure, even though it is derived from Value at Risk which is not. The domain can be extended for more general Orlicz hearts from the more typical Lp spaces. Entropic value at risk The entropic value at risk is a coherent risk measure. Tail value at risk The tail value at risk (or tail conditional expectation) is a coherent risk measure only when the underlying distribution is continuous. The Wang transform function (distortion function) for the tail value at risk is . The concavity of proves the coherence of this risk measure in the case of continuous distribution. Proportional Hazard (PH) risk measure The PH risk measure (or Proportional Hazard Risk measure) transforms the hazard rates using a coefficient . The Wang transform function (distortion function) for the PH risk measure is . The concavity of if proves the coherence of this risk measure. g-Entropic risk measures g-entropic risk measures are a class of information-theoretic coherent risk measures that involve some important cases such as CVaR and EVaR. The Wang risk measure The Wang risk measure is defined by the following Wang transform function (distortion function) . The coherence of this risk measure is a consequence of the concavity of . Entropic risk measure The entropic risk measure is a convex risk measure which is not coherent. It is related to the exponential utility. Superhedging price The superhedging price is a coherent risk measure. Set-valued In a situation with Rd-valued portfolios such that risk can be measured in m of the assets, then a set of portfolios is the proper way to depict risk. Set-valued risk measures are useful for markets with transaction costs. Properties A set-valued coherent risk measure is a function R, where and where is a constant solvency cone and is the set of portfolios of the reference assets. R must have the following properties: Normalized Translative in M Monotone Sublinear General framework of Wang transform Wang transform of the cumulative distribution function A Wang transform of the cumulative distribution function is an increasing function where and . This function is called distortion function or Wang transform function. The dual distortion function is . Given a probability space , then for any random variable and any distortion function we can define a new probability measure such that for any it follows that Actuarial premium principle For any increasing concave Wang transform function, we could define a corresponding premium principle : Coherent risk measure A coherent risk measure could be defined by a Wang transform of the cumulative distribution function if and only if is concave. Set-valued convex risk measure If instead of the sublinear property, R is convex, then R is a set-valued convex risk measure. Dual representation A lower semi-continuous convex risk measure can be represented as such that is a penalty function and is the set of probability measures absolutely continuous with respect to P (the "real world" probability measure), i.e. . The dual characterization is tied to Lp spaces, Orlicz hearts, and their dual spaces.
A lower semi-continuous risk measure ρ is coherent if and only if it can be represented as ρ(X) = sup{ EQ[−X] : Q ∈ Q } for some set Q of probability measures absolutely continuous with respect to P, i.e. such that the penalty function takes only the values 0 and +∞. See also Risk metric - the abstract concept that a risk measure quantifies RiskMetrics - a model for risk management Spectral risk measure - a subset of coherent risk measures Distortion risk measure Conditional value-at-risk Entropic value at risk Financial risk References Actuarial science Financial risk modeling
Coherent risk measure
[ "Mathematics" ]
1,542
[ "Applied mathematics", "Actuarial science" ]
5,235,137
https://en.wikipedia.org/wiki/Dynamics%20of%20Markovian%20particles
Dynamics of Markovian particles (DMP) is the basis of a theory for kinetics of particles in open heterogeneous systems. It can be looked upon as an application of the notion of stochastic process conceived as a physical entity; e.g. the particle moves because there is a transition probability acting on it. Two particular features of DMP might be noticed: (1) an ergodic-like relation between the motion of particle and the corresponding steady state, and (2) the classic notion of geometric volume appears nowhere (e.g. a concept such as flow of "substance" is not expressed as liters per time unit but as number of particles per time unit). Although primitive, DMP has been applied for solving a classic paradox of the absorption of mercury by fish and by mollusks. The theory has also been applied for a purely probabilistic derivation of the fundamental physical principle: conservation of mass; this might be looked upon as a contribution to the old and ongoing discussion of the relation between physics and probability theory. Sources Bergner—DMP, a kinetics of macroscopic particles in open heterogeneous systems Dynamics (mechanics) Markov models
Dynamics of Markovian particles
[ "Physics" ]
245
[ "Physical phenomena", "Classical mechanics stubs", "Classical mechanics", "Motion (physics)", "Dynamics (mechanics)" ]
5,235,216
https://en.wikipedia.org/wiki/Wehnelt%20cylinder
A Wehnelt cylinder (also known as Wehnelt cap, grid cap or simply Wehnelt) is an electrode in the electron gun assembly of some thermionic devices, used for focusing and control of the electron beam. It is named after Arthur Rudolph Berthold Wehnelt, a German physicist, who invented it during the years 1902 and 1903. Wehnelt cylinders are found in the electron guns of cathode ray tubes and electron microscopes, and in other applications where a thin, well-focused electron beam is required. Structure A Wehnelt cap has the shape of a topless, hollow cylinder. The bottom side of the cylinder has an aperture (through hole) located at its center, with a diameter that typically ranges from 200 to 1200 μm. The bottom face of the cylinder is often made from platinum or tantalum foil. Operation A Wehnelt acts as a control grid and it also serves as a convergent electrostatic lens. An electron emitter is positioned directly above the Wehnelt aperture, and an anode is located below the Wehnelt. The anode is biased to a high positive voltage (typically +1 kV to +30 kV) relative to the emitter so as to accelerate electrons from the emitter towards the anode, thus creating an electron beam that passes through the Wehnelt aperture. The Wehnelt is biased to a negative voltage (typically −200V to −300V) relative to the emitter, which is usually a tungsten filament or Lanthanum hexaboride (LaB6) hot cathode with a V-shaped (or otherwise pointed) tip. This bias voltage creates a repulsive electrostatic field that suppresses emission of electrons from most areas of the cathode. The emitter tip is positioned near the Wehnelt aperture so that, when appropriate bias voltage is applied to the Wehnelt, a small region of the tip has a net electric field (due to both anode attraction and Wehnelt repulsion) that allows emission from only that area of the tip. The Wehnelt bias voltage determines the tip's emission area, which in turn determines both the beam current and effective size of the beam's electron source. As the Wehnelt's negative bias voltage increases, the tip's emitting area (and along with it, the beam diameter and beam current) will decrease until it becomes so small that the beam is "pinched" off. In normal operation, the bias is typically set slightly more positive than the pinch bias, and determined by a balance between desired beam quality and beam current. The Wehnelt bias controls beam focusing as well as the effective size of the electron source, which is essential for creating an electron beam that is to be focussed into a very small spot (for scanning electron microscopy) or a very parallel beam (for diffraction). Although a smaller source can be imaged to a smaller spot, or a more parallel beam, one obvious trade off is a smaller total beam current. References Vacuum tubes Electrodes
Wehnelt cylinder
[ "Physics", "Chemistry" ]
632
[ "Vacuum tubes", "Electrodes", "Vacuum", "Electrochemistry", "Matter" ]
5,235,231
https://en.wikipedia.org/wiki/Railway%20engineering
Railway engineering is a multi-faceted engineering discipline dealing with the design, construction and operation of all types of rail transport systems. It encompasses a wide range of engineering disciplines, including civil engineering, computer engineering, electrical engineering, mechanical engineering, industrial engineering and production engineering. A great many other engineering sub-disciplines are also called upon. History With the advent of the railways in the early nineteenth century, a need arose for a specialized group of engineers capable of dealing with the unique problems associated with railway engineering. As the railways expanded and became a major economic force, a great many engineers became involved in the field, probably the most notable in Britain being Richard Trevithick, George Stephenson and Isambard Kingdom Brunel. Today, railway systems engineering continues to be a vibrant field of engineering. Subfields Mechanical engineering Command, control & railway signalling Office systems design Data center design SCADA Network design Electrical engineering Energy electrification Third rail Fourth rail Overhead contact system Civil engineering Permanent way engineering Light rail systems On-track plant Rail systems integration Train control systems Cab signalling Railway vehicle engineering Rolling resistance Curve resistance Wheel–rail interface Hunting oscillation Railway systems engineering Railway signalling Fare collection CCTV Public address Intrusion detection Access control Systems integration Professional organisations In the UK: The Railway Division of the Institution of Mechanical Engineers (IMechE). In the US: The American Railway Engineering and Maintenance-of-Way Association (AREMA) In the Philippines: Philippine Railway Engineers' Association, (PREA) Inc. Worldwide: The Institute of Railway Signal Engineers (IRSE) See also Association of American Railroads Exsecant Degree of curvature List of engineering topics List of engineers Minimum railway curve radius Radius of curvature (applications) Track transition curve Transition curve External links Institution of Mechanical Engineers - Railway Division AAR References Engineering disciplines Rail technologies Transportation engineering
Railway engineering
[ "Engineering" ]
365
[ "Transportation engineering", "Civil engineering", "nan", "Industrial engineering" ]
5,235,575
https://en.wikipedia.org/wiki/Control%20panel%20%28engineering%29
A control panel is a flat, often vertical, area where control or monitoring instruments are displayed or it is an enclosed unit that is the part of a system that users can access, such as the control panel of a security system (also called control unit). They are found in factories to monitor and control machines or production lines and in places such as nuclear power plants, ships, aircraft and mainframe computers. Older control panels are most often equipped with push buttons and analog instruments, whereas nowadays in many cases touchscreens are used for monitoring and control purposes. Gallery Flat panels Enclosed control unit See also Control stand Dashboard Electric switchboard Fire alarm control panel Front panel Graphical user interface Control panel (computer) Dashboard (software) virtual Lighting control console Mixing console Patch board Plugboard Telephone switchboard References Control devices
Control panel (engineering)
[ "Engineering" ]
162
[ "Control devices", "Control engineering" ]
5,235,648
https://en.wikipedia.org/wiki/Frost%20line%20%28astrophysics%29
In astronomy or planetary science, the frost line, also known as the snow line or ice line, is the minimum distance from the central protostar of a solar nebula where the temperature is low enough for volatile compounds such as water, ammonia, methane, carbon dioxide and carbon monoxide to condense into solid grains, which will allow their accretion into planetesimals. Beyond the line, otherwise gaseous compounds (which are much more abundant) can be quite easily condensed to allow formation of gas and ice giants; while within it, only heavier compounds can be accreted to form the typically much smaller rocky planets. The term itself is borrowed from the notion of "frost line" in soil science, which describes the maximum depth from the surface that groundwater can freeze. Each volatile substance has its own frost line (e.g. carbon monoxide, nitrogen, and argon), so it is important to always specify which material's frost line is referred to. A tracer gas may be used for materials that are otherwise difficult to detect; for example diazenylium for carbon monoxide. Location Different volatile compounds have different condensation temperatures at different partial pressures (thus different densities) in the protostar nebula, so their frost lines will differ. The actual temperature and distance for the snow line of water ice depend on the physical model used to calculate it and on the theoretical solar nebula model: 170 K at 2.7 AU (Hayashi, 1981) 143 K at 3.2 AU to 150 K at 3 AU (Podolak and Zucker, 2010) 3.1 AU (Martin and Livio, 2012) ≈150 K for μm-size grains and ≈200 K for km-size bodies (D'Angelo and Podolak, 2015) The location of the frost line changes over time, potentially reaching a maximum radius for a solar-mass star before decreasing after that. Current snow line versus formation snow line The radial position of the condensation/evaporation front varies over time, as the nebula evolves. Occasionally, the term snow line is also used to represent the present distance at which water ice can be stable (even under direct sunlight). This current snow line distance is different from the formation snow line distance during the formation of the Solar System, and approximately equals 5 AU. The reason for the difference is that during the formation of the Solar System, the solar nebula was an opaque cloud where temperatures were lower close to the Sun, and the Sun itself was less energetic. After formation, the ice got buried by infalling dust and it has remained stable a few meters below the surface. If ice within 5 AU is exposed, e.g. by a crater, then it sublimates on short timescales. However, out of direct sunlight ice can remain stable on the surface of asteroids (and the Moon and Mercury) if it is located in permanently shadowed polar craters, where temperature may remain very low over the age of the Solar System (e.g. 30–40 K on the Moon). Observations of the asteroid belt, located between Mars and Jupiter, suggest that the water snow line during formation of the Solar System was located within this region. The outer asteroids are icy C-class objects (e.g. Abe et al. 2000; Morbidelli et al. 2000) whereas the inner asteroid belt is largely devoid of water. This implies that when planetesimal formation occurred the snow line was located at around 2.7 AU from the Sun.
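A common back-of-the-envelope check on these numbers uses the optically thin blackbody scaling T ≈ 278 K × (r/1 AU)^(-1/2) for the present-day Sun; the 278 K constant and the scaling itself are standard textbook assumptions, not values taken from the detailed nebular models cited above.

```python
# Back-of-the-envelope check: at what distance does a blackbody around the
# present Sun cool to ~170 K, Hayashi's water-ice condensation temperature?
T_1AU = 278.0        # assumed equilibrium temperature of a fast-rotating blackbody at 1 AU, in kelvin
T_condense = 170.0   # water-ice condensation temperature used by Hayashi (1981)

# T(r) = T_1AU * r**-0.5 with r in AU, so r = (T_1AU / T)**2
r_snow = (T_1AU / T_condense) ** 2
print(f"{r_snow:.1f} AU")   # ~2.7 AU
```

The close agreement with the value inferred from the asteroid belt is partly fortuitous, because the young nebula was optically thick, but it shows why roughly 2.7 AU is the commonly quoted formation-era figure.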
For example, the dwarf planet Ceres with semi-major axis of 2.77 AU lies almost exactly on the lower estimation for water snow line during the formation of the Solar System. Ceres appears to have an icy mantle and may even have a water ocean below the surface. Planet formation The lower temperature in the nebula beyond the frost line makes many more solid grains available for accretion into planetesimals and eventually planets. The frost line therefore separates terrestrial planets from giant planets in the Solar System. However, giant planets have been found inside the frost line around several other stars (so-called hot Jupiters). They are thought to have formed outside the frost line, and later migrated inwards to their current positions. Earth, which lies less than a quarter of the distance to the frost line but is not a giant planet, has adequate gravitation for keeping methane, ammonia, and water vapor from escaping it. Methane and ammonia are rare in the Earth's atmosphere only because of their instability in an oxygen-rich atmosphere that results from life forms (largely green plants) whose biochemistry suggests plentiful methane and ammonia at one time, but of course liquid water and ice, which are chemically stable in such an atmosphere, form much of the surface of Earth. Researchers Rebecca Martin and Mario Livio have proposed that asteroid belts may tend to form in the vicinity of the frost line, due to nearby giant planets disrupting planet formation inside their orbit. By analysing the temperature of warm dust found around some 90 stars, they concluded that the dust (and therefore possible asteroid belts) was typically found close to the frost line. The underlying mechanism may be the thermal instability of snow line on the timescales of 1,000 - 10,000 years, resulting in periodic deposition of dust material in relatively narrow circumstellar rings. See also Circumstellar habitable zone Nebular hypothesis Solar System belts Solar nebula References External links The thermal structure and the location of the snow line in the protosolar nebula: axisymmetric models with full 3-D radiative transfer by M. Min, C.P. Dullemond, M. Kama, C. Dominik On the Snow Line in Dusty Protoplanetary Disks by D. D. Sasselov and M. Lecar Concepts in astrophysics Cold Planetary science
Frost line (astrophysics)
[ "Physics", "Astronomy" ]
1,222
[ "Planetary science", "Astronomical sub-disciplines", "Concepts in astrophysics", "Astrophysics" ]
5,237,092
https://en.wikipedia.org/wiki/Vinylsilane
Vinylsilane refers to an organosilicon compound with chemical formula CH2=CHSiH3. It is a derivative of silane (SiH4). The compound, which is a colorless gas, is mainly of theoretical interest. Substituted vinylsilanes More commonly used than the parent vinylsilane are vinyl-substituted silanes with other substituents on silicon. In the area of organic synthesis, vinylsilanes are useful intermediates. In the area of polymer chemistry and materials science, vinyltrimethoxysilane or vinyltriethoxysilane serve as monomers and coupling agents. Preparation Vinylsilanes are often prepared by hydrosilylation of alkynes. They can be made by the reaction of alkenyl lithium and Grignard reagents with chlorosilanes. In some cases dehydrogenative silylation is another method. References Carbosilanes Monomers Vinyl compounds
Vinylsilane
[ "Chemistry", "Materials_science" ]
199
[ "Monomers", "Polymer chemistry" ]
5,238,501
https://en.wikipedia.org/wiki/Merck%20molecular%20force%20field
Merck molecular force field (MMFF) is a family of chemistry force fields developed by Merck Research Laboratories. They are based on the MM3 force field. MMFF is not optimized for one use, such as simulating proteins or small molecules, but tries to perform well for a wide range of organic chemistry calculations. The parameters in the force field have been derived from computational data consisting of approximately 2800 structures spanning a wide range of chemical classes. The first published force field in the family is MMFF94. A set of molecular structures and the corresponding output of Halgren's MMFF94 implementation is provided at the Computational Chemistry List for validating other MMFF implementations. One variant of MMFF94 is MMFF94s, which has different out-of-plane bending and dihedral torsion parameters in order to planarize delocalized trigonal nitrogen atoms, e.g. in aniline. The "s" in MMFF94s stands for "static", as MMFF94s better reflects time-averaged geometries than MMFF94. See also Comparison of force-field implementations References Force fields (chemistry)
Merck molecular force field
[ "Chemistry" ]
245
[ "Theoretical chemistry stubs", "Molecular dynamics", "Computational chemistry", "Computational chemistry stubs", "Physical chemistry stubs", "Force fields (chemistry)" ]
5,241,974
https://en.wikipedia.org/wiki/Asphalt%20shingle
An asphalt shingle is a type of wall or roof shingle that uses asphalt for waterproofing. It is one of the most widely used roofing covers in North America because it has a relatively inexpensive up-front cost and is fairly simple to install. History Asphalt shingles are an American invention by Henry Reynolds of Grand Rapids, Michigan. They were first used in 1903, in general use in parts of the United States by 1911 and by 1939 11 million squares () of shingles were being produced. A U.S. National Board of Fire Underwriters campaign to eliminate the use of wood shingles on roofs was a contributing factor in the growth in popularity of asphalt shingles during the 1920s. The forerunner of these shingles was first developed in 1893 and called asphalt prepared roofing, which was similar to asphalt roll roofing without the surface granules. In 1897 slate granules were added to the surface to make the material more durable. Types of granules tested have included mica, oyster shells, slate, dolomite, fly-ash, silica and clay. In 1901 this material was first cut into strips for use as one-tab and multi-tab shingles. All shingles were organic at first with the base material, called felt, being primarily cotton rag until the 1920s when cotton rag became more expensive and alternative materials were used. Other organic materials used as the felt included wool, jute or manila, and wood pulp. In 1926 the Asphalt Shingle and Research Institute with the National Bureau of Standards tested 22 types of experimental felts and found no significant differences in performance. In the 1950s self-sealing and manually applied adhesives began to be used to help prevent wind damage to shingle roofs. The design standard was for the self-sealing strips of adhesive to be fully adhered after sixteen hours at . Also in the 1950s testing on the use of staples rather than roofing nails was carried out showing they could perform as well as nails but with six staples compared with four nails. In 1960 fiberglass mat bases were introduced with limited success; the lighter, more flexible fiberglass shingles proved to be more susceptible to wind damage particularly at freezing temperatures. Later generations of shingles constructed using fiberglass in place of asbestos provided acceptable durability and fireproofing. Also in the 1960s research into hail damage found that it occurs when hail reaches a size larger than . The Asphalt Roofing Manufacturers Association (ARMA) formed the High Wind Task Force in 1990 to continue research to improve shingle wind resistance. In 1996, a partnership between members of the U.S. property insurance industry, the Institute of Business and Home Safety, and the Underwriter's Laboratory (UL) was established to create an impact resistance classification system for roofing materials. The system, known as UL 2218, established a national standard for impact resistance. Subsequently, insurers offered discounted premiums for policies on structures using shingles that carried the highest impact classification (class 4). In 1998, Texas Insurance Commissioner Elton Bomer mandated that Texas provide premium discounts to policyholders that installed class 4 roofs. Types Two types of base materials are used to make asphalt shingles, organic and fiberglass. 
Both are made in a similar manner, with an asphalt-saturated base covered on one or both sides with asphalt or modified-asphalt, the exposed surface impregnated with slate, schist, quartz, vitrified brick, stone, or ceramic granules, and the under-side treated with sand, talc or mica to prevent shingles from sticking to one-another before use. The top surface granules block ultra-violet light, which causes the shingles to deteriorate, provides some physical protection of the asphalt core, and provides color – lighter shades preferred for their heat reflectivity in sunny climates, darker in cooler ones for their absorption. Some shingles have copper or other biocides added to the surface to help prevent algae growth. Self-sealing strips are standard on the underside of shingles to provide resistance to lifting in high winds. This material is typically limestone or fly-ash-modified resins, or polymer-modified bitumen. American Society of Civil Engineers ASTM D7158 is the standard most United States residential building codes use as their wind resistance standard for most discontinuous, steep-slope roof coverings (including asphalt shingles) with the following class ratings: Class D – Passed at basic wind speeds up to and including ; Class G – Passed at basic wind speeds up to and including ; and Class H – Passed at basic wind speeds up to and including . An additive known as styrene-butadiene-styrene (SBS), sometimes called modified or rubberized asphalt, is sometimes added to the asphalt mixture to make shingles more pliable, resistant to thermal cracking, and more resistant to damage from hail impacts. Some manufacturers use a fabric backing known as a scrim on the back side of shingles to make them more impact resistant. Most insurance companies offer discounts to homeowners for using Class 4 impact rated shingles. Organic Organic shingles are made with a base mat of organic materials such as waste paper, cellulose, wood fiber, or other materials. This is saturated with asphalt to make it waterproof, then a top coating of adhesive asphalt is applied, covered with solid granules. Such shingles contain around 40% more asphalt per unit area than fiberglass shingles. Their organic core leaves them more prone to fire damage, resulting in a maximum class "B" FM fire rating. They are also less brittle than fiberglass shingles in cold weather. The early wood material-based versions were very durable and hard to tear, an important quality before self-sealing materials were added to the underside of shingles to bond them to the layer beneath. Also, some organic shingles produced before the early 1980s may contain asbestos. Almost all major asphalt shingle manufacturers stopped production of organic shingles during the mid-to-late 2000's, with Building Products of Canada being the last manufacturer to make organic shingles, finally ceasing production in 2011. Fiberglass Fiberglass reinforcement was devised as the replacement for asbestos in organic mat shingles. Fiberglass shingles have a base layer of glass fiber reinforcing mat made from wet, random-laid glass fibers bonded with urea-formaldehyde resin. The mat is then coated with asphalt containing mineral fillers to make it waterproof. Such shingles resist fire better than those with organic/paper mats, making them eligible for as high as a class "A" rating. Area density typically ranges from . Fiberglass shingles gradually began to replace organic felt shingles, and by 1982 overtook them in use. 
Widespread hurricane damage in Florida during the 1990s prompted the industry to adhere to a 1700-gram tear value on finished asphalt shingles. Per 2003 International Building Code Sections 1507.2.1 and 1507.2.2, asphalt shingles shall only be used on roof slopes of two units vertical in 12 units horizontal (17% slope) or greater. Asphalt shingles shall be fastened to solidly sheathed decks. Shallower slopes require asphalt rolled roofing, or other roofing treatment. Architectural or three-tab Asphalt shingles come in two standard design options: architectural (also known as dimensional) shingles, and three-tab shingles. Three-tab are essentially flat simple shingles with a uniform shape and size. They use less material and are thinner than architectural shingles, and are therefore lighter and lower cost for both the material and the installation. They also do not last as long or offer manufacturer's warranties as long as good architectural asphalt shingles. Three-tab are still the most commonly installed in lower-value homes, such as those used as rental properties. However, they are declining in popularity in favor of the architectural style. Dimensional, or architectural shingles are thicker and stronger, vary in shape and size, and offer more aesthetic appeal; casting more distinct, random shadow lines better mimics the appearance of traditional roofing materials such as wood shake shingles. The result is a more natural, traditional look. While more expensive to install, they come with longer manufacturer's warranties, sometimes up to 50 years - typically prorated, as virtually all asphalt shingle roofs are replaced before such an expiration could be reached. While three-tab shingles typically need to be replaced after 15–18 years, Dimensional typically last 24–30 years. Qualities Asphalt shingles have varying qualities which help them survive wind, hail, or fire damage and discoloration. The American Society of Testing Materials (ASTM) has developed specifications for roof shingles: ASTM D 225-86 (Asphalt Shingles (Organic Felt) Surfaced with Mineral Granules) and ASTM D3462-87 (Asphalt Shingles Made from Glass Felt and Surfaced with Mineral Granules), ASTM D3161, Standard Test Method for Wind-Resistance of Asphalt Shingles (2005), Many shapes and textures of asphalt shingles are available: 3-tab, jet, "signature cut", Art-Loc, t-lock, tie lock, etc. Architectural (laminated) shingles are a multi-layer, laminated shingle which gives more varied, contoured visual effect to a roof surface and add more resistance for water. These shingles are designed to avoid repetitive patterns in the shingle appearance. Hip and ridge lines can have standard three-tab shingles cut to fit. Manufacturers also make specialized shingles for these areas. Starter shingles are also required and, because they are not visible after installation is complete, the use of extra shingles (commonly referred to as 'waste') are used here. However, manufacturers also make a specialized starter row shingle. The use of specialized ridge/hip shingles and the use of specialized starter row shingles, results in decreased labor expenses in exchange for an increase in material cost. Laminated shingles are heavier and more durable than traditional three-tab shingle designs. Solar reflecting shingles help reduce air conditioning costs in hot climates by being a better reflective surface. Wind damage: Asphalt shingles come in varying resistance to wind damage. 
Shingles with the highest fastener pull through resistance, bond strength of the self-seal adhesive, properly nailed will resist wind damage the best. Extra precautions can be taken in high wind areas to fasten a durable underlayment and/or seal the plywood seams in the event the shingles are blown off. UL 997 Wind Resistance of Prepared Roof Covering Materials class 1 is best Wind Resistance roof standard and ASTM D 3161 class F is best for bond strength. Hail damage: Hail storms can damage asphalt shingles. For impact resistance UL 2218 Class 4 is best. This increases survivability from hailstorms, but the shingles become more susceptible to hail damage with age. Fire resistance: Forest fires and other exterior fires risk roofs catching on fire. Fiberglass shingles have a better, class A, flame spread rating based on UL 790, and ASTM E 108 testing. Organic shingles have a class C rating. Algae resistance Algae is not believed to damage asphalt shingles but it may be objectionable aesthetically. Different treatment methods are used to prevent discoloration from algae growth on the roof. Moss feeds on algae and any other debris on the roof. Some manufactures offer a 5- to 10-year warranty against algae growth on their algae resistant shingles. Locking shingles: Special asphalt shingles are designed to lock together called tie lock or T lock. Durability Shingle durability is ranked by warranted life, ranging from 20 years to lifetime warranties are available. However, a stated warranty is not a guarantee of durability. A shingle manufacturer's warranty may pro-rate repair costs, cover materials only, have different warranty periods for different types of damage, and transfer to another owner. Shingles tend to last longer where the weather stays consistent, either consistently warm, or consistently cool. Thermal shock can damage shingles, when the ambient temperature changes dramatically within a very short period of time. "Experiments...have noted that the greatest cause of asphalt shingle aging is thermal loading." Over time the asphalt becomes oxidized and becomes brittle. Roof orientation and ventilation can extend the service life of a roof by reducing temperatures. Shingles should not be applied when temperatures are below 10 °C (50 °F), as each shingle must seal to the layer below it to form a monolithic structure. The underlying exposed asphalt must be softened by sunlight and heat to form a proper bond. The protective nature of asphalt shingles primarily comes from the long-chain hydrocarbons impregnating the paper. Over time in the hot sun, the hydrocarbons soften and when rain falls the hydrocarbons are gradually washed out of the shingles and down onto the ground. Along eaves and complex rooflines more water is channeled so in these areas the loss occurs more quickly. Eventually the loss of the heavy oils causes the fibers to shrink, exposing the nail heads under the shingle flaps. The shrinkage also breaks up the surface coating of sand adhered to the surface of the paper, and eventually causes the paper to begin to tear itself apart. Once the nail heads are exposed, water running down the roof can seep into the building around the nail shank, resulting in rotting of roof building materials and causing moisture damage to ceilings and paint inside. 
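Two of the numbers that appear in the sections above reduce to simple arithmetic: the building-code minimum pitch of "two units vertical in 12 units horizontal (17% slope)", and shingle quantities expressed in roofing "squares". The helper below is purely illustrative; the 10% waste allowance and the example footprint are hypothetical, and the convention that one square equals 100 square feet of roof surface is general roofing usage rather than a figure stated in this article.

```python
import math

def slope_percent(rise, run=12):
    """Express a roof pitch given as rise-in-run (e.g. 2-in-12) as a percentage."""
    return 100.0 * rise / run

def roof_squares(footprint_sqft, rise, run=12, waste_factor=1.10):
    """Rough shingle quantity in roofing 'squares' (1 square = 100 sq ft of roof surface).

    The 10% waste factor is a placeholder allowance for starter rows, hips and
    ridges, not a figure from the article.
    """
    slope_factor = math.sqrt(1.0 + (rise / run) ** 2)   # plan area -> sloped surface area
    return footprint_sqft * slope_factor * waste_factor / 100.0

print(round(slope_percent(2), 1))         # 16.7 -> the "2 in 12 (17% slope)" code minimum
print(round(roof_squares(1500, 6), 1))    # ~18.4 squares for a hypothetical 1500 sq ft footprint at 6-in-12
```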
Maintenance Cycles of wet and dry environmental conditions, as well as organic growths such as algae and foliose lichen, and woody debris which remains on the shingles, will cause premature deterioration through both chemical and physical processes. Performed regularly, physical removal of debris, and physical or chemical removal of organic growth (for example, using a copper sulfate, zinc chloride, or other solution carefully applied and thoroughly rinsed), can prolong the life of asphalt roofing materials. Algae and moss growth may be prevented through installation of zinc or copper strips or wire at the ridge and every four to six feet down the roof; black algae growth can be removed with a bleach solution. Disposal and recycling Disposal methods According to a 2007 study conducted for the United States Environmental Protection Agency (EPA), approximately of asphalt shingle waste is generated each year in the United States, with the most common disposal method being landfilling. Waste asphalt shingles, however, can be recycled. Recycling Reclaimed asphalt shingles (RAS) can be broken down and incorporated into asphalt concrete mixtures, which are used to form pavements and road surfaces. RAS are an attractive component in recycled asphalt mixes, primarily due to their relatively high content of asphalt cement, which acts as the binding element in asphalt concrete. There are two forms of RAS: post-manufacturer shingles that are reclaimed from factory waste, and post-consumer shingles that are reclaimed at the end of their service life (also referred to as “tear-offs”). The majority of asphalt shingle waste is post-consumer. Post-consumer RAS have fewer appealing properties for recycling, primarily because the asphalt cement component in shingles naturally hardens during its service life, resulting in higher stiffness, melting point, and susceptibility to fatigue cracking. Post-consumer RAS also require additional processing, such as the removal of nails and other metal waste through the use of a magnetic sieve. Recycled asphalt mixtures may contain post-manufacture and/or post-consumer RAS, provided that the quality of the asphalt cement binder is assessed and accounted for. Aged binders are typically combined with soft virgin asphalt binder and/or rejuvenating additives to produce a binder that is workable and resistant to fatigue cracking. Standard practice for assessing the binder quality in RAS and blending it with virgin binder has been established by the American Association of State Highway and Transportation Officials (AASHTO). When RAS binder is combined with low-grade virgin binder, it has been demonstrated to provide some beneficial properties, such as increased resistance to rutting. In 2019, an estimated 1.1 million tons of RAS were accepted by asphalt plants. Of those accepted, approximately 423 thousand tons were pre-processed, 334 thousand tons were unprocessed post-manufacturer shingles, and 277 thousand tons were unprocessed post-consumer shingles. Health and safety concerns The use of RAS in recycled asphalt mixes is entirely prohibited in 10 states, and the majority of states that allow the use of RAS restrict it to certain sectors and pavement types. The primary reason restrictions on the use of RAS exist is the rare presence of asbestos in asphalt shingles manufactured before the early 1980s. Although the lifespan of a typical asphalt shingled roof is approximately 25 years, concerns remain due to the practice of layering newly installed shingles on top of old ones.
In addition to shingles, asbestos has also been found in felt paper, roll roofing, roof paints/coatings, caulking, and mastic, all of which may be present in the post-consumer shingle waste accepted by asphalt plants. Still, testing has demonstrated the percentage of asbestos-containing post-consumer shingles to be low. A 2007 survey of over 27,000 samples tested at 9 different facilities detected asbestos in less than 1.6% of samples. The National Asphalt Pavement Association continues to recommend that all post-consumer RAS be inspected for asbestos and that all recycling operations have an asbestos management plan in place. Asphalt also naturally contains polycyclic aromatic hydrocarbons (PAHs) which may leach out of RAS stockpiles or be emitted when RAS are heated. Some PAHs are carcinogenic and may put workers at risk. The recycling of RAS may lead to PAH emissions; however, there is no evidence to show that PAH emissions are lower when virgin asphalt is used in place of RAS. See also Metal roof Rubber shingle roof References External links Canadian Asphalt Manufacturers Association technical bulletins Asphalt Roofing Manufacturers Association Residential/Steep Slope Technical Bulletins National Roofing Contractors Association information on asphalt shingles Abstract on ASTM E108 Standard Test Methods for Fire Tests of Roof Coverings UL Preamble to Guide TFWZ on Prepared Roof-covering Materials Roofs Roofing materials Building materials American inventions
Asphalt shingle
[ "Physics", "Technology", "Engineering" ]
3,759
[ "Structural engineering", "Building engineering", "Architecture", "Structural system", "Construction", "Materials", "Roofs", "Matter", "Building materials" ]
5,243,299
https://en.wikipedia.org/wiki/Batrachochytrium%20dendrobatidis
Batrachochytrium dendrobatidis, also known as Bd or the amphibian chytrid fungus, is a fungus that causes the disease chytridiomycosis in amphibians. Since its discovery in 1998 by Lee Berger and species description in 1999 by Joyce E. Longcore, the disease has devastated amphibian populations around the world, in a global decline towards multiple extinctions, part of the Holocene extinction. A recently described second species, B. salamandrivorans, also causes chytridiomycosis and death in salamanders. The fungal pathogens that cause the disease chytridiomycosis are known to damage the skin of frogs, toads, and other amphibians, disrupting their balance of water and salt and eventually causing heart failure, according to a report in Nature. Some amphibian species appear to have an innate capacity to withstand chytridiomycosis infection due to symbiosis with Janthinobacterium lividum. Even within species that generally succumb, some populations survive, possibly demonstrating that these traits or alleles of species are being subjected to evolutionary selection. Etymology The generic name is derived from the Greek words batrachos (frog) and chytra (earthen pot), while the specific epithet is derived from the genus of frogs from which the original confirmation of pathogenicity was made (Dendrobates); dendrobatidis is from the Greek dendron, "tree", and bates, "one who climbs", referring to that genus of poison dart frogs. Systematics Batrachochytrium dendrobatidis was until recently considered the single species of the genus Batrachochytrium. The initial classification of the pathogen as a chytrid was based on zoospore ultrastructure. DNA analysis of the SSU-rDNA has corroborated this view, with the closest match to Chytridium confervae. A second species of Batrachochytrium was discovered in 2013: B. salamandrivorans, which mainly affects salamanders and also causes chytridiomycosis. B. salamandrivorans differs from B. dendrobatidis primarily in the formation of germ tubes in vitro, the formation of colonial thalli with multiple sporangia in vivo, and a lower thermal preference. Morphology B. dendrobatidis infects the keratinized skin of amphibians. The fungus in the epidermis has a thallus bearing a network of rhizoids and smooth-walled, roughly spherical, inoperculate (without an operculum) sporangia. Each sporangium produces a single tube to discharge spores. Zoospore structure Zoospores of B. dendrobatidis, which are typically 3–5 μm in size, have an elongate–ovoid body with a single, posterior flagellum (19-20 μm long), and possess a core area of ribosomes often with membrane-bound spheres of ribosomes within the main ribosomal mass. A small spur has been observed, located at the posterior of the cell body, adjacent to the flagellum, but this may be an artifact in the formalin-fixed specimens. The core area of ribosomes is surrounded by a single cisterna of endoplasmic reticulum, two to three mitochondria, and an extensive microbody–lipid globule complex. The microbodies closely appose and almost surround four to six lipid globules (three anterior and one to three laterally), some of which appear bound by a cisterna. Some zoospores appear to contain more lipid globules (this may have been a result of a plane-of-sectioning effect, because the globules were often lobed in the zoospores examined). A rumposome has not been observed. Flagellum structure A nonfunctioning centriole lies adjacent to the kinetosome. 
Nine interconnected props attach the kinetosome to the plasmalemma, and a terminal plate is present in the transitional zone. An inner ring-like structure attached to the tubules of the flagellar doublets within the transitional zone has been observed in transverse section. No roots associated with the kinetosome have been observed. In many zoospores, the nucleus lies partially within the aggregation of ribosomes and is invariably situated laterally. Small vacuoles and a Golgi body with stacked cisternae occur within the cytoplasm outside the ribosomal area. Mitochondria, which often contain a small number of ribosomes, are densely staining with discoidal cristae. Life cycle B. dendrobatidis has two primary life stages: a sessile, reproductive zoosporangium and a motile, uniflagellated zoospore released from the zoosporangium. The zoospores are known to be active only for a short period of time, and can travel short distances of one to two centimeters. However, the zoospores are capable of chemotaxis, and can move towards a variety of molecules that are present on the amphibian surface, such as sugars, proteins and amino acids. B. dendrobatidis also contains a variety of proteolytic enzymes and esterases that help it digest amphibian cells and use amphibian skin as a nutrient source. Once the zoospore reaches its host, it forms a cyst underneath the surface of the skin, and initiates the reproductive portion of its life cycle. The encysted zoospores develop into zoosporangia, which may produce more zoospores that can reinfect the host, or be released into the surrounding aquatic environment. Amphibians infected with these zoospores have been shown to die from cardiac arrest. Besides amphibians, B. dendrobatidis also infects crayfish (Procambarus alleni, P. clarkii, Orconectes virilis, and O. immunis) but not mosquitofish (Gambusia holbrooki). Physiology B. dendrobatidis can grow within a wide temperature range (4-25 °C), with optimal temperatures being between 17 °C and 25 °C. The wide temperature range for growth, including the ability to survive at 4 °C, gives the fungus the ability to overwinter in its hosts, even where temperatures in the aquatic environments are low. The species does not grow well above temperatures of 25 °C, and growth is halted above 28 °C. Infected red-eyed treefrogs (Litoria chloris) recovered from their infections when incubated at a temperature of 37 °C. Varying forms B. dendrobatidis has occasionally been found in forms distinct from its traditional zoospore and sporangia stages. For example, before the 2003 European heat wave that decimated populations of the water frog Rana lessonae through chytridiomycosis, the fungus existed on the amphibians as spherical, unicellular organisms, confined to minute patches (80-120 μm across). These organisms, unknown at the time, were subsequently identified as B. dendrobatidis. Characteristics of the organisms were suggestive of encysted zoospores; they may have represented a resting spore, a saprobe, or a non-pathogenic parasitic form of the fungus. Habitat and relationship to amphibians The fungus grows on amphibian skin and produces aquatic zoospores. It is widespread and ranges from lowland forests to cold mountain tops. It is sometimes a non-lethal parasite and possibly a saprophyte. The fungus is associated with host mortality in highlands or during winter, and becomes more pathogenic at lower temperatures. Geographic distribution It has been suggested that B. 
dendrobatidis originated in Africa or Asia and subsequently spread to other parts of the world by trade in African clawed frogs (Xenopus laevis). In this study, 697 archived specimens of three species of Xenopus, previously collected from 1879 to 1999 in southern Africa, were examined. The earliest case of chytridiomycosis was found in a X. laevis specimen from 1938. The study also suggests that chytridiomycosis had been a stable infection in southern Africa for 23 years before any infected specimen was found outside of Africa. More recent evidence indicates that the species originated on the Korean Peninsula and was spread by the trade in frogs. American bullfrogs (Lithobates catesbeianus), which are widely distributed, are also thought to be carriers of the disease due to their inherent low susceptibility to B. dendrobatidis infection. The bullfrog often escapes captivity and can establish feral populations where it may introduce the disease to new areas. It has also been shown that B. dendrobatidis can survive and grow in moist soil and on bird feathers, suggesting that B. dendrobatidis may also be spread in the environment by birds and by the transport of soil. Infections have been linked to mass mortalities of amphibians in North America, South America, Central America, Europe and Australia. B. dendrobatidis has been implicated in the extinction of the sharp-snouted day frog (Taudactylus acutirostris) in Australia. A wide variety of amphibian hosts have been identified as being susceptible to infection by B. dendrobatidis, including wood frogs (Lithobates sylvatica), the mountain yellow-legged frog (Lithobates muscosa), the southern two-lined salamander (Eurycea cirrigera), San Marcos Salamander (Eurycea nana), Texas Salamander (Eurycea neotenes), Blanco River Springs Salamander (Eurycea pterophila), Barton Springs Salamander (Eurycea sosorum), Jollyville Plateau Salamander (Eurycea tonkawae), Ambystoma jeffersonianum, the western chorus frog (Pseudacris triseriata), the southern cricket frog (Acris gryllus), the eastern spadefoot toad (Scaphiopus holbrooki), the southern leopard frog (Lithobates sphenocephala), the Rio Grande Leopard frog (Lithobates berlandieri), the Sardinian newt (Euproctus platycephalus), and an endemic frog species in Turkey, the Beysehir frog (Pelophylax caralitanus). Southeast Asia While most studies concerning B. dendrobatidis have been performed in various locations across the world, the detection of the fungus in Southeast Asia is a relatively recent development. The exact process through which the fungus was introduced to Asia is not known; however, as mentioned above, it has been suggested that transportation of asymptomatic carrier species (e.g., Lithobates catesbeianus, the American bullfrog) may be a key component in the dissemination of the fungus, at least in China. Initial studies demonstrated the presence of the fungus on island states/countries such as Hong Kong, Indonesia, Taiwan, and Japan. Soon thereafter, mainland Asian countries such as Thailand, South Korea, and China reported incidents of B. dendrobatidis among their amphibian populations. Much effort has been put into classifying herpetofauna in countries like Cambodia, Vietnam, and Laos, where new species of frogs, toads, and other amphibians and reptiles are being discovered on a frequent basis. Scientists are simultaneously swabbing herpetofauna to determine whether these newly discovered animals carry traces of the fungus. In Cambodia, a study showed B. 
dendrobatidis to be prevalent throughout the country in areas near Phnom Penh (in a village <5 km), Sihanoukville (frogs collected from the local market), Kratie (frogs collected from streets around the town), and Siem Reap (frogs collected from a national preserve: Angkor Centre for Conservation of Biodiversity). Another study in Cambodia examined the potential anthropogenic impact on the dissemination of B. dendrobatidis among local amphibian populations in three areas with different levels of human interaction: low (an isolated forest atop a mountain people rarely visit), medium (a forest road ~15 km from a village that is used at least once a week), and high (a small village where humans interact with their environment on a daily basis). Using quantitative PCR, evidence of B. dendrobatidis was found in all three sites, with the highest percentage of amphibians positive for the fungus from the forest road (medium impact; 50%), followed by the mountain forest (low impact; 44%) and village (high impact; 36%). Human influence most likely explains detection of the fungus in the medium- and high-impact areas; however, it does not adequately explain why even isolated amphibians were positive for B. dendrobatidis. This may go unanswered until more research is performed on transmission of the fungus across landscapes. However, recent evidence suggests mosquitoes may act as a vector that helps spread B. dendrobatidis. Another study in French Guiana reports widespread infection, with 8 of 11 sites sampled being positive for B. dendrobatidis infection for at least one species. This study suggests that Bd (Batrachochytrium dendrobatidis) is more widespread than previously thought. Effect on amphibians Worldwide, amphibian populations have been in steady decline due to an increase in the disease chytridiomycosis, caused by the Bd fungus. Bd can be introduced to an amphibian primarily through water exposure, colonizing the digits and ventral surfaces of the animal's body most heavily and spreading throughout the body as the animal matures. Potential effects of this pathogen are hyperkeratosis, epidermal hyperplasia, ulcers, and, most prominently, disruption of osmotic regulation, often leading to cardiac arrest. The death toll on amphibians is dependent on a variety of factors but most crucially on the intensity of infection. Certain frogs adopt skin sloughing as a defense mechanism against B. dendrobatidis; however, this is not always effective, as mortality fluctuates between species. For example, the Fletcher frog, despite practising skin sloughing, suffers from a particularly high mortality rate when infected with the disease compared to similar species like Lim. peronii and Lim. tasmaniensis. Some amphibian species have been found to adapt to infection after an initial die-off, with survival rates of infected and non-infected individuals being equal. According to a study by the Australian National University, the Bd fungus has caused the decline of 501 amphibian species—about 6.5 percent of the world's known total. Of these, 90 species have been entirely wiped out and another 124 species have declined by more than 90 percent, and it is doubtful that the affected species will recover to healthy populations. However, these conclusions were criticized by later studies, which proposed that Bd was not the primary driver of amphibian declines as suggested by the previous study. One amphibian particularly affected by Bd is Lithobates clamitans. 
Bd kills this frog by interfering with external water exchange, causing an imbalance in ion exchange that leads to heart failure. Immunity Some amphibian species are immune to Bd or possess biological protections against the fungus. One such species is the alpine salamander (Salamandra atra), which includes several subspecies that share a common trait: toxicity. A 2012 study demonstrated that none of the alpine salamanders in the area were infected with Bd, despite the fungus' prevalence. Alpine salamanders produce alkaloid compounds or toxic peptides that may protect them against microbial infections. See also Pathogenic fungi Decline in amphibian populations Ranavirus References Further reading External links Chytrid Fungi Online at University of Alabama Aquatic fungi Chytridiomycota Fungi described in 1999 Parasitic fungi Fungus species
Batrachochytrium dendrobatidis
[ "Biology" ]
3,417
[ "Fungi", "Fungus species" ]
21,525,057
https://en.wikipedia.org/wiki/Racetrack%20problem
A racetrack problem is a specific type of race condition: a flaw in a system or process whereby the output or result of the process is unexpectedly and critically dependent on the sequence or timing of other events that run in a circular pattern. It is distinguished from a generic race condition by the circular nature of the dependency. The term originates with the idea of two signals racing each other in a circular motion to influence the output first. Racetrack problems can occur in electronic systems, especially logic circuits, and in computer software, especially multithreaded or distributed programs. See also Concurrency control Deadlock Synchronization Therac-25 External links Starvation and Critical Race Analyzers for Ada Paper "Algorithms for the Optimal State Assignment of Asynchronous State Machines" by Robert M. Fuhrer, Bill Lin and Steven M. Nowick Paper "A Novel Framework for Solving the State Assignment Problem for Event-Based Specifications" by Luciano Lavagno, Cho W. Moon, Robert K. Brayton and Alberto Sangiovanni-Vincentelli Article "Secure programmer: Prevent race conditions—Resource contention can be used against you" by David A. Wheeler Chapter "Avoid Race Conditions" (Secure Programming for Linux and Unix HOWTO) Race conditions, security, and immutability in Java, with sample source code and comparison to C code, by Chiral Software Computer security exploits Concurrency (computer science) Software bugs Logic gates Logic in computer science
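The following minimal Python sketch (not part of the original article) illustrates the generic race condition of which the racetrack problem is a special case: two threads perform an unsynchronized read-modify-write on a shared counter, so the final value depends on thread scheduling. It does not reproduce the specifically circular "racetrack" dependency; the names and iteration counts are purely illustrative.

import threading

counter = 0  # shared state updated by both threads

def unsafe_worker(iterations):
    # Read-modify-write without a lock: another thread can update `counter`
    # between the read and the write, so some increments are lost.
    global counter
    for _ in range(iterations):
        current = counter
        counter = current + 1

lock = threading.Lock()

def safe_worker(iterations):
    # Serializing the read-modify-write with a lock removes the race.
    global counter
    for _ in range(iterations):
        with lock:
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without a lock:", run(unsafe_worker), "(usually less than 200000)")
print("with a lock:   ", run(safe_worker), "(always 200000)")

In asynchronous hardware the analogous remedy is to remove the critical race, for example through a race-free state assignment of the kind discussed in the state-assignment papers listed under External links, or by moving to a clocked (synchronous) design.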
Racetrack problem
[ "Mathematics", "Technology" ]
301
[ "Mathematical logic", "Logic in computer science", "Computer security exploits" ]
21,526,661
https://en.wikipedia.org/wiki/Test%20validity
Test validity is the extent to which a test (such as a chemical, physical, or scholastic test) accurately measures what it is supposed to measure. In the fields of psychological testing and educational testing, "validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests". Although classical models divided the concept into various "validities" (such as content validity, criterion validity, and construct validity), the currently dominant view is that validity is a single unitary construct. Validity is generally considered the most important issue in psychological and educational testing because it concerns the meaning placed on test results. Though many textbooks present validity as a static construct, various models of validity have evolved since the first published recommendations for constructing psychological and educational tests. These models can be categorized into two primary groups: classical models, which include several types of validity, and modern models, which present validity as a single construct. The modern models reorganize classical "validities" into either "aspects" of validity or "types" of validity-supporting evidence. Test validity is often confused with reliability, which refers to the consistency of a measure. Adequate reliability is a prerequisite of validity, but a high reliability does not in any way guarantee that a measure is valid. Historical background Although psychologists and educators were aware of several facets of validity before World War II, their methods for establishing validity were commonly restricted to correlations of test scores with some known criterion. Under the direction of Lee Cronbach, the 1954 Technical Recommendations for Psychological Tests and Diagnostic Techniques attempted to clarify and broaden the scope of validity by dividing it into four parts: (a) concurrent validity, (b) predictive validity, (c) content validity, and (d) construct validity. Cronbach and Meehl's subsequent publication grouped predictive and concurrent validity into a "criterion-orientation", which eventually became criterion validity. Over the next four decades, many theorists, including Cronbach himself, voiced their dissatisfaction with this three-in-one model of validity. Their arguments culminated in Samuel Messick's 1995 article that described validity as a single construct, composed of six "aspects". In his view, various inferences made from test scores may require different types of evidence, but not different validities. The 1999 Standards for Educational and Psychological Testing largely codified Messick's model. They describe five types of validity-supporting evidence that incorporate each of Messick's aspects, and make no mention of the classical models’ content, criterion, and construct validities. Validation process According to the 1999 Standards, validation is the process of gathering evidence to provide "a sound scientific basis" for interpreting the scores as proposed by the test developer and/or the test user. Validation therefore begins with a framework that defines the scope and aspects (in the case of multi-dimensional scales) of the proposed interpretation. The framework also includes a rational justification linking the interpretation to the test in question. Validity researchers then list a series of propositions that must be met if the interpretation is to be valid. 
Or, conversely, they may compile a list of issues that may threaten the validity of the interpretations. In either case, the researchers proceed by gathering evidence – be it original empirical research, meta-analysis or review of existing literature, or logical analysis of the issues – to support or to question the interpretation's propositions (or the threats to the interpretation's validity). Emphasis is placed on quality, rather than quantity, of the evidence. A single interpretation of any test result may require several propositions to be true (or may be questioned by any one of a set of threats to its validity). Strong evidence in support of a single proposition does not lessen the requirement to support the other propositions. Evidence to support (or question) the validity of an interpretation can be categorized into one of five categories: (a) evidence based on test content, (b) evidence based on response processes, (c) evidence based on internal structure, (d) evidence based on relations to other variables, and (e) evidence based on consequences of testing. Techniques to gather each type of evidence should only be employed when they yield information that would support or question the propositions required for the interpretation in question. Each piece of evidence is finally integrated into a validity argument. The argument may call for a revision to the test, its administration protocol, or the theoretical constructs underlying the interpretations. If the test, and/or the interpretations of the test's results are revised in any way, a new validation process must gather evidence to support the new version. See also Validity scale References Measurement
Test validity
[ "Physics", "Mathematics" ]
930
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
21,529,260
https://en.wikipedia.org/wiki/Eigencolloid
Eigencolloid is a term derived from the German language (eigen: own) and used to designate colloids made of pure phases, also known as intrinsic colloids. Eigencolloids are metal oxyhydroxide colloids on the nanometer scale formed by aggregation of hydrolyzed metal ions. They are characterized by a very large specific surface area (up to 2000 m2/g) and a high reactivity. They hold promise for the development of new industrial catalysts. Many such colloids are formed by the hydrolysis of heavy metal cations or radionuclides, such as Tc(OH)4, Th(OH)4, U(OH)4, Pu(OH)4, or Am(OH)3. The term 'eigencolloid', or 'intrinsic colloid', is often used in distinction to a pseudocolloid. A pseudocolloid is one in which elements (colloids or cations) become adsorbed onto pre-existing groundwater colloids due to their affinity to these colloids or to the hydrophobic properties of the dispersing medium. In environmental chemistry, enhanced migration of heavy metal and radioactive metal contaminants in ground and surface waters is often facilitated by eigencolloid formation. Actinide eigencolloids Eigencolloid formation occurs readily in groundwater where radioactive waste is stored. Colloid-facilitated transport is a mechanism responsible for the mobilisation of radionuclides into the wider environment, causing radioactive contamination. This is a public health concern since elevated radioactivity in the environment is mutagenic and can lead to cancer. Eigencolloids have been implicated in the long-range transport of plutonium on the Nevada Test Site. See also Cations hydrolysis Colloid-facilitated transport References Breynaert E. (2008). PhD Thesis. Catholic University of Leuven. Formation, stability and applications of eigencolloids of technetium. Actinides Colloids Colloidal chemistry
Eigencolloid
[ "Physics", "Chemistry", "Materials_science" ]
424
[ "Colloidal chemistry", "Surface science", "Colloids", "Chemical mixtures", "Condensed matter physics" ]
21,530,463
https://en.wikipedia.org/wiki/Model%20lipid%20bilayer
A model lipid bilayer is any bilayer assembled in vitro, as opposed to the bilayer of natural cell membranes or covering various sub-cellular structures like the nucleus. They are used to study the fundamental properties of biological membranes in a simplified and well-controlled environment, and increasingly in bottom-up synthetic biology for the construction of artificial cells. A model bilayer can be made with either synthetic or natural lipids. The simplest model systems contain only a single pure synthetic lipid. More physiologically relevant model bilayers can be made with mixtures of several synthetic or natural lipids. There are many different types of model bilayers, each having experimental advantages and disadvantages. The first system developed was the black lipid membrane or “painted” bilayer, which allows simple electrical characterization of bilayers but is short-lived and can be difficult to work with. Supported bilayers are anchored to a solid substrate, increasing stability and allowing the use of characterization tools not possible in bulk solution. These advantages come at the cost of unwanted substrate interactions which can denature membrane proteins. Black lipid membranes (BLM) The earliest model bilayer system developed was the “painted” bilayer, also known as a “black lipid membrane.” The term “painted” refers to the process by which these bilayers are made. First, a small aperture is created in a thin layer of a hydrophobic material such as Teflon. Typically the diameter of this hole is a few tens of micrometers up to hundreds of micrometers. To form a BLM, the area around the aperture is first "pre-painted" with a solution of lipids dissolved in a hydrophobic solvent by applying this solution across the aperture with a brush, syringe, or glass applicator. The solvent used must have a very high partition coefficient and must be relatively viscous to prevent immediate rupture. The most common solvent used is a mixture of decane and squalene. After allowing the aperture to dry, salt solution (aqueous phase) is added to both sides of the chamber. The aperture is then "painted" with a lipid solution (generally the same solution that was used for pre-painting). A lipid monolayer spontaneously forms at the interface between the organic and aqueous phases on either side of the lipid/solvent droplet. Because the walls of the aperture are hydrophobic the lipid/solvent solution wets this interface, thinning the droplet in the center. Once the two sides of the droplet come close enough together, the lipid monolayers fuse, rapidly excluding the small remaining volume of solution. At this point a bilayer is formed in the center of the aperture, but a significant annulus of solvent remains at the perimeter. This annulus is required to maintain stability by acting as a bridge between the ~5 nm bilayer and the tens of micrometers thick sheet in which the aperture is made. The term “black” bilayer refers to the fact that they are dark in reflected light because the thickness of the membrane is only a few nanometers, so light reflecting off the back face destructively interferes with light reflecting off the front face. Indeed, this was one of the first clues that this technique produced a membrane of molecular-scale thickness. Black lipid membranes are also well suited to electrical characterization because the two chambers separated by the bilayer are both accessible, allowing simple placement of large electrodes. 
For this reason, electrical characterization is one of the most important methods used in conjunction with painted lipid bilayers. Simple measurements indicate when a bilayer forms and when it breaks, as an intact bilayer has a large resistance (>GΩ) and a large capacitance (~2 μF/cm2). More advanced electrical characterization has been particularly important in the study of voltage gated ion channels. Membrane proteins such as ion channels typically cannot be incorporated directly into the painted bilayer during formation because immersion in an organic solvent would denature the protein. Instead, the protein is solubilized with a detergent and added to the aqueous solution after the bilayer is formed. The detergent coating allows these proteins to spontaneously insert into the bilayer over a period of minutes. Additionally, initial experiments have been performed which combine electrophysiological and structural investigations of black lipid membranes. In another variation of the BLM technique, termed the bilayer punch, a glass pipet (inner diameter ~10-40 μm) is used as the electrode on one side of the bilayer in order to isolate a small patch of membrane. This modification of the patch clamp technique enables low noise recording, even at high potentials (up to 600 mV), at the expense of additional preparation time. The main problems associated with painted bilayers are residual solvent and limited lifetime. Some researchers believe that pockets of solvent trapped between the two bilayer leaflets can disrupt normal protein function. To overcome this limitation, Montal and Mueller developed a modified deposition technique that eliminates the use of a heavy non-volatile solvent. In this method, the aperture starts out above the water surface, completely separating the two fluid chambers. On the surface of each chamber, a monolayer is formed by applying lipids in a volatile solvent such as chloroform and waiting for the solvent to evaporate. The aperture is then lowered through the air-water interface and the two monolayers from the separate chambers are folded down against each other, forming a bilayer across the aperture. The stability issue has proven more difficult to solve. Typically, a black lipid membrane will survive for less than an hour, precluding long-term experiments. This lifetime can be extended by precisely structuring the supporting aperture, chemically crosslinking the lipids or gelling the surrounding solution to mechanically support the bilayer. Work is ongoing in this area and lifetimes of several hours will become feasible. Supported lipid bilayers (SLB) Unlike a vesicle or a cell membrane in which the lipid bilayer is rolled into an enclosed shell, a supported bilayer is a planar structure sitting on a solid support. Because of this, only the upper face of the bilayer is exposed to free solution. This layout has advantages and drawbacks related to the study of lipid bilayers. One of the greatest advantages of the supported bilayer is its stability. SLBs will remain largely intact even when subject to high flow rates or vibration and, unlike black lipid membranes, the presence of holes will not destroy the entire bilayer. Because of this stability, experiments lasting weeks and even months are possible with supported bilayers while BLM experiments are usually limited to hours. 
Another advantage of the supported bilayer is that, because it is on a flat hard surface, it is amenable to a number of characterization tools which would be impossible or would offer lower resolution if performed on a freely floating sample. One of the clearest examples of this advantage is the use of mechanical probing techniques which require a direct physical interaction with the sample. Atomic force microscopy (AFM) has been used to image lipid phase separation, formation of transmembrane nanopores followed by single protein molecule adsorption, and protein assembly with sub-nm accuracy without the need for a labeling dye. More recently, AFM has also been used to directly probe the mechanical properties of single bilayers and to perform force spectroscopy on individual membrane proteins. These studies would be difficult or impossible without the use of supported bilayers since the surface of a cell or vesicle is relatively soft and would drift and fluctuate over time. Another example of a physical probe is the use of the quartz crystal microbalance (QCM) to study binding kinetics at the bilayer surface. Dual polarisation interferometry is a high resolution optical tool for characterising the order and disruption in lipid bilayers during interactions or phase transitions providing complementary data to QCM measurements. Many modern fluorescence microscopy techniques also require a rigidly-supported planar surface. Evanescent field methods such as total internal reflection fluorescence microscopy (TIRF) and surface plasmon resonance (SPR) can offer extremely sensitive measurement of analyte binding and bilayer optical properties but can only function when the sample is supported on specialized optically functional materials. Another class of methods applicable only to supported bilayers is those based on optical interference such as fluorescence interference contrast microscopy (FLIC) and reflection interference contrast microscopy (RICM) or interferometric scattering microscopy (iSCAT). When the bilayer is supported on top of a reflective surface, variations in intensity due to destructive interference from this interface can be used to calculate with angstrom accuracy the position of fluorophores within the bilayer. Both evanescent and interference techniques offer sub-wavelength resolution in only one dimension (z, or vertical). In many cases, this resolution is all that is needed. After all, bilayers are very small only in one dimension. Laterally, a bilayer can extend for many micrometres or even millimeters. But certain phenomena like dynamic phase rearrangement do occur in bilayers on a lateral sub-micrometre length scale. A promising approach to studying these structures is near field scanning optical microscopy (NSOM). Like AFM, NSOM relies on the scanning of a micromachined tip to give a highly localized signal. But unlike AFM, NSOM uses an optical rather than physical interaction with the sample, potentially perturbing delicate structures to a lesser extent. Another important capability of supported bilayers is the ability to pattern the surface to produce multiple isolated regions on the same substrate. This phenomenon was first demonstrated using scratches or metallic “corrals” to prevent mixing between adjacent regions while still allowing free diffusion within any one region. 
Later work extended this concept by integrating microfluidics to demonstrate that stable composition gradients could be formed in bilayers, potentially allowing massively parallel studies of phase segregation, molecular binding and cellular response to artificial lipid membranes. Creative utilization of the corral concept has also allowed studies of the dynamic reorganization of membrane proteins at the synaptic interface. One of the primary limitations of supported bilayers is the possibility of unwanted interactions with the substrate. Although supported bilayers generally do not directly touch the substrate surface, they are separated by only a very thin water gap. The size and nature of this gap depends on the substrate material and lipid species but is generally about 1 nm for zwitterionic lipids supported on silica, the most common experimental system. Because this layer is so thin, there is extensive hydrodynamic coupling between the bilayer and the substrate, resulting in a lower diffusion coefficient in supported bilayers than for free bilayers of the same composition. A certain percentage of the supported bilayer will also be completely immobile, although the exact nature of and reason for these “pinned” sites is still uncertain. For high-quality liquid-phase supported bilayers, the immobile fraction is typically around 1-5%. To quantify the diffusion coefficient and mobile fraction, researchers studying supported bilayers will often report FRAP data. Unwanted substrate interactions are a much greater problem when incorporating integral membrane proteins, particularly those with large domains sticking out beyond the core of the bilayer. Because the gap between bilayer and substrate is so thin, these proteins will often become denatured on the substrate surface and therefore lose all functionality. One approach to circumvent this problem is the use of polymer-tethered bilayers. In these systems the bilayer is supported on a loose network of hydrated polymers or hydrogel which acts as a spacer and theoretically prevents denaturing substrate interactions. In practice, some percentage of the proteins will still lose mobility and functionality, probably due to interactions with the polymer/lipid anchors. Research in this area is ongoing. Tethered bilayer lipid membranes (t-BLM) The use of a tethered bilayer lipid membrane (t-BLM) further increases the stability of supported membranes by chemically anchoring the lipids to the solid substrate. Gold can be used as a substrate because of its inert chemistry, with thiolipids providing covalent binding to the gold. Thiolipids are composed of lipid derivatives, extended at their polar head-groups by hydrophilic spacers which terminate in a thiol or disulphide group that forms a covalent bond with gold, forming self-assembled monolayers (SAMs). The limitation of the intra-membrane mobility of supported lipid bilayers can be overcome by introducing half-membrane-spanning tether lipids with benzyl disulphide (DPL) and synthetic archaea analogue full-membrane-spanning lipids with phytanoyl chains to stabilize the structure and polyethylene glycol units as a hydrophilic spacer. Bilayer formation is achieved by exposure of the lipid-coated gold substrate to outer-layer lipids either in an ethanol solution or in liposomes. The advantage of this approach is that, because of the hydrophilic spacer of around 4 nm, the interaction with the substrate is minimal and the extra space allows the introduction of protein ion channels into the bilayer. 
Additionally, the spacer layer creates an ionic reservoir that readily enables AC electrical impedance measurements across the bilayer. Vesicles A vesicle is a lipid bilayer rolled up into a spherical shell, enclosing a small amount of water and separating it from the water outside the vesicle. Because of this fundamental similarity to the cell membrane, vesicles have been used extensively to study the properties of lipid bilayers. Another reason vesicles have been used so frequently is that they are relatively easy to make. If a sample of dehydrated lipid is exposed to water, it will spontaneously form vesicles. These initial vesicles are typically multilamellar (many-walled) and are of a wide range of sizes from tens of nanometers to several micrometres. Methods such as sonication or extrusion through a membrane are needed to break these initial vesicles into smaller, single-walled vesicles of uniform diameter known as small unilamellar vesicles (SUVs). SUVs typically have diameters between 50 and 200 nm. Alternatively, rather than synthesizing vesicles, it is possible to simply isolate them from cell cultures or tissue samples. Vesicles are used to transport lipids, proteins and many other molecules within the cell as well as into or out of the cell. These naturally isolated vesicles are composed of a complex mixture of different lipids and proteins so, although they offer greater realism for studying specific biological phenomena, simple artificial vesicles are preferred for studies of fundamental lipid properties. Since artificial SUVs can be made in large quantities, they are suitable for bulk material studies such as x-ray diffraction to determine lattice spacing and differential scanning calorimetry to determine phase transitions. Dual polarisation interferometry can measure unilamellar and multilamellar structures and insertion into and disruption of the vesicles in a label-free assay format. Vesicles can also be labeled with fluorescent dyes to allow sensitive FRET-based fusion assays. In spite of the fluorescent labeling, it is often difficult to perform detailed imaging on SUVs simply because they are so small. To combat this problem, researchers use giant unilamellar vesicles (GUVs). GUVs are large enough (1-200 μm) to be studied using traditional fluorescence microscopy and are within the same size range as most biological cells. Thus, they are used as mimics of cell membranes for in vitro studies in molecular and cell biology. Many of the studies of lipid rafts in artificial lipid systems have been performed with GUVs for this reason. Compared to supported bilayers, GUVs present a more “natural” environment since there is no rigid surface that might induce defects, affect the properties of the membrane or denature proteins. Therefore, GUVs are frequently used to study membrane-remodeling and other protein-membrane interactions in vitro. A variety of methods exist to encapsulate proteins or other biological reactants within such vesicles, making GUVs an ideal system for the in vitro recreation (and investigation) of cell functions in cell-like model membrane environments. These include microfluidic methods, which allow high-yield production of vesicles with consistent sizes. Droplet Interface Bilayers Droplet Interface Bilayers (DIBs) are phospholipid-encased droplets that form bilayers when they are put into contact. The droplets are surrounded by oil, and phospholipids are dispersed in either the water or the oil. 
As a result, the phospholipids spontaneously form a monolayer at each of the oil-water interfaces. DIBs can be used to create tissue-like material, to form asymmetric bilayers, to reconstitute proteins and protein channels, or to study electrophysiology. Extended DIB networks can be formed either by employing droplet microfluidic devices or by using droplet printers. Micelles, bicelles and nanodiscs Detergent micelles are another class of model membranes that are commonly used to purify and study membrane proteins, although they lack a lipid bilayer. In aqueous solutions, micelles are assemblies of amphipathic molecules with their hydrophilic heads exposed to solvent and their hydrophobic tails in the center. Micelles can solubilize membrane proteins by partially encapsulating them and shielding their hydrophobic surfaces from solvent. Bicelles are a related class of model membrane, typically made of two lipids, one of which forms a lipid bilayer while the other forms an amphipathic, micelle-like assembly shielding the bilayer center from surrounding solvent molecules. Bicelles can be thought of as a segment of bilayer encapsulated and solubilized by a micelle. Bicelles are much smaller than liposomes, and so can be used in experiments such as NMR spectroscopy where the larger vesicles are not an option. Nanodiscs consist of a segment of bilayer encapsulated by an amphipathic protein coat, rather than a lipid or detergent layer. Nanodiscs are more stable than bicelles and micelles at low concentrations, and are very well-defined in size (depending on the type of protein coat, between 10 and 20 nm). Membrane proteins incorporated into and solubilized by nanodiscs can be studied by a wide variety of biophysical techniques. References Membrane biology
Model lipid bilayer
[ "Chemistry" ]
3,885
[ "Membrane biology", "Molecular biology" ]
21,533,163
https://en.wikipedia.org/wiki/Inhibitor%20of%20apoptosis%20domain
The inhibitor of apoptosis domain (also known as the IAP repeat, Baculovirus Inhibitor of apoptosis protein Repeat, or BIR) is a structural motif found in proteins with roles in apoptosis, cytokine production, and chromosome segregation. Proteins containing BIR are known as inhibitor of apoptosis proteins (IAPs), or BIR-containing proteins (BIRPs or BIRCs), and include BIRC1 (NAIP), BIRC2 (cIAP1), BIRC3 (cIAP2), BIRC4 (xIAP), BIRC5 (survivin) and BIRC6. BIR domains belong to the zinc-finger domain family and characteristically have a number of invariant amino acid residues, including three conserved cysteines and one conserved histidine, which coordinate a zinc ion. They are typically composed of four to five alpha helices and a three-stranded beta sheet. External links References Protein structural motifs Protein domains
Inhibitor of apoptosis domain
[ "Biology" ]
210
[ "Protein structural motifs", "Protein domains", "Protein classification" ]