Dataset schema (column: type, observed range):
id: int64, 39 to 79M
url: string, length 32 to 168
text: string, length 7 to 145k
source: string, length 2 to 105
categories: list, length 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, length 0 to 27
6,390,798
https://en.wikipedia.org/wiki/Synthetic%20vaccine
A synthetic vaccine is a vaccine consisting mainly of synthetic peptides, carbohydrates, or antigens. They are usually considered to be safer than vaccines from bacterial cultures. Creating vaccines synthetically can also increase the speed of production, which is especially important in the event of a pandemic. History The world's first synthetic vaccine was created in 1976 from diphtheria toxin by Louis Chedid from the Pasteur Institute and Michael Sela from the Weizmann Institute. In 1986, Manuel Elkin Patarroyo created SPf66, the first version of a synthetic vaccine for malaria. During the H1N1 outbreak in 2009, vaccines only became available in large quantities after the peak of human infections. This was a learning experience for vaccine manufacturers. Novartis Vaccines and Diagnostics, among other companies, developed a synthetic approach that very rapidly generates vaccine viruses from sequence data in order to be able to administer vaccinations early in a pandemic outbreak. Philip Dormitzer, the leader of viral vaccine research at Novartis, says they have "developed a way of chemically synthesizing virus genomes and growing them in tissue culture cells". Phase I data of UB-311, a synthetic peptide vaccine targeting amyloid beta, showed that the drug was able to generate antibodies to specific amyloid beta oligomers and fibrils with no decrease in antibody levels in patients of advanced age. Results from the Phase II trial are expected in the second half of 2018. References External links Article on synthetic Hib vaccine CRISP Thesaurus entry on Synthetic Vaccines Web Health Centre: History of Vaccines Vaccines Synthetic biology
Synthetic vaccine
[ "Engineering", "Biology" ]
347
[ "Synthetic biology", "Biological engineering", "Bioinformatics", "Molecular genetics", "Vaccination", "Vaccines" ]
6,391,934
https://en.wikipedia.org/wiki/Nitroethane
Nitroethane is an organic compound having the chemical formula C2H5NO2. Similar in many regards to nitromethane, nitroethane is an oily liquid at standard temperature and pressure. Pure nitroethane is colorless and has a fruity odor. Preparation Nitroethane is produced industrially by treating propane with nitric acid at 350–450 °C. This exothermic reaction produces four industrially significant nitroalkanes: nitromethane, nitroethane, 1-nitropropane, and 2-nitropropane. The reaction involves free radicals, such as CH3CH2CH2O•, which arise via homolysis of the corresponding nitrite ester. These alkoxy radicals are susceptible to C—C fragmentation reactions, which explains the formation of a mixture of products. Alternatively, nitroethane can be produced by the Victor Meyer reaction of haloethanes such as chloroethane, bromoethane, or iodoethane with silver nitrite in diethyl ether or THF. The Kornblum modification of this reaction uses sodium nitrite in either a dimethyl sulfoxide or dimethylformamide solvent. Uses Via condensations like the Henry reaction, nitroethane converts to several compounds of commercial interest. Condensation with 3,4-dimethoxybenzaldehyde affords the precursor to the antihypertensive drug methyldopa; condensation with unsubstituted benzaldehyde yields phenyl-2-nitropropene, a precursor for amphetamine drugs. Nitroethane condenses with two equivalents of formaldehyde to give, after hydrogenation, 2-amino-2-methyl-1,3-propanediol, which in turn condenses with oleic acid to give an oxazoline, which protonates to give a cationic surfactant. Like some other nitrated organic compounds, nitroethane is also used as a fuel additive. It is a useful solvent for polymers such as polystyrene and is particularly useful for dissolving cyanoacrylate adhesives. In cosmetics applications, it has been used as a component in artificial nail remover and in overhead ceiling sealant sprays. Nitroethane has also been used as a chemical feedstock (precursor ingredient) in laboratories for the synthesis of a wide range of substances and consumer goods. For example, the medicine Pervitin (methamphetamine) was commonly used in the 20th century and was especially popular with troops on both sides during WWII for mood elevation, appetite and sleep suppression, and increased focus and alertness. Nitroalkanes were among the ingredients used in the synthesis of many phenethylamines, including medications such as Pervitin and the racemic compound Benzedrine (amphetamine), which was used as an anorectic medicine for obesity. Toxicity Nitroethane is suspected to cause genetic damage and be harmful to the nervous system. Typical TLV/TWA is 100 ppm. Typical STEL is 150 ppm. Skin contact causes dermatitis in humans. In animal studies, nitroethane exposure was observed to cause lacrimation, dyspnea, pulmonary rales, edema, liver and kidney injury, and narcosis. Children have been poisoned by accidental ingestion of artificial nail remover. The LD50 for rats is reported as 1100 mg/kg. References External links WebBook page for C2H5NO2 CDC - NIOSH Pocket Guide to Chemical Hazards Nitroalkanes Nitro solvents Fuels Rocket fuels Liquid explosives Drag racing
Nitroethane
[ "Chemistry" ]
785
[ "Fuels", "Chemical energy sources" ]
6,393,146
https://en.wikipedia.org/wiki/Deceleration%20parameter
The deceleration parameter q in cosmology is a dimensionless measure of the cosmic acceleration of the expansion of space in a Friedmann–Lemaître–Robertson–Walker universe. It is defined by: q = −ä·a/ȧ², where a is the scale factor of the universe and the dots indicate derivatives by proper time. The expansion of the universe is said to be "accelerating" if ä > 0 (recent measurements suggest it is), and in this case the deceleration parameter will be negative. The minus sign and name "deceleration parameter" are historical; at the time of definition ä was expected to be negative, so a minus sign was inserted in the definition to make q positive in that case. Since the evidence for the accelerating universe in the 1998–2003 era, it is now believed that ä is positive, therefore the present-day value of q is negative (though q was positive in the past, before dark energy became dominant). In general q varies with cosmic time, except in a few special cosmological models; the present-day value is denoted q0. The Friedmann acceleration equation can be written as ä/a = −(4πG/3)·Σ_i ρ_i(1 + 3w_i), where the sum extends over the different components, matter, radiation and dark energy, ρ_i is the equivalent mass density of each component, p_i is its pressure, and w_i = p_i/(ρ_i c²) is the equation of state for each component. The value of w_i is 0 for non-relativistic matter (baryons and dark matter), 1/3 for radiation, and −1 for a cosmological constant; for more general dark energy it may differ from −1, in which case it is denoted w_DE or simply w. Defining the critical density as ρ_c = 3H²/(8πG) and the density parameters Ω_i = ρ_i/ρ_c, substituting in the acceleration equation gives q = (1/2)·Σ_i Ω_i(1 + 3w_i), where the density parameters are at the relevant cosmic epoch. At the present day the radiation term is negligible, and if w = −1 (cosmological constant) this simplifies to q0 = Ωm/2 − ΩΛ, where the density parameters are present-day values; with ΩΛ + Ωm ≈ 1, and ΩΛ = 0.7 and then Ωm = 0.3, this evaluates to q0 ≈ −0.55 for the parameters estimated from the Planck spacecraft data. (Note that the CMB, as a high-redshift measurement, does not directly measure q0; but its value can be inferred by fitting cosmological models to the CMB data, then calculating q0 from the other measured parameters as above). The time derivative of the Hubble parameter can be written in terms of the deceleration parameter: dH/dt = −H²(1 + q). Except in the speculative case of phantom energy (which violates all the energy conditions), all postulated forms of mass-energy yield a deceleration parameter q ≥ −1. Thus, any non-phantom universe should have a decreasing Hubble parameter, except in the case of the distant future of a Lambda-CDM model, where q will tend to −1 from above and the Hubble parameter will asymptote to a constant value of H0·√ΩΛ. The above results imply that the universe would be decelerating for any cosmic fluid with equation of state greater than −1/3 (any fluid satisfying the strong energy condition does so, as does any form of matter present in the Standard Model, but excluding inflation). However, observations of distant type Ia supernovae indicate that q0 is negative; the expansion of the universe is accelerating. This is an indication that the gravitational attraction of matter, on the cosmological scale, is more than counteracted by the negative pressure of dark energy, in the form of either quintessence or a positive cosmological constant. Before the first indications of an accelerating universe, in 1998, it was thought that the universe was dominated by matter with negligible pressure, w ≈ 0. This implied that the deceleration parameter would be equal to Ωm/2, e.g. q0 = 1/2 for a universe with Ωm = 1, or a smaller positive value for a low-density zero-Lambda model.
The experimental effort to discriminate these cases with supernovae actually revealed a negative q0, evidence for cosmic acceleration, which has subsequently grown stronger. References Physical cosmology
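For a quick check of the relations above, the following short Python sketch evaluates q = (1/2)·Σ_i Ω_i(1 + 3w_i) for the illustrative density parameters quoted in the text (Ωm = 0.3, ΩΛ = 0.7); the helper function and the numbers are illustrative only, not fitted cosmological results.

```python
# Deceleration parameter from density parameters and equations of state:
#   q = (1/2) * sum_i Omega_i * (1 + 3 * w_i)

def deceleration_parameter(components):
    """components: iterable of (Omega_i, w_i) pairs at the chosen epoch."""
    return 0.5 * sum(omega * (1.0 + 3.0 * w) for omega, w in components)

# Flat Lambda-CDM with the illustrative values from the text:
# matter (w = 0) plus a cosmological constant (w = -1).
print(deceleration_parameter([(0.3, 0.0), (0.7, -1.0)]))   # -> -0.55 (q0 < 0, accelerating)

# Pre-1998 expectation: matter-dominated universe, q0 = Omega_m / 2.
print(deceleration_parameter([(1.0, 0.0)]))                 # -> 0.5 (decelerating)
```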
Deceleration parameter
[ "Physics", "Astronomy" ]
784
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
6,393,385
https://en.wikipedia.org/wiki/Ultracold%20atom
In condensed matter physics, an ultracold atom is an atom with a temperature near absolute zero. At such temperatures, an atom's quantum-mechanical properties become important. To reach such low temperatures, a combination of several techniques typically has to be used. First, atoms are trapped and pre-cooled via laser cooling in a magneto-optical trap. To reach the lowest possible temperature, further cooling is performed using evaporative cooling in a magnetic or optical trap. Several Nobel prizes in physics are related to the development of the techniques to manipulate quantum properties of individual atoms (e.g. 1989, 1996, 1997, 2001, 2005, 2012, 2018). Experiments with ultracold atoms study a variety of phenomena, including quantum phase transitions, Bose–Einstein condensation (BEC), bosonic superfluidity, quantum magnetism, many-body spin dynamics, Efimov states, Bardeen–Cooper–Schrieffer (BCS) superfluidity and the BEC–BCS crossover. Some of these research directions utilize ultracold atom systems as quantum simulators to study the physics of other systems, including the unitary Fermi gas and the Ising and Hubbard models. Ultracold atoms could also be used for the realization of quantum computers. History Samples of ultracold atoms are typically prepared through the interactions of a dilute gas with a laser field. Evidence for radiation pressure, the force exerted on atoms by light, was demonstrated independently by Lebedev, and Nichols and Hull in 1901. In 1933, Otto Frisch demonstrated the deflection of individual sodium atoms by light generated from a sodium lamp. The invention of the laser spurred the development of additional techniques to manipulate atoms with light. Using laser light to cool atoms was first proposed in 1975 by taking advantage of the Doppler effect to make the radiation force on an atom dependent on its velocity, a technique known as Doppler cooling. Similar ideas were also proposed to cool samples of trapped ions. Applying Doppler cooling in three dimensions will slow atoms to velocities that are typically a few cm/s and produce what is known as an optical molasses. Typically, the sources of neutral atoms for these experiments were thermal ovens which produced atoms at temperatures of a few hundred kelvins. The atoms from these oven sources are moving at hundreds of meters per second. One of the major technical challenges in Doppler cooling was increasing the amount of time an atom can interact with the laser light. This challenge was overcome by the introduction of a Zeeman slower. A Zeeman slower uses a spatially varying magnetic field to maintain the relative energy spacing of the atomic transitions involved in Doppler cooling. This increases the amount of time the atom spends interacting with the laser light. Experiments can also use metal dispensers, which are pure metal (typically alkali metal) rods that emit atomic vapor when heated by an electrical current, which raises the vapor pressure. The development of the first magneto-optical trap (MOT) by Raab et al. in 1987 was an important step towards the creation of samples of ultracold atoms. Typical temperatures achieved with a MOT are tens to hundreds of microkelvins. In essence, a magneto-optical trap confines atoms in space by applying a magnetic field so that lasers not only provide a velocity-dependent force but also a spatially varying force.
The 1997 Nobel Prize in Physics was awarded for the development of methods to cool and trap atoms with laser light and was shared by Steven Chu, Claude Cohen-Tannoudji and William D. Phillips. Evaporative cooling was used in experimental efforts to reach lower temperatures in an effort to discover a new state of matter predicted by Satyendra Nath Bose and Albert Einstein known as a Bose–Einstein condensate (BEC). In evaporative cooling, the hottest atoms in a sample are allowed to escape, which reduces the average temperature of the sample. The Nobel Prize in 2001 was awarded to Eric A. Cornell, Wolfgang Ketterle and Carl E. Wieman for the achievement of Bose–Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates. In recent years a variety of sub-Doppler cooling techniques, including polarization gradient cooling, gray molasses cooling, and Raman sideband cooling, have enabled the cooling and trapping of single atoms in optical tweezers. Experimental platforms leveraging ultracold neutral atoms in optical tweezers and optical lattices have become an increasingly popular setting for studying quantum computing, quantum simulation, and precision metrology. Atoms with closed cycling transitions, capable of scattering many photons with a low probability of decay into other states, are common choices of species for ultracold neutral atom experiments. The lowest-energy fine structure transitions in alkali atoms enable fluorescence imaging, while a combination of hyperfine and Zeeman sublevels can be used for implementing sub-Doppler cooling. Alkaline earth atoms have also gained popularity owing to narrow-linewidth cooling transitions and ultra-narrow optical clock transitions. Applications Ultracold atoms have a variety of applications owing to their unique quantum properties and the great experimental control available in such systems. For instance, ultracold atoms have been proposed as a platform for quantum computation and quantum simulation, accompanied by very active experimental research to achieve these goals. Quantum simulation is of great interest in the context of condensed matter physics, where it may provide valuable insights into the properties of interacting quantum systems. The ultracold atoms are used to implement an analogue of the condensed matter system of interest, which can then be explored using the tools available in the particular implementation. Since these tools may differ greatly from those available in the actual condensed matter system, one can thus experimentally probe otherwise inaccessible quantities. Furthermore, ultracold atoms may even allow the creation of exotic states of matter, which cannot otherwise be observed in nature. All atoms are identical, making ensembles of atoms ideal for universal timekeeping. In 1967, the SI definition of the second was changed to reference a hyperfine transition frequency in cesium atoms. Atomic clocks based on alkaline earth atoms or alkaline-earth-like ions (such as Al+) have now been developed making use of narrow-line optical transitions. To achieve high numbers of non-interacting atoms, which assists in the precision of these clocks, neutral atoms can be trapped in optical lattices. On the other hand, ion traps permit long interrogation times. Ultracold atoms are also used in experiments for precision measurements enabled by the low thermal noise and, in some cases, by exploiting quantum mechanics to exceed the standard quantum limit.
In addition to potential technical applications, such precision measurements may serve as tests of our current understanding of physics. See also Bose–Einstein condensate Cold Atom Laboratory Quantum simulator References Sources Quantum mechanics Thermodynamics Atoms
Ultracold atom
[ "Physics", "Chemistry", "Mathematics" ]
1,427
[ "Theoretical physics", "Quantum mechanics", "Thermodynamics", "Atoms", "Matter", "Dynamical systems" ]
6,394,160
https://en.wikipedia.org/wiki/Embedded%20atom%20model
In computational chemistry and computational physics, the embedded atom model, embedded-atom method or EAM, is an approximation describing the energy between atoms and is a type of interatomic potential. The energy is a function of a sum of functions of the separation between an atom and its neighbors. In the original model, by Murray Daw and Mike Baskes, the latter functions represent the electron density. The EAM is related to the second moment approximation to tight binding theory, also known as the Finnis–Sinclair model. These models are particularly appropriate for metallic systems. Embedded-atom methods are widely used in molecular dynamics simulations. Model simulation In a simulation, the potential energy of an atom i is given by E_i = F_α( Σ_{j≠i} ρ_β(r_ij) ) + (1/2) Σ_{j≠i} φ_αβ(r_ij), where r_ij is the distance between atoms i and j, φ_αβ is a pair-wise potential function, ρ_β is the contribution to the electron charge density from atom j of type β at the location of atom i, and F_α is an embedding function that represents the energy required to place atom i of type α into the electron cloud. Since the electron cloud density is a summation over many atoms, usually limited by a cutoff radius, the EAM potential is a multibody potential. For a single element system of atoms, three scalar functions must be specified: the embedding function, a pair-wise interaction, and an electron cloud contribution function. For a binary alloy, the EAM potential requires seven functions: three pair-wise interactions (A-A, A-B, B-B), two embedding functions, and two electron cloud contribution functions. Generally these functions are provided in a tabulated format and interpolated by cubic splines. See also Interatomic potential Lennard-Jones potential Bond order potential Force field (chemistry) References Chemical bonding Computational chemistry
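As a concrete illustration of the energy expression above, here is a minimal Python sketch of the EAM bookkeeping (embedding of the summed host density plus half the pair sum). The exponential density, pair function, and square-root embedding function used here are hypothetical placeholder forms, not a fitted potential for any real metal.

```python
import math

# Hypothetical single-element EAM functions (illustrative forms only, not a fitted potential).
def pair_potential(r):            # phi(r): pair-wise interaction
    return 0.5 * math.exp(-2.0 * (r - 2.5)) - math.exp(-(r - 2.5))

def electron_density(r):          # rho(r): host electron density contributed by a neighbour at distance r
    return math.exp(-1.5 * (r - 2.5))

def embedding_energy(rho):        # F(rho): energy to embed an atom into host density rho
    return -math.sqrt(rho)

def eam_energy(positions, cutoff=5.0):
    """Total EAM energy: E = sum_i [ F(sum_{j!=i} rho(r_ij)) + 1/2 sum_{j!=i} phi(r_ij) ]."""
    total = 0.0
    for i, pos_i in enumerate(positions):
        host_density = 0.0
        for j, pos_j in enumerate(positions):
            if i == j:
                continue
            r = math.dist(pos_i, pos_j)
            if r < cutoff:
                host_density += electron_density(r)
                total += 0.5 * pair_potential(r)
        total += embedding_energy(host_density)
    return total

# Example: a small square cluster of four atoms.
print(eam_energy([(0, 0, 0), (2.5, 0, 0), (0, 2.5, 0), (2.5, 2.5, 0)]))
```

In a production molecular dynamics code the three functions would instead be read from tabulated splines, as the text notes.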
Embedded atom model
[ "Physics", "Chemistry", "Materials_science" ]
357
[ "Theoretical chemistry stubs", "Theoretical chemistry", "Computational chemistry", "Computational chemistry stubs", "Condensed matter physics", "nan", "Chemical bonding", "Physical chemistry stubs" ]
6,395,054
https://en.wikipedia.org/wiki/Pallet%20jack
A pallet jack, also known as a pallet truck or pallet pump, is a tool used to lift and move pallets. Pallet jacks are the most basic form of a forklift and are intended to move palletized loads within warehouses, distribution centers, retail stores, and construction sites. Operational principle The jack is steered by a tiller-like lever called a 'tow bar' that also acts on the pump piston for raising the forks. A small lever on the tow bar's steering handle releases the hydraulic fluid, causing the forks to lower. The steering wheels are located directly below the tow bar and support the jacking mechanism. The front wheels inside the end of the forks are mounted on push rods attached to linkages that go to levers attached to the jack cylinder. As the hydraulic jack at the 'tiller' end is raised, the links force the wheels down, raising the forks vertically above the front wheels, raising the load upward until it clears the floor. The pallet is only lifted enough to clear the floor for subsequent travel. Oftentimes, pallet jacks are used to move and organize pallets inside a trailer, especially when there is no forklift truck access or availability. History Manual pallet jacks have existed since at least 1918, following the introduction of pallets. Early iterations used mechanical linkages and other rudimentary systems to lift the forks, whereas more modern pallet jacks use a hand-pumped hydraulic jack or electricity. Types Manual pallet jack A manual pallet jack is a hand-powered jack most commonly seen in retail and personal warehousing operations. They are used predominantly for lifting, lowering and steering pallets from one place to another. Powered pallet jack Powered pallet jacks, also known as electric pallet trucks, walkies, single or double pallet jacks, or power jack, are motorized to allow lifting and moving of heavier and stacked pallets. Some contain a platform for the user to stand while moving pallets. The powered pallet jack is generally moved by a throttle on the handle to move forward or in reverse and steered by swinging the handle in the intended direction. Some contain a type of dead man's switch rather than a brake to stop the machine should the user need to stop quickly or leave the machine while it is in use. Others use a system known as "plugging" wherein the driver turns the throttle from forward to reverse (or vice versa) to slow and stop the machine, as the dead man's switch is used in emergencies only. Rough terrain pallet jack Rough terrain pallet jacks are designed specifically for use on uneven ground. They are made using heavy-duty frames and robust pneumatic tyres so that they can be manoeuvred over rough surfaces with ease. Many manufacturers opt for watertight wheel bearings, a hydraulic elevator or a built-in pump to ensure their rough terrain pallet jacks are easy and comfortable to use, even in the harshest conditions. Operational limitations Reversible pallets cannot be used. Double-faced non-reversible pallets cannot have deck-boards where the front wheels extend to the floor. Enables only two-way entry into a four-way notched-stringer pallet, because the forks cannot be inserted into the notches. Power jacks have difficulty in confined spaces (coolers) and narrow openings. Operational risks Pallet jacks are classed as material-handling equipment (MHE). 
Under most health and safety law, training is required in their use (particularly for powered pallet jacks) and, as the loads carried are heavy, there is a substantial risk of accidents resulting in injuries. Typical dimensions Industry has broadly standardized pallet jacks in several ways, specifying the width of each of the two forks, the overall fork width (the dimension between the outer edges of the forks), the fork length, the lowered height, and the raised height (some models raise higher than others). In Eurasia the overall dimensions are similar, as modern container palletization has forced global standardization of dimensions. See also Unit load EUR-pallet Manual handling of loads References External links Tips for renting and working with pallet jacks OSHA hazards of working with pallet jacks Freight transport Lifting equipment Material handling
Pallet jack
[ "Physics", "Technology" ]
885
[ "Machines", "Material handling", "Lifting equipment", "Physical systems", "Materials", "Matter" ]
6,396,233
https://en.wikipedia.org/wiki/Mars%20regional%20atmospheric%20modeling%20system
The Mars Regional Atmospheric Modeling System (MRAMS) is a computer program that simulates the circulations of the Martian atmosphere at regional and local scales. MRAMS, developed by Scot Rafkin and Timothy Michaels, is derived from the Regional Atmospheric Modeling System (RAMS) developed by William R. Cotton and Roger A. Pielke to study atmospheric circulations on the Earth. Key features of MRAMS include non-hydrostatic, fully compressible dynamics; explicit bin microphysics for atmospheric dust, water, and carbon dioxide ice; and a fully prognostic regolith model that includes carbon dioxide deposition and sublimation. Several Mars exploration projects, including the Mars Exploration Rovers, the Phoenix Scout Mission, and the Mars Science Laboratory, have used MRAMS to study a variety of atmospheric circulations. MRAMS operates at the mesoscale and microscale when modeling and simulating the Martian atmosphere. This smaller-scale modeling gives it higher resolution over complex terrain and topography. It is able to identify topography-driven flows, such as katabatic and anabatic winds through valleys and mountains, that produce changes in atmospheric circulation. Structure Dynamic Core The dynamic core's role is to solve fluid mechanic equations related to atmospheric dynamics. The equations in the dynamic core of MRAMS are based on primitive grid-volume Reynolds-averaged equations. The dynamical core integrates equations for momentum, thermodynamics (including atmosphere-surface heat exchange), tracers, and conservation of mass. Parameterizations The MRAMS dynamical core was developed from RAMS and has been changed extensively to account for the large difference in atmospheres between Mars and Earth. Some MRAMS models parameterize numerous features including dust and dust lifting, cloud microphysics, radiative transfer, and steep topography. Grid MRAMS operates on the mesoscale and is therefore a regional rather than a global tool, making it well suited to simulations around complex terrain and changing topography. The computational grids developed for MRAMS are of the Arakawa C type. The grid spacing is irregular and requires the use of scaling. The high resolution of MRAMS stems from the use of a nested two-way grid system, in which a parent grid supplies the initial and boundary conditions used by the finer nested grids. References External links MRAMS homepage. Atmosphere of Mars Physics software Numerical climate and weather models
Mars regional atmospheric modeling system
[ "Physics" ]
525
[ "Physics software", "Computational physics" ]
6,396,576
https://en.wikipedia.org/wiki/Scheil%20equation
In metallurgy, the Scheil–Gulliver equation (or Scheil equation) describes solute redistribution during solidification of an alloy. Assumptions Four key assumptions in Scheil analysis enable determination of phases present in a cast part. These assumptions are: No diffusion occurs in solid phases once they are formed (D_S = 0) Infinitely fast diffusion occurs in the liquid at all temperatures by virtue of a high diffusion coefficient, thermal convection, Marangoni convection, etc. (D_L → ∞) Equilibrium exists at the solid-liquid interface, and so compositions from the phase diagram are valid Solidus and liquidus are straight segments The fourth condition (straight solidus/liquidus segments) may be relaxed when numerical techniques are used, such as those used in CALPHAD software packages, though these calculations rely on calculated equilibrium phase diagrams. Calculated diagrams may include odd artifacts (e.g. retrograde solubility) that influence Scheil calculations. Derivation The hatched areas in the figure represent the amount of solute in the solid and liquid. Considering that the total amount of solute in the system must be conserved, the areas are set equal as follows: (C_L − C_S)·dfS = (1 − fS)·dC_L. Since the partition coefficient (related to solute distribution) is k = C_S/C_L (determined from the phase diagram) and mass must be conserved, the mass balance may be rewritten as C_L·(1 − k)·dfS = (1 − fS)·dC_L. Using the boundary condition C_L = C_0 at fS = 0, the following integration may be performed: ∫ dfS/(1 − fS) from 0 to fS = 1/(1 − k) · ∫ dC_L/C_L from C_0 to C_L. Integrating results in the Scheil–Gulliver equation for the composition of the liquid during solidification: C_L = C_0·(1 − fS)^(k−1), or for the composition of the solid: C_S = k·C_0·(1 − fS)^(k−1). Applications of the Scheil equation: Calphad Tools for the Metallurgy of Solidification Nowadays, several Calphad software packages are available - in a framework of computational thermodynamics - to simulate solidification in systems with more than two components; these have recently been defined as Calphad Tools for the Metallurgy of Solidification. In recent years, Calphad-based methodologies have reached maturity in several important fields of metallurgy, and especially in solidification-related processes such as semi-solid casting, 3D printing, and welding, to name a few. While there are important studies devoted to the progress of Calphad methodology, there is still space for a systematization of the field, which proceeds from the ability of most Calphad-based software to simulate solidification curves and includes both fundamental and applied studies on solidification, to be substantially appreciated by a wider community than today. The three applied fields mentioned above could be widened by specific successful examples of simple modeling related to the topic of this issue, with the aim of widening the application of simple and effective tools related to Calphad and Metallurgy. See also "Calphad Tools for the Metallurgy of Solidification" in an ongoing issue of an Open Journal. https://www.mdpi.com/journal/metals/special_issues/Calphad_Solidification Given a specific chemical composition, using software for computational thermodynamics - which might be open or commercial - the calculation of the Scheil curve is possible if a thermodynamic database is available. A point in favour of some commercial packages is that installation is straightforward on a Windows-based system, which makes them convenient for use with students or for self-training.
Some open, chiefly binary, databases (extension *.tdb) can be found - after registering - at the Computational Phase Diagram Database (CPDDB) of the National Institute for Materials Science of Japan, NIMS https://cpddb.nims.go.jp/index_en.html. They are available for free and the collection is rather complete; currently 507 binary systems are available in the thermodynamic database (tdb) format. Some wider and more specific alloy systems, partly open and in tdb-compatible format, are available with minor corrections for Pandat use at Matcalc https://www.matcalc.at/index.php/databases/open-databases. Numerical expression and numerical derivative of the Scheil curve: application to grain size on solidification and semi-solid processing A key concept for applications is the (numerical) derivative of the solid fraction fS with temperature, ∂(fS)/∂T. A numerical example using a copper-zinc alloy at a composition of 30% Zn by weight has been proposed, plotting the derivative with opposite sign so that both the temperature and its derivative can be shown in the same graph. Kozlov and Schmid-Fetzer have calculated numerically the derivative of the Scheil curve in an open paper https://iopscience.iop.org/article/10.1088/1757-899X/27/1/012001 and applied it to the growth restriction factor Q in Al-Si-Mg-Cu alloys. Application to grain size on solidification This Calphad-calculated value of the numerical derivative, Q, has some interesting applications in the field of metal solidification. In fact, Q reflects the phase diagram of the alloy system, and its reciprocal has been found to have a relationship with grain size d on solidification, which empirically has been found in some cases to be linear: d = a + b/Q, where a and b are constants, as illustrated with some examples from the literature for Mg and Al alloys. Before Calphad use, Q values were calculated from the conventional relationship Q = m·c0·(k − 1), where m is the slope of the liquidus, c0 is the solute concentration, and k is the equilibrium distribution coefficient. More recently, other possible correlations of Q with grain size d have been proposed, involving a constant B that is independent of alloy composition. Application to solidification cracking In recent publications, prof. Sindo Kou has proposed an approach to evaluate susceptibility to solidification cracking; this approach is based on a quantity, |∂T/∂(fS)^(1/2)|, which has the dimensions of a temperature and is proposed as an index of the cracking susceptibility. Again one could exploit Scheil-based solidification curves to link this index to the slope of the (Scheil) solidification curve via the chain rule: ∂T/∂(fS)^(1/2) = (∂T/∂(fS))·(∂(fS)/∂(fS)^(1/2)) = 2·(fS)^(1/2)·∂T/∂(fS). Application to semi-solid processing Last but not least, prof. E.J. Zoqui has summarized in his work the approach proposed by several researchers for the criteria for semi-solid processing, which involves the stability of the solid fraction fS with temperature; to process semi-solid alloys the sensitivity of the solid fraction to variations in temperature should be minimal: in one direction the material could evolve into a solid that is difficult to deform, in the other into a liquid which may be difficult to shape without proper moulding. It turns out that this criterion can again be expressed by evaluating the slope of the solidification curve; in fact, ∂(fS)/∂T should be less than a certain threshold, which is commonly accepted in the scientific and technical literature to be below 0.03 1/K.
Mathematically this may be expressed by the inequality ∂(fS)/∂T < 0.03 (1/K), where K stands for kelvin; this threshold can be assumed as a rough estimate for both of the two main semi-solid casting processes, rheocasting (0.3 < fS < 0.4) and thixoforming (0.6 < fS < 0.7). Going back to the numerical and functional approaches above, one may equivalently consider the reciprocal value, i.e. ∂T/∂(fS) > 33 (K). References Gulliver, G.H., The Quantitative Effect of Rapid Cooling Upon the Constitution of Binary Alloys, J. Inst. Met., 1913, 9, p 120-157 Scheil, E., Bemerkungen zur Schichtkristallbildung, Z. Metallkd., 1942, 34, p 70-72 Greer L., et al. Modelling of inoculation of metallic melts: application to grain refinement of aluminium by Al–Ti–B, Acta Mat. 48, 11, 2000, 2823-2835 https://doi.org/10.1016/S1359-6454(00)00094-X Porter, D. A., and Easterling, K. E., Phase Transformations in Metals and Alloys (2nd Edition), Chapman & Hall, 1992. https://doi.org/10.1201/9781439883570 Kou, S., Welding Metallurgy, 2nd Edition, Wiley-Interscience, 2003. https://doi.org/10.1002/0471434027 Karl B. Rundman, Principles of Metal Casting Textbook - Michigan Technological University Quested T.E., Dinsdale A.T., Greer A.L. Thermodynamic modelling of growth-restriction effects in aluminium alloys, Acta Materialia 53, 5, 2005, 1323-1334. https://doi.org/10.1016/j.actamat.2004.11.024 H. Fredriksson, Y. Akerlind, Materials Processing during Casting, Chapter 7, Wiley, 2006. https://www.wiley.com/en-us/Materials+Processing+During+Casting-p-9780470015148 H. Fredriksson, Y. Akerlind, Materials Processing during Casting, Supplementary (open) Material https://www.wiley.com/legacy/wileychi/fredriksson/features.html Schmid-Fetzer, R. Phase Diagrams: The Beginning of Wisdom. J. Phase Equilib. Diffus. 35, 735–760, 2014. https://doi.org/10.1007/s11669-014-0343-5 Zoqui, E. Alloys for Semisolid Processing, Comprehensive Materials Processing Volume 5, 2014, Pages 163-190 https://doi.org/10.1016/B978-0-08-096532-1.00520-3 Zhang, D., Prasad, A., Bermingham, M.J. et al. Grain Refinement of Alloys in Fusion-Based Additive Manufacturing Processes. Metall Mater Trans A 51, 4341–4359 (2020). https://doi.org/10.1007/s11661-020-05880-4 Todaro C.J., Easton M.A., Qiu D., Brandt M., StJohn D.H., Qian M. Grain refinement of stainless steel in ultrasound-assisted additive manufacturing, Additive Manufacturing 37, 2021, https://doi.org/10.1016/j.addma.2020.101632 Balart, M.J., Patel, J.B., Gao, F. et al. Grain Refinement of Deoxidized Copper. Metall Mater Trans A 47, 4988–5011 (2016) https://doi.org/10.1007/s11661-016-3671-8 Kou, S. Predicting Susceptibility to Solidification Cracking and Liquation Cracking by CALPHAD, Metals 2021, 11(9), 1442 https://doi.org/10.3390/met11091442 Zhang F., Liang S., Zhang C., Chen S., Lv D., Cao W., Kou S. Prediction of Cracking Susceptibility of Commercial Aluminum Alloys during Solidification, Metals 2021, 11(9), 1479; https://doi.org/10.3390/met11091479 External links Metallurgy Eponymous equations of physics Differential equations
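As an illustration of the equations above, the sketch below (Python with NumPy) evaluates the Scheil–Gulliver liquid composition, a straight-liquidus temperature, and the numerical derivative d(fS)/dT used in the semi-solid processing criterion. The alloy parameters (C0, k, Tm, m) are placeholder values for a hypothetical binary system, not data for any specific alloy.

```python
import numpy as np

# Hypothetical binary alloy (placeholder values, not a real system)
C0 = 4.0      # nominal solute content, wt.%
k  = 0.14     # partition coefficient Cs/Cl (assumed constant)
Tm = 660.0    # melting point of the pure solvent, deg C
m  = -6.0     # liquidus slope, K per wt.% (straight liquidus assumed)

fs = np.linspace(0.0, 0.99, 200)          # solid fraction
CL = C0 * (1.0 - fs) ** (k - 1.0)         # Scheil: liquid composition
CS = k * CL                               # solid composition at the interface
T  = Tm + m * CL                          # temperature along the (straight) liquidus

# Numerical derivative d(fs)/dT, the quantity used in the semi-solid processing criterion
dfs_dT = np.gradient(fs, T)

# Growth restriction factor from the conventional relationship Q = m*c0*(k - 1)
Q = m * C0 * (k - 1.0)
print(f"Q = {Q:.1f} K")
print(f"max |d(fs)/dT| in the 0.3 < fs < 0.4 window: "
      f"{np.max(np.abs(dfs_dT[(fs > 0.3) & (fs < 0.4)])):.3f} 1/K")
```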
Scheil equation
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,549
[ "Equations of physics", "Metallurgy", "Eponymous equations of physics", "Mathematical objects", "Equations", "Differential equations", "Materials science", "nan" ]
2,657,905
https://en.wikipedia.org/wiki/Quasiperiodic%20function
In mathematics, a quasiperiodic function is a function that has a certain similarity to a periodic function. A function f is quasiperiodic with quasiperiod ω if f(z + ω) = g(z, f(z)), where g is a "simpler" function than f. What it means to be "simpler" is vague. A simple case (sometimes called arithmetic quasiperiodic) is if the function obeys the equation: f(z + ω) = f(z) + C, for a constant C. Another case (sometimes called geometric quasiperiodic) is if the function obeys the equation: f(z + ω) = C·f(z). An example of this is the Jacobi theta function, where the relation θ(z + τ; τ) = e^(−2πiz − πiτ)·θ(z; τ) shows that for fixed τ it has quasiperiod τ; it also is periodic with period one. Another example is provided by the Weierstrass sigma function, which is quasiperiodic in two independent quasiperiods, the periods of the corresponding Weierstrass ℘ function. Bloch's theorem says that the eigenfunctions of a periodic Schrödinger equation (or other periodic linear equations) can be found in quasiperiodic form, and a related form of quasi-periodic solution for periodic linear differential equations is expressed by Floquet theory. Functions with an additive functional equation are also called quasiperiodic. An example of this is the Weierstrass zeta function, where ζ(z + ω) = ζ(z) + η for a z-independent η when ω is a period of the corresponding Weierstrass ℘ function. In the special case where f(z + ω) = f(z) we say f is periodic with period ω in the period lattice. Quasiperiodic signals Quasiperiodic signals in the sense of audio processing are not quasiperiodic functions in the sense defined here; instead they have the nature of almost periodic functions and that article should be consulted. The more vague and general notion of quasiperiodicity has even less to do with quasiperiodic functions in the mathematical sense. A useful example is the function: f(z) = sin(Az) + sin(Bz). If the ratio A/B is rational, this will have a true period, but if A/B is irrational there is no true period, but a succession of increasingly accurate "almost" periods. See also Quasiperiodic motion References External links Quasiperiodic function at PlanetMath Complex analysis Types of functions
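The last example, f(z) = sin(Az) + sin(Bz), is easy to explore numerically. The Python sketch below contrasts a rational ratio A/B, which gives an exact period, with the irrational ratio 1/√2, which gives only increasingly accurate "almost" periods; the particular values of A and B are arbitrary illustrations.

```python
import math
import numpy as np

def f(x, A, B):
    return np.sin(A * x) + np.sin(B * x)

x = np.linspace(0.0, 50.0, 20001)

# Rational ratio A/B = 2/3: both terms repeat exactly over 2*pi, so f has a true period.
A, B = 2.0, 3.0
print(np.max(np.abs(f(x + 2 * math.pi, A, B) - f(x, A, B))))   # ~1e-15, exact repetition

# Irrational ratio A/B = 1/sqrt(2): no true period, only better and better "almost" periods.
A, B = 1.0, math.sqrt(2.0)
for n in (1, 10, 70):            # 99/70 is a continued-fraction approximant of sqrt(2)
    T = n * 2.0 * math.pi
    print(n, np.max(np.abs(f(x + T, A, B) - f(x, A, B))))       # mismatch shrinks but never vanishes
```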
Quasiperiodic function
[ "Mathematics" ]
440
[ "Mathematical objects", "Functions and mappings", "Types of functions", "Mathematical relations" ]
2,658,073
https://en.wikipedia.org/wiki/Hyperbolic%20set
In dynamical systems theory, a subset Λ of a smooth manifold M is said to have a hyperbolic structure with respect to a smooth map f if its tangent bundle may be split into two invariant subbundles, one of which is contracting and the other is expanding under f, with respect to some Riemannian metric on M. An analogous definition applies to the case of flows. In the special case when the entire manifold M is hyperbolic, the map f is called an Anosov diffeomorphism. The dynamics of f on a hyperbolic set, or hyperbolic dynamics, exhibits features of local structural stability and has been much studied, cf. Axiom A. Definition Let M be a compact smooth manifold, f: M → M a diffeomorphism, and Df: TM → TM the differential of f. An f-invariant subset Λ of M is said to be hyperbolic, or to have a hyperbolic structure, if the restriction to Λ of the tangent bundle of M admits a splitting into a Whitney sum of two Df-invariant subbundles, called the stable bundle and the unstable bundle and denoted Es and Eu. With respect to some Riemannian metric on M, the restriction of Df to Es must be a contraction and the restriction of Df to Eu must be an expansion. Thus, there exist constants 0<λ<1 and c>0 such that TΛM = Es ⊕ Eu, ‖Df^n(v)‖ ≤ c·λ^n·‖v‖ for all v ∈ Es and n ≥ 0, and ‖Df^(−n)(v)‖ ≤ c·λ^n·‖v‖ for all v ∈ Eu and n ≥ 0. If Λ is hyperbolic then there exists a Riemannian metric for which c = 1 — such a metric is called adapted. Examples Hyperbolic equilibrium point p is a fixed point, or equilibrium point, of f, such that (Df)p has no eigenvalue with absolute value 1. In this case, Λ = {p}. More generally, a periodic orbit of f with period n is hyperbolic if and only if Df^n at any point of the orbit has no eigenvalue with absolute value 1, and it is enough to check this condition at a single point of the orbit. References Dynamical systems Limit sets
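For the hyperbolic fixed point example above, hyperbolicity can be verified directly from the eigenvalues of the differential. The short Python sketch below does this for Arnold's cat map on the 2-torus, a standard Anosov diffeomorphism; the choice of map is purely illustrative.

```python
import numpy as np

# Arnold's cat map on the 2-torus: f(x, y) = (2x + y, x + y) mod 1.
# Its differential is the constant matrix Df below, and the origin is a fixed point.
Df = np.array([[2.0, 1.0],
               [1.0, 1.0]])

eigenvalues = np.linalg.eigvals(Df)
print(eigenvalues)   # approx [2.618, 0.382] = (3 +/- sqrt(5)) / 2

# The fixed point is hyperbolic iff no eigenvalue has absolute value 1:
# the eigenvalue > 1 spans the expanding direction (Eu), the one < 1 the contracting direction (Es).
is_hyperbolic = all(abs(abs(lam) - 1.0) > 1e-9 for lam in eigenvalues)
print("hyperbolic fixed point:", is_hyperbolic)
```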
Hyperbolic set
[ "Physics", "Mathematics" ]
437
[ "Limit sets", "Topology", "Mechanics", "Dynamical systems" ]
2,660,978
https://en.wikipedia.org/wiki/Sine%20and%20cosine%20transforms
In mathematics, the Fourier sine and cosine transforms are integral transforms that decompose arbitrary functions into a sum of sine waves representing the odd component of the function plus cosine waves representing the even component of the function. The modern Fourier transform concisely contains both the sine and cosine transforms. Since the sine and cosine transforms use sine and cosine waves instead of complex exponentials and don't require complex numbers or negative frequency, they more closely correspond to Joseph Fourier's original transform equations; they are still preferred in some signal processing and statistics applications and may be better suited as an introduction to Fourier analysis. Definition The Fourier sine transform of f(t) is: f̂_s(ν) = ∫ f(t)·sin(2πνt) dt, taken over the whole real line. If t means time, then ν is frequency in cycles per unit time, but in the abstract, they can be any dual pair of variables (e.g. position and spatial frequency). The sine transform is necessarily an odd function of frequency, i.e. for all ν: f̂_s(−ν) = −f̂_s(ν). The Fourier cosine transform of f(t) is: f̂_c(ν) = ∫ f(t)·cos(2πνt) dt, again over the whole real line. The cosine transform is necessarily an even function of frequency, i.e. for all ν: f̂_c(−ν) = f̂_c(ν). Odd and even simplification The multiplication rules for even and odd functions dramatically simplify the integrands when transforming even and odd functions. Some authors even only define the cosine transform for even functions f(t). Since cosine is an even function, and because the integral of an even function from −∞ to ∞ is twice its integral from 0 to ∞, the cosine transform of any even function can be simplified to avoid negative t: f̂_c(ν) = 2·∫ from 0 to ∞ of f(t)·cos(2πνt) dt. And because the integral from −∞ to ∞ of any odd function is zero, the cosine transform of any odd function is simply zero: f̂_c(ν) = 0. Similarly, because sine is odd, the sine transform of any odd function also simplifies to avoid negative t: f̂_s(ν) = 2·∫ from 0 to ∞ of f(t)·sin(2πνt) dt, and the sine transform of any even function is simply zero: f̂_s(ν) = 0. The sine transform represents the odd part of a function, while the cosine transform represents the even part of a function. Other conventions Just like the Fourier transform, the sine and cosine transforms take the form of different equations with different constant factors in different sources. For example, some authors define the cosine transform as F_c(ω) = ∫ from 0 to ∞ of f(t)·cos(ωt) dt and the sine transform as F_s(ω) = ∫ from 0 to ∞ of f(t)·sin(ωt) dt. Another convention adds a symmetrizing factor, defining the cosine transform as F_c(ω) = √(2/π)·∫ from 0 to ∞ of f(t)·cos(ωt) dt and the sine transform as F_s(ω) = √(2/π)·∫ from 0 to ∞ of f(t)·sin(ωt) dt, using the angular frequency ω as the transformation variable. And while t is typically used to represent the time domain, x is often instead used to represent a spatial domain when transforming to spatial frequencies. Fourier inversion The original function f can be recovered from its sine and cosine transforms under the usual hypotheses using the inversion formula: f(t) = ∫ f̂_c(ν)·cos(2πνt) dν + ∫ f̂_s(ν)·sin(2πνt) dν, with both integrals over the whole real line. Simplifications Note that since both integrands are even functions of ν, the concept of negative frequency can be avoided by doubling the result of integrating over non-negative frequencies: f(t) = 2·∫ from 0 to ∞ of f̂_c(ν)·cos(2πνt) dν + 2·∫ from 0 to ∞ of f̂_s(ν)·sin(2πνt) dν. Also, if f is an odd function, then the cosine transform is zero, so its inversion simplifies to: f(t) = 2·∫ from 0 to ∞ of f̂_s(ν)·sin(2πνt) dν. Likewise, if the original function f is an even function, then the sine transform is zero, so its inversion also simplifies to: f(t) = 2·∫ from 0 to ∞ of f̂_c(ν)·cos(2πνt) dν. Remarkably, these last two simplified inversion formulas look identical to the original sine and cosine transforms, respectively, though with t swapped with ν (and f swapped with f̂_s or f̂_c). A consequence of this symmetry is that their inversion and transform processes still work when the two functions are swapped. Two such functions are called transform pairs.
Overview of inversion proof Using the addition formula for cosine, the full inversion formula can also be rewritten as Fourier's integral formula: f(t) = 2·∫ from 0 to ∞ of ∫ f(τ)·cos(2πν(t − τ)) dτ dν, with the inner integral over the whole real line. This theorem is often stated under different hypotheses, that f is integrable, and is of bounded variation on an open interval containing the point t, in which case the double integral instead converges to the average of the one-sided limits, (1/2)·[f(t+) + f(t−)]. This latter form is a useful intermediate step in proving the inverse formulae for the sine and cosine transforms. One method of deriving it, due to Cauchy, is to insert a convergence factor e^(−δν) into the integral, where δ > 0 is fixed. Then the inner integral over ν can be evaluated in closed form: 2·∫ from 0 to ∞ of e^(−δν)·cos(2πν(t − τ)) dν = 2δ/(δ² + 4π²(t − τ)²). Now when δ → 0, this kernel tends to zero except at τ = t, so that formally the above is f(t). Relation with complex exponentials The complex exponential form of the Fourier transform used more often today is f̂(ν) = ∫ f(t)·e^(−i2πνt) dt, where i is the square root of negative one. By applying Euler's formula it can be shown (for real-valued functions) that the Fourier transform's real component is the cosine transform (representing the even component of the original function) and the Fourier transform's imaginary component is the negative of the sine transform (representing the odd component of the original function): f̂(ν) = f̂_c(ν) − i·f̂_s(ν). Because of this relationship, the cosine transform of functions whose Fourier transform is known (e.g. in tables of Fourier transforms) can be simply found by taking the real part of the Fourier transform: f̂_c(ν) = Re(f̂(ν)), while the sine transform is simply the negative of the imaginary part of the Fourier transform: f̂_s(ν) = −Im(f̂(ν)). Pros and cons An advantage of the modern Fourier transform is that while the sine and cosine transforms together are required to extract the phase information of a frequency, the modern Fourier transform instead compactly packs both phase and amplitude information inside its complex valued result. But a disadvantage is its requirement on understanding complex numbers, complex exponentials, and negative frequency. The sine and cosine transforms meanwhile have the advantage that all quantities are real. Since positive frequencies can fully express them, the non-trivial concept of negative frequency needed in the regular Fourier transform can be avoided. They may also be convenient when the original function is already even or odd or can be made even or odd, in which case only the cosine or the sine transform respectively is needed. For instance, even though an input may not be even or odd, a discrete cosine transform may start by assuming an even extension of its input while a discrete sine transform may start by assuming an odd extension of its input, to avoid having to compute the entire discrete Fourier transform. Numerical evaluation Using standard methods of numerical evaluation for Fourier integrals, such as Gaussian or tanh-sinh quadrature, is likely to lead to completely incorrect results, as the quadrature sum is (for most integrands of interest) highly ill-conditioned. Special numerical methods which exploit the structure of the oscillation are required, an example of which is Ooura's method for Fourier integrals. This method attempts to evaluate the integrand at locations which asymptotically approach the zeros of the oscillation (either the sine or cosine), quickly reducing the magnitude of positive and negative terms which are summed. See also Discrete cosine transform Discrete sine transform List of Fourier-related transforms Notes References Whittaker, E. T., and Watson, G. N., A Course of Modern Analysis, Fourth Edition, Cambridge Univ. Press, 1927, pp. 189, 211 Integral transforms Fourier analysis Mathematical physics
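The relation to the complex Fourier transform and the basic definitions can be checked numerically for a simple even function. The Python sketch below (NumPy/SciPy, using the 2πν frequency convention from the Definition section) computes the sine and cosine transforms of the Gaussian exp(−πt²) by direct quadrature and compares the cosine transform with the known result exp(−πν²); naive quadrature is acceptable here only because the integrand decays rapidly, in line with the caveats in the Numerical evaluation section.

```python
import numpy as np
from scipy.integrate import quad

def cosine_transform(f, nu):
    # f_c(nu) = integral of f(t) * cos(2*pi*nu*t) dt over the whole real line
    return quad(lambda t: f(t) * np.cos(2.0 * np.pi * nu * t), -np.inf, np.inf)[0]

def sine_transform(f, nu):
    # f_s(nu) = integral of f(t) * sin(2*pi*nu*t) dt over the whole real line
    return quad(lambda t: f(t) * np.sin(2.0 * np.pi * nu * t), -np.inf, np.inf)[0]

gaussian = lambda t: np.exp(-np.pi * t**2)   # even function; known transform pair exp(-pi*nu^2)

for nu in (0.0, 0.5, 1.0):
    fc = cosine_transform(gaussian, nu)
    fs = sine_transform(gaussian, nu)
    # fc matches exp(-pi*nu^2); fs is ~0 because the input is even
    print(nu, fc, np.exp(-np.pi * nu**2), fs)
```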
Sine and cosine transforms
[ "Physics", "Mathematics" ]
1,375
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
18,728,089
https://en.wikipedia.org/wiki/Aerospace%20bearing
Aerospace bearings are the bearings installed in aircraft and aerospace systems including commercial, private, military, or space applications. Materials include M50 tool steel (AMS6491), carbon chrome steel (AMS6444), the corrosion resistant AMS5930, 440C stainless steel, silicon nitride (ceramic) and titanium carbide-coated 440C. Typically, special attention is given to the material specification, non-destructive testing, and to the traceability of the bearing (a system of documents that enables an engineer to trace a bearing, typically back to its manufacturing batch and material supply). Design When designing aerospace bearings, it is important to take a few things into account, including: material standard design lubrication type surface coatings and treatments non-destructive testing traceability In order to assure bearing performance, it is necessary for the bearing steel to be of high quality. Jet engine bearings are typically manufactured from metals produced using a vacuum arc remelt process to enable material requirements to be met. Jet engine shaft bearings and accessory drive shaft bearings typically use single piece or two piece machined retainers. The pressed steel or moulded retainers found on mass-produced bearings are not used. Temperature and moisture resistant oils, greases and lubricants are normally specified. If the lubricant is not correct the performance of the bearing will be compromised. Application In jet engines bearings can operate at over 200 degrees Celsius (400 °F) and at speeds over 10,000 rpm for the turbine shafts to over 30,000 rpm in the accessory drives. In wing control surface applications, very low temperatures may be encountered. Monitoring Bearings are a vital factor in many products and assemblies and their performance is often monitored continuously. In jet engines the oil supply is monitored to detect the presence of metallic debris that could identify a failure either of the bearings or of other components whose failure may contaminate the bearings. References Bearings (mechanical) Aerospace engineering
Aerospace bearing
[ "Engineering" ]
400
[ "Aerospace engineering" ]
18,730,256
https://en.wikipedia.org/wiki/Proton-transfer-reaction%20mass%20spectrometry
Proton-transfer-reaction mass spectrometry (PTR-MS) is an analytical chemistry technique that uses gas phase hydronium reagent ions which are produced in an ion source. PTR-MS is used for online monitoring of volatile organic compounds (VOCs) in ambient air and was developed in 1995 by scientists at the Institut für Ionenphysik at the Leopold-Franzens University in Innsbruck, Austria. A PTR-MS instrument consists of an ion source that is directly connected to a drift tube (in contrast to SIFT-MS no mass filter is interconnected) and an analyzing system (quadrupole mass analyzer or time-of-flight mass spectrometer). Commercially available PTR-MS instruments have a response time of about 100 ms and reach a detection limit in the single digit pptv or even ppqv region. Established fields of application are environmental research, food and flavor science, biological research, medicine, security, cleanroom monitoring, etc. Theory With H3O+ as the reagent ion the proton transfer process is H3O+ + R -> RH+ + H2O (1) (with R being the trace component). Reaction (1) is only possible if energetically allowed, i.e. if the proton affinity of R is higher than the proton affinity of H2O (691 kJ/mol). As most components of ambient air possess a lower proton affinity than H2O (e.g. N2, O2, Ar, CO2, etc.) the H3O+ ions only react with VOC trace components and the air itself acts as a buffer gas. Moreover, due to the low concentrations of trace components one can assume that the total number of H3O+ ions remains nearly unchanged, which leads to the equation [RH+] ≈ [H3O+]0·[R]·k·t. (2) In equation (2), [RH+] is the density of product ions, [H3O+]0 is the density of reagent ions in the absence of reactant molecules in the buffer gas, k is the reaction rate constant and t is the average time the ions need to pass the reaction region. With a PTR-MS instrument the number of product and of reagent ions can be measured, the reaction rate constant k can be found in the literature for most substances, and the reaction time t can be derived from the set instrument parameters. Therefore, the absolute concentration of trace constituents [R] can be easily calculated without the need of calibration or gas standards. Furthermore, it becomes obvious that the overall sensitivity of a PTR-MS instrument is dependent on the reagent ion yield. Fig. 1 gives an overview of several published (in peer-reviewed journals) reagent ion yields during the last decades and the corresponding sensitivities. Technology In commercial PTR-MS instruments water vapor is ionized in a hollow cathode discharge: e^- + H2O -> H2O+ + 2e^- e^- + H2O -> H2+ + O + 2e^- e^- + H2O -> H+ + OH + 2e^- e^- + H2O -> O+ + H2 + 2e^-. After the discharge a short drift tube is used to form very pure (>99.5%) H3O+ via ion-molecule reactions: H2+ + H2O -> H2O+ + H2 H+ + H2O -> H2O+ + H O+ + H2O -> H2O+ + O H2O+ + H2O -> H3O+ + OH. Due to the high purity of the reagent ions a mass filter between the ion source and the reaction drift tube is not necessary and H3O+ can be injected directly. The absence of this mass filter in turn greatly reduces losses of reagent ions and leads eventually to an outstandingly low detection limit of the whole instrument. In the reaction drift tube a vacuum pump continuously draws through the air containing the VOCs one wants to analyze. At the end of the drift tube the protonated molecules are mass analyzed (quadrupole mass analyzer or time-of-flight mass spectrometer) and detected. As an alternative to H3O+, the use of NH4+ reagent ions was suggested already in early PTR-MS related publications.
Ammonia has a proton affinity of 853.6 kJ/mol. For compounds that have a higher proton affinity than ammonia proton transfer can take place similar to the process described above for hydronium: NH4+ + R -> RH+ + NH3. Additionally, for compounds with higher, but also for some with lower proton affinities than ammonia a clustering reaction can be observed NH4+ + R -> R.NH4+* where the cluster needs a third body to get collisionally stabilized. The main advantage of using NH4+ reagent ions is that fragmentation of analytes upon chemical ionization is strongly suppressed, leading to straightforward mass spectra even for complex mixtures. The reason why during the first 20 years after the invention of PTR-MS NH4+ reagent ions have only been used in a very limited number of studies is most probably because the NH4+ production required toxic and corrosive ammonia as a source gas. This led to problems with handling the instrument and its exhaust gas, as well as to increased wear of vacuum components. In 2017 a patent application was submitted where the inventors introduced a novel method of NH4+ production without the need of any form of ammonia. In this method N2 and water vapor are introduced into the hollow cathode ion source and by adjusting electric fields and pressures NH4+ can be produced at the same or even higher purity levels than H3O+. It is expected that this invention, which eliminates the problems connected to the use of NH4+ so far, will lead to a widespread use of NH4+ reagent ions in the near future. Advantages Advantages include low fragmentation – only a small amount of energy is transferred during the ionization process (compared to e.g. electron ionization), therefore fragmentation is suppressed and the obtained mass spectra are easily interpretable, no sample preparation is necessary – VOC containing air and liquids' headspaces can be analyzed directly, real-time measurements – with a typical response time of 100 ms VOCs can be monitored on-line, real-time quantification – absolute concentrations are obtained directly without previous calibration measurements, compact and robust setup – due to the simple design and the low number of parts needed for a PTR-MS instrument, it can be built in into space saving and even mobile housings, easy to operate – for the operation of a PTR-MS only electric power and a small amount of distilled water are needed. Unlike other techniques no gas cylinders are needed for buffer gas or calibration standards. Disadvantages One disadvantage is that not all molecules are detectable. Because only molecules with a proton affinity higher than water can be detected by PTR-MS, proton transfer from H3O+ is not suitable for all fields of application. Therefore, in 2009 first PTR-MS instruments were presented, which are capable of switching between H3O+ and O2+ (and NO+) as reagent ions. This enhances the number of detectable substances to important compounds like ethylene, acetylene, most halocarbons, etc. Furthermore, particularly with NO+ it is possible to separate and independently quantify some isomers. In 2012 a PTR-MS instrument was introduced which extends the selectable reagent ions to Kr+ and Xe+; this should allow for the detection of nearly all possible substances (up to the ionization energy of krypton (14 eV)). Although the ionization method for these additional reagent ions is charge-exchange rather than proton-transfer ionization the instruments can still be considered as "classic" PTR-MS instruments, i.e. 
no mass filter between the ion source and the drift tube and only some minor modifications on the ion source and vacuum design. The maximum measurable concentration is limited. Equation (2) is based on the assumption that the decrease of reagent ions is negligible, therefore the total concentration of VOCs in air must not exceed about 10 ppmv. Otherwise the instrument's response will not be linear anymore and the concentration calculation will be incorrect. This limitation can be overcome easily by diluting the sample with a well-defined amount of pure air. Sensitivity enhancing measures As it is the case for most analytical instruments, also in PTR-MS there has always been a quest for sensitivity improvement and for lowering the detection limit. However, until 2012 these improvements were limited to optimizations of the conventional setup, i.e. ion source, DC drift tube, transfer lens system, mass spectrometer (compare above). The reason for this conservative approach was that the addition of any RF ion focusing device negatively affects the well-defined PTR-MS ion chemistry, which makes quantification complicated and considerably limits comparability of measurement results obtained with different instruments. Only in 2016 a patent application providing a solution to this problem was submitted. Ion funnel Ion funnels are RF devices which have been used for decades to focus ion currents into narrow beams. In PTR-MS they have been introduced in 2012 by Barber et al. when they presented a PTR-MS setup with a PTR reaction region incorporating an ion funnel. Although the focusing properties of the ion funnel improved the sensitivity of the setup by a factor of >200 (compared to operating in DC only mode, i.e. with the ion funnel turned off) for some compounds, the sensitivities of other compounds were only improved by a factor of <10. That is, because of the highly compound dependent instrumental response one of the main advantages of PTR-MS, namely that concentration values can be directly calculated, is lost and a calibration measurement is needed for each analyte of interest. Furthermore, with this approach unusual fragmentation of analytes has been observed which complicates interpretation of measurement results and comparison between different types of instruments even more. A different concept has been introduced by the company IONICON Analytik GmbH. (Innsbruck, AT) where the ion funnel is not predominantly part of the reaction region but mainly for focusing the ions into the transfer region to the TOF mass spectrometer. In combination with the above-mentioned method of controlling the ion chemistry this enables a considerable increase in sensitivity and thus also an improvement of the detection limit, while keeping the ion chemistry well-defined and thus avoiding problems with quantification and interpretation of the results. Ion guide Quadrupole, hexapole and other multipole ion guides can be used to transfer ions between different parts of an instrument with high efficiency. In PTR-MS they are particularly suitable for being installed in the differentially pumped interface between the reaction region and the mass spectrometer. In 2014 Sulzer et al. published an article about a PTR-MS instrument which utilizes a quadrupole ion guide between the drift tube and the TOF mass spectrometer. They reported an increase in sensitivity by a factor of 25 compared to a similar instrument without an ion guide. 
Quadrupole ion guides are known to have high focusing power, but also rather narrow m/z transmission bands. Hexapole ion guides on the other hand have focusing capabilities over a broader m/z band. Additionally, less energy is put into the transmitted ions, i.e. fragmentation and other adverse effects are less likely to occur. Consequently, some latest high-end PTR-MS instruments are equipped with hexapole ion guides for considerably improved performance or even with a sequential arrangement of an ion funnel followed by a hexapole ion guide for even higher sensitivity and lower detection limit. Add-ons As a real-time trace gas analysis method based on mass spectrometry, PTR-MS has two obvious limitations: Isomers cannot be easily separated (for some it is possible by switching the reagent ions or by changing the reduced electric field strength in the drift tube) and the sample has to be in the gas phase. Countermeasures against these limitations have been developed in the form of add-ons, which can either be installed into the PTR-MS instrument or operated as external devices. FastGC Gas chromatography (GC) in combination with mass spectrometry (GC-MS) is capable of separating isomeric compounds. Although GC has been successfully coupled to PTR-MS in the past, this approach annihilates the real-time capability of the PTR-MS technology, because a single GC analysis run typically takes between 30 min and 1 h. Thus, state-of-the-art GC add-ons for PTR-MS are based on fastGC technology. Materic et al. utilized an early version of a commercially available fastGC addon in order to distinguish various monoterpene isomers. Within a fastGC run of about 70 s they were able to separate and identify: alpha-pinene, beta-pinene, camphene, myrcene, 3-carene and limonene in a standard mixture, Norway spruce, Scots pine and black pine samples, respectively. Particularly, if the operation mode of a PTR-MS instrument equipped with fastGC is continuously switched between fastGC and direct injection (dependent on the application, e.g. a loop sequence of one fastGC run followed by 10 min of direct injection measurement), real-time capability is preserved, while at the same time valuable information on substance identification and isomer separation is acquired. Aerosol and particulate matter inlet Researchers at the Leopold-Franzens University in Innsbruck invented a dedicated PTR-MS inlet system for the analysis of aerosols and particulate matter, which they called "CHemical Analysis of aeRosol ON-line (CHARON)". After further development work in collaboration with a PTR-MS manufacturer, CHARON has become readily available as an add-on for PTR-MS instruments in 2017. The add-on consists of a honeycomb activated charcoal denuder which adsorbs organic gases but transmits particles, an aerodynamic lens system that collimates sub-μm particles, and a thermo-desorber that evaporates non-refractory organic particulate matter at moderate temperatures of 100-160 °C and reduced pressures of a few mbar. So far, CHARON has predominantly been used within studies in the field of atmospheric chemistry, e.g. for airborne measurements of particulate organic matter and bulk organic aerosol analysis. Inlet for liquids A now well established setup for the controlled evaporation and subsequent analysis of liquids with PTR-MS has been published in 2013 by Fischer et al. As the authors saw the main application of their setup in the calibration of PTR-MS instruments via aqueous standards, they named it "Liquid Calibration Unit (LCU)". 
The LCU sprays a liquid standard into a gas stream at well-defined flow rates via a purpose-built nebulizer (optimized for reduced probability of clogging and high tolerance to salts in the liquid). The resulting micro-droplets are injected into a heated (> 100 °C) evaporation chamber. This concept offers two main advantages: (i) the evaporation of compounds is enhanced by the enlarged surface area of the droplets and (ii) compounds which are dissociated in water, such as acids (or bases), experience a shift in pH value when the water evaporates from a droplet. This in turn reduces dissociation and supports total evaporation of the compound. The resulting continuous gas flow containing the analytes can be directly introduced into a PTR-MS instrument for analysis. Applications The most common applications for the PTR-MS technique are environmental research, waste incineration, food science, biological research, process monitoring, indoor air quality, medicine and biotechnology and Homeland security. Trace gas analysis is another common application. Some other techniques are Secondary electrospray ionization (SESI), Electrospray ionization (ESI), and Selected-ion flow-tube mass spectrometry (SIFT). Food science Fig. 2 shows a typical PTR-MS measurement performed in food and flavor research. The test person swallows a sip of a vanillin flavored drink and breathes via his nose into a heated inlet device coupled to a PTR-MS instrument. Due to the high time resolution and sensitivity of the instrument used here, the development of vanillin in the person's breath can be monitored in real-time (please note that isoprene is shown in this figure because it is a product of human metabolism and therefore acts as an indicator for the breath cycles). The data can be used for food design, i.e. for adjusting the intensity and duration of vanillin flavor tasted by the consumer. Another example for the application of PTR-MS in food science was published in 2008 by C. Lindinger et al. in Analytical Chemistry. This publication found great response even in non-scientific media. Lindinger et al. developed a method to convert "dry" data from a PTR-MS instrument that measured headspace air from different coffee samples into expressions of flavor (e.g. "woody", "winey", "flowery", etc.) and showed that the obtained flavor profiles matched nicely to the ones created by a panel of European coffee tasting experts. Air quality analysis In Fig. 3 a mass spectrum of air inside a laboratory (obtained with a time-of-flight (TOF) based PTR-MS instrument), is shown. The peaks on m/z 19, 37 and 55 (and their isotopes) represent the reagent ions (H3O+) and their clusters. On m/z 30 and 32 NO+ and O2+, which are both impurities originating from the ion source, appear. All other peaks correspond to compounds present in typical laboratory air (e.g. high intensity of protonated acetone on m/z 59). If one takes into account that virtually all peaks visible in Fig. 3 are in fact double, triple or multiple peaks (isobaric compounds) it becomes obvious that for PTR-MS instruments selectivity is at least as important as sensitivity, especially when complex samples / compositions are analyzed. One methods for improving the selectivity is high mass resolution. When the PTR source is coupled to a high resolution mass spectrometer isobaric compounds can be distinguished and substances can be identified via their exact mass. 
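As a rough illustration of why high mass resolution helps with the isobaric peaks mentioned above, the monoisotopic masses of two candidate ions at nominal m/z 59 can be compared. The Python sketch below uses standard isotope masses; the choice of protonated acetone and protonated glyoxal as the isobaric pair is simply an illustrative assumption, not a statement about the spectrum in Fig. 3.

# Resolving power needed to separate two isobaric ions at nominal m/z 59.
# Isotope masses (u): 12C = 12 (exact), 1H = 1.00783, 16O = 15.9949.
MASS = {"C": 12.0, "H": 1.00783, "O": 15.9949}
ELECTRON = 0.00055  # mass of the electron removed from a singly charged cation

def ion_mass(formula):
    """Monoisotopic mass of a singly charged cation; formula given as {'C': 3, ...}."""
    return sum(MASS[element] * count for element, count in formula.items()) - ELECTRON

acetone_h = ion_mass({"C": 3, "H": 7, "O": 1})   # protonated acetone, C3H7O+
glyoxal_h = ion_mass({"C": 2, "H": 3, "O": 2})   # protonated glyoxal, C2H3O2+

delta = abs(acetone_h - glyoxal_h)
print(f"C3H7O+  = {acetone_h:.4f} u")
print(f"C2H3O2+ = {glyoxal_h:.4f} u")
print(f"required resolving power m/dm ~ {acetone_h / delta:.0f}")

For this pair a resolving power of roughly 1600 to 2000 already separates the peaks, which is why time-of-flight analyzers with a few thousand resolving power can distinguish many isobars; isomers, which have identical exact masses, still require the reagent-ion or fastGC approaches discussed elsewhere in this article.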
Some PTR-MS instruments are, despite the lack of a mass filter between the ion source and the drift tube, capable of switching the reagent ions (e.g. to NO+ or O2+). With the additional information obtained by using different reagent ions a much higher level of selectivity can be reached, e.g. some isomeric molecules can be distinguished. See also Chemical ionization Gas analysis Mass Spectrometry Selected-ion flow-tube mass spectrometry Secondary electrospray ionization References External links Kore Technology – Principles of PTR Mass spectrometry Ion source Measuring instruments Proton
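The basic selection rule behind switching reagent ions is a comparison of proton affinities: proton transfer from a reagent ion XH+ is exothermic only if the analyte's proton affinity exceeds that of X. A minimal sketch of this check is given below; the proton affinities of H2O (691 kJ/mol) and NH3 (853.6 kJ/mol) are the values quoted earlier in this article, while the analyte values are approximate literature numbers used purely for illustration.

# Will proton transfer from a given reagent ion to an analyte be exothermic?
PA_CONJUGATE_BASE = {"H3O+": 691.0, "NH4+": 853.6}   # kJ/mol, from the text above

# Approximate proton affinities of a few analytes (kJ/mol) -- illustrative only.
PA_ANALYTE = {"acetone": 812.0, "benzene": 750.0, "ethylene": 680.0}

def proton_transfer_allowed(analyte, reagent_ion):
    """True if proton transfer from reagent_ion to the analyte is energetically allowed."""
    return PA_ANALYTE[analyte] > PA_CONJUGATE_BASE[reagent_ion]

for analyte in PA_ANALYTE:
    for ion in PA_CONJUGATE_BASE:
        outcome = "proton transfer" if proton_transfer_allowed(analyte, ion) else "no proton transfer"
        print(f"{analyte:8s} with {ion}: {outcome}")

Note that, as described above, NH4+ can still attach to some analytes via cluster formation even when direct proton transfer is not allowed, and charge-exchange ions such as O2+ or Kr+ follow an ionization-energy rather than a proton-affinity criterion, so this check only covers the proton-transfer channel.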
Proton-transfer-reaction mass spectrometry
[ "Physics", "Chemistry", "Technology", "Engineering" ]
4,012
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Ion source", "Measuring instruments", "Mass spectrometry", "Matter" ]
22,480,056
https://en.wikipedia.org/wiki/Dithiooxamide
Dithiooxamide, also known as rubeanic acid, is an organic compound. It is the sulfur analog of oxamide. It acts as a chelating agent, e.g. in the detection or determination of copper. It has also been used as a building block in the synthesis of cyclen. Materials chemistry Rubeanic acid has received much attention as a precursor to photonic or electroactive materials. It is a precursor to inorganic C-S-N rings. It condenses with acid chlorides to give thiazoles. It forms coordination polymers. References Thioamides
Dithiooxamide
[ "Chemistry" ]
125
[ "Functional groups", "Organic compounds", "Thioamides", "Organic compound stubs", "Organic chemistry stubs" ]
22,483,052
https://en.wikipedia.org/wiki/Crich%20beta-mannosylation
The Crich β-mannosylation in organic chemistry is a synthetic strategy which is used in carbohydrate synthesis to generate a 1,2-cis-glycosidic bond. This type of linkate is generally very difficult to make, and specific methods like the Crich β-mannosylation are used to overcome these issues. The technique takes its name from its developer, Professor David Crich. Background The development of facile chemical glycosylation protocols is essential to synthesizing complex oligosaccharides. Among many diverse type of glycosidic linkages, the 1,2-cis-β-glycoside, which exists in many biologically relevant glycoconjugates and oligosaccharides, is arguably one of the most difficult to synthesize. The challenges in constructing β-mannose linkage have been well documented in several reviews. To date, a few laboratories have devised efficient methodologies to overcome these synthetic hurdles, and achieved varying degrees of success. Of those elegant approaches, a highly stereoselective β-mannosylation protocol developed by Crich and co-workers was realized as a breakthrough in β-mannoside synthesis. This strategy is based on the initial activation of α-mannosyl sulfoxides 1 with triflic anhydride (Tf2O) using DTBMP (2,6-di-tert-butyl-4-methylpyridine) as a base, followed by nucleophilic substitution of glycosyl acceptors (HOR3) to provide the 1,2-cis-β-glycoside 2 in good yield and selectivity (Scheme 1). Mechanistic Studies The mechanistic details of this reaction have been extensively explored by Crich’s laboratories. Low-temperature 1H, 13C, and 19F NMR spectroscopic investigations revealed that anomeric triflate 3 derived from 1 is the intermediate glycosyl donor. Moreover, the mechanism of glycosidic bond forming reaction (3→2) was examined thoroughly by the determination of kinetic isotopic effects (KIEs) and NMR spectroscopy. Consequently, the magnitude of KIEs indicated that the displacement of the triflate from 3 proceeded with the development of significant oxacarbenium ion character at the anomeric position. This might be rationalized either by (1) a dissociative mechanism involving the intermediacy of either a transient contact ion pair (CIP) 4 or a solvent-separated ion pair (SSIP) 5, or (2) a mechanistically variant transition state 7 (Scheme 2). For the intermediate CIP 4, the triflate anion is closely associated with face where it just departed thus shields that side against nucleophilic attack. For the alternative intermediate SSIP 5 which is in equilibrium with an initial CIP, the anomeric center could presumably be attacked by incoming alcohol from either face, giving β-mannoside 2 along with the undesired α-anomer 6. Along these lines, the presence of the 4,6-O-benzylidene protecting group, which serves to rigidify the pyranoside against rehybridization at the anomeric carbon, is essential in shifting the equilibrium toward the covalent triflate, thus reducing α-glycoside formation. Additionally, the only intermediate observed by NMR spectroscopy is the covalent triflate 3, indicating that the complete set of equilibria between 3, the CIP 4, and SSIP 5 set is very heavily biased towards 3. Reaction Scope Some representative examples of Crich’s β-mannosylation are shown in Scheme 3. It is noteworthy that, with this method in hand, primary, secondary, and tertiary alcohols (9, 12, and 13) all serve as glycosyl acceptors effectively in terms of yields and selectivity. 
In a recent version, the β-mannosylation of thioglycoside 14 and its analogues were examined to prepare sterically hindered glycosides, in which PhSOTf (or other newly developed sulfur-type oxidants) served as a convenient reagent for the in situ generation of the glycosyl triflate from 14, thus facilitating the reaction. Solid-Phase Synthesis The polymer-supported synthesis of β-mannosides based on the Crich’s protocol has also been studied in the same laboratories. As shown in Scheme 4, diol 17 was first reacted with polystyrylboronic acid (18) to offer the bound donor 19, in which 4,6-O-phenylboronates served as the torsionally disarming protecting group. With that, activation of the thioglycoside 19 was readily achieved, and the coupling reaction with the acceptor alcohol underwent smoothly to provide the bound β-mannoside 20. After removal of the excess reagents and byproducts from the resin, 20 was then treated with aqueous acetone to release 4,6-diol 21. Overall, this is a powerful method for solid-phase synthesis of β-mannosides, which has great potential to be further extended, was established. See also Carbohydrate synthesis Difficult linkages Carbohydrate chemistry References Carbohydrate chemistry Organic reactions
Crich beta-mannosylation
[ "Chemistry" ]
1,144
[ "Organic reactions", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Glycobiology" ]
22,484,332
https://en.wikipedia.org/wiki/Glycosyl%20acceptor
A glycosyl acceptor is any suitable nucleophile-containing molecule that will react with a glycosyl donor to form a new glycosidic bond. By convention, the acceptor is the member of this pair which did not contain the resulting anomeric carbon of the new glycosidic bond. Since the nucleophilic atom of the acceptor is typically an oxygen atom, this can be remembered using the mnemonic of the acceptor is the alcohol. A glycosyl acceptor can be a mono- or oligosaccharide that contains an available nucleophile, such as an unprotected hydroxyl. Background Examples glucose to haemoglobin See also Chemical glycosylation Glycosyl halide Armed and disarmed saccharides Carbohydrate chemistry References Carbohydrate chemistry Organic reactions
Glycosyl acceptor
[ "Chemistry" ]
195
[ "Organic reactions", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Glycobiology" ]
22,484,557
https://en.wikipedia.org/wiki/Oxocarbenium
An oxocarbenium ion (or oxacarbenium ion) is a chemical species characterized by a central sp2-hybridized carbon, an oxygen substituent, and an overall positive charge that is delocalized between the central carbon and oxygen atoms. An oxocarbenium ion is represented by two limiting resonance structures, one in the form of a carbenium ion with the positive charge on carbon and the other in the form of an oxonium species with the formal charge on oxygen. As a resonance hybrid, the true structure falls between the two. Compared to neutral carbonyl compounds like ketones or esters, the carbenium ion form is a larger contributor to the structure. They are common reactive intermediates in the hydrolysis of glycosidic bonds, and are a commonly used strategy for chemical glycosylation. These ions have since been proposed as reactive intermediates in a wide range of chemical transformations, and have been utilized in the total synthesis of several natural products. In addition, they commonly appear in mechanisms of enzyme-catalyzed biosynthesis and hydrolysis of carbohydrates in nature. Anthocyanins are natural flavylium dyes, which are stabilized oxocarbenium compounds. Anthocyanins are responsible for the colors of a wide variety of common flowers such as pansies and edible plants such as eggplant and blueberry. Electron distribution and reactivity The best Lewis structure for an oxocarbenium ion contains an oxygen–carbon double bond, with the oxygen atom attached to an additional group and consequently taking on a formal positive charge. In the language of canonical structures (or "resonance"), the polarization of the π bond is described by a secondary carbocationic resonance form, with a formal positive charge on carbon (see above). In terms of frontier molecular orbital theory, the Lowest Unoccupied Molecular Orbital (LUMO) of the oxocarbenium ion is a π* orbital that has the large lobe on the carbon atom; the more electronegative oxygen contributes less to the LUMO. Consequently, in an event of a nucleophilic attack, the carbon is the electrophilic site. Compared to a ketone, the polarization of an oxocarbenium ion is accentuated: they more strongly resemble a "true" carbocation, and they are more reactive toward nucleophiles. In organic reactions, ketones are commonly activated by the coordination of a Lewis acid or Brønsted acid to the oxygen to generate an oxocarbenium ion as an intermediate. Numerically, a typical partial charge (derived from Hartree-Fock computations) for the carbonyl carbon of a ketone R2C=O (like acetone) is δ+ = 0.51. With the addition of an acidic hydrogen to the oxygen atom to produce [R2C=OH]+, the partial charge increases to δ+ = 0.61. In comparison, the nitrogen analogues of ketones and oxocarbenium ions, imines (R2C=NR) and iminium ions ([R2C=NRH]+), respectively, have partial charges of δ+ = 0.33 and δ+ = 0.54, respectively. The order of partial positive charge on the carbonyl carbon is therefore imine < ketone < iminium < oxocarbenium. This is also the order of electrophilicity for species containing C=X (X = O, NR) bonds. This order is synthetically significant and explains, for example, why reductive aminations are often best carried out at pH = 5 to 6 using sodium cyanoborohydride (Na+[H3B(CN)]−) or sodium triacetoxyborohydride (Na+[HB(OAc)3]−) as a reagent. 
Bearing an electron-withdrawing group, sodium cyanoborohydride and sodium triacetoxyborohydride are poorer reducing agents than sodium borohydride, and their direct reaction with ketones is generally a slow and inefficient process. However, the iminium ion (but not the imine itself) formed in situ during a reductive amination reaction is a stronger electrophile than the ketone starting material and will react with the hydride source at a synthetically useful rate. Importantly, the reaction is conducted under mildly acidic conditions that protonate the imine intermediate to a significant extent, forming the iminium ion, while not being strongly acidic enough to protonate the ketone, which would form the even more electrophilic oxocarbenium ion. Thus, the reaction conditions and reagent ensure that amine is formed selectively from iminium reduction, instead of direct reduction of the carbonyl group (or its protonated form) to form an alcohol. Formation Formation of oxocarbenium ions can proceed through several different pathways. Most commonly, the oxygen of a ketone will bind to a Lewis Acid, which activates the ketone, making it a more effective electrophile. The Lewis acid can be a wide range of molecules, from a simple hydrogen atom to metal complexes. The remainder of this article will focus on alkyl oxocarbenium ions, however, where the atom added to the oxygen is a carbon. One way that this sort of ion will form is the elimination of a leaving group. In carbohydrate chemistry, this leaving group is often an ether or ester. An alternative to elimination is direct deprotonation of the molecule to form the ion, however, this can be difficult and require strong bases to achieve. Applications to synthesis 5-membered rings The stereochemistry involved in the reactions of five-membered rings can be predicted by an envelope transition state model. Nucleophiles favor addition from the "inside" of the envelope, or from the top of the figure on the right. The "inside" addition produces a results in a staggered conformation, rather than the eclipsed conformation that results from the "outside" addition. 6-membered rings The transition state model for a six-membered oxocarbenium ring was proposed earlier in 1992 by Woods et al. The general strategy for determining the stereochemistry of a nucleophilic addition to a six-membered ring follows a similar procedure to the case of the five-membered ring. The assumption that one makes for this analysis is that the ring is in the same conformation as cyclohexene, with three carbons and the oxygen in a plane with the two other carbon atome puckered out of the plane, with one above and one below (see the figure to the right). Based on the substituients present on the ring, the lowest energy conformation is determined, keeping in mind steric and stereoelectronic effects (see the section below for a discussion of stereoelectronic effects in oxocarbenium rings). Once this conformation is established, one can consider the nucleophilic addition. The addition will proceed through the low energy chair transition state, rather than the relatively high energy twist-boat. An example of this type of reaction can be seen below. The example also highlights how the stereoelectronic effect exerted by an electronegative substituent flips the lowest energy conformation and leads to opposite selectivity. 
Stereoelectronic effects In an alkene ring that does not contain an oxygen atom, any large substituent prefers to be in an equatorial position, in order to minimize steric effects. It has been observed in rings containing oxocarbenium ions that electronegative substituents prefer the axial or pseudo-axial positions. When the electronegative atom is in the axial position, its electron density can be donated through space to the positively charged oxygen atom in the ring. This electronic interaction stabilizes the axial conformation. Hydroxyl groups, ethers and halogens are examples of substituents that exhibit this phenomenon. Stereoelectronic effects must be taken into consideration when determining the lowest energy conformation in the analysis for nucleophilic addition to an oxocarbenium ion. Cycloadditions In organic synthesis, vinyl oxocarbenium ions (structure on right) can be utilized in a wide range of cycloaddition reactions. They are commonly employed as dienophiles in the Diels–Alder reaction. An electron withdrawing ketone is often added to the dienophile to increase the rate of the reaction, and these ketones are often converted to vinyl oxocarbenium ions during the reaction. It is not clear that an oxocarbenium ion necessarily will form, but Roush and co-workers demonstrated the oxocarbenium intermediate in the cyclization shown below. Two products were observed in this reaction, which could only form if the oxocarbenium ring is present as an intermediate. [4+3], [2+2], [3+2] and [5+2] cycloadditions with oxocarbenium intermediates have also been reported. Aldol reaction Chiral oxocarbenium ions have been exploited to carry out highly diastereoselective and enantioselective acetate aldol addition reactions. The oxocarbenium ion is used as an electrophile in the reaction. When the methyl group increases in size, the diastereoselevtivity increases. Examples from total synthesis Oxocarbenium ions have been utilized in total synthesis on several occasions. A major subunit of (+)-clavosolide was synthesized with a reduction of a six-membered oxocarbenium ring. All the large substituents were found in an equatorial position, and the transformation went through the chair transition state, as predicted. A second example is seen in the key step of the synthesis of (−)-neopeltolide, which uses another six-membered oxocarbenium ring reduction for a diastereoselective hydride addition. Applications to biology In biological systems, oxocarbenium ions are mostly seen during reactions of carbohydrates. Since sugars are present in the structure of nucleic acids, with a ribose sugar present in RNA and a deoxyribose present in the structure of DNA, their chemistry plays an important role in wide range of cellular functions of nucleic acids. In addition to their functions in nucleotides, sugars are also used for structural components of organisms, as energy storage molecules, cell signaling molecules, protein modification and play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development. The abundance of sugar chemistry in biological processes leads many reaction mechanisms to proceed through oxocarbenium ions. Several important biological reactions that utilize oxocarbenium ions are outlined in this section. Nucleotide biosynthesis Nucleotides can undergo enzyme-catalyzed intramolecular cyclization in order to produce several important biological molecules. These cyclizations typically proceed through an oxocarbenium intermediate. 
An example of this reaction can be seen in the cyclization cyclic ADP ribose, which is an important molecule for intracellular calcium signaling. Glycosidases A glycosidase is an enzyme that catalyzes the breakdown of a glycosidic linkage to produce two smaller sugars. This process has important implications in the utilization of stored energy, like glycogen in animals, as well as in the breakdown of cellulose by organisms that feed on plants. In general, aspartic or glutamic acid residues in the active site of the enzyme catalyze the hydrolysis of the glycosidic bond. The mechanism of these enzymes involves an oxocarbenium ion intermediate, a general example of which is shown below. See also Carbocation Chemical glycosylation Glycosyl donor Glycosidase Oxocarbon anion References Carbohydrate chemistry Organic reactions Reactive intermediates Carbocations Oxycations
Oxocarbenium
[ "Chemistry" ]
2,569
[ "Organic reactions", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Glycobiology" ]
22,487,049
https://en.wikipedia.org/wiki/Michael%20McKubre
Michael Charles Harold McKubre is an electrochemist involved with cold fusion energy research. McKubre was the director of the Energy Research Center at SRI International in 1998. He is a native of New Zealand. Education McKubre completed two degrees at Victoria University of Wellington, a Master's degree in 1972, titled A Study of the Frequency Domain Induced Polarisation Effects Displayed by Clay and by Cation Exchange Resin, Model Soil Systems, followed by a PhD in 1976 on membrane polarisation effects in simulated rock systems. Career From 1989 to 2002, he researched cold fusion at SRI International. Unlike other researchers in the same field, he obtained mainstream funding during all his research: first from the Electric Power Research Institute, then from the Japanese government, and in 2002 he had funding from the U.S. government. In January 1992 a cold fusion cell exploded in an SRI lab. One of McKubre's collaborators was killed and three people including McKubre were wounded. McKubre still has pieces of glass embedded in his side. Subsequent experiments were done behind bulletproof glass. In 2004 he and other cold fusion researchers asked the United States Department of Energy (DOE) to give a new review to the field of cold fusion, and he co-authored a report with all the available experimental and theoretical evidence since the 1989 review. The 2004 review concluded that "while significant progress has been made in the sophistication of calorimeters since the review of this subject in 1989, the conclusions reached by the reviewers today are similar to those found in the 1989 review." As of 2010, he was still making experiments with palladium cells at SRI International, and collaborates with the ENEA laboratory, where the most reliable palladium is being produced. McKubre more recently took part as one of the 22 physicists of the Steorn "jury". Selected publications (manuscript) Paper listing the available experimental evidence of cold fusion. References External links New Zealand scientists SRI International people Living people Year of birth missing (living people) Cold fusion Electrochemists Victoria University of Wellington alumni
Michael McKubre
[ "Physics", "Chemistry" ]
424
[ "Electrochemistry", "Cold fusion", "Nuclear physics", "Nuclear fusion", "Electrochemists" ]
22,489,171
https://en.wikipedia.org/wiki/Gold%20number
The gold number is the minimum weight (in milligrams) of a protective colloid/lyophilic colloid required to prevent the coagulation of 10 ml of a standard gold hydrosol when 1 ml of a 10% sodium chloride solution is added to it. It was first used by Richard Adolf Zsigmondy in 1901. An electrical double layer is normally present on the gold sol particles, resulting in electrostatic repulsion between the particles. The sodium chloride ions disrupt this electrical double layer, causing coagulation to occur. The coagulation of the gold sol results in an increase in particle size, indicated by a colour change from red to blue or purple. The higher the gold number, the lower the protective power of the colloid, because a greater amount of colloid is required to prevent coagulation. The gold numbers of some colloids are given below. References Colloidal chemistry
Gold number
[ "Chemistry" ]
189
[ "Colloidal chemistry", "Surface science", "Colloids" ]
23,989,039
https://en.wikipedia.org/wiki/Choke%20manifold
In oil and gas production a choke manifold is used to lower the pressure from the well head. It consist of a set of high pressure valves and at least two chokes. These chokes can be fixed or adjustable or a mix of both. The redundancy is needed so that if one choke has to be taken out of service, the flow can be directed through another one. By lowering pressure the retrieved gases can be flared off on site. Sources Schlumberger Oilfield Glossary Petroleum production Industrial equipment
Choke manifold
[ "Chemistry", "Engineering" ]
105
[ "Petroleum", "nan", "Petroleum stubs" ]
23,990,825
https://en.wikipedia.org/wiki/Bromochlorofluoroiodomethane
Bromochlorofluoroiodomethane is a hypothetical haloalkane with all four stable halogen substituents present in it. Overview This compound can be seen as a methane molecule, whose four hydrogen atoms are each replaced with a different halogen atom. As the mirror images of this molecule are not superimposable, the molecule has two enantiomers. As one of the simplest such molecules, it is often cited as the prototypical chiral compound. However, since there is no synthetic route known to produce bromochlorofluoroiodomethane, the related simple chiral compound bromochlorofluoromethane is used instead when such a compound is required for research. References Halomethanes Hypothetical chemical compounds Chirality
Bromochlorofluoroiodomethane
[ "Physics", "Chemistry", "Biology" ]
163
[ "Pharmacology", "Origin of life", "Biochemistry", "Theoretical chemistry stubs", "Stereochemistry", "Hypotheses in chemistry", "Chirality", "Theoretical chemistry", "Stereochemistry stubs", "Hypothetical chemical compounds", "Asymmetry", "Biological hypotheses", "Symmetry" ]
23,991,325
https://en.wikipedia.org/wiki/Jacobi%20form
In mathematics, a Jacobi form is an automorphic form on the Jacobi group, which is the semidirect product of the symplectic group Sp(n;R) and the Heisenberg group. The theory was first systematically studied by Eichler and Zagier (1985). Definition A Jacobi form of level 1, weight k and index m is a function φ(τ, z) of two complex variables (with τ in the upper half plane and z in C) such that φ((aτ + b)/(cτ + d), z/(cτ + d)) = (cτ + d)^k exp(2πimcz²/(cτ + d)) φ(τ, z) for all matrices (a b; c d) in SL(2, Z), and φ(τ, z + λτ + μ) = exp(−2πim(λ²τ + 2λz)) φ(τ, z) for all integers λ, μ. In addition, φ has a Fourier expansion φ(τ, z) = Σ C(n, r) q^n ζ^r, where q = exp(2πiτ), ζ = exp(2πiz), and the sum runs over integers n ≥ 0 and r with r² ≤ 4nm. Examples Examples in two variables include Jacobi theta functions, the Weierstrass ℘ function, and Fourier–Jacobi coefficients of Siegel modular forms of genus 2. Examples with more than two variables include characters of some irreducible highest-weight representations of affine Kac–Moody algebras. Meromorphic Jacobi forms appear in the theory of mock modular forms. References Modular forms Theta functions
Jacobi form
[ "Mathematics" ]
177
[ "Modular forms", "Number theory" ]
23,992,011
https://en.wikipedia.org/wiki/List%20of%20derivatives%20and%20integrals%20in%20alternative%20calculi
There are many alternatives to the classical calculus of Newton and Leibniz; for example, each of the infinitely many non-Newtonian calculi. Occasionally an alternative calculus is more suited than the classical calculus for expressing a given scientific or mathematical idea. The table below is intended to assist people working with the alternative calculus called the "geometric calculus" (or its discrete analog). Interested readers are encouraged to improve the table by inserting citations for verification, and by inserting more functions and more calculi. Table In the following table, ψ(x) is the digamma function, K(x) is the K-function, !x is the subfactorial, and Bn(x) are the Bernoulli polynomials generalized to real numbers. See also Derivative Differentiation rules Indefinite product Product integral Fractal derivative References External links Non-Newtonian calculus website Non-Newtonian calculus Mathematics-related lists Mathematical tables
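The central object tabulated for the geometric calculus is the geometric (multiplicative) derivative, which for positive f can be written as f*(x) = exp(f'(x)/f(x)) = exp(d(ln f)/dx). The short Python sketch below checks this identity numerically for one function; the test function, evaluation point and step size are arbitrary illustrative choices.

import math

def geometric_derivative(f, x, h=1e-6):
    """Numerical geometric derivative: approximates the limit of (f(x+h)/f(x))**(1/h)."""
    return (f(x + h) / f(x)) ** (1.0 / h)

# Example: f(x) = exp(x**2). Since f'(x)/f(x) = 2x, the geometric derivative
# should equal exp(2x).
f = lambda x: math.exp(x * x)
x = 1.3
numerical = geometric_derivative(f, x)
closed_form = math.exp(2 * x)
print(f"numerical   : {numerical:.6f}")
print(f"closed form : {closed_form:.6f}")

Both values agree to several decimal places, which is the kind of entry (function together with its geometric derivative in closed form) that the table collects.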
List of derivatives and integrals in alternative calculi
[ "Mathematics" ]
175
[ "Mathematical tables", "Non-Newtonian calculus", "Calculus" ]
23,992,863
https://en.wikipedia.org/wiki/Simultaneous%20game
In game theory, a simultaneous game or static game is a game where each player chooses their action without knowledge of the actions chosen by other players. Simultaneous games contrast with sequential games, which are played by the players taking turns (moves alternate between players). In other words, both players normally act at the same time in a simultaneous game. Even if the players do not act at the same time, both players are uninformed of each other's move while making their decisions. Normal form representations are usually used for simultaneous games. Given a continuous game, players will have different information sets if the game is simultaneous than if it is sequential because they have less information to act on at each step in the game. For example, in a two player continuous game that is sequential, the second player can act in response to the action taken by the first player. However, this is not possible in a simultaneous game where both players act at the same time. Characteristics In sequential games, players observe what rivals have done in the past and there is a specific order of play. However, in simultaneous games, all players select strategies without observing the choices of their rivals and players choose at exactly the same time. A simple example is rock-paper-scissors in which all players make their choice at exactly the same time. However moving at exactly the same time isn’t always taken literally, instead players may move without being able to see the choices of other players. A simple example is an election in which not all voters will vote literally at the same time but each voter will vote not knowing what anyone else has chosen. Given that decision makers are rational, then so is individual rationality. An outcome is individually rational if it yields each player at least his security level. The security level for Player i is the amount max min Hi (s) that the player can guarantee themselves unilaterally, that is, without considering the actions of other players. Representation In a simultaneous game, players will make their moves simultaneously, determine the outcome of the game and receive their payoffs. The most common representation of a simultaneous game is normal form (matrix form). For a 2 player game; one player selects a row and the other player selects a column at exactly the same time. Traditionally, within a cell, the first entry is the payoff of the row player, the second entry is the payoff of the column player. The “cell” that is chosen is the outcome of the game. To determine which "cell" is chosen, the payoffs for both the row player and the column player must be compared respectively. Each player is best off where their payoff is higher. Rock–paper–scissors, a widely played hand game, is an example of a simultaneous game. Both players make a decision without knowledge of the opponent's decision, and reveal their hands at the same time. There are two players in this game and each of them has three different strategies to make their decision; the combination of strategy profiles (a complete set of each player's possible strategies) forms a 3×3 table. We will display Player 1's strategies as rows and Player 2's strategies as columns. In the table, the numbers in red represent the payoff to Player 1, the numbers in blue represent the payoff to Player 2. Hence, the pay off for a 2 player game in rock-paper-scissors will look like this: Another common representation of a simultaneous game is extensive form (game tree). 
Information sets are used to emphasize the imperfect information. Although it is not simple, it is easier to use game trees for games with more than 2 players. Even though simultaneous games are typically represented in normal form, they can be represented using extensive form too. While in extensive form one player’s decision must be draw before that of the other, by definition such representation does not correspond to the real life timing of the players’ decisions in a simultaneous game. The key to modeling simultaneous games in the extensive form is to get the information sets right. A dashed line between nodes in extensive form representation of a game represents information asymmetry and specifies that, during the game, a party cannot distinguish between the nodes, due to the party being unaware of the other party's decision (by definition of "simultaneous game"). Some variants of chess that belong to this class of games include synchronous chess and parity chess. Bimatrix Game In a simultaneous game, players only have one move and all players' moves are made simultaneously. The number of players in a game must be stipulated and all possible moves for each player must be listed. Each player may have different roles and options for moves. However, each player has a finite number of options available to choose. Two Players An example of a simultaneous 2-player game: A town has two companies, A and B, who currently make $8,000,000 each and need to determine whether they should advertise. The table below shows the payoff patterns; the rows are options of A and the columns are options of B. The entries are payoffs for A and B, respectively, separated by a comma. Two Players (zero sum) A zero-sum game is when the sum of payoffs equals zero for any outcome i.e. the losers pay for the winners gains. For a zero-sum 2-player game the payoff of player A doesn’t have to be displayed since it is the negative of the payoff of player B. An example of a simultaneous zero-sum 2-player game: Rock–paper–scissors is being played by two friends, A and B for $10. The first cell stands for a payoff of 0 for both players. The second cell is a payoff of 10 for A which has to be paid by B, therefore a payoff of -10 for B. Three or more Players An example of a simultaneous 3-player game: A classroom vote is held as to whether or not they should have an increased amount of free time. Player A selects the matrix, player B selects the row, and player C selects the column. The payoffs are: Symmetric Games All of the above examples have been symmetric. All players have the same options so if players interchange their moves, they also interchange their payoffs. By design, symmetric games are fair in which every player is given the same chances. Strategies - The Best Choice Game theory should provide players with advice on how to find which move is best. These are known as “Best Response” strategies. Pure vs Mixed Strategy Pure strategies are those in which players pick only one strategy from their best response. A Pure Strategy determines all your possible moves in a game, it is a complete plan for a player in a given game. Mixed strategies are those in which players randomize strategies in their best responses set. These have associated probabilities with each set of strategies. For simultaneous games, players will typically select mixed strategies while very occasionally choosing pure strategies. 
The reason for this is that in a game where players don’t know what the other one will choose it is best to pick the option that is likely to give the you the greatest benefit for the lowest risk given the other player could choose anything i.e. if you pick your best option but the other player also picks their best option, someone will suffer. Dominant vs Dominated Strategy A dominant strategy provides a player with the highest possible payoff for any strategy of the other players. In simultaneous games, the best move a player can make is to follow their dominant strategy, if one exists. When analyzing a simultaneous game: Firstly, identify any dominant strategies for all players. If each player has a dominant strategy, then players will play that strategy however if there is more than one dominant strategy then any of them are possible. Secondly, if there aren’t any dominant strategies, identify all strategies dominated by other strategies. Then eliminate the dominated strategies and the remaining are strategies players will play. Maximin Strategy Some people always expect the worst and believe that others want to bring them down when in fact others want to maximise their payoffs. Still, nonetheless, player A will concentrate on their smallest possible payoff, believing this is what player A will get, they will choose the option with the highest value. This option is the maximin move (strategy), as it maximises the minimum possible payoff. Thus, the player can be assured a payoff of at least the maximin value, regardless of how the others are playing. The player doesn’t have the know the payoffs of the other players in order to choose the maximin move, therefore players can choose the maximin strategy in a simultaneous game regardless of what the other players choose. Nash Equilibrium A pure Nash Equilibrium is when no one can gain a higher payoff by deviating from their move, provided others stick with their original choices. Nash equilibria are self-enforcing contracts, in which negotiation happens prior to the game being played in which each player best sticks with their negotiated move. In a Nash Equilibrium, each player is best responded to the choices of the other player. Prisoner's Dilemma The prisoner’s dilemma originated with Merrill Flood and Melvin Dresher and is one of the most famous games in Game theory. The game is usually presented as follows: Two members of a criminal gang have been apprehended by the police. Both individuals now sit in solitary confinement. The prosecutors have the evidence required to put both prisoners away on lesser charges. However, they do not possess the evidence required to convict the prisoners on their principle charges. The prosecution therefore simultaneously offers both prisoners a deal where they can choose to cooperate with one another by remaining silent, or they can choose betrayal, meaning they testify against their partner and receive a reduced sentence. It should be mentioned that the prisoners cannot communicate with one another. Therefore, resulting in the following payoff matrix: This game results in a clear dominant strategy of betrayal where the only strong Nash Equilibrium is for both prisoners to confess. This is because we assume both prisoners to be rational and possessing no loyalty towards one another. Therefore, betrayal provides a greater reward for a majority of the potential outcomes. If B cooperates, A should choose betrayal, as serving 3 months is better than serving 1 year. 
Moreover, if B chooses betrayal, then A should also choose betrayal as serving 2 years is better than serving 3. The choice to cooperate clearly provides a better outcome for the two prisoners however from a perspective of self interest this option would be deemed irrational. The aforementioned both cooperating option features the least total time spent in prison, serving 2 years total. This total is significantly less than the Nash Equilibrium total, where both cooperate, of 4 years. However, given the constraints that Prisoners A and B are individually motivated, they will always choose betrayal. They do so by selecting the best option for themselves while considering each possible decisions of the other prisoner. Battle of the Sexes In the battle of the sexes game, a wife and husband decide independently whether to go to a football game or the ballet. Each person likes to do something together with the other, but the husband prefers football and the wife prefers ballet. The two Nash equilibria, and therefore the best responses for both husband and wife, are for them to both pick the same leisure activity e.g. (ballet, ballet) or (football, football). The table below shows the payoff for each option: Socially Desirable Outcomes Simultaneous games are designed to inform strategic choices in competitive and non cooperative environments. However, is important to note that Nash equilibria and many of the aforementioned strategies generally fail to result in socially desirable outcomes. Pareto Optimality Pareto efficiency is a notion rooted in the theoretical construct of perfect competition. Originating with Italian economist Vilfredo Pareto the concept refers to a state in which an economy has maximized efficiency in terms of resource allocation. Pareto Efficiency is closely linked to Pareto Optimality which is an ideal of Welfare Economics and often implies a notion of ethical consideration. A simultaneous game, for example, is said to reach Pareto optimality if there is no alternative outcome that can make at least one player better off while leaving all other players at least as well off. Therefore, these outcomes are referred to as socially desirable outcomes. The Stag Hunt The Stag Hunt by philosopher Jean-Jacques Rousseau is a simultaneous game in which there are two players. The decision to be made is whether or not each player wishes to hunt a Stag or a Hare. Naturally hunting a Stag will provide greater utility in comparison to hunting a Hare. However, in order to hunt a Stag both players need to work together. On the other hand, each player is perfectly capable of hunting a hare alone. The resulting dilemma is that neither player can be sure of what the other will choose to do. Thus, providing the potential for a player to receive no payoff should they be the only party to choose to hunt a Stag. Therefore, resulting in the following payoff matrix: The game is designed to illustrate a clear Pareto optimality where both players cooperate to hunt a Stag. However, due to the inherent risk of the game, such an outcome does not always come to fruition. It is imperative to note that Pareto optimality is not a strategic solution for simultaneous games. However, the ideal informs players about the potential for more efficient outcomes. Moreover, potentially providing insight into how players should learn to play over time. See also Sequential game Simultaneous action selection References Bibliography Game theory game classes Game theory
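The best-response reasoning used throughout this article (normal-form payoff matrices, dominant strategies, and the prisoner's dilemma) can be mechanized for small games. The Python sketch below enumerates the pure-strategy Nash equilibria of a two-player game given as a payoff matrix; the prisoner's dilemma numbers follow the sentence lengths discussed above, written as negative payoffs (years served) so that larger is better, and this encoding is only one reasonable way to write that table.

# Pure-strategy Nash equilibria of a 2-player normal-form game.
# payoff[i][j] = (row player's payoff, column player's payoff)

def pure_nash(payoff):
    rows, cols = len(payoff), len(payoff[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            # (i, j) is an equilibrium if neither player gains by deviating unilaterally.
            row_best = all(payoff[i][j][0] >= payoff[k][j][0] for k in range(rows))
            col_best = all(payoff[i][j][1] >= payoff[i][l][1] for l in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate (stay silent), strategy 1 = betray.
# Payoffs are negative years served, following the discussion in the text.
prisoners_dilemma = [[(-1.0, -1.0), (-3.0, -0.25)],
                     [(-0.25, -3.0), (-2.0, -2.0)]]

print(pure_nash(prisoners_dilemma))   # -> [(1, 1)]: mutual betrayal is the only pure equilibrium

The same function applied to a battle-of-the-sexes payoff matrix would return two equilibria (both pick football, both pick ballet), matching the analysis given above.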
Simultaneous game
[ "Mathematics" ]
2,755
[ "Game theory game classes", "Game theory" ]
23,993,412
https://en.wikipedia.org/wiki/C6H4Cl2O
{{DISPLAYTITLE:C6H4Cl2O}} The molecular formula C6H4Cl2O could refer to: 2,3-Dichlorophenol 2,4-Dichlorophenol 2,5-Dichlorophenol 2,6-Dichlorophenol 3,4-Dichlorophenol 3,5-Dichlorophenol
C6H4Cl2O
[ "Chemistry" ]
91
[ "Isomerism", "Set index articles on molecular formulas" ]
1,291,319
https://en.wikipedia.org/wiki/Time-invariant%20system
In control theory, a time-invariant (TI) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function. If this function depends only indirectly on the time-domain (via the input function, for example), then that is a system that would be considered time-invariant. Conversely, any direct dependence on the time-domain of the system function could be considered as a "time-varying system". Mathematically speaking, "time-invariance" of a system is the following property: Given a system with a time-dependent output function , and a time-dependent input function , the system will be considered time-invariant if a time-delay on the input directly equates to a time-delay of the output function. For example, if time is "elapsed time", then "time-invariance" implies that the relationship between the input function and the output function is constant with respect to time In the language of signal processing, this property can be satisfied if the transfer function of the system is not a direct function of time except as expressed by the input and output. In the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right: If a system is time-invariant then the system block commutes with an arbitrary delay. If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems. Simple example To demonstrate how to determine if a system is time-invariant, consider the two systems: System A: System B: Since the System Function for system A explicitly depends on t outside of , it is not time-invariant because the time-dependence is not explicitly a function of the input function. In contrast, system B's time-dependence is only a function of the time-varying input . This makes system B time-invariant. The Formal Example below shows in more detail that while System B is a Shift-Invariant System as a function of time, t, System A is not. Formal example A more formal proof of why systems A and B above differ is now presented. To perform this proof, the second definition will be used. System A: Start with a delay of the input Now delay the output by Clearly , therefore the system is not time-invariant. System B: Start with a delay of the input Now delay the output by Clearly , therefore the system is time-invariant. More generally, the relationship between the input and output is and its variation with time is For time-invariant systems, the system properties remain constant with time, Applied to Systems A and B above: in general, so it is not time-invariant, so it is time-invariant. Abstract example We can denote the shift operator by where is the amount by which a vector's index set should be shifted. For example, the "advance-by-1" system can be represented in this abstract notation by where is a function given by with the system yielding the shifted output So is an operator that advances the input vector by 1. Suppose we represent a system by an operator . 
This system is time-invariant if it commutes with the shift operator, i.e., If our system equation is given by then it is time-invariant if we can apply the system operator on followed by the shift operator , or we can apply the shift operator followed by the system operator , with the two computations yielding equivalent results. Applying the system operator first gives Applying the shift operator first gives If the system is time-invariant, then See also Finite impulse response Sheffer sequence State space (controls) Signal-flow graph LTI system theory Autonomous system (mathematics) References Control theory Signal processing
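The delay-then-apply versus apply-then-delay test described above can also be checked numerically. The sketch below does this for two discrete-time systems, y[n] = n·x[n] and y[n] = 10·x[n], which are assumed here purely for illustration (one time-varying, one time-invariant) and mirror the structure of the simple example discussed earlier; note that a single test signal can only refute time invariance, not prove it.

import numpy as np

def shift(x, d):
    """Delay a finite discrete-time signal x by d samples (zero-padded at the start)."""
    y = np.zeros_like(x)
    y[d:] = x[:-d] if d > 0 else x
    return y

def is_time_invariant(system, x, d=3):
    """Compare: delay the input then apply the system, versus apply the system then delay."""
    delayed_input_first = system(shift(x, d))
    delayed_output_after = shift(system(x), d)
    return np.allclose(delayed_input_first, delayed_output_after)

n = np.arange(16)
x = np.sin(0.3 * n)                      # arbitrary test input

system_a = lambda sig: n * sig           # y[n] = n * x[n]  (depends on n directly)
system_b = lambda sig: 10 * sig          # y[n] = 10 * x[n] (depends only on the input)

print("system A time-invariant?", is_time_invariant(system_a, x))   # False
print("system B time-invariant?", is_time_invariant(system_b, x))   # True

The numerical outcome matches the formal argument: the system whose rule refers to the time index directly fails the commutation test with the shift operator, while the purely input-dependent gain passes it.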
Time-invariant system
[ "Mathematics", "Technology", "Engineering" ]
864
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Applied mathematics", "Control theory", "Dynamical systems" ]
1,291,808
https://en.wikipedia.org/wiki/Competitive%20Lotka%E2%80%93Volterra%20equations
The competitive Lotka–Volterra equations are a simple model of the population dynamics of species competing for some common resource. They can be further generalised to the generalized Lotka–Volterra equation to include trophic interactions. Overview The form is similar to the Lotka–Volterra equations for predation in that the equation for each species has one term for self-interaction and one term for the interaction with other species. In the equations for predation, the base population model is exponential. For the competition equations, the logistic equation is the basis. The logistic population model, when used by ecologists often takes the following form: Here is the size of the population at a given time, is inherent per-capita growth rate, and is the carrying capacity. Two species Given two populations, and , with logistic dynamics, the Lotka–Volterra formulation adds an additional term to account for the species' interactions. Thus the competitive Lotka–Volterra equations are: Here, represents the effect species 2 has on the population of species 1 and represents the effect species 1 has on the population of species 2. These values do not have to be equal. Because this is the competitive version of the model, all interactions must be harmful (competition) and therefore all α-values are positive. Also, note that each species can have its own growth rate and carrying capacity. A complete classification of this dynamics, even for all sign patterns of above coefficients, is available, which is based upon equivalence to the 3-type replicator equation. N species This model can be generalized to any number of species competing against each other. One can think of the populations and growth rates as vectors, 's as a matrix. Then the equation for any species becomes or, if the carrying capacity is pulled into the interaction matrix (this doesn't actually change the equations, only how the interaction matrix is defined), where is the total number of interacting species. For simplicity all self-interacting terms are often set to 1. Possible dynamics The definition of a competitive Lotka–Volterra system assumes that all values in the interaction matrix are positive or 0 ( for all , ). If it is also assumed that the population of any species will increase in the absence of competition unless the population is already at the carrying capacity ( for all ), then some definite statements can be made about the behavior of the system. The populations of all species will be bounded between 0 and 1 at all times (, for all ) as long as the populations started out positive. Smale showed that Lotka–Volterra systems that meet the above conditions and have five or more species (N ≥ 5) can exhibit any asymptotic behavior, including a fixed point, a limit cycle, an n-torus, or attractors. Hirsch proved that all of the dynamics of the attractor occur on a manifold of dimension N−1. This essentially says that the attractor cannot have dimension greater than N−1. This is important because a limit cycle cannot exist in fewer than two dimensions, an n-torus cannot exist in less than n dimensions, and chaos cannot occur in less than three dimensions. So, Hirsch proved that competitive Lotka–Volterra systems cannot exhibit a limit cycle for N < 3, or any torus or chaos for N < 4. This is still in agreement with Smale that any dynamics can occur for N ≥ 5. 
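For reference, the models described in the preceding sections can be written out explicitly; these are the standard textbook forms of the logistic and competitive Lotka–Volterra equations rather than a quotation from a particular source.

\frac{dx}{dt} = r x \left(1 - \frac{x}{K}\right)

\frac{dx_1}{dt} = r_1 x_1 \left(1 - \frac{x_1 + \alpha_{12} x_2}{K_1}\right), \qquad
\frac{dx_2}{dt} = r_2 x_2 \left(1 - \frac{x_2 + \alpha_{21} x_1}{K_2}\right)

\frac{dx_i}{dt} = r_i x_i \left(1 - \frac{\sum_{j=1}^{N} \alpha_{ij} x_j}{K_i}\right)
\quad\text{or, with the carrying capacity absorbed into the interaction matrix,}\quad
\frac{dx_i}{dt} = r_i x_i \left(1 - \sum_{j=1}^{N} \alpha_{ij} x_j\right)

Here x is the population size, r the intrinsic per-capita growth rate, K the carrying capacity, α_ij the effect of species j on species i, and N the number of interacting species, as defined in the text.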
More specifically, Hirsch showed there is an invariant set C that is homeomorphic to the (N−1)-dimensional simplex and is a global attractor of every point excluding the origin. This carrying simplex contains all of the asymptotic dynamics of the system. To create a stable ecosystem the αij matrix must have all positive eigenvalues. For large-N systems Lotka–Volterra models are either unstable or have low connectivity. Kondoh, and Ackland and Gallagher, have independently shown that large, stable Lotka–Volterra systems arise if the elements of the interaction matrix αij (i.e. the features of the species) can evolve in accordance with natural selection. 4-dimensional example A simple 4-dimensional example of a competitive Lotka–Volterra system has been characterized by Vano et al. Here the growth rates and interaction matrix have been set to specific numerical values (reported in that study), with Ki = 1 for all i. This system is chaotic and has a largest Lyapunov exponent of 0.0203. From the theorems by Hirsch, it is one of the lowest-dimensional chaotic competitive Lotka–Volterra systems. The Kaplan–Yorke dimension, a measure of the dimensionality of the attractor, is 2.074. This value is not a whole number, indicative of the fractal structure inherent in a strange attractor. The coexisting equilibrium point, the point at which all derivatives are equal to zero but that is not the origin, can be found by inverting the interaction matrix and multiplying by the unit column vector. Note that there are always 2^N equilibrium points, but all others have at least one species' population equal to zero. The eigenvalues of the system at this point are 0.0414±0.1903i, −0.3342, and −1.0319. This point is unstable due to the positive value of the real part of the complex eigenvalue pair. If the real part were negative, this point would be stable and the orbit would attract asymptotically. The transition between these two states, where the real part of the complex eigenvalue pair is equal to zero, is called a Hopf bifurcation. A detailed study of the parameter dependence of the dynamics was performed by Roques and Chekroun. The authors observed that interaction and growth parameters leading respectively to extinction of three species, or coexistence of two, three or four species, are for the most part arranged in large regions with clear boundaries. As predicted by the theory, chaos was also found; it takes place, however, over much smaller islands of the parameter space, which makes their location difficult to identify by a random search algorithm. These regions where chaos occurs are, in the three cases analyzed in that study, situated at the interface between a non-chaotic four-species region and a region where extinction occurs. This implies a high sensitivity of biodiversity with respect to parameter variations in the chaotic regions. Additionally, in regions where extinction occurs which are adjacent to chaotic regions, the computation of local Lyapunov exponents revealed that a possible cause of extinction is the overly strong fluctuations in species abundances induced by local chaos. Spatial arrangements Background There are many situations where the strength of species' interactions depends on the physical distance of separation. Imagine bee colonies in a field. They will compete for food strongly with the colonies located near to them, weakly with further colonies, and not at all with colonies that are far away. This doesn't mean, however, that those far colonies can be ignored. There is a transitive effect that permeates through the system. 
If colony A interacts with colony B, and B with C, then C affects A through B. Therefore, if the competitive Lotka–Volterra equations are to be used for modeling such a system, they must incorporate this spatial structure. Matrix organization One possible way to incorporate this spatial structure is to modify the nature of the Lotka–Volterra equations to something like a reaction–diffusion system. It is much easier, however, to keep the format of the equations the same and instead modify the interaction matrix. For simplicity, consider a five-species example where all of the species are aligned on a circle, and each interacts only with the two neighbors on either side, generally with a different strength in each direction. Thus, species 3 interacts only with species 2 and 4, species 1 interacts only with species 2 and 5, etc. The interaction matrix then has nonzero off-diagonal entries only in the positions corresponding to these neighboring pairs. If each species is identical in its interactions with neighboring species, then each row of the matrix is just a permutation of the first row. A simple, but non-realistic, example of this type of system has been characterized by Sprott et al. The coexisting equilibrium point for these systems has a very simple form given by the inverse of the sum of the entries in a row: xi = 1/Σj αij for every i. Lyapunov functions A Lyapunov function is a function of the system whose existence demonstrates stability. It is often useful to imagine a Lyapunov function as the energy of the system. If the derivative of the function is equal to zero for some orbit not including the equilibrium point, then that orbit is a stable attractor, but it must be either a limit cycle or an n-torus, not a strange attractor (this is because the largest Lyapunov exponents of a limit cycle and an n-torus are zero while that of a strange attractor is positive). If the derivative is less than zero everywhere except the equilibrium point, then the equilibrium point is a stable fixed-point attractor. When searching a dynamical system for non-fixed-point attractors, the existence of a Lyapunov function can help eliminate regions of parameter space where these dynamics are impossible. The spatial system introduced above has a Lyapunov function that has been explored by Wildenberg et al. If all species are identical in their spatial interactions, then the interaction matrix is circulant. The eigenvalues of a circulant matrix are given by λk = Σj cj γ^(jk) for k = 0, 1, …, N − 1, where γ = e^(2πi/N) is the Nth root of unity. Here cj is the jth value in the first row of the circulant matrix. The Lyapunov function exists if the real parts of these eigenvalues are positive (Re(λk) > 0). Consider such a ring system with fixed neighbor interaction strengths; the existence of the Lyapunov function then reduces to a simple condition on those strengths. Now, instead of having to integrate the system over thousands of time steps to see if any dynamics other than a fixed point attractor exist, one need only determine if the Lyapunov function exists (note: the absence of the Lyapunov function doesn't guarantee a limit cycle, torus, or chaos). Example: for a fixed number of species and a given pattern of neighbor interactions, if the interaction strength lies below a critical value then all eigenvalues are negative and the only attractor is a fixed point; if it lies above that value then the real part of one of the complex eigenvalue pairs becomes positive and there is a strange attractor. The disappearance of this Lyapunov function coincides with a Hopf bifurcation. Line systems and eigenvalues It is also possible to arrange the species into a line. The interaction matrix for this system is very similar to that of a circle except the interaction terms in the lower left and upper right of the matrix are deleted (those that describe the interactions between species 1 and N, etc.). 
This change eliminates the Lyapunov function described above for the system on a circle, but most likely there are other Lyapunov functions that have not been discovered. The eigenvalues of the circle system plotted in the complex plane form a trefoil shape. The eigenvalues from a short line form a sideways Y, but those of a long line begin to resemble the trefoil shape of the circle. This could be due to the fact that a long line is indistinguishable from a circle to those species far from the ends. Notes Chaotic maps Equations Population dynamics Population ecology Community ecology Population models
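The circulant eigenvalue criterion above is easy to check numerically. The short Python sketch below builds a ring-structured interaction matrix and inspects the real parts of its eigenvalues; the number of species and the two neighbor interaction strengths are illustrative assumptions, not values from this article.

# Sketch of the circulant-eigenvalue check described above.  The species
# count and neighbor interaction strengths are illustrative assumptions.
import numpy as np

N = 5
ahead, behind = 0.4, 0.3              # assumed strengths for the two neighbors
first_row = np.zeros(N)
first_row[0] = 1.0                    # self-interaction
first_row[1] = ahead
first_row[-1] = behind
A = np.array([np.roll(first_row, k) for k in range(N)])   # circulant ring matrix

eigvals = np.linalg.eigvals(A)        # same set as lambda_k = sum_j c_j * gamma**(j*k)
print(np.sort(eigvals.real))
# If every eigenvalue has positive real part, the Lyapunov function discussed
# above exists and non-fixed-point attractors can be ruled out for this matrix.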
Competitive Lotka–Volterra equations
[ "Mathematics" ]
2,342
[ "Functions and mappings", "Mathematical objects", "Equations", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
1,292,142
https://en.wikipedia.org/wiki/Optimal%20experimental%20design
In the design of experiments, optimal experimental designs (or optimum designs) are a class of experimental designs that are optimal with respect to some statistical criterion. The creation of this field of statistics has been credited to Danish statistician Kirstine Smith. In the design of experiments for estimating statistical models, optimal designs allow parameters to be estimated without bias and with minimum variance. A non-optimal design requires a greater number of experimental runs to estimate the parameters with the same precision as an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation. The optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion, which is related to the variance-matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require understanding of statistical theory and practical knowledge with designing experiments. Advantages Optimal designs offer three advantages over sub-optimal experimental designs: Optimal designs reduce the costs of experimentation by allowing statistical models to be estimated with fewer experimental runs. Optimal designs can accommodate multiple types of factors, such as process, mixture, and discrete factors. Designs can be optimized when the design-space is constrained, for example, when the mathematical process-space contains factor-settings that are practically infeasible (e.g. due to safety concerns). Minimizing the variance of estimators Experimental designs are evaluated using statistical criteria. It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an ("efficient") estimator is called the "Fisher information" for that estimator. Because of this reciprocity, minimizing the variance corresponds to maximizing the information. When the statistical model has several parameters, however, the mean of the parameter-estimator is a vector and its variance is a matrix. The inverse matrix of the variance-matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information-matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized. The traditional optimality-criteria are invariants of the information matrix; algebraically, the traditional optimality-criteria are functionals of the eigenvalues of the information matrix. A-optimality ("average" or trace) One criterion is A-optimality, which seeks to minimize the trace of the inverse of the information matrix. This criterion results in minimizing the average variance of the estimates of the regression coefficients. C-optimality This criterion minimizes the variance of a best linear unbiased estimator of a predetermined linear combination of model parameters. D-optimality (determinant) A popular criterion is D-optimality, which seeks to minimize |(X'X)−1|, or equivalently maximize the determinant of the information matrix X'X of the design. This criterion results in maximizing the differential Shannon information content of the parameter estimates. 
E-optimality (eigenvalue) Another criterion is E-optimality, which maximizes the minimum eigenvalue of the information matrix. S-optimality This criterion maximizes a quantity measuring the mutual column orthogonality of X and the determinant of the information matrix. T-optimality This criterion maximizes the discrepancy between two proposed models at the design locations. Other optimality-criteria are concerned with the variance of predictions: G-optimality A popular criterion is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix X(X'X)−1X'. This has the effect of minimizing the maximum variance of the predicted values. I-optimality (integrated) A second criterion on prediction variance is I-optimality, which seeks to minimize the average prediction variance over the design space. V-optimality (variance) A third criterion on prediction variance is V-optimality, which seeks to minimize the average prediction variance over a set of m specific points. Contrasts In many applications, the statistician is most concerned with a "parameter of interest" rather than with "nuisance parameters". More generally, statisticians consider linear combinations of parameters, which are estimated via linear combinations of treatment-means in the design of experiments and in the analysis of variance; such linear combinations are called contrasts. Statisticians can use appropriate optimality-criteria for such parameters of interest and for contrasts. Implementation Catalogs of optimal designs occur in books and in software libraries. In addition, major statistical systems like SAS and R have procedures for optimizing a design according to a user's specification. The experimenter must specify a model for the design and an optimality-criterion before the method can compute an optimal design. Practical considerations Some advanced topics in optimal design require more statistical theory and practical knowledge in designing experiments. Model dependence and robustness Since the optimality criterion of most optimal designs is based on some function of the information matrix, the 'optimality' of a given design is model dependent: while an optimal design is best for that model, its performance may deteriorate on other models. On other models, an optimal design can be either better or worse than a non-optimal design. Therefore, it is important to benchmark the performance of designs under alternative models. Choosing an optimality criterion and robustness The choice of an appropriate optimality criterion requires some thought, and it is useful to benchmark the performance of designs with respect to several optimality criteria. Cornell, for example, advises practitioners to consider more than a single criterion. Indeed, there are several classes of designs for which all the traditional optimality-criteria agree, according to the theory of "universal optimality" of Kiefer. The experience of practitioners like Cornell and the "universal optimality" theory of Kiefer suggest that robustness with respect to changes in the optimality-criterion is much greater than is robustness with respect to changes in the model. Flexible optimality criteria and convex analysis High-quality statistical software provides a combination of libraries of optimal designs and iterative methods for constructing approximately optimal designs, depending on the model specified and the optimality criterion. Users may use a standard optimality-criterion or may program a custom-made criterion. 
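To make these criteria concrete, the following Python sketch evaluates the D-, A-, E- and G-criteria for one candidate design; the one-factor quadratic model and the five factor settings are illustrative assumptions, not an example taken from this article.

# Sketch: evaluating several of the optimality criteria described above for a
# candidate design matrix X (rows = experimental runs, columns = model terms).
# The quadratic model and the factor settings are illustrative assumptions.
import numpy as np

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])          # assumed factor settings
X = np.column_stack([np.ones_like(x), x, x**2])    # intercept, linear, quadratic terms

M = X.T @ X                                        # information matrix X'X
M_inv = np.linalg.inv(M)

d_crit = np.linalg.det(M)                 # D-optimality: maximise det(X'X)
a_crit = np.trace(M_inv)                  # A-optimality: minimise trace((X'X)^-1)
e_crit = np.linalg.eigvalsh(M).min()      # E-optimality: maximise the smallest eigenvalue
hat_diag = np.einsum('ij,jk,ik->i', X, M_inv, X)
g_crit = hat_diag.max()                   # G-optimality: minimise the largest hat-matrix diagonal

print(d_crit, a_crit, e_crit, g_crit)

Comparing these numbers across several candidate designs, rather than optimising any single one of them, mirrors the benchmarking advice given above.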
All of the traditional optimality-criteria are convex (or concave) functions, and therefore optimal-designs are amenable to the mathematical theory of convex analysis and their computation can use specialized methods of convex minimization. The practitioner need not select exactly one traditional, optimality-criterion, but can specify a custom criterion. In particular, the practitioner can specify a convex criterion using the maxima of convex optimality-criteria and nonnegative combinations of optimality criteria (since these operations preserve convex functions). For convex optimality criteria, the Kiefer-Wolfowitz equivalence theorem allows the practitioner to verify that a given design is globally optimal. The Kiefer-Wolfowitz equivalence theorem is related with the Legendre-Fenchel conjugacy for convex functions. If an optimality-criterion lacks convexity, then finding a global optimum and verifying its optimality often are difficult. Model uncertainty and Bayesian approaches Model selection When scientists wish to test several theories, then a statistician can design an experiment that allows optimal tests between specified models. Such "discrimination experiments" are especially important in the biostatistics supporting pharmacokinetics and pharmacodynamics, following the work of Cox and Atkinson. Bayesian experimental design When practitioners need to consider multiple models, they can specify a probability-measure on the models and then select any design maximizing the expected value of such an experiment. Such probability-based optimal-designs are called optimal Bayesian designs. Such Bayesian designs are used especially for generalized linear models (where the response follows an exponential-family distribution). The use of a Bayesian design does not force statisticians to use Bayesian methods to analyze the data, however. Indeed, the "Bayesian" label for probability-based experimental-designs is disliked by some researchers. Alternative terminology for "Bayesian" optimality includes "on-average" optimality or "population" optimality. Iterative experimentation Scientific experimentation is an iterative process, and statisticians have developed several approaches to the optimal design of sequential experiments. Sequential analysis Sequential analysis was pioneered by Abraham Wald. In 1972, Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs were surveyed later by S. Zacks. Of course, much work on the optimal design of experiments is related to the theory of optimal decisions, especially the statistical decision theory of Abraham Wald. Response-surface methodology Optimal designs for response-surface models are discussed in the textbook by Atkinson, Donev and Tobias, and in the survey of Gaffke and Heiligers and in the mathematical text of Pukelsheim. The blocking of optimal designs is discussed in the textbook of Atkinson, Donev and Tobias and also in the monograph by Goos. The earliest optimal designs were developed to estimate the parameters of regression models with continuous variables, for example, by J. D. Gergonne in 1815 (Stigler). In English, two early contributions were made by Charles S. Peirce and Kirstine Smith. Pioneering designs for multivariate response-surfaces were proposed by George E. P. Box. However, Box's designs have few optimality properties. Indeed, the Box–Behnken design requires excessive experimental runs when the number of variables exceeds three. 
Box's "central-composite" designs require more experimental runs than do the optimal designs of Kôno. System identification and stochastic approximation The optimization of sequential experimentation is studied also in stochastic programming and in systems and control. Popular methods include stochastic approximation and other methods of stochastic optimization. Much of this research has been associated with the subdiscipline of system identification. In computational optimal control, D. Judin & A. Nemirovskii and Boris Polyak has described methods that are more efficient than the (Armijo-style) step-size rules introduced by G. E. P. Box in response-surface methodology. Adaptive designs are used in clinical trials, and optimal adaptive designs are surveyed in the Handbook of Experimental Designs chapter by Shelemyahu Zacks. Specifying the number of experimental runs Using a computer to find a good design There are several methods of finding an optimal design, given an a priori restriction on the number of experimental runs or replications. Some of these methods are discussed by Atkinson, Donev and Tobias and in the paper by Hardin and Sloane. Of course, fixing the number of experimental runs a priori would be impractical. Prudent statisticians examine the other optimal designs, whose number of experimental runs differ. Discretizing probability-measure designs In the mathematical theory on optimal experiments, an optimal design can be a probability measure that is supported on an infinite set of observation-locations. Such optimal probability-measure designs solve a mathematical problem that neglected to specify the cost of observations and experimental runs. Nonetheless, such optimal probability-measure designs can be discretized to furnish approximately optimal designs. In some cases, a finite set of observation-locations suffices to support an optimal design. Such a result was proved by Kôno and Kiefer in their works on response-surface designs for quadratic models. The Kôno–Kiefer analysis explains why optimal designs for response-surfaces can have discrete supports, which are very similar as do the less efficient designs that have been traditional in response surface methodology. History In 1815, an article on optimal designs for polynomial regression was published by Joseph Diaz Gergonne, according to Stigler. Charles S. Peirce proposed an economic theory of scientific experimentation in 1876, which sought to maximize the precision of the estimates. Peirce's optimal allocation immediately improved the accuracy of gravitational experiments and was used for decades by Peirce and his colleagues. In his 1882 published lecture at Johns Hopkins University, Peirce introduced experimental design with these words: Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation. [....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly. Kirstine Smith proposed optimal designs for polynomial models in 1918. (Kirstine Smith had been a student of the Danish statistician Thorvald N. Thiele and was working with Karl Pearson in London.) 
See also Bayesian experimental design Blocking (statistics) Computer experiment Convex function Convex minimization Design of experiments Efficiency (statistics) Entropy (information theory) Fisher information Glossary of experimental design Hadamard's maximal determinant problem Information theory Kiefer, Jack Replication (statistics) Response surface methodology Statistical model Wald, Abraham Wolfowitz, Jacob Notes References Further reading Textbooks for practitioners and students Textbooks emphasizing regression and response-surface methodology The textbook by Atkinson, Donev and Tobias has been used for short courses for industrial practitioners as well as university courses. Textbooks emphasizing block designs Optimal block designs are discussed by Bailey and by Bapat. The first chapter of Bapat's book reviews the linear algebra used by Bailey (or the advanced books below). Bailey's exercises and discussion of randomization both emphasize statistical concepts (rather than algebraic computations). Draft available on-line. (Especially Chapter 11.8 "Optimality") (Chapter 5 "Block designs and optimality", pages 99–111) Optimal block designs are discussed in the advanced monograph by Shah and Sinha and in the survey-articles by Cheng and by Majumdar. Books for professional statisticians and researchers Republication with errata-list and new preface of Wiley (0-471-61971-X) 1993 Articles and chapters Historical (Appendix No. 14). NOAA PDF Eprint. Reprinted in paragraphs 139–157, and in Design of experiments Regression analysis Statistical theory Optimal decisions Mathematical optimization Industrial engineering Systems engineering Statistical process control Management cybernetics
Optimal experimental design
[ "Mathematics", "Engineering" ]
3,011
[ "Systems engineering", "Mathematical analysis", "Statistical process control", "Industrial engineering", "Engineering statistics", "Mathematical optimization" ]
1,295,412
https://en.wikipedia.org/wiki/Pressure-fed%20engine
The pressure-fed engine is a class of rocket engine designs. A separate gas supply, usually helium, pressurizes the propellant tanks to force fuel and oxidizer to the combustion chamber. To maintain adequate flow, the tank pressures must exceed the combustion chamber pressure. Pressure fed engines have simple plumbing and have no need for complex and occasionally unreliable turbopumps. A typical startup procedure begins with opening a valve, often a one-shot pyrotechnic device, to allow the pressurizing gas to flow through check valves into the propellant tanks. Then the propellant valves in the engine itself are opened. If the fuel and oxidizer are hypergolic, they burn on contact; non-hypergolic fuels require an igniter. Multiple burns can be conducted by merely opening and closing the propellant valves as needed. If the pressurization system also has activating valves, they can be operated electrically, or by gas pressure controlled by smaller electrically operated valves. Care must be taken, especially during long burns, to avoid excessive cooling of the pressurizing gas due to adiabatic expansion. Cold helium won't liquify, but it could freeze a propellant, decrease tank pressures, or damage components not designed for low temperatures. The Apollo Lunar Module Descent Propulsion System was unusual in storing its helium in a supercritical but very cold state. It was warmed as it was withdrawn through a heat exchanger from the ambient temperature fuel. Spacecraft attitude control and orbital maneuvering thrusters are almost universally pressure-fed designs. Examples include the Reaction Control (RCS) and the Orbital Maneuvering (OMS) engines of the Space Shuttle orbiter; the RCS and Service Propulsion System (SPS) engines on the Apollo Command/Service Module; the SuperDraco (in-flight abort) and Draco (RCS) engines on the SpaceX Dragon 2; and the RCS, ascent and descent engines on the Apollo Lunar Module. Some launcher upper stages also use pressure-fed engines. These include the Aerojet AJ10 and TRW TR-201 used in the second stage of Delta II launch vehicle, and the Kestrel engine of the Falcon 1 by SpaceX. The 1960s Sea Dragon concept by Robert Truax for a big dumb booster would have used pressure-fed engines. Pressure-fed engines have practical limits on propellant pressure, which in turn limits combustion chamber pressure. High pressure propellant tanks require thicker walls and stronger materials which make the vehicle tanks heavier, thereby reducing performance and payload capacity. The lower stages of launch vehicles often use either solid fuel or pump-fed liquid fuel engines instead, where high pressure ratio nozzles are considered desirable. Other vehicles or companies using pressure-fed engine: OTRAG (rocket) Quad (rocket) of Armadillo Aerospace XCOR EZ-Rocket of XCOR Aerospace Masten Space Systems Aquarius Launch Vehicle NASA's Project Morpheus prototype lander NASA Mighty Eagle mini lunar lander CONAE's Tronador II upper stage Copenhagen Suborbitals' Spica See also Gas-generator cycle Combustion tap-off cycle Expander cycle Staged combustion cycle References External links Rocket power cycles Rocket propulsion Rocket engines Thermodynamics
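The adiabatic-cooling caution above can be made concrete with a rough ideal-gas estimate. In the Python sketch below, the helium storage pressure, tank pressure, and initial temperature are illustrative assumptions, not figures from this article, and real systems are moderated by heat transfer from the tankage and propellant.

# Rough sketch of pressurant cool-down: for an ideal gas expanding
# isentropically, T2 = T1 * (p2 / p1) ** ((gamma - 1) / gamma).
# All numbers below are illustrative assumptions.
gamma = 5.0 / 3.0      # ratio of specific heats for helium (monatomic gas)
T1 = 293.0             # K, assumed helium storage temperature
p1 = 30.0e6            # Pa, assumed storage-bottle pressure (~300 bar)
p2 = 2.0e6             # Pa, assumed propellant-tank pressure (~20 bar)

T2 = T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)
print(f"fully isentropic expansion would cool the helium to about {T2:.0f} K")
# Even a partial approach to this limit shows why cold pressurant can chill
# propellant, lower tank pressure, or stress components during long burns.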
Pressure-fed engine
[ "Physics", "Chemistry", "Mathematics", "Technology" ]
659
[ "Rocket engines", "Thermodynamics", "Engines", "Dynamical systems" ]
1,295,520
https://en.wikipedia.org/wiki/Eugene%20Podkletnov
Eugene Podkletnov (, Yevgeny Podkletnov) is a Russian ceramics engineer known for his claims made in the 1990s of designing and demonstrating gravity shielding devices consisting of rotating discs constructed from ceramic superconducting materials. Background and education Podkletnov graduated from the University of Chemical Technology, Mendeleyev Institute, in Moscow; he then spent 15 years at the Institute for High Temperatures in the Russian Academy of Sciences. He received a doctorate in materials science from Tampere University of Technology in Finland. After graduation he continued superconductor research at the university, in the Materials Science department, until his expulsion in 1997. After which he moved back to Moscow where it is reported that he took an engineering job. Since leaving Tampere in 1997 Podkletnov has avoided public contact or appearances. There is a report that he later returned to Tampere to work on superconductors at Tamglass Engineering Oy. Gravity shielding According to the account Podkletnov gave to Wired reporter Charles Platt in a 1996 phone interview, during a 1992 experiment with a rotating superconducting disc: "Someone in the laboratory was smoking a pipe, and the pipe smoke rose in a column above the superconducting disc. So we placed a ball-shaped magnet above the disc, attached to a balance. The balance behaved strangely. We substituted a nonmagnetic material, silicon, and still the balance was very strange. We found that any object above the disc lost some of its weight, and we found that if we rotated the disc, the effect was increased." Public controversy Podkletnov's first peer-reviewed paper on the apparent gravity-modification effect, published in 1992, attracted little notice. In 1996, he submitted a longer paper, in which he claimed to have observed a larger effect (2% weight reduction as opposed to 0.3% in the 1992 paper) to the Journal of Physics D. According to Platt, a member of the editorial staff, Ian Sample, leaked the submitted paper to Robert Matthews, the science correspondent for the British newspaper, the Sunday Telegraph. On September 1, 1996, Matthews's story broke, leading with the startling statement: "Scientists in Finland are about to reveal details of the world's first anti-gravity device." In the ensuing uproar, the director of the laboratory where Podkletnov was working issued a defensive statement that Podkletnov was working entirely on his own. Vuorinen, listed as the paper's coauthor, disavowed prior knowledge of the paper and claimed that the name was used without consent. Podkletnov himself complained that he had never claimed to block gravity, only to have reduced its effect. Podkletnov withdrew his second paper after it had been initially accepted. The resulting uproar over the alleged claims in the withdrawn paper is reported to be the primary reason for his expulsion from his lab and the termination of his employment at the university. Attempted verification In a 1997 telephone interview with Charles Platt, Podkletnov insisted that his gravity-shielding work was reproduced by researchers at universities in Toronto and Sheffield, but none have come forward to acknowledge this. The Sheffield work is known to have only been intended as partial replication, aimed at observing any unusual effects which might be present, since the team involved lacked the necessary facilities to construct a large enough disc and the ability to duplicate the means by which the original disc was rotated. 
Podkletnov counters that the researchers in question have kept quiet "lest they be criticized by the mainstream scientific community". Podkletnov is reported to have visited the Sheffield team in 2000 and advised them on the conditions necessary to achieve his effect, conditions that they never achieved. In a BBC news item, it was alleged that researchers at Boeing were funding a project called GRASP (Gravity Research for Advanced Space Propulsion) which would attempt to construct a gravity shielding device based on rotating superconductors, but a subsequent Popular Mechanics news item stated that Boeing had denied funding GRASP with company money, although Boeing acknowledged that it could not comment on "black projects". It is alleged that the GRASP proposal was presented to Boeing and Boeing chose not to fund it. In July 2002, an article by Nick Cook in Jane's Defence Weekly reported about Boeing's internal project GRASP — Gravity Research for Advanced Space Propulsion to evaluate the validity of Podkletnov's claims. The briefing obtained by Jane's says "If gravity modification is real, it will alter the entire aerospace business." The briefing allegedly says that Boeing, as well as BAE Systems and Lockheed Martin tried to approach Podkletnov directly and that "Podkletnov is strongly anti-military and will only provide assistance if the research is carried out in the 'white world' of open development." See also Anti-gravity Ning Li References External links Sven Piper on Antigravity Article about the current status of antigravity research from 2024 Eugene Podkletnov on Antigravity A recounting of recent work claimed by supporters in 2014. Russia's 'gravity-beating' scientist A BBC News profile on Podkletnov from 2002. Russian physicists Russian inventors Anti-gravity Living people Place of birth missing (living people) Russian materials scientists Superconductivity D. Mendeleev University of Chemical Technology of Russia alumni Year of birth missing (living people)
Eugene Podkletnov
[ "Physics", "Materials_science", "Astronomy", "Engineering" ]
1,107
[ "Astronomical hypotheses", "Physical quantities", "Superconductivity", "Materials science", "Anti-gravity", "Condensed matter physics", "Electrical resistance and conductance" ]
20,983,125
https://en.wikipedia.org/wiki/Pettis%20integral
In mathematics, the Pettis integral or Gelfand–Pettis integral, named after Israel M. Gelfand and Billy James Pettis, extends the definition of the Lebesgue integral to vector-valued functions on a measure space, by exploiting duality. The integral was introduced by Gelfand for the case when the measure space is an interval with Lebesgue measure. The integral is also called the weak integral in contrast to the Bochner integral, which is the strong integral. Definition Let where is a measure space and is a topological vector space (TVS) with a continuous dual space that separates points (that is, if is nonzero then there is some such that ), for example, is a normed space or (more generally) is a Hausdorff locally convex TVS. Evaluation of a functional may be written as a duality pairing: The map is called if for all the scalar-valued map is a measurable map. A weakly measurable map is said to be if there exists some such that for all the scalar-valued map is Lebesgue integrable (that is, ) and The map is said to be if for all and also for every there exists a vector such that In this case, is called the of on Common notations for the Pettis integral include To understand the motivation behind the definition of "weakly integrable", consider the special case where is the underlying scalar field; that is, where or In this case, every linear functional on is of the form for some scalar (that is, is just scalar multiplication by a constant), the condition simplifies to In particular, in this special case, is weakly integrable on if and only if is Lebesgue integrable. Relation to Dunford integral The map is said to be if for all and also for every there exists a vector called the of on such that where Identify every vector with the map scalar-valued functional on defined by This assignment induces a map called the canonical evaluation map and through it, is identified as a vector subspace of the double dual The space is a semi-reflexive space if and only if this map is surjective. The is Pettis integrable if and only if for every Properties An immediate consequence of the definition is that Pettis integrals are compatible with continuous linear operators: If is linear and continuous and is Pettis integrable, then is Pettis integrable as well and The standard estimate for real- and complex-valued functions generalises to Pettis integrals in the following sense: For all continuous seminorms and all Pettis integrable , holds. The right-hand side is the lower Lebesgue integral of a -valued function, that is, Taking a lower Lebesgue integral is necessary because the integrand may not be measurable. This follows from the Hahn-Banach theorem because for every vector there must be a continuous functional such that and for all , . Applying this to gives the result. Mean value theorem An important property is that the Pettis integral with respect to a finite measure is contained in the closure of the convex hull of the values scaled by the measure of the integration domain: This is a consequence of the Hahn-Banach theorem and generalizes the mean value theorem for integrals of real-valued functions: If , then closed convex sets are simply intervals and for , the following inequalities hold: Existence If is finite-dimensional then is Pettis integrable if and only if each of ’s coordinates is Lebesgue integrable. 
If is Pettis integrable and is a measurable subset of , then by definition and are also Pettis integrable and If is a topological space, its Borel--algebra, a Borel measure that assigns finite values to compact subsets, is quasi-complete (that is, every bounded Cauchy net converges) and if is continuous with compact support, then is Pettis integrable. More generally: If is weakly measurable and there exists a compact, convex and a null set such that , then is Pettis-integrable. Law of large numbers for Pettis-integrable random variables Let be a probability space, and let be a topological vector space with a dual space that separates points. Let be a sequence of Pettis-integrable random variables, and write for the Pettis integral of (over ). Note that is a (non-random) vector in and is not a scalar value. Let denote the sample average. By linearity, is Pettis integrable, and Suppose that the partial sums converge absolutely in the topology of in the sense that all rearrangements of the sum converge to a single vector The weak law of large numbers implies that for every functional Consequently, in the weak topology on Without further assumptions, it is possible that does not converge to To get strong convergence, more assumptions are necessary. See also References James K. Brooks, Representations of weak and strong integrals in Banach spaces, Proceedings of the National Academy of Sciences of the United States of America 63, 1969, 266–270. Fulltext Israel M. Gel'fand, Sur un lemme de la théorie des espaces linéaires, Commun. Inst. Sci. Math. et Mecan., Univ. Kharkoff et Soc. Math. Kharkoff, IV. Ser. 13, 1936, 35–40 Michel Talagrand, Pettis Integral and Measure Theory, Memoirs of the AMS no. 307 (1984) Functional analysis Integrals
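The defining conditions discussed above can be written out compactly. The notation in the following LaTeX snippet (f, V, V', μ, e_A, and the pairing ⟨·,·⟩) is chosen here for illustration and is not drawn from the entry itself.

% Standard formulation of the definition; symbol choices are illustrative.
Let $f : X \to V$, where $(X,\Sigma,\mu)$ is a measure space and $V'$ is the continuous dual of $V$.
\[
  f \ \text{is weakly measurable} \iff x \mapsto \langle \varphi, f(x)\rangle \ \text{is measurable for every}\ \varphi \in V'.
\]
\[
  f \ \text{is Pettis integrable} \iff \langle \varphi, f(\cdot)\rangle \in L^{1}(\mu)\ \text{for every}\ \varphi \in V'
  \ \text{and, for every}\ A \in \Sigma,\ \exists\, e_A \in V:\
  \langle \varphi, e_A \rangle = \int_A \langle \varphi, f(x)\rangle \, d\mu(x)\ \ \text{for all}\ \varphi \in V'.
\]
The vector $e_A$ is then the Pettis integral of $f$ over $A$, commonly written $\int_A f \, d\mu$.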
Pettis integral
[ "Mathematics" ]
1,158
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
20,986,106
https://en.wikipedia.org/wiki/List%20of%20medical%20and%20health%20informatics%20journals
This is a list of notable journals related to medical and health informatics. BMC Medical Informatics and Decision Making BMJ Health & Care Informatics Computers in Biology and Medicine Health Informatics Journal International Journal of Medical Informatics Journal of the American Medical Informatics Association Journal of Biomedical Informatics Journal of Information Professionals in Health Journal of Innovation in Health Informatics Journal of Medical Internet Research Medical & Biological Engineering & Computing Methods of Information in Medicine PLOS Digital Health Statistics in Medicine References See also List of medical journals Lists of academic journals Health Health informatics Medical and health informatics Medical health j
List of medical and health informatics journals
[ "Technology", "Biology" ]
120
[ "Computing-related lists", "Health informatics", "Bioinformatics", "Medical technology", "Biomedical informatics journals" ]
20,986,604
https://en.wikipedia.org/wiki/Fredholm%20module
In noncommutative geometry, a Fredholm module is a mathematical structure used to quantize the differential calculus. Such a module is, up to trivial changes, the same as the abstract elliptic operator introduced by . Definition If A is an involutive algebra over the complex numbers C, then a Fredholm module over A consists of an involutive representation of A on a Hilbert space H, together with a self-adjoint operator F, of square 1 and such that the commutator [F, a] is a compact operator, for all a in A. References The paper by Atiyah is reprinted in volume 3 of his collected works, External links Fredholm module, on PlanetMath Noncommutative geometry Mathematical quantization
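Spelled out in symbols, the definition above amounts to the following conditions; the notation (π for the representation, K(H) for the compact operators on H) is a common convention assumed here, not taken from the entry.

% Conditions defining a Fredholm module over A; notation is illustrative.
\[
  \pi : A \to B(H)\ \text{an involutive representation}, \qquad
  F = F^{\ast}, \qquad F^{2} = 1, \qquad
  [F, \pi(a)] \in \mathcal{K}(H)\ \text{for all}\ a \in A .
\]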
Fredholm module
[ "Physics" ]
158
[ "Mathematical quantization", "Quantum mechanics" ]
20,987,851
https://en.wikipedia.org/wiki/Artificial%20skin
Artificial skin is a collagen scaffold that induces regeneration of skin in mammals such as humans. The term was used in the late 1970s and early 1980s to describe a new treatment for massive burns. It was later discovered that treatment of deep skin wounds in adult animals and humans with this scaffold induces regeneration of the dermis. It has been developed commercially under the name Integra and is used in massively burned patients, during plastic surgery of the skin, and in treatment of chronic skin wounds. Alternatively, the term "artificial skin" sometimes is used to refer to skin-like tissue grown in a laboratory, although this technology is still quite a way away from being viable for use in the medical field. 'Artificial skin' can also refer to flexible semiconductor materials that can sense touch for those with prosthetic limbs (also experimental). Background The skin is the largest organ in the human body. Skin is made up of three layers, the epidermis, dermis and the fat layer, also called the hypodermis. The epidermis is the outer layer of skin that keeps vital fluids in and harmful bacteria out of the body. The dermis is the inner layer of skin that contains blood vessels, nerves, hair follicles, oil, and sweat glands. Severe damage to large areas of skin exposes the human organism to dehydration and infections that can result in death. Traditional ways of dealing with large losses of skin have been to use skin grafts from the patient (autografts) or from an unrelated donor or a cadaver. The former approach has the disadvantage that there may not be enough skin available, while the latter suffers from the possibility of rejection or infection. Until the late twentieth century, skin grafts were constructed from the patient's own skin. This became a problem when skin had been damaged extensively, making it impossible to treat severely injured patients with autografts only. Regenerated skin: discovery and clinical use A process for inducing regeneration in skin was invented by Ioannis V. Yannas (then an assistant professor in the Fibers and Polymers Division, Department of Mechanical Engineering, at Massachusetts Institute of Technology) and John F. Burke (then chief of staff at Shriners Burns Institute in Boston, Massachusetts). Their initial objective was to discover a wound cover that would protect severe skin wounds from infection by accelerating wound closure. Several kinds of grafts made of synthetic and natural polymers were prepared and tested in a guinea pig animal model. By the late 1970s it was evident that the original objective was not reached. Instead, these experimental grafts typically did not affect the speed of wound closure. In one case, however, a particular type of collagen graft led to significant delay of wound closure. Careful study of histology samples revealed that grafts that delayed wound closure induced the synthesis of new dermis de novo at the injury site, instead of forming scar, which is the normal outcome of the spontaneous wound healing response. This was the first demonstration of regeneration of a tissue (dermis) that does not regenerate by itself in the adult mammal. After the initial discovery, further research led to the composition and fabrication of grafts that were evaluated in clinical trials. These grafts were synthesized as a graft copolymer of microfibrillar type I collagen and a glycosaminoglycan, chondroitin-6-sulfate, fabricated into porous sheets by freeze-drying, and then cross-linked by dehydrothermal treatment. 
Control of the structural features of the collagen scaffold (average pore size, degradation rate and surface chemistry) was eventually found to be a critical prerequisite for its unusual biological activity. In 1981 Burke and Yannas proved that their artificial skin worked on patients with 50 to 90 percent burns, vastly improving the chances of recovery and improved quality of life. John F. Burke also claimed, in 1981, "[The Artificial skin] is soft and pliable, not stiff and hard, unlike other substances used to cover burned-off skin." Several patents were granted to MIT for the creation of collagen-based grafts that can induce dermis regeneration. U.S. Patent 4,418,691 (December 6, 1983) was cited by the National Inventors Hall of Fame as the key patent describing the invention of a process for regenerated skin (Inductees Natl. Inventors Hall of Fame, 2015). These patents were later translated into a commercial product by Integra LifeSciences Corp., a company founded in 1989. Integra Dermal Regeneration Template received FDA approval in 1996, and the FDA listed it as a "Significant Medical Device Breakthrough" in the same year. Since then, it has been applied worldwide to treat patients who are in need of new skin to treat massive burns and traumatic skin wounds, those undergoing plastic surgery of the skin, as well as others who have certain forms of skin cancer. In clinical practice, a thin graft sheet manufactured from the active collagen scaffold is placed on the injury site, which is then covered with a thin sheet of silicone elastomer that protects the wound site from bacterial infection and dehydration. The graft can be seeded with autologous cells (keratinocytes) in order to accelerate wound closure, however the presence of these cells is not required for regenerating the dermis. Grafting skin wounds with Integra leads to the synthesis of normal vascularized and innervated dermis de novo, followed by re-epithelization and formation of epidermis. Although early versions of the scaffold were not capable of regenerating hair follicles and sweat glands, later developments by S.T Boyce and coworkers led to solution of this problem. The mechanism of regeneration using an active collagen scaffold has been largely clarified. The scaffold retains regenerative activity provided that it has been prepared with appropriate levels of the specific surface (pore size in range 20-125 μm), degradation rate (degradation half-life 14 ± 7 days) and surface chemical features (ligand densities for integrins α1β1 and α2β1 must exceed approximately 200 μΜ α1β1 and α2β1 ligands). It has been hypothesized that specific binding of a sufficient number of contractile cells (myofibroblasts) on the scaffold surface, occurring within a narrow time window, is required for induction of skin regeneration in the presence of this scaffold. Studies with skin wounds have been extended to transected peripheral nerves, and the combined evidence supports a common regeneration mechanism for skin and peripheral nerves using this scaffold. Design considerations Fabricating artificial skin has the difficulty of mimicking living tissue with similar biological and mechanical performance. As outlined by Integra founders Yannas and Burke, there are three key factors to consider in the creation of artificial skin: material, bio/physiochemical properties, and mechanical properties. Material Material selection is the most important part for designing artificial skin. 
It needs to be biocompatible with the body while having properties adequate for its function. Human skin is made of type I collagen, elastin, and glycosaminoglycan. The artificial skin by Integra is made of a copolymer composed of collagen and glycosaminoglycan. Collagen is a hydrophilic polymer whose degradation and stiffness can be controlled by the degree of crosslinking. However, it can be brittle and susceptible to breakdown by the enzyme collagenase. In order to make the material tougher and more resistant, a copolymer is formed with glycosaminoglycan (GAG). GAGs are long polysaccharides that act as shock absorbers. Collagen-GAG (CG) matrices have a higher modulus of elasticity and energy needed to fracture than collagen alone, making them more suitable materials. An outer layer of silicone is normally applied to the matrix in order to serve as a protective layer. Another material that can be used in synthetic skin is elastin. Elastin has a similar effect to GAG, as it reduces the tensile strength and compressive modulus of the material while increasing its toughness. Mechanical properties Not only does the material have to be biocompatible and conducive to proliferation, it also has to have mechanical properties similar to those of real skin in order to serve as an adequate substitute. Skin is the first line of defense for the body, so it is subject to many chemical and mechanical assaults. As such, the artificial skin needs to be strong and resistant to tearing from the stretching that occurs in everyday activity. It also needs to be strong enough to hold the sutures placed during surgery. Stiffness can be controlled in several ways; as previously mentioned, one is crosslinking through chemical or biophysical methods. Chemical methods produce stronger materials, but biophysical methods are more conducive to cell proliferation. Furthermore, it has been noted that skin is viscoelastic and undergoes hysteresis: it shows time-dependent stress relaxation and follows a different path during unloading. Another important consideration is the wettability of the material. This is the ability of a liquid to maintain contact with a solid surface. If the CG matrix membrane does not wet the wound bed substrate properly, air pockets can form, which will lead to infection. The membrane must not be too stiff, so that it can drape over the surface. Furthermore, shear (lateral) or peeling (normal) forces can displace the membrane such that air pockets can reform. This can be mitigated by adding an adhesive bond, like eschar or scab, between the two surfaces. Although the mechanical properties of the synthetic skin do not need to be exactly the same as those of human skin, the main ones that should be similar include modulus of elasticity, tear strength, and fracture energy. Biophysical and physiochemical properties Ultimately, the goal of the synthetic skin is to close the wound and regrow new skin. This means it first adheres to the wound and creates an airtight seal where neodermal growth can occur. During this time, the synthetic skin must degrade such that there is space for the newly grown skin. Thus, biocompatibility and degradability are also under consideration for design. Further research Research is continually being done on artificial skin. Newer technologies, such as an autologous spray-on skin produced by Avita Medical, are being tested in efforts to accelerate healing and minimize scarring. 
The Fraunhofer Institute for Interfacial Engineering and Biotechnology is working towards a fully automated process for producing artificial skin. Their goal is a simple two-layer skin without blood vessels that can be used to study how skin interacts with consumer products, such as creams and medicines. They hope to eventually produce more complex skin that can be used in transplants. Hanna Wendt, and a team of her colleagues in the Department of Plastic, Hand and Reconstructive Surgery at Medical School Hannover Germany, have found a method for creating artificial skin using spider silk. Before this, however, artificial skin was grown using materials like collagen. These materials did not seem strong enough. Instead, Wendt and her team turned to spider silk, which is known to be 5 times stronger than Kevlar. The silk is harvested by "milking" the silk glands of golden orb web spiders. The silk was spooled as it was harvested, and then it was woven into a rectangular steel frame. The steel frame was 0.7 mm thick, and the resulting weave was easy to handle or sterilize. Human skin cells were added to the meshwork silk and were found to flourish under an environment providing nutrients, warmth and air. However at this time, using spider silk to grow artificial skin in mass quantities is not practical because of the tedious process of harvesting spider silk. Australian researchers are currently searching for a new, innovative way to produce artificial skin. This would produce artificial skin more quickly and in a more efficient way. The skin produced would only be 1 millimeter thick and would only be used to rebuild the epidermis. They can also make the skin 1.5 millimetres thick, which would allow the dermis to repair itself if needed. This would require bone marrow from a donation or from the patient's body. The bone marrow would be used as a "seed", and would be placed in the grafts to mimic the dermis. This has been tested on animals and has been proven to work with animal skin. Professor Maitz said, "In Australia, someone with a full-thickness burn to up to 80 per cent of their body surface area has every prospect of surviving the injury... However their quality of life remains questionable as we're unable, at present, to replace the burned skin with normal skin...We're committed to ensuring the pain of survival is worth it, by developing a living skin equivalent." Synthetic skin Another form of "artificial skin" has been created out of flexible semiconductor materials that can sense touch for those with prosthetic limbs. The artificial skin is anticipated to augment robotics in conducting rudimentary jobs that would be considered delicate and require sensitive "touch". Scientists found that by applying a layer of rubber with two parallel electrodes that stored electrical charges inside of the artificial skin, tiny amounts of pressure could be detected. When pressure is exerted, the electrical charge in the rubber is changed and the change is detected by the electrodes. However, the film is so small that when pressure is applied to the skin, the molecules have nowhere to move and become entangled. The molecules also fail to return to their original shape when the pressure is removed. A recent development in the synthetic skin technique has been made by imparting the color changing properties to the thin layer of silicon with the help of artificial ridges which reflect a very specific wavelength of light. 
By tuning the spaces between these ridges, color to be reflected by the skin can be controlled. This technology can be used in color-shifting camouflages and sensors that can detect otherwise imperceptible defects in buildings, bridges, and aircraft. 3D printers Universidad Carlos III de Madrid, Center for Energy, Environmental and Technological Research, Hospital General Universitario Gregorio Marañón and BioDan Group created a 3D bioprinter capable of creating human skin that functions exactly as real skin does. References Skin Skin Synthetic biology
Artificial skin
[ "Engineering", "Biology" ]
3,006
[ "Synthetic biology", "Biological engineering", "Artificial organs", "Bioinformatics", "Molecular genetics" ]
19,858,946
https://en.wikipedia.org/wiki/Positive%20pressure%20enclosure
A positive pressure enclosure, also known as a welding habitat or hot work habitat, is a chamber used to provide a safe working environment for performing hot work in the presence of explosive gases or vapors. Such enclosures are commonly used in welding environments and are associated with the offshore oil industry. A positive pressure enclosure works by providing a constant inflow of breathable air, which in turn causes a continuous outflow of gas from the chamber. This outflow of gas prevents the ingress of explosive gases or vapors that are often present in these work environments. This constant venting of gases from the chamber also serves to purge the air inside the chamber of unwanted gaseous byproducts of the welding process. Most commercial versions of positive pressure enclosures are referred to by their manufacturers as habitats. Safety measures Air quality According to Malaysia's Department of Occupational Safety and Health (DOSH), the ventilation is considered adequate when the number of air changes every hour is not less than 10 under normal conditions (defined as "processes which generate little or no heat, smoke or fume") and not less than twenty ("should there be processes which generate heat, smoke or fume"). The airflow inside the habitat should be arranged in a one-way flow with the inlet (at floor level) and outlet (at top level) on the farthest and opposite sides. The habitat ventilation flow rate must be 2,000 cubic feet per minute of air from a clean source per welder to dilute the polluted air inside the habitat. Heat stress The United States Department of Labor OSHA states that any type of job that raises workers' deep core temperature (listed as higher than 100.4 degrees F (38 °C)) raises the risk of heat stress, and provides a list of guidelines which might be used to manage work in these environments. Positive pressure assurance A minimum positive pressure of 0.1 inch of water or 0.00025 bar (equivalent to 25 pascals or 0.00363 psi) shall be maintained inside the habitat enclosure to prevent the ingress of hydrocarbons in the event of a leak occurring outside. Standards IEC standard Many countries are members of the International Electrotechnical Commission (IEC). Positive pressure enclosures, or "welding habitats", work on the principle of overpressure. This protection principle is regulated by IEC Standard 60079-13. ATEX directive The ATEX certification is the certification standard of the European Union and is mandatory for operating equipment in explosive atmospheres in Europe. ATEX certification is not based on the IECEx certification scheme; certification is based on the ATEX directive 2014/34/EU. Under the ATEX 2014/34/EU Directive, all equipment requires a proper manufacturer's EU Declaration of Conformity. For Zone 1 welding habitats this EU Declaration of Conformity must be based on a Notified Body issued EU-Type Examination Certificate for transportable ventilated rooms, EN 50381. Operating principles Flammability limits Flammable gases are not generally explosive under all conditions; the gas must be present at a concentration within its flammability limits. Additionally, oxygen must be present. The flammability limits of a gas are expressed as its proportion relative to the other gases present. For example, for methane, research by Akifumi Takahashi et al. shows a lower explosive limit of 4.4% and an upper explosive limit of 16.3%. A positive pressure enclosure works by ensuring that the methane present in the work area never approaches the lower explosive limit. 
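As a quick arithmetic check of the air-quality figures cited above (at least 10 air changes per hour, and 2,000 cubic feet per minute of clean air per welder), the Python sketch below applies them to a habitat of assumed size; the enclosure dimensions, welder count and fan capacity are illustrative assumptions.

# Sketch: checking an assumed air supply against the two ventilation figures
# cited above.  Habitat size, welder count and fan flow are assumptions.
habitat_volume_ft3 = 8 * 10 * 8       # ft^3, assumed 8 ft x 10 ft x 8 ft enclosure
welders = 2
supply_cfm = 4500.0                   # ft^3/min assumed to be delivered by the fan units

air_changes_per_hour = supply_cfm * 60.0 / habitat_volume_ft3
cfm_per_welder = supply_cfm / welders

print(f"{air_changes_per_hour:.0f} air changes per hour (requirement: at least 10)")
print(f"{cfm_per_welder:.0f} cfm per welder (requirement: at least 2,000)")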
Positive pressure The operating pressure inside a typical isolation chamber is set only marginally above local pressure; typically only 0.05 kilopascals (about 0.007 pounds per square inch) above local atmospheric pressure. This is low enough to be undetectable to operators working inside the enclosure (a person sitting in a bathtub full of water is exposed to higher pressures), but, because the enclosure leaks, the air supply must be sufficient to ensure that the air inside the enclosure is constantly replaced. Intake air is drawn into the enclosure by fan units. See also References Welding safety Process safety
Positive pressure enclosure
[ "Chemistry", "Engineering" ]
825
[ "Chemical process engineering", "Safety engineering", "Process safety" ]
19,862,791
https://en.wikipedia.org/wiki/Composite%20overwrapped%20pressure%20vessel
A composite overwrapped pressure vessel (COPV) is a vessel consisting of a thin, non-structural liner wrapped with a structural fiber composite, designed to hold a fluid under pressure. The liner provides a barrier between the fluid and the composite, preventing leaks (which can occur through matrix microcracks which do not cause structural failure) and chemical degradation of the structure. In general, a protective shell is applied for shielding against impact damage. The most commonly used composites are fiber reinforced polymers (FRP), using carbon and Kevlar fibers. The primary advantage of a COPV as compared to a similar sized metallic pressure vessel is lower weight; COPVs, however, carry an increased cost of manufacturing and certification. Overview A composite overwrapped pressure vessel (COPV) is a pressure-containing vessel, typically composed of a metallic liner, a composite overwrap, and one or more bosses. They are used in spaceflight due to their high strength and low weight. During operation, COPVs expand from their unpressurized state. Manufacturing COPVs are commonly manufactured by winding resin-impregnated high tensile strength fiber tape directly onto a cylindrical or spherical metallic liner. A robot places the tape so that the fibers lie straight and do not cross or kink, which would create a stress concentration in the fiber, and also ensures that there are minimal gaps or voids between tapes. The entire vessel is then heated in a temperature controlled oven in order to harden the composite resin. During manufacturing, COPVs undergo a process called autofrettage. The unit is pressurized and the liner expands and plastically (permanently) deforms, resulting in a permanent volume increase. The pressure is then relieved and the liner contracts a small amount, being loaded in compression by the overwrap at near its compressive yield point. This residual strain improves cycle life. Another reason to autofrettage a vessel is to verify that the volume increase across pressure vessels in a product line remains within an expected range. Larger volume growth than usual could indicate manufacturing defects such as overwrap voids, a high stress gradient through the overwrap layers, or other damage. Testing Various tests and inspections are performed on COPVs, including hydrostatic tests, stress-rupture lifetime, and nondestructive evaluation. Aging Three main components affect a COPV's strength due to aging: cycle fatigue, age life of the overwrap, and stress rupture life. Failures COPVs can be subject to complex modes of failure. In 2016, a SpaceX Falcon 9 rocket exploded on the pad due to the failure of a COPV inside the liquid oxygen tank: the failure resulted from accumulation of frozen solid oxygen between the COPV's aluminum liner and composite overwrap in a void or buckle. The entrapped oxygen can either break overwrap fibers or cause friction between fibers as it swells, igniting the fibers in the pure oxygen and causing the COPV to fail. A similar failure occurred in 2015 on CRS-7 when the COPV burst, causing the oxygen tank to overpressurize and explode 139 seconds into flight. See also References Pressure vessels
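To illustrate the weight advantage mentioned above, here is a rough sketch (Python) based on the thin-walled-cylinder hoop-stress formula sigma = P*r/t; the strengths, densities, safety factor and service pressure are representative, assumed values for illustration only, not data for any particular vessel or the actual COPV design process.

# Rough thin-wall sizing: wall thickness set by hoop stress sigma = P*r/t.
# Material figures are representative only; real COPV design also covers the
# liner, winding angles, damage tolerance and certification requirements.

def wall_mass_per_area(pressure_pa, radius_m, allowable_pa, density_kg_m3, safety=2.0):
    thickness = safety * pressure_pa * radius_m / allowable_pa   # hoop-stress sizing
    return thickness * density_kg_m3                              # kg per m^2 of shell

P, r = 30e6, 0.2   # 300 bar service pressure, 0.2 m radius (hypothetical case)

aluminium = wall_mass_per_area(P, r, allowable_pa=270e6, density_kg_m3=2700)
carbon    = wall_mass_per_area(P, r, allowable_pa=1500e6, density_kg_m3=1600)

print(f"aluminium shell: {aluminium:.1f} kg/m^2")
print(f"carbon overwrap: {carbon:.1f} kg/m^2")   # roughly an order of magnitude lighter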
Composite overwrapped pressure vessel
[ "Physics", "Chemistry", "Engineering" ]
650
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
19,870,711
https://en.wikipedia.org/wiki/Laser-based%20angle-resolved%20photoemission%20spectroscopy
Laser-based angle-resolved photoemission spectroscopy is a form of angle-resolved photoemission spectroscopy that uses a laser as the light source. Photoemission spectroscopy is a powerful and sensitive experimental technique for studying surface physics. It is based on the photoelectric effect, originally observed by Heinrich Hertz in 1887 and later explained by Albert Einstein in 1905: when a material is illuminated by light, electrons can absorb photons and escape from the material with kinetic energy E_k = hν − φ, where hν is the incident photon energy and φ is the work function of the material. Since the kinetic energy of the ejected electrons is closely tied to the internal electronic structure, analysis of the photoelectron spectrum reveals fundamental physical and chemical properties of the material, such as the type and arrangement of local bonding, the electronic structure and the chemical composition. In addition, because electrons with different momenta escape from the sample in different directions, angle-resolved photoemission spectroscopy is widely used to provide the dispersive energy-momentum spectrum. The photoemission experiment is usually conducted with a synchrotron radiation light source, with typical photon energies of 20–100 eV. Synchrotron light is ideal for investigating two-dimensional surface systems and offers unparalleled flexibility to continuously vary the incident photon energy. However, because of the high cost of constructing and maintaining such an accelerator, the strong competition for beam time, and the universal minimum of the electron mean free path in materials around these operating photon energies (20–100 eV), which fundamentally limits the sensitivity to three-dimensional bulk materials, an alternative photon source for angle-resolved photoemission spectroscopy is desirable. If femtosecond lasers are used, the method can easily be extended to access excited electronic states and electron dynamics by introducing a pump-probe scheme; see also two-photon photoelectron spectroscopy. Laser-based ARPES Background Table-top laser-based angle-resolved photoemission spectroscopy has been developed by several research groups. Daniel Dessau of the University of Colorado, Boulder, made the first demonstration and applied the technique to superconducting systems. The achievement not only greatly reduced the cost and size of the facility but also, most importantly, provided unprecedented bulk sensitivity due to the low photon energy, typically 6 eV, and the consequently longer photoelectron mean free path (2–7 nm) in the sample. This advantage is extremely valuable for the study of strongly correlated materials and high-Tc superconductors, in which the physics of photoelectrons from the topmost layers may differ from that of the bulk. In addition to the roughly one-order-of-magnitude improvement in bulk sensitivity, the advance in momentum resolution is also very significant: photoelectrons are more broadly dispersed in emission angle when the incident photon energy decreases. In other words, for a given angular resolution of the electron spectrometer, a lower photon energy leads to higher momentum resolution. The typical momentum resolution of a 6 eV laser-based ARPES is approximately 8 times better than that of a 50 eV synchrotron radiation ARPES. Moreover, the better momentum resolution at low photon energy also means that less of k-space is accessible to ARPES, which allows more precise spectral analysis.
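As a rough illustration of the momentum-resolution argument above, the sketch below (Python) evaluates dk ≈ (sqrt(2 m E_k)/ħ) · cos(θ) · dθ for a fixed analyser angular resolution at 6 eV and 50 eV photon energies; the work function and angular resolution are assumed typical values, so the exact improvement ratio depends on the material and instrument.

import numpy as np

HBAR = 1.054571817e-34   # J*s
ME   = 9.1093837015e-31  # kg (free-electron mass)
EV   = 1.602176634e-19   # J

def dk_parallel(photon_ev, work_function_ev, dtheta_deg, theta_deg=0.0):
    """Momentum resolution (1/Angstrom) for a given angular resolution."""
    ek = (photon_ev - work_function_ev) * EV          # photoelectron kinetic energy
    k = np.sqrt(2 * ME * ek) / HBAR                   # photoelectron momentum / hbar, 1/m
    dk = k * np.cos(np.radians(theta_deg)) * np.radians(dtheta_deg)
    return dk * 1e-10                                 # convert 1/m -> 1/Angstrom

wf = 4.5   # assumed typical work function (eV)
for hv in (6.0, 50.0):
    print(hv, "eV photons ->", round(dk_parallel(hv, wf, dtheta_deg=0.3), 4), "1/Angstrom")
# Lower photon energy gives a smaller dk for the same analyser angular resolution.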
For instance, in 50 eV synchrotron ARPES, electrons from the first four Brillouin zones are excited and scattered, contributing to the background of the photoelectron analysis. In contrast, the small momentum accessible to 6 eV ARPES covers only part of the first Brillouin zone, so only electrons from a small region of k-space can be ejected and detected as background. This reduced inelastic scattering background is desirable when measuring weak physical quantities, in particular in high-Tc superconductors. Experimental realization The first 6 eV laser-based ARPES system used a Kerr-lens mode-locked Ti:sapphire oscillator pumped by a frequency-doubled 5 W Nd:vanadate laser, generating 70 fs, 6 nJ pulses tunable around 840 nm (1.5 eV) at a 1 MHz repetition rate. Two stages of nonlinear second-harmonic generation are carried out through type I phase matching in β-barium borate; the resulting fourth-harmonic light at 210 nm (~6 eV) is then focused and directed into the ultra-high-vacuum chamber as the low-energy photon source used to investigate the electronic structure of the sample. In the first demonstration, Dessau's group showed that the typical fourth-harmonic spectrum fits a Gaussian profile with a full width at half maximum of 4.7 meV and delivers about 200 μW of power. The combination of high flux (~10^14–10^15 photons/s) and narrow bandwidth allows laser-based ARPES to outperform synchrotron radiation ARPES even when the best undulator beamlines are used. Another notable point is that the fourth-harmonic light can be passed through either a quarter-wave plate or a half-wave plate, producing circularly polarized or arbitrarily linearly polarized light for the ARPES measurement. Because the polarization of the light can influence the signal-to-background ratio, the ability to control the polarization is a very significant improvement and an advantage over synchrotron ARPES. With the aforementioned favorable features, including lower operating and maintenance costs, better energy and momentum resolution, higher flux and easy polarization control of the photon source, laser-based ARPES is an ideal candidate for conducting more sophisticated experiments in condensed matter physics. Applications High-Tc superconductor One way to demonstrate the power of laser-based ARPES is the study of high-Tc superconductors. The following figure references refer to this publication. Fig. 1 shows the experimental dispersion relation, binding energy versus momentum, of superconducting Bi2Sr2CaCu2O8+d along the nodal direction of the Brillouin zone. Fig. 1(b) and Fig. 1(c) were taken with synchrotron light sources of 28 eV and 52 eV, respectively, using the best undulator beamlines. The significantly sharper spectral peaks obtained with laser-based ARPES, evidence of quasiparticles in the cuprate superconductor, are shown in Fig. 1(a). This was the first comparison of the dispersive energy-momentum relation obtained at low photon energy from a table-top laser with that obtained at higher energy from synchrotron ARPES. The much clearer dispersion in (a) indicates the improved energy-momentum resolution, and many important physical features, such as the overall band dispersion, the Fermi surface, superconducting gaps, and a kink due to electron-boson coupling, are successfully reproduced.
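A quick check (Python) of the photon energies quoted above for the fundamental and its fourth harmonic, using E = hc/λ:

H_C_EV_NM = 1239.841984  # h*c in eV*nm

for label, wavelength_nm in [("fundamental", 840.0), ("fourth harmonic", 210.0)]:
    print(label, round(H_C_EV_NM / wavelength_nm, 2), "eV")
# fundamental ~1.48 eV and fourth harmonic ~5.90 eV, matching the ~1.5 eV and ~6 eV figures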
It is foreseeable that in the near future laser-based ARPES will be widely used to help condensed matter physicists obtain more detailed information about the nature of superconductivity in exotic materials, as well as other novel properties that cannot be observed with state-of-the-art conventional experimental techniques. Time-resolved electron dynamics Femtosecond laser-based ARPES can be extended to give spectroscopic access to excited states in time-resolved photoemission and two-photon photoelectron spectroscopy. By pumping an electron into a higher excited state with the first photon, the subsequent evolution and interactions of the electronic states as a function of time can be studied with the second, probing photon. Traditional pump-probe experiments usually measure changes in optical constants, which may be too indirect for extracting the relevant physics. Since ARPES provides detailed information about electronic structures and interactions, pump-probe laser-based ARPES can study more complicated electronic systems with sub-picosecond resolution. Summary and perspective Even though angle-resolved synchrotron radiation sources are widely used to investigate surface dispersive energy-momentum spectra, laser-based ARPES can provide more detailed and bulk-sensitive electronic structure with much better energy and momentum resolution, which is critically necessary for studying strongly correlated electronic systems, high-Tc superconductors, and phase transitions in exotic quantum systems. In addition, the lower operating costs and higher photon flux make laser-based ARPES easier to handle and more versatile and powerful than many other modern experimental techniques for surface science. See also Photoemission ARPES Two-photon photoelectron spectroscopy Synchrotron radiation XPS Fermi surface List of laser articles References Emission spectroscopy Synchrotron-related techniques
Laser-based angle-resolved photoemission spectroscopy
[ "Physics", "Chemistry" ]
1,794
[ "Emission spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
19,871,700
https://en.wikipedia.org/wiki/Ballistic%20impact
Ballistic impact is a high velocity impact by a small mass object, analogous to runway debris or small arms fire. The simulation of ballistic impacts can be achieved with a light-gas gun or other ballistic launcher. It is important to study the response of materials to ballistic impact loads. Applications of this research include body armor, armored vehicles and fortified buildings, as well as the protection of essential equipment, such as the jet engines of an airliner. References See also Impact (mechanics) Ballistics Firearms Forensic techniques Materials testing Projectiles
Ballistic impact
[ "Physics", "Materials_science", "Engineering" ]
106
[ "Materials testing", "Applied and interdisciplinary physics", "Ballistics", "Materials science" ]
19,872,687
https://en.wikipedia.org/wiki/Ferroelectric%20liquid%20crystal%20display
A ferroelectric liquid-crystal display (FLCD) is a display technology based on the ferroelectric properties of chiral smectic liquid crystals, as proposed in 1980 by Clark and Lagerwall. The effect was reportedly discovered in 1975, and several companies pursued the development of FLCD technologies, notably Canon and Central Research Laboratories (CRL), along with others including Seiko, Sharp, Mitsubishi and GEC. Canon and CRL pursued different technological approaches with regard to the switching of display cells, these providing the individual pixels or subpixels, and the production of intermediate pixel intensities between full transparency and full opacity, these differing approaches being adopted by other companies seeking to develop FLCD products. Development By 1985, Seiko had already demonstrated a colour FLCD panel able to display a 10-inch diagonal still image with a resolution of . By 1993, Canon had delivered the first commercial application of the technology in its EZPS Japanese-language desktop publishing system in the form of a 15-inch monochrome display with a reported cost of around £2,000, and the company demonstrated a 21-inch 64-colour display and a 24-inch 16-greyscale display, both with a resolution and able to show "GUI software with multiple windows". Other applications included projectors, viewfinders and printers. The FLCD did not make many inroads as a direct view display device. Manufacturing of larger FLCDs was problematic, making them unable to compete against direct view LCDs based on nematic liquid crystals using the twisted nematic field effect or in-plane switching. Today, the FLCD is used in reflective microdisplays based on Liquid Crystal on Silicon technology. Using ferroelectric liquid crystal (FLC) in FLCoS technology allows a much smaller display area, which eliminates the problems of manufacturing larger area FLC displays. Additionally, the dot pitch or pixel pitch of such displays can be as low as 6 μm, giving a very high resolution display in a small area. To produce color and grey-scale, time multiplexing is used, exploiting the sub-millisecond switching time of the ferroelectric liquid crystal. These microdisplays find applications in 3D head mounted displays (HMDs), image insertion in surgical microscopes, and electronic viewfinders where direct-view LCDs fail to provide more than 600 ppi resolution. Ferroelectric LCoS also finds commercial uses in structured illumination for 3D metrology and super-resolution microscopy. Some commercial products use FLCD. The high switching speed allows optical switches and shutters to be built into printer heads. References Display technology Liquid crystal displays
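A minimal sketch (Python) of the binary time-multiplexing idea mentioned above, in which fast switching buys grey levels within a single video frame; the frame rate and switching time are assumed illustrative figures, not the specification of any particular FLCoS device.

# Grey levels from binary-weighted subframes within one video frame.
# Assumed illustrative figures: 60 Hz frame rate, 100 microsecond FLC switching time.

frame_time_s = 1.0 / 60.0
switch_time_s = 100e-6          # sub-millisecond ferroelectric switching

# Crude upper bound: how many binary-weighted display slots fit in one frame
# if the shortest slot equals the switching time and each next slot doubles.
bits = 0
budget = frame_time_s
slot = switch_time_s
while budget >= slot:
    budget -= slot
    slot *= 2                   # binary-weighted display slots
    bits += 1

print(bits, "bit planes ->", 2 ** bits, "grey levels per frame")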
Ferroelectric liquid crystal display
[ "Engineering" ]
541
[ "Electronic engineering", "Display technology" ]
13,021,753
https://en.wikipedia.org/wiki/Ostrowski%E2%80%93Hadamard%20gap%20theorem
In mathematics, the Ostrowski–Hadamard gap theorem is a result about the analytic continuation of complex power series whose non-zero terms are of orders that have a suitable "gap" between them. Such a power series is "badly behaved" in the sense that it cannot be extended to be an analytic function anywhere on the boundary of its disc of convergence. The result is named after the mathematicians Alexander Ostrowski and Jacques Hadamard. Statement of the theorem Let 0 < p1 < p2 < ... be a sequence of integers such that, for some λ > 1 and all j ∈ N, p_{j+1}/p_j > λ. Let (αj)j∈N be a sequence of complex numbers such that the power series f(z) = Σ_{j=1}^∞ αj z^{p_j} has radius of convergence 1. Then no point z with |z| = 1 is a regular point for f; i.e. f cannot be analytically extended from the open unit disc D to any larger open set—not even to a single point on the boundary of D. See also Lacunary function Fabry gap theorem References External links Mathematical series Theorems in complex analysis
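For concreteness, a classic lacunary series satisfying the hypothesis with λ = 2 is written out below in LaTeX; by the theorem, the unit circle is its natural boundary.

% Classic example: exponents p_j = 2^j, so p_{j+1}/p_j = 2 > 1 for all j.
f(z) \;=\; \sum_{j=0}^{\infty} z^{2^{j}}, \qquad |z| < 1,
\qquad \text{has no analytic continuation past } |z| = 1 .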
Ostrowski–Hadamard gap theorem
[ "Mathematics" ]
224
[ "Sequences and series", "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical structures", "Series (mathematics)", "Mathematical analysis stubs", "Calculus", "Theorems in complex analysis" ]
13,024,185
https://en.wikipedia.org/wiki/Glucagon%20receptor%20family
The glucagon receptor family is a group of closely related G-protein coupled receptors which include: Glucagon receptor Glucagon-like peptide 1 receptor Glucagon-like peptide 2 receptor Gastric inhibitory polypeptide receptor The first three receptors bind closely related peptide hormones (glucagon, glucagon-like peptide-1, glucagon-like peptide-2) derived from the proglucagon polypeptide. The last receptor binds gastric inhibitory polypeptide. References External links G protein-coupled receptors
Glucagon receptor family
[ "Chemistry" ]
123
[ "G protein-coupled receptors", "Signal transduction" ]
13,026,240
https://en.wikipedia.org/wiki/Glycogen%20phosphorylase%20isoenzyme%20BB
Glycogen phosphorylase isoenzyme BB (abbreviation: GPBB) is an isoenzyme of glycogen phosphorylase. This isoform of the enzyme exists in cardiac (heart) and brain tissue. The enzyme is one of the "new cardiac markers" being discussed as a way to improve early diagnosis of acute coronary syndrome. A rapid rise in blood levels can be seen in myocardial infarction and unstable angina. Other enzymes related to glycogen phosphorylase are abbreviated as GPLL (liver) and GPMM (muscle). References Cardiology Blood tests
Glycogen phosphorylase isoenzyme BB
[ "Chemistry" ]
155
[ "Blood tests", "Chemical pathology" ]
13,028,634
https://en.wikipedia.org/wiki/Jet%20Propulsion%20Laboratory%20Development%20Ephemeris
Jet Propulsion Laboratory Development Ephemeris (abbreviated JPL DE(number), or simply DE(number)) designates one of a series of mathematical models of the Solar System produced at the Jet Propulsion Laboratory in Pasadena, California, for use in spacecraft navigation and astronomy. The models consist of numeric representations of positions, velocities and accelerations of major Solar System bodies, tabulated at equally spaced intervals of time, covering a specified span of years. Barycentric rectangular coordinates of the Sun, eight major planets and Pluto, and geocentric coordinates of the Moon are tabulated. History There have been many versions of the JPL DE, from the 1960s through the present, in support of both robotic and crewed spacecraft missions. Available documentation is limited, but we know DE69 was announced in 1969 to be the third release of the JPL Ephemeris Tapes, and was a special purpose, short-duration ephemeris. The then-current JPL Export Ephemeris was DE19. These early releases were distributed on magnetic tape. In the days before personal computers, computers were large and expensive, and numerical integrations such as these were run by large organizations with ample resources. The JPL ephemerides prior to DE405 were integrated on a Univac mainframe in double precision. For instance, DE102, which was created in 1977, took six million steps and ran for nine days on a Univac 1100/81. DE405 was integrated on a DEC Alpha in quadruple precision. In the 1970s and early 1980s, much work was done in the astronomical community to update the astronomical almanacs from the theoretical work of the 1890s to modern, relativistic theory. From 1975 through 1982, six ephemerides were produced at JPL using the modern techniques of least-squares adjustment of numerically-integrated output to high precision data: DE96 in Nov. 1975, DE102 in Sep. 1977, DE111 in May 1980, DE118 in Sep. 1981, and DE200 in 1982. DE102 was the first numerically integrated so-called Long Ephemeris, covering much of history for which useful astronomical observations were available: 1141 BC to AD 3001. DE200, a version of DE118 migrated to the J2000.0 reference frame, was adopted as the fundamental ephemeris for the new almanacs starting in 1984. DE402 introduced coordinates referred to the International Celestial Reference Frame (ICRF). DE440 and DE441 were published in 2021, with improvements in the orbits of Jupiter, Saturn and Pluto from more recent spacecraft observations. JPL ephemerides have been the basis of the ephemerides of sun, moon and planets in the Astronomical Almanac since the volumes for 1984 through 2002, which used JPL's ephemeris DE200. (From 2003 through 2014 the basis was updated to use DE405, and further updated from 2015 when DE430 began to be used.) Construction Each ephemeris was produced by numerical integration of the equations of motion, starting from a set of initial conditions. Due to the precision of modern observational data, the analytical method of general perturbations could no longer be applied to a high enough accuracy to adequately reproduce the observations. The method of special perturbations was applied, using numerical integration to solve the n-body problem, in effect putting the entire Solar System into motion in the computer's memory, accounting for all relevant physical laws. 
The initial conditions were both constants such as planetary masses, from outside sources, and parameters such as initial positions and velocities, adjusted to produce output which was a "best fit" to a large set of observations. A least-squares technique was used to perform the fitting. As of DE421, perturbations from 343 asteroids, representing about 90% of the mass of the main asteroid belt, have been included in the dynamical model. The physics modeled included the mutual Newtonian gravitational accelerations and their relativistic corrections (a modified form of the Einstein-Infeld-Hoffmann equations), the accelerations caused by the tidal distortion of the Earth, the accelerations caused by the figure of the Earth and Moon, and a model of the lunar librations. The observational data in the fits has been an evolving set, including: ranges (distances) to planets measured by radio signals from spacecraft, direct radar-ranging of planets, two-dimensional position fixes (on the plane of the sky) by VLBI of spacecraft, transit and CCD telescopic observations of planets and small bodies, and laser-ranging of retroreflectors on the Moon, among others. DE102, for instance, was fit to 48,479 observations. The time argument of the JPL integrated ephemerides, in early versions known as Teph, became recognized as a relativistic coordinate time scale, as is necessary in precise work to account for the small relativistic effects of time dilation and simultaneity. The IAU's 2006 redefinition of TDB became essentially equivalent to Teph, and the redefined TDB has been explicitly adopted in recent versions of the JPL ephemerides. Distribution Positions and velocities of the Sun, Earth, Moon, and planets, along with the orientation of the Moon, are stored as Chebyshev polynomial coefficients fit in 32 day-long segments. The ephemerides are now available via World Wide Web and FTP as data files containing the Chebyshev coefficients, along with source code to recover (calculate) positions and velocities. Files vary in the time periods they cover, ranging from a few hundred years to several thousand, and bodies they include. Data may be based on each planet's geometric center or a planetary-system barycenter. The use of Chebyshev polynomials enables highly precise, efficient calculations for any given point in time. DE405 calculation for the inner planets "recovers" accuracy of about 0.001 seconds of arc (arcseconds) (equivalent to about 1 km at the distance of Mars); for the outer planets it is generally about 0.1 arcseconds. The 'reduced accuracy' DE406 ephemeris gives an interpolating precision (relative to the full ephemeris values) no worse than 25 metres for any planet and no worse than 1 metre for the Moon. Note that these precision numbers are for the interpolated values relative to the original tabulated coordinates. The overall precision and accuracy of interpolated values for describing the actual motions of the planets will be a function of both the precision of the ephemeris tabulated coordinates and the precision of the interpolation. Applications JPL uses the ephemerides for navigation of spacecraft throughout the Solar System. Typically, a new ephemeris is computed including the latest available observations of the target planet(s), either for planning of the mission(s), or for final contact of the spacecraft with the target. See below, Recent ephemerides in the series. 
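As a sketch of how positions and velocities are recovered from such Chebyshev coefficients (Python/NumPy), the snippet below maps a time into a segment's normalized interval, evaluates the polynomial for position and its derivative for velocity; the coefficient values are placeholders, not real DE data, and actual files also carry per-segment time bounds and per-body sub-intervals.

import numpy as np
from numpy.polynomial import chebyshev as C

def eval_segment(coeffs, t, t0, t1):
    """Position and velocity from one Chebyshev segment covering [t0, t1] (seconds)."""
    x = 2.0 * (t - t0) / (t1 - t0) - 1.0                        # map time into [-1, 1]
    pos = C.chebval(x, coeffs)                                   # e.g. km
    vel = C.chebval(x, C.chebder(coeffs)) * 2.0 / (t1 - t0)     # chain rule -> km/s
    return pos, vel

# Placeholder coefficients for one coordinate of one body over a 32-day segment.
coeffs = np.array([1.5e8, 2.0e6, -3.0e4, 4.0e2])   # hypothetical numbers only
t0, t1 = 0.0, 32 * 86400.0
print(eval_segment(coeffs, t=10 * 86400.0, t0=t0, t1=t1))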
The Astronomical Almanac for 1984 through 2002 was based on JPL ephemeris DE200, and from 2003 to 2014 the Astronomical Almanac was based on JPL ephemeris DE405. Since 2015, the Almanac has been derived from DE430. The JPL ephemerides are widely used for planetary science; some examples are included in the Notes and References. Software is available to use the JPL ephemerides for the production of apparent ephemerides for any location and time; these are widely used by professional and amateur astronomers for reducing planetary observations and producing very precise observing guides. Recent ephemerides can be used with the planetarium software Stellarium. Ephemerides in the series Ephemerides for Solar System bodies are available through a JPL website and via FTP. Latest releases Source: DE440 was created in June 2020. The new DE440 / 441 general-purpose planetary solution includes seven additional years of ground and space-based astrometric data, data calibrations, and dynamical model improvements, most significantly involving Jupiter, Saturn, Pluto, and the Kuiper Belt. Inclusion of 30 new Kuiper-belt masses, and the Kuiper Belt ring mass, results in a time-varying shift of ~100 km in DE440's barycenter relative to DE430. The 114 Megabyte ephemeris files include the orientation of the Moon. It spans the years 1550–2650. JPL started transitioning to DE440 in early April 2021. Supplemental versions are also available which include the planetary geometric center of Mars as well as Mars' barycenter. DE441 was created in June 2020. This ephemeris is longer than DE440, -13,200 to 17,191, but less accurate (due to neglecting lunar core-mantle damping). It is useful for analyzing historical observations that are outside the span of DE440. Past releases DE102 was created in 1981; includes nutations but not librations. Referred to the dynamical equator and equinox of 1950. Covers early 1410 BC through late 3002 AD. DE200 was created in 1981; includes nutations but not librations. Referred to the dynamical equator and equinox of 2000. Covers late 1599 AD through early 2169 AD. This ephemeris was used for the Astronomical Almanac from 1984 to 2003. DE202 was created in 1987; includes nutations and librations. Referred to the dynamical equator and equinox of 2000. Covers late 1899 through 2049. DE402 was released in 1995, and was quickly superseded by DE403. DE403 was created in 1993, released in 1995, and expressed in the coordinates of the International Earth Rotation Service (IERS) reference frame, essentially the ICRF. The data compiled by JPL to derive the ephemeris began to move away from limited-accuracy telescopic observations and more toward higher-accuracy radar-ranging of the planets, radio-ranging of spacecraft, and very-long-baseline-interferometric (VLBI) observations of spacecraft, especially for the four inner planets. Telescopic observations remained important for the outer planets because of their distance, hence the inability to bounce radar off of them, and the difficulty of parking a spacecraft near them. The perturbations of 300 asteroids were included, vs DE118/DE200 which included only the five asteroids determined to cause the largest perturbations. Better values of the planets' masses had been found since DE118/DE200, further refining the perturbations. Lunar Laser Ranging accuracy was improved, giving better positions of the Moon. DE403 covered the time span early 1599 to mid 2199. DE404 was released in 1996.
A so-called Long Ephemeris, this condensed version of DE403 covered 3000 BC to AD 3000. While both DE403 and DE404 were integrated over the same timespan, the interpolation of DE404 was somewhat reduced in accuracy, and nutation of the Earth and libration of the Moon were not included. DE405 was released in 1998. It added several years' extra data from telescopic, radar, spacecraft, and VLBI observations (of the Galileo spacecraft at Jupiter, in particular). The method of modeling the asteroids' perturbations was improved, although the same number of asteroids were modeled. The ephemeris was more accurately oriented onto the ICRF. DE405 covered 1600 to 2200 to full precision. This ephemeris was utilized in the Astronomical Almanac from 2003 until 2014. DE406 was released with DE405 in 1998. A Long Ephemeris, this was the condensed version of DE405, covering 3000 BC to AD 3000 with the same limitations as DE404. This is the same integration as DE405, but with the accuracy of the interpolating polynomials lessened to reduce file size for the longer time span covered by the file. DE407 was apparently unreleased. Details in readily-available sources are sketchy. DE408 was an unreleased ephemeris, created in 2005 as a longer version of DE406, covering 20,000 years. DE409 was released in 2003 for the Mars Exploration Rover spacecraft arrival at Mars and the Cassini arrival at Saturn. Further spacecraft ranging and VLBI (to the Mars Global Surveyor, Mars Pathfinder and the Mars Odyssey spacecraft) and telescopic data were included in the fit. The orbits of the Pioneer and Voyager spacecraft were reprocessed to give data points for Saturn. These resulted in improvements over DE405, especially to the predicted positions of Mars and Saturn. DE409 covered the years 1901 to 2019. DE410, also released in 2003, covered 1901–2019, with improvements over DE409 in the masses for Venus, Mars, Jupiter, Saturn and the Earth-Moon system based on recent research, though the masses had not yet been adopted by the IAU. The ephemerides were created to support the arrivals of the MER and Cassini spacecraft. DE411 was widely cited in the astronomical community, but not publicly released by JPL. DE412 was widely cited in the astronomical community, but not publicly released by JPL. DE413 was released in 2004 with an updated ephemeris of Pluto in support of the occultation of a star by its satellite Charon on 11 Jul 2005. DE413 was fit to new CCD telescopic observations of Pluto in order to give improved positions of the planet and its moon. DE414 was created in 2005 and released in 2006. The numerical integration software was updated to use quadruple-precision for the Newtonian part of the equations of motion. Ranging data to the Mars Global Surveyor and Mars Odyssey spacecraft were extended to 2005, and further CCD observations of the five outer planets were included in the fit. Some data was accidentally left out of the fit, namely Magellan Venus data for 1992-94 and Galileo Jupiter data for 1996-97. Some ranging data to the NEAR Shoemaker spacecraft orbiting the asteroid Eros was used to derive the Earth/Moon mass ratio. DE414 covered the years 1599 to 2201. DE418 was released in 2007 for planning the New Horizons mission to Pluto. New observations of Pluto, which took advantage of the new astrometric accuracy of the Hipparcos star catalog, were included in the fit. Mars spacecraft ranging and VLBI observations were updated through 2007. Asteroid masses were estimated differently.
Lunar laser ranging data for the Moon was added for the first time since DE403, significantly improving the lunar orbit and librations. Estimated position data from the Cassini spacecraft was included in the fit, improving the orbit of Saturn, but rigorous analysis of the data was deferred to a later date. DE418 covered the years 1899 to 2051, and JPL recommended not using it outside of that range due to minor inconsistencies which remained in the planets' masses due to time constraints. DE421 was released in 2008. It included additional ranging and VLBI measurements of Mars spacecraft, new ranging and VLBI of the Venus Express spacecraft, the latest estimates of planetary masses, additional lunar laser ranging, and two more months of CCD measurements of Pluto. When initially released in 2008, the DE421 ephemeris covered the years 1900 to 2050. An additional data release in 2013 extended the coverage to the year 2200. DE422 was created in 2009 for the MESSENGER mission to Mercury. A Long Ephemeris, it was intended to replace DE406, covering 3000 BC to AD 3000. DE423 was released in 2010. Position estimates of the MESSENGER spacecraft and additional range and VLBI data from the Venus Express spacecraft were fit. DE423 covered the years 1799 to 2200. DE424 was created in 2011 to support the Mars Science Laboratory mission. DE430 was created in 2013 and is intended for use in analyzing modern data. It covers the dates 1550 January 1 to 2650 January 22 with the most accurate lunar ephemeris. From 2015 onwards this ephemeris is utilized in the Astronomical Almanac. Beginning with this release, only the Mars barycenter was included, due to the small masses of its moons Phobos and Deimos, which create a very small offset from the planet's center. The complete ephemeris file is 128 megabytes, but several alternative versions have been made available by JPL. DE431 was created in 2013 and is intended for analysis of earlier historical observations of the Sun, Moon, and planets. It covers a longer time span than DE430 (13201 BC to AD 17191), agreeing with DE430 within 1 meter over the time period covered by DE430. The position of the Moon is accurate to within 20 meters between 1913 and 2113, and the error grows quadratically outside of that range. It is the largest of the ephemerides files at 3.4 gigabytes. DE432 was created in April 2014. It includes librations but no nutations. DE432 is a minor update to DE430, and is intended primarily to aid the New Horizons project targeting of Pluto. DE436 was created in 2016 and was based on DE430, with improved orbital data for Jupiter (specifically for the Juno mission). DE438 was created in 2018 and was based on DE430, with improved orbital data for Mercury (for the MESSENGER mission), Mars (for the Mars Odyssey and Mars Reconnaissance Orbiters), and Jupiter (for Juno).
US Naval Observatory (Naval Oceanography Portal), "History of the Astronomical Almanac" (accessed September 2017). Astrometry Celestial mechanics Dynamical systems Dynamics of the Solar System
Jet Propulsion Laboratory Development Ephemeris
[ "Physics", "Astronomy", "Mathematics" ]
3,911
[ "Dynamics of the Solar System", "Classical mechanics", "Astrophysics", "Astrometry", "Mechanics", "Celestial mechanics", "Solar System", "Astronomical sub-disciplines", "Dynamical systems" ]
26,848,621
https://en.wikipedia.org/wiki/Cipher%20security%20summary
This article summarizes publicly known attacks against block ciphers and stream ciphers. Attacks that are not publicly known may exist, and not all entries may be up to date. Table color key Best attack This column lists the complexity of the attack: If the attack doesn't break the full cipher, "rounds" refers to how many rounds were broken "time" — time complexity, number of cipher evaluations for the attacker "data" — required known plaintext-ciphertext pairs (if applicable) "memory" — how many blocks worth of data needs to be stored (if applicable) "related keys" — for related-key attacks, how many related key queries are needed Common ciphers Key or plaintext recovery attacks Attacks that lead to disclosure of the key or plaintext. Distinguishing attacks Attacks that allow distinguishing ciphertext from random data. Less-common ciphers Key recovery attacks Attacks that lead to disclosure of the key. Distinguishing attacks Attacks that allow distinguishing ciphertext from random data. See also Block cipher Hash function security summary Time/memory/data tradeoff attack Transport Layer Security Bullrun (decryption program) — a secret anti-encryption program run by the U.S. National Security Agency References Cryptography lists and comparisons
Cipher security summary
[ "Technology" ]
256
[ "Computing-related lists", "Cryptography lists and comparisons" ]
26,850,357
https://en.wikipedia.org/wiki/Nuclear%20drip%20line
The nuclear drip line is the boundary beyond which atomic nuclei are unbound with respect to the emission of a proton or neutron. An arbitrary combination of protons and neutrons does not necessarily yield a stable nucleus. One can think of moving up or to the right across the table of nuclides by adding a proton or a neutron, respectively, to a given nucleus. However, adding nucleons one at a time to a given nucleus will eventually lead to a newly formed nucleus that immediately decays by emitting a proton (or neutron). Colloquially speaking, the nucleon has leaked or dripped out of the nucleus, hence giving rise to the term drip line. Drip lines are defined for protons and neutrons at the extreme of the proton-to-neutron ratio; at p:n ratios at or beyond the drip lines, no bound nuclei can exist. While the location of the proton drip line is well known for many elements, the location of the neutron drip line is only known for elements up to neon. General description Nuclear stability is limited to those combinations of protons and neutrons described by the chart of the nuclides, also called the valley of stability. The boundaries of this valley are the neutron drip line on the neutron-rich side, and the proton drip line on the proton-rich side. These limits exist because of particle decay, whereby an exothermic nuclear transition can occur by the emission of one or more nucleons (not to be confused with particle decay in particle physics). As such, the drip line may be defined as the boundary beyond which proton or neutron separation energy becomes negative, favoring the emission of a particle from a newly formed unbound system. Allowed transitions When considering whether a specific nuclear transmutation, a reaction or a decay, is energetically allowed, one only needs to sum the masses of the initial nucleus/nuclei and subtract from that value the sum of the masses of the product particles. If the result, or Q-value, is positive, then the transmutation is allowed, or exothermic because it releases energy, and if the Q-value is a negative quantity, then it is endothermic as at least that much energy must be added to the system before the transmutation may proceed. For example, to determine if 12C, the most common isotope of carbon, can undergo proton emission to 11B, one finds that about 16 MeV must be added to the system for this process to be allowed. While Q-values can be used to describe any nuclear transmutation, for particle decay the particle separation energy S is also used, and it is equivalent to the negative of the Q-value. In other words, the proton separation energy Sp indicates how much energy must be added to a given nucleus to remove a single proton. Thus, the particle drip lines are defined as the boundaries where the particle separation energy is less than or equal to zero, for which the spontaneous emission of that particle is energetically allowed. Although the location of the drip lines is well defined as the boundary beyond which particle separation energy becomes negative, the definition of what constitutes a nucleus or an unbound resonance is unclear. Some known nuclei of light elements beyond the drip lines decay with lifetimes on the order of 10^−22 seconds; this is sometimes defined to be a limit of nuclear existence because several fundamental nuclear processes (such as vibration and rotation) occur on this timescale.
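The 12C example above can be checked directly from tabulated atomic masses; a short sketch (Python, using standard table values in unified atomic mass units) is given below.

# Proton separation energy S_p(12C) = [m(11B) + m(1H) - m(12C)] * c^2.
# Atomic masses in u; the electron masses cancel in this particular difference.
M_12C = 12.000000       # u, exact by definition
M_11B = 11.0093054      # u
M_1H  = 1.00782503      # u
U_TO_MEV = 931.49410    # MeV per u

s_p = (M_11B + M_1H - M_12C) * U_TO_MEV
print(f"S_p(12C) ~ {s_p:.1f} MeV")   # ~16 MeV: proton emission from 12C is energetically forbidden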
For more massive nuclei, particle emission half-lives may be significantly longer due to a stronger Coulomb barrier, enabling other transitions such as alpha and beta decay to occur instead. This renders unambiguous determination of the drip lines difficult, as nuclei with lifetimes long enough to be observed exist far longer than the timescale of particle emission and are most probably bound. Consequently, particle-unbound nuclei are difficult to observe directly, and are instead identified through their decay energy. Nuclear structure origin of the drip lines The energy of a nucleon in a nucleus is its rest mass energy minus a binding energy. In addition to this, there is an energy due to degeneracy: for instance, a nucleon with energy E1 will be forced to a higher energy E2 if all the lower energy states are filled. This is because nucleons are fermions and obey Fermi–Dirac statistics. The work done in putting this nucleon to a higher energy level results in a pressure, which is the degeneracy pressure. When the effective binding energy, or Fermi energy, reaches zero, adding a nucleon of the same isospin to the nucleus is not possible, as the new nucleon would have a negative effective binding energy — i.e. it is more energetically favourable (the system will then have a lower overall energy) for the nucleon to be created outside the nucleus. This defines the particle drip point for that species. One- and two-particle drip lines In many cases, nuclides along the drip lines are not contiguous, but rather are separated by so-called one-particle and two-particle drip lines. This is a consequence of even and odd nucleon numbers affecting binding energy, as nuclides with even numbers of nucleons generally have a higher binding energy, and hence greater stability, than adjacent odd nuclei. These energy differences result in the one-particle drip line in an odd-Z or odd-N nuclide, for which prompt proton or neutron emission is energetically favorable in that nuclide and all other odd nuclides further outside the drip line. However, the next even nuclide outside the one-particle drip line may still be particle stable if its two-particle separation energy is non-negative. This is possible because the two-particle separation energy is always greater than the one-particle separation energy, and a transition to a less stable odd nuclide is energetically forbidden. The two-particle drip line is thus defined where the two-particle separation energy becomes negative, and denotes the outermost boundary for particle stability of a species. The one- and two-neutron drip lines have been experimentally determined up to neon, though unbound odd-N isotopes are known or deduced through non-observance for every element up to magnesium. For example, the last bound odd-N fluorine isotope is 26F, though the last bound even-N isotope is 31F. Nuclei near the drip lines are uncommon on Earth Of the three types of naturally occurring radioactivities (α, β, and γ), only alpha decay is a type of decay resulting from the nuclear strong force. The other proton and neutron decays occurred much earlier in the life of the atomic species and before the Earth was formed. Thus, alpha decay can be considered either a form of particle decay or, less frequently, as a special case of nuclear fission. The timescale for the nuclear strong force is much faster than that of the nuclear weak force or the electromagnetic force, so the lifetimes of nuclei past the drip lines are typically on the order of nanoseconds or less.
For alpha decay, the timescale can be much longer than for proton or neutron emission owing to the high Coulomb barrier seen by an alpha-cluster in a nucleus (the alpha particle must tunnel through the barrier). As a consequence, there are no naturally-occurring nuclei on Earth that undergo proton or neutron emission; however, such nuclei can be created, for example, in the laboratory with accelerators or naturally in stars. The Facility for Rare Isotope Beams (FRIB) at Michigan State University came online in mid-2022 and has created many novel radioisotopes, each of which is extracted in a beam and used for study. FRIB runs a beam of relatively stable isotopes through a medium, which disrupts the nuclei and creates numerous novel nuclei, which are then extracted. Nucleosynthesis Explosive astrophysical environments often have very large fluxes of high-energy nucleons that can be captured on seed nuclei. In these environments, radiative proton or neutron capture will occur much faster than beta decays, and as astrophysical environments with both large neutron fluxes and high-energy protons are unknown at present, the reaction flow will proceed away from beta-stability towards or up to either the neutron or proton drip lines, respectively. However, once a nucleus reaches a drip line, as we have seen, no more nucleons of that species can be added to the particular nucleus, and the nucleus must first undergo a beta decay before further nucleon captures can occur. Photodisintegration While the drip lines impose the ultimate boundaries for nucleosynthesis, in high-energy environments the burning pathway may be limited before the drip lines are reached by photodisintegration, where a high-energy gamma ray knocks a nucleon out of a nucleus. The same nucleus is subject both to a flux of nucleons and photons, so an equilibrium between neutron capture and photodisintegration is reached for nuclides with a sufficiently low neutron separation energy, particularly those near waiting points. As the photon bath will typically be described by a Planckian distribution, higher energy photons will be less abundant, and so photodisintegration will not become significant until the nucleon separation energy begins to approach zero towards the drip lines, where photodisintegration may be induced by lower energy gamma rays. At kelvin, the photon distribution is energetic enough to knock nucleons out of any nuclei that have particle separation energies less than 3 MeV, but to know which nuclei exist in what abundances one must also consider the competing radiative captures. As neutron captures can proceed in any energy regime, neutron photodisintegration is unimportant except at higher energies. However, as proton captures are inhibited by the Coulomb barrier, the cross sections for those charged-particle reactions at lower energies are greatly suppressed, and in the higher energy regimes where proton captures have a large probability to occur, there is often a competition between the proton capture and the photodisintegration that occurs in explosive hydrogen burning; but because the proton drip line is relatively much closer to the valley of beta-stability than is the neutron drip line, nucleosynthesis in some environments may proceed as far as either nucleon drip line. 
Waiting points and time scales Once radiative capture can no longer proceed on a given nucleus, either from photodisintegration or the drip lines, further nuclear processing to higher mass must either bypass this nucleus by undergoing a reaction with a heavier nucleus such as 4He, or more often wait for the beta decay. Nuclear species where a significant fraction of the mass builds up during a particular nucleosynthesis episode are considered nuclear waiting points, since further processing by fast radiative captures is delayed. As has been emphasized, the beta decays are the slowest processes occurring in explosive nucleosynthesis. From the nuclear physics side, explosive nucleosynthesis time scales are set simply by summing the beta decay half-lives involved, since the time scale for other nuclear processes is negligible in comparison, although practically speaking this time scale is typically dominated by the sum of a handful of waiting point nuclear half lives. The r-process The rapid neutron capture process is believed to operate very close to the neutron drip line, though the astrophysical site of the r-process, while widely believed to take place in core-collapse supernovae, is unknown. While the neutron drip line is very poorly determined experimentally, and the exact reaction flow is not precisely known, various models predict that nuclei along the r-process path have a two-neutron separation energy (S2n) of approximately 2 MeV. Beyond this point, stability is thought to rapidly decrease in the vicinity of the drip line, with beta decay occurring before further neutron capture. In fact, the nuclear physics of extremely neutron-rich matter is a fairly new subject, and already has led to the discovery of the island of inversion and halo nuclei such as 11Li, which has a very diffuse neutron skin leading to a total radius comparable to that of 208Pb. Thus, although the neutron drip line and the r-process are linked very closely in research, it is an unknown frontier awaiting future research, both from theory and experiment. The rp-process The rapid proton capture process in X-ray bursts runs at the proton drip line except near some photodisintegration waiting points. This includes the nuclei 21Mg, 30S, 34Ar, 38Ca, 56Ni, 60Zn, 64Ge, 68Se, 72Kr, 76Sr, and 80Zr. One clear nuclear structure pattern that emerges is the importance of pairing, as one notices all the waiting points above are at nuclei with an even number of protons, and all but 21Mg also have an even number of neutrons. However, the waiting points will depend on the assumptions of the X-ray burst model, such as metallicity, accretion rate, and the hydrodynamics, along with the nuclear uncertainties, and as mentioned above, the exact definition of the waiting point may not be consistent from one study to the next. Although there are nuclear uncertainties, compared to other explosive nucleosynthesis processes, the rp-process is quite well experimentally constrained, as, for example, all the above waiting point nuclei have at the least been observed in the laboratory. Thus as the nuclear physics inputs can be found in the literature or data compilations, the Computational Infrastructure for Nuclear Astrophysics allows one to do post-processing calculations on various X-ray burst models, and define for oneself the criteria for the waiting point, as well as alter any nuclear parameters. 
While the rp-process in X-ray bursts may have difficulty bypassing the 64Ge waiting point, certainly in X-ray pulsars where the rp-process is stable, instability toward alpha decay places an upper limit near A = 100 on the mass that can be reached through continuous burning. The exact limit is a matter presently under investigation; 104–109Te are known to undergo alpha decay whereas 103Sb is proton-unbound. Even before the limit near A = 100 is reached, the proton flux is thought to considerably decrease and thus slow down the rp-process, before low capture rate and a cycle of transmutations between isotopes of tin, antimony, and tellurium upon further proton capture terminate it altogether. However, it has been shown that if there are episodes of cooling or mixing of previous ashes into the burning zone, material as heavy as 126Xe can be created. Neutron stars In neutron stars, neutron-heavy nuclei are found as relativistic electrons penetrate the nuclei and produce inverse beta decay, wherein the electron combines with a proton in the nucleus to make a neutron and an electron-neutrino: e− + p → n + νe. As more and more neutrons are created in nuclei, the energy levels for neutrons get filled up to an energy level equal to the rest mass of a neutron. At this point any electron penetrating a nucleus will create a neutron, which will "drip" out of the nucleus. At this point we have E_Fn = m_n c^2, and from this point onwards the relativistic relation E_Fn = sqrt((pFn c)^2 + (m_n c^2)^2) applies, where pFn is the Fermi momentum of the neutron. As we go deeper into the neutron star the free neutron density increases, and as the Fermi momentum increases with increasing density, the Fermi energy increases, so that energy levels lower than the top level reach neutron drip and more and more neutrons drip out of nuclei so that we get nuclei in a neutron fluid. Eventually all the neutrons drip out of nuclei and we have reached the neutron fluid interior of the neutron star. Known values Neutron drip line The values of the neutron drip line are only known for the first ten elements, hydrogen to neon. For oxygen (Z = 8), the maximal number of bound neutrons is 16, rendering 24O the heaviest particle-bound oxygen isotope. For neon (Z = 10), the maximal number of bound neutrons increases to 24 in the heaviest particle-stable isotope 34Ne. The location of the neutron drip line for fluorine and neon was determined in 2017 by the non-observation of isotopes immediately beyond the drip line. The same experiment found that the heaviest bound isotope of the next element, sodium, is at least 39Na. These were the first new discoveries along the neutron drip line in over twenty years. The neutron drip line is expected to diverge from the line of beta stability after calcium with an average neutron-to-proton ratio of 2.4. Hence, it is predicted that the neutron drip line will fall out of reach for elements beyond zinc (where the drip line is estimated around N = 60) or possibly zirconium (estimated N = 88), as no known experimental techniques are theoretically capable of creating the necessary imbalance of protons and neutrons in drip line isotopes of heavier elements. Indeed, neutron-rich isotopes such as 49S, 52Cl, and 53Ar that were calculated to lie beyond the drip line have been reported as bound in 2017–2019, indicating that the neutron drip line may lie even farther away from the beta-stability line than predicted. The table below lists the heaviest particle-bound isotope of the first ten elements.
Not all lighter isotopes are bound. For example, 39Na is bound, but 38Na is unbound. As another example, although 6He and 8He are bound, 5He and 7He are not. Proton drip line The general location of the proton drip line is well established. For all elements occurring naturally on Earth and having an odd number of protons, at least one species with a proton separation energy less than zero has been experimentally observed. Up to germanium, the location of the drip line for many elements with an even number of protons is known, but none past that point are listed in the evaluated nuclear data. There are a few exceptional cases where, due to nuclear pairing, there are some particle-bound species outside the drip line, such as 8B and 178Au. One may also note that nearing the magic numbers, the drip line is less understood. A compilation of the first unbound nuclei known to lie beyond the proton drip line is given below, with the number of protons Z and the corresponding isotopes, taken from the National Nuclear Data Center. See also Extended periodic table Table of nuclides Radioactive decay Further reading References Nuclear physics Nucleosynthesis Nucleons Radioactivity
Nuclear drip line
[ "Physics", "Chemistry" ]
3,850
[ "Nuclear fission", "Nucleons", "Astrophysics", "Nucleosynthesis", "Nuclear physics", "Nuclear fusion", "Radioactivity" ]
26,855,026
https://en.wikipedia.org/wiki/Laterite
Laterite is a soil type rich in iron and aluminium and is commonly considered to have formed in hot and wet tropical areas. Nearly all laterites are of rusty-red coloration, because of high iron oxide content. They develop by intensive and prolonged weathering of the underlying parent rock, usually when there are conditions of high temperatures and heavy rainfall with alternate wet and dry periods. The process of formation is called laterization. Tropical weathering is a prolonged process of chemical weathering which produces a wide variety in the thickness, grade, chemistry and ore mineralogy of the resulting soils. The majority of the land area containing laterites is between the tropics of Cancer and Capricorn. Laterite has commonly been referred to as a soil type as well as being a rock type. This, and further variation in the modes of conceptualizing about laterite (e.g. also as a complete weathering profile or theory about weathering), has led to calls for the term to be abandoned altogether. At least a few researchers, including T. R. Paton and M. A. J. Williams, specializing in regolith development have considered that hopeless confusion has evolved around the name. Material that looks highly similar to the Indian laterite occurs abundantly worldwide. Historically, laterite was cut into brick-like shapes and used in monument-building. After 1000 CE, construction at Angkor Wat and other southeast Asian sites changed to rectangular temple enclosures made of laterite, brick, and stone. Since the mid-1970s, some trial sections of bituminous-surfaced, low-volume roads have used laterite in place of stone as a base course. Thick laterite layers are porous and slightly permeable, so the layers can function as aquifers in rural areas. Locally available laterites have been used in an acid solution, followed by precipitation to remove phosphorus and heavy metals at sewage-treatment facilities. Laterites are a source of aluminum ore; the ore exists largely in clay minerals and the hydroxides, gibbsite, boehmite, and diaspore, which resembles the composition of bauxite. In Northern Ireland they once provided a major source of iron and aluminum ores. Laterite ores also were the early major source of nickel. Definition and physical description Francis Buchanan-Hamilton first described and named a laterite formation in southern India in 1807. He named it laterite from the Latin word later, which means a brick; this highly compacted and cemented soil can easily be cut into brick-shaped blocks for building. The word laterite has been used for variably cemented, sesquioxide-rich soil horizons. A sesquioxide is an oxide with three atoms of oxygen and two metal atoms. It has also been used for any reddish soil at or near the Earth's surface. Laterite covers are thick in the stable areas of the Western Ethiopian Shield, on cratons of the South American Plate, and on the Australian Shield. In Madhya Pradesh, India, the laterite which caps the plateau is thick. Laterites can be either soft and easily broken into smaller pieces, or firm and physically resistant. Basement rocks are buried under the thick weathered layer and rarely exposed. Lateritic soils form the uppermost part of the laterite cover. In some places laterites contain pisolites and ferricrete, and they may be found in elevated positions as result of relief inversion. Cliff Ollier has criticized the usefulness of the concept given that it is used to mean different things to different authors. 
Reportedly some have used it for ferricrete, others for tropical red earth soil, and yet others for soil profiles made, from top to bottom, of a crust, a mottled zone and a pallid zone. He cautions strongly against the concept of "lateritic deep weathering" since "it begs so many questions". Formation Tropical weathering (laterization) is a prolonged process of chemical weathering which produces a wide variety in the thickness, grade, chemistry and ore mineralogy of the resulting soils. The initial products of weathering are essentially kaolinized rocks called saprolites. A period of active laterization extended from about the mid-Tertiary to the mid-Quaternary periods (35 to 1.5 million years ago). Statistical analyses show that the transition in the mean and variance levels of 18O during the middle of the Pleistocene was abrupt. It seems this abrupt change was global and mainly represents an increase in ice mass; at about the same time an abrupt decrease in sea surface temperatures occurred; these two changes indicate a sudden global cooling. The rate of laterization would have decreased with the abrupt cooling of the earth. Weathering in tropical climates continues to this day, at a reduced rate. Laterites are formed from the leaching of parent sedimentary rocks (sandstones, clays, limestones); metamorphic rocks (schists, gneisses, migmatites); igneous rocks (granites, basalts, gabbros, peridotites); and mineralized proto-ores; which leaves the more insoluble ions, predominantly iron and aluminum. The mechanism of leaching involves acid dissolving the host mineral lattice, followed by hydrolysis and precipitation of insoluble oxides and sulfates of iron, aluminum and silica under the high temperature conditions of a humid sub-tropical monsoon climate. An essential feature for the formation of laterite is the repetition of wet and dry seasons. Rocks are leached by percolating rain water during the wet season; the resulting solution containing the leached ions is brought to the surface by capillary action during the dry season. These ions form soluble salt compounds which dry on the surface; these salts are washed away during the next wet season. Laterite formation is favored in low topographical reliefs of gentle crests and plateaus which prevents erosion of the surface cover. The reaction zone where rocks are in contact with water—from the lowest to highest water table levels—is progressively depleted of the easily leached ions of sodium, potassium, calcium and magnesium. A solution of these ions can have the correct pH to preferentially dissolve silicon oxide rather than the aluminum oxides and iron oxides. Silcrete has been suggested to form in zones in relatively dry "precipitating zones" of laterites. To the contrary, in the wetter parts of laterites subject to leaching ferricretes have been suggested to form. The mineralogical and chemical compositions of laterites are dependent on their parent rocks. Laterites consist mainly of quartz, zircon, and oxides of titanium, iron, tin, aluminum and manganese, which remain during the course of weathering. Quartz is the most abundant relic mineral from the parent rock. Laterites vary significantly according to their location, climate and depth. The main host minerals for nickel and cobalt can be either iron oxides, clay minerals or manganese oxides. Iron oxides are derived from mafic igneous rocks and other iron-rich rocks; bauxites are derived from granitic igneous rock and other iron-poor rocks. 
Nickel laterites occur in zones of the earth which experienced prolonged tropical weathering of ultramafic rocks containing the ferro-magnesian minerals olivine, pyroxene, and amphibole. Locations Yves Tardy, from the French Institut National Polytechnique de Toulouse and the Centre National de la Recherche Scientifique, calculated that laterites cover about one-third of the Earth's continental land area. Lateritic soils are the subsoils of the equatorial forests, of the savannas of the humid tropical regions, and of the Sahelian steppes. They cover most of the land area between the tropics of Cancer and Capricorn; areas not covered within these latitudes include the extreme western portion of South America, the southwestern portion of Africa, the desert regions of north-central Africa, the Arabian peninsula and the interior of Australia. Some of the oldest and most highly deformed ultramafic rocks which underwent laterization are found as petrified fossil soils in the complex Precambrian shields in Brazil and Australia. Smaller highly deformed Alpine-type intrusives have formed laterite profiles in Guatemala, Colombia, Central Europe, India and Burma. Large thrust sheets of Mesozoic island arcs and continental collision zones underwent laterization in New Caledonia, Cuba, Indonesia and the Philippines. Laterites reflect past weathering conditions; laterites which are found in present-day non-tropical areas are products of former geological epochs, when that area was near the equator. Present-day laterites occurring outside the humid tropics are considered to be indicators of climatic change, continental drift or a combination of both. In India, laterite soils occupy an area of 240,000 square kilometres. Uses Agriculture Laterite soils have a high clay content, which gives them a higher cation exchange capacity, lower permeability, higher plasticity and greater water-holding capacity than sandy soils. Because the particles are so small, water is trapped between them, and after rain the water moves into the soil only slowly. Due to intensive leaching, laterite soils are low in fertility in comparison to other soils; however, they respond readily to manuring and irrigation. Palms are less likely to suffer from drought because the rainwater is held in the soil. However, if the structure of lateritic soils becomes degraded, a hard crust can form on the surface, which hinders water infiltration and the emergence of seedlings, and leads to increased runoff. It is possible to rehabilitate such soils using a system called the 'bio-reclamation of degraded lands'. This involves using indigenous water-harvesting methods (such as planting pits and trenches), applying animal and plant residues, and planting high-value fruit trees and indigenous vegetable crops that are tolerant of drought conditions. These soils are most suitable for plantation crops. They are good for oil palm, tea, coffee and cashew cultivation. The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) has employed this system to rehabilitate degraded laterite soils in Niger and increase smallholder farmers' incomes. In some places, these soils support grazing grounds and scrub forests. Building blocks When moist, laterites can easily be cut with a spade into regular-sized blocks. Laterite is mined while it is below the water table, so it is wet and soft.
Upon exposure to air it gradually hardens as the moisture between the flat clay particles evaporates and the larger iron salts lock into a rigid lattice structure and become resistant to atmospheric conditions. The art of quarrying laterite material into masonry is suspected to have been introduced from the Indian subcontinent. They harden like iron when they are exposed to air. After 1000 CE Angkorian construction changed from circular or irregular earthen walls to rectangular temple enclosures of laterite, brick and stone structures. Geographic surveys show areas which have laterite stone alignments which may be foundations of temple sites that have not survived. The Khmer people constructed the Angkor monuments—which are widely distributed in Cambodia and Thailand—between the 9th and 13th centuries. The stone materials used were sandstone and laterite; brick had been used in monuments constructed in the 9th and 10th centuries. Two types of laterite can be identified; both types consist of the minerals kaolinite, quartz, hematite and goethite. Differences in the amounts of minor elements arsenic, antimony, vanadium and strontium were measured between the two laterites. Angkor Wat—located in present-day Cambodia—is the largest religious structure built by Suryavarman II, who ruled the Khmer Empire from 1112 to 1152. It is a World Heritage site. The sandstone used for the building of Angkor Wat is Mesozoic sandstone quarried in the Phnom Kulen Mountains, about away from the temple. The foundations and internal parts of the temple contain laterite blocks behind the sandstone surface. The masonry was laid without joint mortar. It is used as a local building material in places such as Burkina Faso, where it is valued for being strong and for reducing heating and cooling costs. Road building The French surfaced roads in the Cambodia, Thailand and Vietnam area with crushed laterite, stone or gravel. Kenya, during the mid-1970s, and Malawi, during the mid-1980s, constructed trial sections of bituminous-surfaced low-volume roads using laterite in place of stone as a base course. The laterite did not conform with any accepted specifications but performed equally well when compared with adjoining sections of road using stone or other stabilized material as a base. In 1984 US$40,000 per was saved in Malawi by using laterite in this way. It is also widely used in Brazil for road building. Water supply Bedrock in tropical zones is often granite, gneiss, schist or sandstone; the thick laterite layer is porous and slightly permeable so the layer can function as an aquifer in rural areas. One example is the Southwestern Laterite (Cabook) Aquifer in Sri Lanka. This aquifer is on the southwest border of Sri Lanka, with the narrow Shallow Aquifers on Coastal Sands between it and the ocean. It has the considerable water-holding capacity, depending on the depth of the formation. The aquifer in this laterite recharges rapidly with the rains of April–May which follow the dry season of February–March, and continues to fill with the monsoon rains. The water table recedes slowly and is recharged several times during the rest of the year. In some high-density suburban areas the water table could recede to below ground level during a prolonged dry period of more than 65 days. The Cabook Aquifer laterites support relatively shallow aquifers that are accessible to dug wells. Waste water treatment In Northern Ireland, phosphorus enrichment of lakes due to agriculture is a significant problem. 
Locally available laterite—a low-grade bauxite rich in iron and aluminum—is used in acid solution, followed by precipitation to remove phosphorus and heavy metals at several sewage treatment facilities. Calcium-, iron- and aluminum-rich solid media are recommended for phosphorus removal. A study, using both laboratory tests and pilot-scale constructed wetlands, reports the effectiveness of granular laterite in removing phosphorus and heavy metals from landfill leachate. Initial laboratory studies show that laterite is capable of 99% removal of phosphorus from solution. A pilot-scale experimental facility containing laterite achieved 96% removal of phosphorus. This removal is greater than reported in other systems. Initial removals of aluminum and iron by pilot-scale facilities have been up to 85% and 98% respectively. Percolating columns of laterite removed enough cadmium, chromium and lead to undetectable concentrations. There is a possible application of this low-cost, low-technology, visually unobtrusive, efficient system for rural areas with dispersed point sources of pollution. Ores Ores are concentrated in metalliferous laterites; aluminum is found in bauxites, iron and manganese are found in iron-rich hard crusts, nickel and copper are found in disintegrated rocks, and gold is found in mottled clays. Bauxite Bauxite ore is the main source of aluminum. It is a variety of laterite (residual sedimentary rock), so it has no precise chemical formula. It is composed mainly of hydrated alumina minerals such as gibbsite [Al(OH)3 or Al2O3 . 3H2O)] in newer tropical deposits; in older subtropical, temperate deposits the major minerals are boehmite [γ-AlO(OH) or Al2O3.H2O] and some diaspore [α-AlO(OH) or Al2O3.H2O]. The average chemical composition of bauxite, by weight, is 45 to 60% Al2O3 and 20 to 30% Fe2O3. The remaining weight consists of silicas (quartz, chalcedony and kaolinite), carbonates (calcite, magnesite and dolomite), titanium dioxide and water. Bauxites of economical interest must be low in kaolinite. Formation of lateritic bauxites occurs worldwide in the 145- to 2-million-year-old Cretaceous and Tertiary coastal plains. The bauxites form elongate belts, sometimes hundreds of kilometers long, parallel to Lower Tertiary shorelines in India and South America; their distribution is not related to a particular mineralogical composition of the parent rock. Many high-level bauxites are formed in coastal plains which were subsequently uplifted to their present altitude. Iron The basaltic laterites of Northern Ireland were formed by extensive chemical weathering of basalts during a period of volcanic activity. They reach a maximum thickness of and once provided a major source of iron and aluminum ore. Percolating waters caused degradation of the parent basalt and preferential precipitation by acidic water through the lattice left the iron and aluminum ores. Primary olivine, plagioclase feldspar and augite were successively broken down and replaced by a mineral assemblage consisting of hematite, gibbsite, goethite, anatase, halloysite and kaolinite. Nickel Laterite ores were the major source of early nickel. Rich laterite deposits in New Caledonia were mined starting the end of the 19th century to produce white metal. The discovery of sulfide deposits of Sudbury, Ontario, Canada, during the early part of the 20th century shifted the focus to sulfides for nickel extraction. 
About 70% of the Earth's land-based nickel resources are contained in laterites; they currently account for about 40% of the world nickel production. In 1950 laterite-source nickel was less than 10% of total production, in 2003 it accounted for 42%, and by 2012 the share of laterite-source nickel was expected to be 51%. The four main areas in the world with the largest nickel laterite resources are New Caledonia, with 21%; Australia, with 20%; the Philippines, with 17%; and Indonesia, with 12%. See also Ferricrete – stony particles conglomerated into rock by oxidized iron compounds from ground water References Sedimentology Weathering Ore deposits Aluminium minerals Pedology Building materials Soil-based building materials Regolith
Laterite
[ "Physics", "Engineering" ]
3,815
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
26,855,865
https://en.wikipedia.org/wiki/Antimicrobial%20polymer
Polymers with the ability to kill or inhibit the growth of microorganisms such as bacteria, fungi, or viruses are classified as antimicrobial agents. This class of polymers consists of natural polymers with inherent antimicrobial activity and polymers modified to exhibit antimicrobial activity. Polymers are generally nonvolatile, chemically stable, and can be chemically and physically modified to display desired characteristics and antimicrobial activity. Antimicrobial polymers are a prime candidate for use in the food industry to prevent bacterial contamination and in water sanitation to inhibit the growth of microorganisms in drinking water. Mechanism of Action Antimicrobial polymers inhibit cell growth and initiate cell death through two primary mechanisms. The first mechanism is utilized by contact-active polymers. Contact-active polymers utilize electrostatic interactions, the hydrophobic effect, and the chelate effect. Electrostatic attraction is a common initial interaction of an antimicrobial polymer with a microbe. The chelating and hydrophobic effects are common secondary interactions of antimicrobial polymers with microbes. Cationically charged antimicrobial polymers are attracted to the anionically charged bacterial cell walls. The outer wall of bacterial cells possesses a net negative charge. The cytoplasmic membrane of bacterial cells has a negative charge and contains essential proteins. The secondary interaction, the chelating effect, involves the bonding of the antimicrobial polymer to the microbial cell. These interactions lead to membrane disruption and ultimately inhibited cell growth or death. The cytoplasmic membrane of a cell is a semi-permeable membrane, which controls the transport of solutes into the cell. The phospholipid bilayer is an important component of the cell membrane, which is composed of hydrophilic heads and a hydrophobic tail. The hydrophilic heads form the inner and outer linings of the cell membrane, while the hydrophobic tails compose the interior of the membrane. The secondary interaction, the hydrophobic effect, involves the accumulation of nonpolar compounds away from water. Nonpolar components of antimicrobial polymers insert themselves into the nonpolar interior of the cell membrane. High molecular weight polymers commonly induce cell death or inhibition through contact-active interactions with the surface of cells. Cell death and inhibition result from impairment of normal cellular function. Positive residues on the polymer electrostatically interact with negative charges on the cell and induce secondary cellular effects. Cellular membrane penetration is common in low molecular weight polymers. The initial electrostatic and hydrophobic interaction of an antimicrobial polymer and biomimetic polymer causes membrane disruption and cell death. The hydrophobic tail of the polymer penetrates the phospholipid bilayer into the hydrophobic region, resulting in membrane disruption and denaturing of proteins and enzymes, as well as other secondary effects. Secondary effects include disruption of solute and electron transport as well as  disturbances to energy production pathways, which leads to cell death. The second mechanism is characterized by the release of low molecular weight antimicrobial agents from polymers. Antimicrobial agents that are released from polymers induce cell death through binding to or penetrating the cell wall. When antimicrobial agents bind to proteins, structural changes occur to the cell membrane resulting in cellular death. 
The penetration of nanoparticle antimicrobial agents into the cell wall enables the antimicrobial agents to interact with cell DNA. Microbe death results from the effects on DNA transcription and mRNA synthesis when polymer nanoparticles combine with DNA. Primary Characteristics of Antimicrobial Polymers There are different primary characteristics of antimicrobial polymers, dependent upon the mechanism of action. The two primary characteristics of contact-active antimicrobial polymers are cationic charge and hydrophobicity. Cationic residues are necessary to induce the interaction with the microbial cell wall. Polycations such as quaternary ammonium, quaternary phosphonium, and guanidinium are frequently found in antimicrobial polymers. Hydrophobic residues improve binding to the lipid bilayer and are utilized for insertion into the microbial cell wall. Non-contact-active antimicrobial polymers require the addition of antimicrobial agents to induce activity. Common agents added include N-halamine compounds, nitric oxide, and copper and silver nanoparticles. Classes of Antimicrobial Polymers Antimicrobial polymers are generally classified into two categories based on how antimicrobial activity is conferred. The first are polymers with inherent antimicrobial activity, which do not require any modifications to incite antimicrobial behavior. The other class requires modification to enable antimicrobial activity and can be differentiated by the type of modification. Polymers may be chemically modified to induce antimicrobial behavior or they may be used as a backbone for the addition of organic or inorganic compounds. Inherent Antimicrobial Activity Polymers with inherent antimicrobial activity include chitosan, poly-ε-lysine, quaternary ammonium compounds, polyethylenimine, and polyguanidines. Chitosan is a nontoxic polymer that has displayed broad-spectrum antimicrobial activity. The mechanism of action for chitosan includes electrostatic interaction, the chelate effect, and the hydrophobic effect. Electrostatic interaction is the primary initial interaction when the pH is lower, while the chelating and hydrophobic effects are the primary initial interactions when the pH is higher. Growth inhibition and death of fungi, bacteria, and yeasts have been observed with chitosan. The antimicrobial effect of chitosan is greater on fungi than on yeasts, and it is more effective against gram-negative bacteria than gram-positive bacteria. Poly-ε-lysine is a biodegradable, nontoxic, edible antimicrobial polymer. This polymer utilizes electrostatic interactions to attach to the cell wall, thereby disrupting the integrity of the cell wall. Poly-ε-lysine penetrates the cell wall, causing physiological damage to the cell and death. In comparison to a similar synthetic polymer, poly-ε-lysine is more effective against gram-positive than gram-negative bacteria. Poly-ε-lysine is also effective against Bacillus coagulans, Bacillus stearothermophilus, and Bacillus subtilis. Benzalkonium chloride, stearalkonium chloride, and cetrimonium are all quaternary ammonium compounds containing nitrogen. The antibacterial activity of these compounds is affected by the number of carbon atoms and the length of the nitrogen-containing chain. Optimal antimicrobial activity is generally seen in quaternary ammonium compounds with a long chain length, containing 8-18 carbon atoms.
Increased activity is seen against gram-positive bacteria in polymers with a chain length of 12-14 carbon atoms, while improved activity against gram-negative bacteria is seen in polymers with a chain length of 14-16 carbon atoms. Polymer quaternary ammonium compounds containing nitrogen induce cell death through electrostatic interactions and the hydrophobic effect. This group of polymers displays limited hemolytic activity, making them advantageous for use in cosmetics and healthcare. Polyethylenimine is a synthetic, nonbiodegradable polymer containing nitrogen. This polymer induces cell death through cell membrane rupture. When attached to immobilized surfaces including glass and plastic, N-alkyl-polyethylenimine caused cell inactivation in almost 100% of airborne and waterborne bacteria and fungi. A benefit of this polymer is that it is nontoxic to mammalian cells. Polyethylenimine has been applied in the medical industry for use in prostheses. Bacterial growth was reduced by 92% when polyethylenimine was tested as a coating surface for medical devices. The activity of polyethylenimine is affected by the molecular weight of the polymer; low molecular weight polyethylenimine displays negligible activity, whereas the high molecular weight form displays strong antimicrobial activity. Polyguanidines are another class of antimicrobial polymers containing nitrogen. This class of antimicrobial polymers is nontoxic and exhibits high water solubility. Polyguanidines display broad-spectrum antimicrobial activity and initially interact with microbes using electrostatic forces. Greater activity against gram-positive bacteria has been seen with polyguanidines than against gram-negative bacteria. The difference in activity is attributed to the different structures of gram-positive and gram-negative bacteria. Gram-negative bacteria have a thinner peptidoglycan layer than gram-positive bacteria. In addition, gram-negative bacteria have an outer lipid membrane, which gram-positive bacteria do not. High molecular weight polymers are able to penetrate gram-positive bacteria. Antimicrobial Activity Through Chemical Modification This class of polymers does not have any inherent antimicrobial activity. To induce antimicrobial activity, polymers are chemically modified to include active agents. Active side groups are attached to the polymer backbone to generate antimicrobial activity. Pendant groups, antibiotic drugs, or inorganic particles can be adjoined to the polymer. Pendant groups that are attached to the polymer backbone include quaternary ammonium, hydroxyl groups with an organic acid, and others. Antimicrobial polymers containing quaternary ammonium as a side group are commonly synthesized from methacrylic monomers. The benefit of these monomers is that the hydrophobicity, molecular weight, and surface charge can all be manipulated. Hydrophobicity of the polymer has a strong effect on antimicrobial activity. Polysiloxanes, which have a quaternary ammonium pendant group, have demonstrated activity against several strains of bacteria including Enterococcus hirae, E. coli, and P. aeruginosa. The flexibility and amphiphilic nature of this polymer enhances the antimicrobial activity. When benzaldehyde, a hydroxyl-group-containing organic acid, is used as a side group with methyl methacrylate polymers, growth inhibition five times that of control surfaces has been shown. Benzaldehyde has inherent antimicrobial activity and has been incorporated into polymers to improve activity.
Polymers with quaternary ammonium or hydroxyl groups with an organic acid as a pendant group have demonstrated activity against many types of bacteria, fungi, and algae. Antimicrobial activity can also be induced through the addition of inorganic particles such as silver, copper, and titanium dioxide nanoparticles to a polymer. Metal nanoparticles are incorporated into the polymer to form polymeric nanocomposites. Silver is utilized in antimicrobial polymers because of its stability as well as broad-spectrum antimicrobial activity. Positive silver ions are produced in environments beneficial for the growth of bacteria. These positive silver ions physically interact with cell wall proteins resulting in membrane disruption and cell death. Silver nanoparticles embedded into a cationic polymer have displayed activity against E.coli and S.aureus. Copper and titanium dioxide nanoparticles are less commonly employed in antimicrobial polymers than silver nanoparticles. Copper nanoparticles embedded into polypropylene nanocomposites have demonstrated the ability to kill 99.9% of bacteria. Titanium dioxide is a nontoxic material with antimicrobial activity that is photo-activated. Titanium dioxide has been embedded in polypropylene to create photoactive antimicrobial polymers. The antimicrobial activity of the polymer composite is initiated by a light source. The light source causes the titanium dioxide to be oxidized, which results in the release of highly reactive hydroxyl species that disrupt bacteria. The effectiveness of the photoactive antimicrobial polymer has been demonstrated against the bacteria E.coli. Another class of antibacterial polymers includes those whose activity is introduced through the incorporation of antibiotics into the polymer matrix. The chemical triclosan is commonly utilized for its antibacterial properties. Triclosan mixed with the copolymer styrene-acrylate exhibits antibacterial activity against E. faecalis. In addition, triclosan combined with the polymer polyvinyl alcohol has increased antibacterial activity compared to triclosan not incorporated in a polymer. The polymer polyethylenimine has also been modified to include antibiotics. Polyethylenimine is used to make bacterial cell walls more permeable, therefore increasing the sensitivity of bacteria to antibiotics. Polyethylenimine increases the effectiveness of the antibiotics including ampicillin, rifampin, cefotaxime, as well as others. Protein-Mimicking Polymers Magainin and defensin are natural peptides, short polymers composed of amino acids, which display exceptional antimicrobial activity. The antimicrobial activity is a product of the peptides’ structure, including its highly rigid backbone. These peptides have organized pendant groups, making one side of the polymer hydrophobic and the other side cationic. This group of polymers efficiently induce cell death through cell wall penetration. Polymer mimics of these antimicrobial peptides have been developed. Protein-mimicking polymers emulate the structure of magainin and defensin. Examples of protein mimicking polymers include poly(phenylene ethynylene)-based and N-carboxyanhydride-based polymers. Poly(phenylene ethynylene) polymers with amino acid pendant groups were manufactured to have positively charged side groups and a stiff backbone. The synthetic polymer had low toxicity and strong antimicrobial activity. In addition, N-carboxyanhydride-based polymers with the hydrophilic amino acid lysine and different hydrophobic amino acids were developed. 
The polymers displayed antimicrobial activity against E. coli, C. albicans, and others. Factors that Affect Antimicrobial Activity Molecular Weight The molecular weight of the polymer is perhaps one of the most important properties to consider when determining antimicrobial properties because antimicrobial activity is markedly dependent on the molecular weight. It has been determined that optimal activity is achieved when polymers have a molecular weight in the range of 1.4×10⁴ Da to 9.4×10⁴ Da. Weights larger than this range show a decrease in activity. This dependence on weight can be attributed to the sequence of steps necessary for biocidal action. Extremely large molecular weight polymers will have trouble diffusing through the bacterial cell wall and cytoplasm. Thus much effort has been directed towards controlling the molecular weight of the polymer. Counter Ion Most bacterial cell walls are negatively charged; therefore, most antimicrobial polymers must be positively charged to facilitate the adsorption process. The structure of the counter ion, or the ion associated with the polymer to balance charge, also affects the antimicrobial activity. Counter anions that form a strong ion-pair with the polymer impede the antimicrobial activity because the counter ion will prevent the polymer from interacting with the bacteria. However, ions that form a loose ion-pair or readily dissociate from the polymer exhibit a positive influence on the activity because they allow the polymer to interact freely with the bacteria. Spacer Length/Alkyl Chain Length The spacer length or alkyl chain length refers to the length of the carbon chain that composes the polymer backbone. The length of this chain has been investigated to see if it affects the antimicrobial activity of the polymer. Results have generally shown that longer alkyl chains result in higher activity. There are two primary explanations for this effect. Firstly, longer chains have more active sites available for adsorption to the bacterial cell wall and cytoplasmic membrane. Secondly, longer chains aggregate differently than shorter chains, which again may provide a better means for adsorption. However, shorter chain lengths diffuse more easily. Disadvantages A major disadvantage of antimicrobial polymers is that macromolecules are very large and thus may not act as fast as small-molecule agents. Biocidal polymers that require contact times on the order of hours to provide substantial reductions in pathogens have no practical value. Seconds, or minutes at most, should be the contact time goal for a real application. Furthermore, if the structural modification to the polymer caused by biocidal functionalization adversely affects the intended use, the polymer will be of no practical value. For example, if a fiber that must be exposed to aqueous bleach to render it antimicrobial (an N-halamine polymer) is weakened by that exposure, or its dye is bleached, it will have limited use. Synthetic Methods Synthesis from Antimicrobial Monomers This synthetic method involves covalently linking antimicrobial agents that contain functional groups with high antimicrobial activity, such as hydroxyl, carboxyl, or amino groups, to a variety of polymerizable derivatives, or monomers, before polymerization. The antimicrobial activity of the active agent may be either reduced or enhanced by polymerization.
This depends on how the agent kills bacteria, either by depleting the bacterial food supply or through bacterial membrane disruption and the kind of monomer used. Differences have been reported when homo-polymers are compared to copolymers. Examples of antimicrobial polymers synthesized from antimicrobial monomers are included in Table 2: Table 2: Polymers Synthesized from Antimicrobial Monomers and their Antimicrobial Properties Synthesis by Adding Antimicrobial Agents to Preformed Polymers This synthetic method involves first synthesizing the polymer, followed by modification with an active species. The following kinds of monomers are usually used to form the backbone of homopolymers or copolymers: vinylbenzyl chloride, methyl methacrylate, 2-chloroethyl vinyl ether, vinyl alcohol, maleic anhydride. The polymers are then activated by anchoring antimicrobial species, such as phosphonium salts, ammonium salts, or phenol groups via quaternization, substitution of chloride, or hydrolysis of anhydride. Examples of polymers synthesized from this method are provided in Table 3: Table 3: Antimicrobial Polymers Synthesized from Preformed Polymers and Antimicrobial Properties Synthesis by Adding Antimicrobial Agents to Naturally Occurring Polymers Chitin is the second-most abundant biopolymer in nature. The deacetylated product of chitin—chitosan has been found to have antimicrobial activity without toxicity to humans. This synthetic technique involves making chitosan derivatives to obtain better antimicrobial activity. Currently, work has involved the introduction of alkyl groups to the amine groups to make quaternized N-alkyl chitosan derivatives, introduction of extra quaternary ammonium grafts to the chitosan, and modification with phenolic hydroxyl moieties. Synthesis by insertion of antimicrobial agents into polymer backbone This method involves using chemical reactions to incorporate antimicrobial agents into the polymeric backbones. Polymers with biologically active groups, such as polyamides, polyesters, and polyurethanes are desirable as they may be hydrolyzed to active drugs and small innocuous molecules. For example, a series of polyketones have been synthesized and studied, which show an inhibitory effect on the growth of B. subtilis and P. fluorescens as well as fungi, A. niger and T. viride. There are also studies which incorporate antibiotics into the backbone of the polymer. Requirements of an antimicrobial polymer In order for an antimicrobial polymer to be a viable option for large-scale distribution and use there are several basic requirements that must be first fulfilled: The synthesis of the polymer should be easy and relatively inexpensive. To be produced on an industrial scale the synthetic route should ideally utilize techniques that have already been well developed. The polymer should have a long shelf life, or be stable over long periods of time. It should be able to be stored at the temperature for which it is intended for use. If the polymer is to be used for the disinfection of water, then it should be insoluble in water to prevent toxicity issues (as is the case with some current small molecule antimicrobial agents). The polymer should not decompose during use, or emit toxic residues. The polymer should not be toxic or irritating to those during handling. Antimicrobial activity should be able to be regenerated upon loss of activity. Antimicrobial polymers should be biocidal to a broad range of pathogenic microorganisms in brief times of contact. 
Applications Water treatment Polymeric disinfectants are ideal for applications in hand-held water filters, surface coatings, and fibrous disinfectants, because they can be fabricated by various techniques and can be made insoluble in water. The design of insoluble contact disinfectants that can inactivate, kill, or remove target microorganisms by mere contact, without releasing any reactive agents to the bulk phase being disinfected, is desired. Chlorine or water-soluble disinfectants have problems with residual toxicity, even if minimal amounts of the substance are used. Toxic residues can become concentrated in food, water, and the environment. In addition, because free chlorine ions and other related chemicals can react with organic substances in water to yield trihalomethane analogues that are suspected of being carcinogenic, their use should be avoided. These drawbacks can be solved by the removal of microorganisms from water with insoluble substances. Food applications Antimicrobial substances that are incorporated into packaging materials can control microbial contamination by reducing the growth rate and the maximum growth population. This is done by extending the lag phase of the target microorganism or by inactivating the microorganisms on contact. One of these applications is to extend the shelf life of food and promote safety by reducing the rate of growth of microorganisms when the package is in contact with the surfaces of solid foods, for example, meat, cheese, etc. Second, antimicrobial packaging materials greatly reduce the potential for recontamination of processed products and simplify the treatment of materials to eliminate product contamination. For example, self-sterilizing packaging might eliminate the need for peroxide treatment in aseptic packaging. Antimicrobial polymers can also be used to cover surfaces of food processing equipment as self-sanitizers. Examples include filter gaskets, conveyors, gloves, garments, and other personal hygiene equipment. Some polymers are inherently antimicrobial and have been used in films and coatings. Cationic polymers such as chitosan promote cell adhesion. This is because charged amines interact with negative charges on the cell membrane, and can cause leakage of intracellular constituents. Chitosan has been used as a coating and appears to protect fresh vegetables and fruits from fungal degradation. Although the antimicrobial effect is attributed to the antifungal properties of chitosan, it may be possible that the chitosan acts as a barrier between the nutrients contained in the produce and microorganisms. Medicine and healthcare Antimicrobial polymers are powerful candidates for controlled delivery systems and implants in dental restorative materials because of their high activities. This can be ascribed to their characteristic nature of carrying a high local charge density of active groups in the vicinity of the polymer chains. For example, electrospun fibers containing tetracycline hydrochloride, based on poly(ethylene-co-vinyl acetate), poly(lactic acid), and their blend, were prepared for use as antimicrobial wound dressings. Cellulose derivatives are commonly used in cosmetics as skin and hair conditioners. Quaternary ammonium cellulose derivatives are of particular interest as conditioners in hair and skin products. Future work in this field The field of antimicrobial polymers has progressed steadily but slowly over the past years, and appears to be on the verge of rapid expansion.
This is evidenced by a broad variety of new classes of compounds that have been prepared and studied in the past few years. Modification of polymers and fibrous surfaces, and changing the porosity, wettability, and other characteristics of the polymeric substrates, should produce implants and biomedical devices with greater resistance to microbial adhesion and biofilm formation. A number of polymers have been developed that can be incorporated into cellulose and other materials, which should provide significant advances in many fields such as food packaging, textiles, wound dressing, coating of catheter tubes, and necessarily sterile surfaces. The greater need for materials that fight infection will give incentive for discovery and use of antimicrobial polymers. References Bibliography Cowie, J.M.G. Polymers: Chemistry and Physics of Modern Materials, Chapman and Hall, 3rd edition (2007); United States. Congress. Office of Technology Assessment. Biopolymers : making materials nature's way, Washington, DC:The Office, (1993); Marsh, J. Antimicrobial peptides, J. Wiley,(1994); Wool, R.P. Bio-based polymers and composites, Elsevier Academic Press, (2005). External links Antimicrobial Polymer Technologies for Food Application Antimicrobial Materials Antimicrobial Polymer Surfaces Polymers Antimicrobials
Antimicrobial polymer
[ "Chemistry", "Materials_science", "Biology" ]
5,310
[ "Polymers", "Biocides", "Antimicrobials", "Polymer chemistry" ]
26,856,232
https://en.wikipedia.org/wiki/Thor%20washing%20machine
The Thor washing machine was the first electric clothes washer sold commercially in the United States. Produced by the Chicago-based Hurley Electric Laundry Equipment Company, the 1907 Thor is believed to be the first electrically powered washer ever manufactured, crediting Hurley as the inventor of the first automatic washing machine. Designed by Hurley engineer Alva J. Fisher, a patent for the new electric Thor was issued on August 9, 1910, three years after its initial invention. The idea of an automatic washing machine had been around for many years. However, these were crude mechanical efforts that typically involved a manually operated crank or similar design. In many ways, the patent of the new Thor washer sounds modern, even today. The patent states that a "perforated cylinder is rotatably mounted within the tub containing the wash water". A series of blades lifted the clothes as the cylinder rotated. After 8 rotations in one direction, the machine would reverse rotation to "prevent the cloths from wadding up into a compact mass". Drive belts attached to a Westinghouse motor connected to three wheels of different sizes, which moved the drum during operation. The design also included a clutch, which allowed the machine to switch direction, and an emergency stop rod. The new Thor washer was mass marketed throughout the United States beginning in 1908. Controversy There is a dispute over who was the first inventor of the automatic washer. A company called Nineteen Hundred Washing Machine Company of Binghamton, NY, claims to have produced the first electric washer in 1906; a year before Thor's release. Additionally, it has been stated in various articles on the Internet that a Ford Motor Company employee invented the electric washer in late 19th century or early 20th century. Since Ford was incorporated in 1903, the Ford story seems unlikely to be valid. Regardless, Thor remains one of the first (if not the first) company to manufacture and sell an automatic washing machine on a large scale. Other Thor innovations Tilt-a-whirl agitator Thor invented the tilt-a-whirl system in which the agitator, typically in the shape of disk, tilted back and forth within the washer drum while simultaneously rotating. The early 1930s tilt-a-whirl design was the first agitator to move water in both a horizontal and vertical motion. The 1936 version of the Thor tilt-a-whirl incorporated sculpted hands embossed on the agitator. At the time, some Thor dealers painted the fingernails of the hands on demonstration machines. Automagic washer/dishwasher In the 1940s, Thor introduced the Automagic hybrid washer/dishwasher. The top-loading machine included both a removable clothes washing drum and a dish-washing drum. The Automagic was widely marketed but disappeared from the marketplace soon after its introduction, as many consumers soured on the idea of washing dirty clothing and dishes in the same machine. Thor today The Thor trademark was acquired in 2008 by Los Angeles–based Appliances International, a supplier of washer dryer combos and stacking washers and dryers. Soon after the brand acquisition, the company introduced a new line of laundry appliances under the Thor brand. References External links Thor Appliance Company archived from the original on 2011-09-23 Cleaning tools Home appliances Laundry washing equipment
Thor washing machine
[ "Physics", "Technology" ]
680
[ "Physical systems", "Machines", "Home appliances" ]
25,360,962
https://en.wikipedia.org/wiki/C16H24N2O4
The molecular formula C16H24N2O4 (molar mass: 308.37 g/mol) may refer to: Diacetolol Hydroxycarteolol Nitracaine Ubenimex, or bestatin Molecular formulas
C16H24N2O4
[ "Physics", "Chemistry" ]
55
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
25,363,094
https://en.wikipedia.org/wiki/High-valent%20iron
High-valent iron commonly denotes compounds and intermediates in which iron is found in a formal oxidation state > 3 that show a number of bonds > 6 with a coordination number ≤ 6. The term is rather uncommon for hepta-coordinate compounds of iron. It has to be distinguished from the terms hypervalent and hypercoordinate, as high-valent iron compounds neither necessarily violate the 18-electron rule nor necessarily show coordination numbers > 6. The ferrate(VI) ion [FeO4]2− was the first structure in this class synthesized. The synthetic compounds discussed below contain highly oxidized iron in general, as the concepts are closely related. Oxoiron compounds Oxoferryl species are commonly proposed as intermediates in catalytic cycles, especially biological systems in which O2 activation is required. Diatomic oxygen has a high reduction potential (E0 = 1.23 V), but the first step required to harness this potential is a thermodynamically unfavorable one electron reduction E0 = -0.16 V. This reduction occurs in nature by the formation of a superoxide complex in which a reduced metal is oxidized by O2. The product of this reaction is a peroxide radical that is more readily reactive. A widely applicable method for the generation of high-valent oxoferryl species is the oxidation with iodosobenzene: symbolic oxidation of an iron compound using iodosobenzene; L denotes the supporting ligand Fe(IV)O Several syntheses of oxoiron(IV) species have been reported. The simplest are mixed-metal oxides of the form MFeO3, with M=Ba, Ca, or Sr. However, those compounds do not have discrete iron anions. Isolated oxoiron(IV) species are known with more complicated ligands. These compounds model biological complexes such as cytochrome P450, NO synthase, and isopenicillin N synthase. Two such reported compounds are thiolate-ligated oxoiron(IV) and cyclam-acetate oxoiron(IV). Thiolate-ligated oxoiron(IV) is formed by the oxidation of a precursor, [FeII(TMCS)](PF6) (TMCS = 1-mercaptoethyl-4,8,11-trimethyl-1,4,8,11-tetraza cyclotetradecane), and 3-5 equivalents of H2O2 at -60 ˚C in methanol. The iron(IV) compound is deep blue in color and shows intense absorption features at 460 nm, 570 nm, 850 nm, and 1050 nm. This species FeIV(=O)(TMCS)+ is stable at -60 ˚C, but decomposition is reported as temperature increases. Compound 2 was identified by Mössbauer spectroscopy, high resolution electrospray ionization mass spectrometry (ESI-MS), X-ray absorption spectroscopy, extended X-ray absorption fine structure (EXAFS), ultraviolet–visible spectroscopy (UV-vis), Fourier-transform infrared spectroscopy (FT-IR), and results were compared to density functional theory (DFT) calculations. Tetramethylcyclam oxoiron(IV) is formed by the reaction of FeII(TMC)(OTf)2, TMC = 1,4,8,11-tetramethyl-1,4,8,11-tetraazacyclotetradecane; OTf = CF3SO3, with iodosylbenzene (PhIO) in CH3CN at -40 ˚C. A second method for formation of cyclam oxoiron(IV) is reported as the reaction of FeII(TMC)(OTf)2 with 3 equivalents of H2O2 for 3 hours. This species is pale green in color and has an absorption maximum at 820 nm. It is reported to be stable for at least 1 month at -40 ˚C. It has been characterized by Mössbauer spectroscopy, ESI-MS, EXAFS, UV-vis, Raman spectroscopy, and FT-IR. High-valent iron bispidine complexes can oxidize cyclohexane to cyclohexanol and cyclohexanone in 35% yield with an alcohol to ketone ratio up to 4. 
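As a worked illustration of the thermodynamic point made at the start of this section, the short sketch below converts the two quoted reduction potentials into standard Gibbs free energy changes using the textbook relation ΔG° = −nFE°; the potentials are the values given above, while the script itself and its names are purely illustrative.

```python
# Illustrative sketch: converting the reduction potentials quoted above into
# standard Gibbs free energy changes via dG0 = -n * F * E0.
# The potentials come from the text; the function and variable names are hypothetical.

FARADAY = 96485.0  # Faraday constant in C/mol

def delta_g_kj_per_mol(n_electrons, e0_volts):
    """Standard Gibbs free energy change (kJ/mol) for an n-electron reduction at potential E0."""
    return -n_electrons * FARADAY * e0_volts / 1000.0

if __name__ == "__main__":
    # One-electron reduction of O2 (E0 = -0.16 V): positive dG0, thermodynamically unfavorable.
    print(f"1-electron step: {delta_g_kj_per_mol(1, -0.16):+.1f} kJ/mol")  # about +15 kJ/mol
    # Overall four-electron reduction of O2 to water (E0 = +1.23 V): strongly favorable.
    print(f"4-electron step: {delta_g_kj_per_mol(4, 1.23):+.1f} kJ/mol")   # about -475 kJ/mol
```

The positive value for the first step is why, as noted above, nature couples that initial reduction to oxidation of a reduced metal in a superoxide complex rather than attempting it directly.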
Fe(V)O FeVTAML(=O), TAML = tetra-amido macrocyclic ligand, is formed by the reaction of [FeIII(TAML)(H2O)](PPh4) with 2-5 equivalents of meta-chloroperbenzoic acid at -60 ˚C in n-butyronitrile. This deep green compound (two λmax at 445 and 630 nm respectively) is stable at 77 K. The stabilization of Fe(V) is attributed to the strong π–donor capacity of deprotonated amide nitrogens. Fe(VI)O Ferrate(VI) is an inorganic anion of chemical formula [FeO4]2−. It is photosensitive and contributes a pale violet colour to its compounds and solutions. It is one of the strongest water-stable oxidising species known. Although it is classified as a weak base, concentrated solutions of ferrate(VI) are only stable at high pH. Electronic structure The electronic structure of porphyrin oxoiron compounds has been reviewed. Nitridoiron and imidoiron compounds Nitridoiron and imidoiron compounds are closely related to iron-dinitrogen chemistry. The biological significance of nitridoiron(V) porphyrins has been reviewed. A widely applicable method to generate high-valent nitridoiron species is the thermal or photochemical oxidative elimination of molecular nitrogen from an azide complex. symbolic oxidative elimination of nitrogen yields a nitridoiron complex; L denotes the supporting ligand. Fe(IV)N Several structurally characterized nitridoiron(IV) compounds exist. Fe(V)N The first nitridoiron(V) compound was synthesised and characterized by Wagner and Nakamoto (1988, 1989) using photolysis and Raman spectroscopy at low temperatures. Fe(VI)N A second FeVI species apart from the ferrate(VI) ion, [(Me3cy-ac)FeN](PF6)2, has been reported. This species, is formed by oxidation followed by photolysis to yield the Fe(VI) species. Characterization of the Fe(VI) complex was done by Mossbauer, EXAFS, IR, and DFT calculations. Unlike the ferrate(VI) ion, compound 5 is diamagnetic. μ-Nitrido compounds and oxidation catalysis Bridged μ-nitrido di-iron phthalocyanine compounds such as iron(II) phthalocyanine catalyze the oxidation of methane to methanol, formaldehyde, and formic acid using hydrogen peroxide as sacrificial oxidant. Electronic structure Nitridoiron(IV) and nitridoiron(V) species were first explored theoretically in 2002. See also Jacobsen's catalyst (high-valent manganese) References Further reading Solomon et al.; Angewandte Chemie International Edition Volume 47, Issue 47, pages 9071–9074, November 10, 2008; Iron Ferrates Iron_complexes Oxidizing agents Inorganic chemistry Cluster chemistry Coordination complexes Iron, high-valent
High-valent iron
[ "Chemistry" ]
1,611
[ "Redox", "Coordination complexes", "Cluster chemistry", "Coordination chemistry", "Oxidizing agents", "Salts", "nan", "Ferrates", "Organometallic chemistry" ]
25,367,566
https://en.wikipedia.org/wiki/Rayleigh%20distance
Rayleigh distance in optics is the axial distance from a radiating aperture to a point at which the path difference between the axial ray and an edge ray is λ/4. An approximation of the Rayleigh distance is Z = D²/(2λ), in which Z is the Rayleigh distance, D is the aperture of radiation, and λ is the wavelength. This approximation can be derived as follows. Consider a right-angled triangle with adjacent side Z, opposite side D/2 and hypotenuse Z + λ/4. According to the Pythagorean theorem, (Z + λ/4)² = Z² + (D/2)². Rearranging gives Zλ/2 + λ²/16 = D²/4, and simplifying, Z = D²/(2λ) − λ/8. The constant term λ/8 can be neglected because the aperture is much larger than the wavelength. In antenna applications, the Rayleigh distance is often given as four times this value, i.e. Z = 2D²/λ, which corresponds to the border between the Fresnel and Fraunhofer regions and denotes the distance at which the beam radiated by a reflector antenna is fully formed (although sometimes the Rayleigh distance is still given as per the optical convention). The Rayleigh distance is also the distance beyond which the distribution of the diffracted light energy no longer changes according to the distance Z from the aperture. It is the reduced Fraunhofer diffraction limitation. Lord Rayleigh's paper on the subject was published in 1891. Optical quantities
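As a quick numerical check of the two conventions above, the following sketch evaluates both expressions; the aperture and wavelength are arbitrary example values, not figures taken from the article.

```python
# Minimal sketch: Rayleigh distance under the optical and antenna conventions given above.
# The example aperture and wavelength below are arbitrary illustrative values.

def rayleigh_distance_optical(aperture, wavelength):
    """Optical convention, Z = D^2 / (2*lambda): edge ray lags the axial ray by lambda/4."""
    return aperture**2 / (2 * wavelength)

def rayleigh_distance_antenna(aperture, wavelength):
    """Antenna convention, Z = 2*D^2 / lambda: the Fresnel/Fraunhofer boundary."""
    return 2 * aperture**2 / wavelength

if __name__ == "__main__":
    D = 1.0      # aperture diameter in metres (example value)
    lam = 0.03   # wavelength in metres, roughly a 10 GHz microwave (example value)
    print(f"Optical convention: Z = {rayleigh_distance_optical(D, lam):.1f} m")  # ~16.7 m
    print(f"Antenna convention: Z = {rayleigh_distance_antenna(D, lam):.1f} m")  # ~66.7 m
```

The factor of four between the two results matches the statement above that the antenna value is four times the optical one.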
Rayleigh distance
[ "Physics", "Mathematics" ]
252
[ "Optical quantities", "Quantity", "Physical quantities" ]
25,367,575
https://en.wikipedia.org/wiki/Hotspot%20Ecosystem%20Research%20and%20Man%27s%20Impact%20On%20European%20Seas
Hotspot Ecosystem Research and Man's Impact On European Seas (HERMIONE) is an international multidisciplinary project, started in April 2009, that studies deep-sea ecosystems. HERMIONE scientists study the distribution of hotspot ecosystems, how they function and how they interconnect, partially in the context of how these ecosystems are being affected by climate change and impacted by humans through overfishing, resource extraction, seabed installations (oil platforms, etc.) and pollution. Major aims of the project are to understand how humans are affecting the deep-sea environment and to provide policy makers with accurate scientific information, enabling effective management strategies to protect deep sea ecosystems. The HERMIONE project is funded by the European Commission's Seventh Framework Programme, and is the successor to the HERMES project, which concluded in March 2009. Introduction Europe's deep-ocean margin, from the Arctic to the Iberian Margin, and across the Mediterranean to the Black Sea, spans a distance of over 15,000 km and hosts a number of diverse habitats and ecosystems. Deep water coral reefs, undersea mountains populated by a multitude of organisms, vast submarine canyon systems, and hydrothermal vents are some of the features contained therein. The traditional view of the deep-sea realm as a hostile and barren place was discredited long ago, and scientists now know that much of Europe's deep sea is rich and diverse. However, the deep sea is increasingly threatened by humans: most of this deep-ocean frontier lies within Europe's Exclusive Economic Zone (EEZ) and has significant potential for the exploitation of biological, energy, and mineral resources. Research and exploration over the last two decades has shown clear signs of direct and indirect anthropogenic impacts in the deep sea, resulting from such activities as overfishing, littering and pollution. This raises concerns because deep-sea processes and ecosystems are not only important for the marine web of life, but also fundamentally contribute to the global biogeochemical cycle. Continuing with the knowledge obtained by the HERMES project (EC FP6), which contributed significantly to our understanding of deep-sea ecosystems, the HERMIONE project investigates ecosystems at critical sites on Europe's deep-ocean margin, aiming to make major advances in knowledge of their distribution and functioning, and their contribution to ecosystem goods and services. HERMIONE places special emphasis on human impact on the deep sea and on the translation of scientific information into science policy for the sustainable use of marine resources. To design and implement effective governance strategies and management plans to protect our deep seas for the future, understanding the extent, natural dynamics and interconnection of ocean ecosystems, and integrating socio-economic research with natural science, are important. To achieve this, HERMIONE uses a highly interdisciplinary and integrated approach, engaging experts in biology, ecology, biodiversity, oceanography, geology, sedimentology, geophysics and biogeochemistry, who will work alongside socio-economists and policy-makers. Hotspot research The HERMIONE project focuses on deep-sea "hotspot" ecosystems including submarine canyons, open slopes and deep basins, chemosynthetic environments, deep water coral reefs, and seamounts. 
Hotspot ecosystems support high species diversity, numbers of individuals, or both, and are therefore important in maintaining margin-wide biodiversity and abundance. HERMIONE research ranges from investigation of the ecosystems' dimensions, distribution, interconnection and functioning, to understanding the potential impacts of climate change and anthropogenic disturbance. The ultimate objective is to provide stakeholders and policymakers with the scientific knowledge necessary to support deep-sea governance, sustainable management and conservation of these ecosystems. To obtain the data needed, HERMIONE scientists are spending over 1000 days at sea, using more than 50 research vessels across Europe. Sharing vessels and equipment between partners will bring benefits through shared knowledge, expertise and data, and will also maximise the research effort, increasing efficiency and productivity. State-of-the-art technology will be used, with Remotely Operated Vehicles (ROVs) one of the critical pieces of equipment being used for a wide range of delicate manoeuvres and high-resolution surveys, from precision sampling of methane gas at cold seeps to microbathymetry mapping to examine the structure of the seabed. Large arrays of instrumented moorings, shared by different partner institutions, will be deployed in common experimental areas, allowing HERMIONE to develop experimental strategies beyond any national capacity. Study areas The HERMIONE study sites were selected on the following basis: The Arctic because of its importance in monitoring climate change; Nordic margin with abundant cold-water corals, extensive hydrocarbon exploration and the Haakon-Mosby mud volcano (HMMV) natural laboratory; Celtic margin with a mid-latitude canyon, cold water corals and the long term Porcupine Abyssal Plain (PAP) monitoring site; Portuguese margin with the highly diverse Nazare and Setubal canyons; Seamounts in the Atlantic and western Mediterranean as important biodiversity hotspots potentially under threat; Mid Atlantic Ridge (MAR) ESONET site to link cold seep to hot seep chemosynthetic studies; Mediterranean cold water cascading sites in the Gulf of Lions and outflows of the Adriatic and Aegean Seas. The HMMV, PAP, MAR and central Mediterranean sites link to the ESONET long-term monitoring sites and will provide valuable background information. Hotspot ecosystems Cold-water coral reefs Deep water coral reefs are found along the northeast Atlantic and central Mediterranean margins, and are important biodiversity hotspots. The recent HERMES project lists more than 2000 species associated with cold-water coral reefs worldwide. As well as flourishing live coral, the dead coral frameworks and rubble that are frequently found close by attract a myriad of fauna from the microscopic to the mega, and may be fundamental in coral ecosystem replenishment. Coral reefs provide a habitat for fish, a refuge from predators, a rich food source, a nursery for young fish, and are also potential sources of a wide range of medicines to treat ailments from cancer to cardiovascular disease. 
There are several known coral hotspot areas on Europe's deep-ocean margin, including the Scandinavian, Rockall-Porcupine and central Mediterranean margins, and there remain many questions about them, such as how each of the sites are connected to one another, how they arose, what drives the distribution of the reefs, how the larvae disperse and settle, how the corals and associated species reproduce, finding their physiological thresholds, how they will fare with increased ocean warming, and whether ocean warming induces a spread of coral reefs further north into the Arctic Ocean. New research will also build on previous work to define the physical environment around cold-water coral reefs such as hydrodynamic and sedimentary regimes, which will help to understand biological responses. HERMIONE scientists use cutting-edge technology to try to answer these questions. High-resolution mapping of the seafloor will be carried out to determine the location and distribution of cold-water corals, and photographic observations will be made to assess changes in the status of known reefs over time, such as their response to climatic variation or their recovery from destruction by fishing trawlers. To assess biodiversity and its relationship with environmental factors such as climate change, DNA barcoding and other molecular techniques will be used. Submarine canyons Submarine canyons are deep, steep-sided valleys that form on continental margins. Stretching from the shelf to the deep sea, they dissect much of the European margin. They are one of the most complex seascapes known to humans; their rugged topography and challenging environmental conditions mean that they are also one of the least explored. Advances in technology over the last two decades have allowed scientists to uncover some of the mysteries of canyons, the size of which often rival the Grand Canyon, USA. One of the most important discoveries is that canyons are major sources and sinks for sediment and organic matter on continental margins. They act as fast-track pathways for sediment and organic matter from the shelf to the deep sea, and can act as temporary depots for sediment and carbon storage. Particle flux through canyons has been found to be between two and four times greater than on the open slope, though the transfer of particles through canyons is thought to be largely "event-driven", which introduces a highly variable aspect to canyon conditions. Determining what drives sediment transport and deposition within canyons is one of the major challenges for HERMIONE. The capacity of canyons to focus and concentrate organic matter can promote high abundances and diversity of fauna. However, variability in environmental conditions and topography is very high, both within and between canyons, and this is reflected in the variability of the structure and dynamics of the biological communities. Our understanding of biological processes in canyons has greatly improved with the use of submersibles and ROVs, but this research has also revealed that the relationships between fauna and canyons are more complex than previously thought. The diversity of submarine canyons and their fauna means that it is difficult to make generalisations that can be used to create policies for canyon ecosystem management. It is important that the role of canyons in maintaining biodiversity, and how potential anthropogenic impacts may affect this, is better understood. 
HERMIONE will address this challenge by examining canyon ecosystems from different biogeochemical provinces and topographic settings, in light of the complex interactions among habitat (topography, water masses, currents), mass and energy transfer, and biological communities. Open slopes and deep basins Open slopes and deep basins make up > 90% of the ocean floor and 65% of the Earth's surface, and many of the goods and services provided by the deep sea (e.g., oil, gas, climate regulation and food) are produced and stored by them. They are intricately involved in global biogeochemical and ecological processes, and so are essential for the functioning of our biosphere and human wellbeing. Recent research in the HERMES (EC-FP6) project gathered a large body of information on local biodiversity at large scales, different latitudes and in different hotspot ecosystems, but the research also highlighted the high degree of complexity of deep-sea habitats. This information is fundamental to our understanding of the factors that control biodiversity at much larger scales, from hundreds to thousands of kilometres. HERMIONE will conduct further studies on the mosaic of habitats found in deep-sea slopes and basins, and will investigate the relationships within and between these habitats, their biodiversity and ecology, and their interconnection with other hotspot ecosystems. Investigating the impacts of anthropogenic activities and climate change in the deep sea is a theme that runs through all HERMIONE research. To the biological communities on open slopes and in deep basins, seafloor warming through climate change is a major threat. Up to 85% of methane reservoirs along the continental margin could be destabilised, which would not only release climate-warming methane gas into the atmosphere, but would also have unknown and potentially devastating consequences on benthic communities. The role of climatic variation on deep-sea benthos is not well understood, although large-scale changes in the structure of seafloor communities have been observed over the last two decades. The use of long-term, deep-sea observatories, e.g., the Hausgarten deep-sea observatory in the Arctic and the time-series analysis of the Catalan margin and Southern Adriatic Sea, will help HERMIONE scientists to examine recent changes in benthic communities, and to study decadal variability in physical processes, such as the dense shelf water cascading events in submarine canyons. HERMIONE aims to provide quantitative estimates of the potential consequences of biodiversity loss on ecosystem functioning, to examine how deep-sea benthos adapt to large-scale changes, and, for the first time, to create conceptual models integrating deep-sea biodiversity and quantitative analyses of ecosystem functioning and processes. Seamounts Seamounts are underwater mountains that rise from the depths of the ocean, and whose summits can sometimes be found just a few hundred metres below the sea surface. To be classified as a seamount the summit must be 1000 m higher than the surrounding seafloor, and under this definition there are an estimated 1000–2800 seamounts in the Atlantic Ocean and around 60 in the Mediterranean Sea. Seamounts enhance water flow through localised tides, eddies, and upwelling, and these physical processes may enhance primary production. Seamounts may therefore be considered as hotspots of marine life; fauna benefit from the enhanced hydrodynamics and phytoplankton supply, and thrive on the slopes and summits. 
Suspension feeders, such as gorgonian sea fans and cold-water corals like Lophelia pertusa, often dominate the rich benthic (seafloor-dwelling) communities. The enhanced abundance and diversity of fauna is not limited to benthic species, as fish are known to aggregate over seamounts. Unfortunately, this knowledge has led to increasing commercial exploitation of seamount fish by the fishing industry, and a number of seamount fish populations have already been depleted. Part of HERMIONE research will assess the threats and impacts of human activities on seamounts, including comparing data from seamounts in different stages of fisheries exploitation to understand more about the impacts of fishing activities, both on target species and non-target species, and their habitats. Despite our increasing knowledge of seamounts, there is still very little known about the relationships between their ecosystem functioning and biodiversity, and that of the surrounding areas. This information is vital in order to improve our understanding of connectivity between seamount hotspots and adjacent areas, and HERMIONE research will aim to discover whether seamounts act as centres of speciation (the evolution of new species), or if they play a role as "stepping stones", allowing fauna to colonise and disperse across the oceans. Chemosynthetic ecosystems Chemosynthetic environments - such as hot vents, cold seeps, mud volcanoes and sulphidic brine pools - show the highest biomass and productivity of all deep-sea ecosystems. The chemicals found in the fluids, gases and mud that escape from such systems provide an energy source for chemosynthetic bacteria and archaea, which are the primary producers in these systems. A huge variety of fauna profits from the association with chemosynthetic microbes, supporting large communities that can exist independently of sunlight. Some of these environments, such as methane (cold) seeps, can support up to 50,000 times more biomass than communities that rely on photosynthetic production alone. Owing to the extreme gradients and diversity in physical and chemical factors, hydrothermal vents also remain incredibly fascinating ecosystems. HERMIONE researchers aim to illustrate the tight coupling between geosphere and biosphere processes, as well as their immense heterogeneity and interconnectivity, by observing and comparing the spatial and temporal variation of chemosynthetic environments in European seas. Methane cycling and carbonate formation by microorganisms in chemosynthetic environments have implications for the control of greenhouse gases. Methane can be trapped and stored under the seabed as a gas hydrate, and under different conditions, can either be controlled by microbial consumption, or can escape into the surrounding seawater, and ultimately the atmosphere. Our understanding of the biological controls of methane seepage and feedback mechanisms for global warming is limited. The distribution and structure of cold seep communities can act as an indicator for changes in methane fluxes in the deep sea, e.g. by seafloor warming. Using multibeam echosounder data and 3D seismic data with in situ studies at seep sites, and by investigating the life histories of fauna at such ecosystems, HERMIONE scientists aim to understand more about their interconnectivity and resilience, and the implications for climate change. The great variety of fauna present in chemosynthetic environments is a real challenge to scientists.
Only a tiny fraction of microorganisms at vents and seeps has been identified, and a huge amount is still to be discovered. Their identification, their association with fauna, and the relationship between their diversity, function and habitat, are vital areas of research as biological communities act as important filters, controlling up to 100% of vent and seep emissions. By using DNA barcoding and genome analysis in addition to traditional methods of identification and experimentation, HERMIONE scientists will study the relationship between community structure and ecosystem functioning at a variety of vents, seeps, brine pools and mud volcanoes. Socio-economics, governance and science-policy interfaces With increasing ocean exploration over the last two decades has come the realisation that humans have had an extensive impact on the world’s oceans, not just close to our shores, but also reaching down into the deep sea. From destructive fishing practices and exploitation of mineral resources to pollution and litter, evidence of human impact can be found in virtually all deep-sea ecosystems. In response, the international community has set a series of ambitious goals aimed at protecting the marine environment and its resources for future generations. Three of these initiatives, decided on by world leaders during the 2002 World Summit on Sustainable Development (Johannesburg), are to achieve a significant reduction in biodiversity loss by 2010, to introduce an ecosystems approach to marine resource assessment and management by 2010, and to designate a network of marine protected areas by 2012. A crucial requirement for implementing these is the availability of high-quality scientific data and knowledge, as well as effective science-policy interfaces to ensure the policy relevance of research and to enable the rapid translation of scientific information into science policy. HERMIONE aims to provide this by filling the knowledge gap about threatened deep-sea ecosystems and their current status with respect to anthropogenic impacts (e.g. litter, chemical contamination). Socio-economists and natural scientists work together in HERMIONE, researching the socio-economics of anthropogenic impacts, mapping human activities that affect the deep sea, assessing the potential for valuing deep-sea ecosystem goods and services, studying governance options and designing and implementing real-time science-policy interfaces. HERMIONE natural and social science results will provide national, regional (EU), and global policy-makers and other stakeholders with the information needed to establish policies to ensure the sustainable use of the deep ocean and conservation of deep-sea ecosystems. References Hydrology Oceanography Climatological research Environmental impact of fishing Climate change and the environment
Hotspot Ecosystem Research and Man's Impact On European Seas
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
3,818
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "Environmental engineering" ]
25,368,725
https://en.wikipedia.org/wiki/Biochemical%20switches%20in%20the%20cell%20cycle
A series of biochemical switches control transitions between and within the various phases of the cell cycle. The cell cycle is a series of complex, ordered, sequential events that control how a single cell divides into two cells, and involves several different phases. The phases include the G1 and G2 phases, DNA replication or S phase, and the actual process of cell division, mitosis or M phase. During the M phase, the chromosomes separate and cytokinesis occurs. The switches maintain the orderly progression of the cell cycle and act as checkpoints to ensure that each phase has been properly completed before progression to the next phase. For example, Cdk, or cyclin-dependent kinase, is a major control switch for the cell cycle and it allows the cell to move from G1 to S or G2 to M by adding phosphate to protein substrates. Such multi-component (involving multiple inter-linked proteins) switches have been shown to generate decisive, robust (and potentially irreversible) transitions and trigger stable oscillations. As a result, they are a subject of active research that tries to understand how such complex properties are wired into biological control systems. Feedback loops Many biological circuits produce complex outputs by exploiting one or more feedback loops. In a sequence of biochemical events, feedback refers to a downstream element in the sequence (B) affecting some upstream component (A) so as to affect its own production or activation (output) in the future. If this element acts to enhance its own output, then it engages in positive feedback. A positive feedback loop is also known as a self-reinforcing loop, and it is possible that these loops can be part of a larger loop, as this is characteristic of regulatory circuits. Conversely, if this element leads to its own inhibition through upstream elements, this is canonically negative feedback. A negative feedback loop is also known as a balancing loop, and it is common to see oscillations in which a delayed negative feedback signal is used to maintain homeostatic balance in the system. Feedback loops can be used for amplification (positive) or self-correction (negative). The right combination of positive and negative feedback loops can generate ultrasensitivity and bistability, which in turn can generate decisive transitions and oscillations. Combination of positive and negative feedback loops Positive and negative feedback loops do not always operate distinctly. In the mechanism of biochemical switches, they work together to create a flexible system. For example, according to Pfeuty & Kaneko (2009), to overcome a drawback in biochemical systems, positive feedback regulation loops may interact with negative regulation loops to facilitate escape from stable states. The coexistence of two stable states is known as bistability, which is often the result of positive feedback regulation. An example that reveals the interaction of multiple negative and positive feedback loops is the activation of cyclin-dependent protein kinases, or Cdks. Positive feedback loops play a role by switching cells from low to high Cdk activity. The interaction between the two types of loops is evident in mitosis. While positive feedback initiates mitosis, a negative feedback loop promotes the inactivation of the cyclin-dependent kinases by the anaphase-promoting complex.
This example clearly shows the combined effects that positive and negative feedback loops have on cell-cycle regulation. Ultrasensitivity An "all-or-none" response to a stimulus is termed ultrasensitivity. In other words, a very small change in stimulus causes a very large change in response, producing a sigmoidal dose-response curve. An ultrasensitive response is described by the general equation V = S^n/(S^n + K_m), known as the Hill equation, when n, the Hill coefficient, is more than 1. The steepness of the sigmoidal curve depends on the value of n. A value of n = 1 produces a hyperbolic or Michaelian response. Ultrasensitivity is achieved in a variety of systems; a notable example is the cooperative binding of oxygen by hemoglobin. Since an ultrasensitive response is almost ‘digital’, it can be used to amplify a response to a stimulus or cause a decisive sharp transition (between ‘off’ and ‘on’ states). Ultrasensitivity plays a large role in cell-cycle regulation. For example, Cdk1 and Wee1 are mitotic regulators, and they are able to inactivate each other through inhibitory phosphorylation. This represents a double negative feedback loop in which both regulators inactivate each other. According to Kim et al. (2007), there must be an ultrasensitive element to generate a bistable response. It turns out that Wee1 has an ultrasensitive response to Cdk1, and this likely arises because of substrate competition among the various phosphorylation sites on Wee1. Bistability Bistability implies hysteresis, and hysteresis implies multistability. Multistability indicates the presence of two or more stable states for a given input. Therefore, bistability is the ability of a system to exist in two steady states. In other words, there is a range of stimulus values for which the response can have two steady-state values. Bistability is accompanied by hysteresis, which means that the system approaches one of the two steady states preferentially depending on its history. Bistability requires feedback as well as an ultrasensitive circuit element. Under the proper circumstances, positive and negative feedback loops can provide the conditions for bistability; for example, by having positive feedback coupled to an ultrasensitive response element within the circuit. A hysteretic bistable system can act as a robust reversible switch because it is harder for the system to transition between ‘on’ and ‘off’ states (compared to the equivalent monostable ultrasensitive response). The system could also be poised such that one of the transitions is physically unattainable; for example, no amount of reduction in the stimulus will return the system to the ‘off’ state once it is already in the ‘on’ state. This would form a robust irreversible switch. How to design a simple biological switch is described in a conference paper. There is no one-to-one correspondence between network topology and input-output behavior, since many networks have a similar input and output relationship. A network topology does not imply input or output, and similarly input or output does not imply network topology. It is for this reason that parameterization is very important for circuit function. If the dynamics of the input are comparable to or faster than the response of the system, the response may appear hysteretic. Three cell cycle switches are described below that achieve abrupt and/or irreversible transitions by exploiting some of the mechanisms described above.
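Before turning to those switches, the ultrasensitivity and hysteresis ideas above can be made concrete with a short Python sketch. This is an editorial illustration with arbitrary parameter values, not a model of any particular cell-cycle circuit from the cited studies: it evaluates a Hill-type response for two Hill coefficients, then relaxes a toy positive-feedback system to steady state from a low and a high starting condition, which is enough to expose history dependence in the bistable range.

# Illustrative sketch: Hill-type ultrasensitivity and hysteresis in a toy bistable switch.
# All parameter values are arbitrary assumptions chosen only to make the behaviour visible.

def hill(s, K=1.0, n=1):
    """Fractional response of a Hill-type element to stimulus s."""
    return s**n / (s**n + K**n)

# n = 1 gives a hyperbolic (Michaelian) curve; n = 4 gives a sigmoidal, switch-like curve.
for s in [0.25, 0.5, 1.0, 2.0, 4.0]:
    print(f"s={s:4.2f}  n=1: {hill(s, n=1):.3f}   n=4: {hill(s, n=4):.3f}")

def steady_state(stimulus, x0, alpha=4.0, K=1.0, n=4, decay=1.0, dt=0.01, steps=20000):
    """Relax dx/dt = stimulus + alpha*hill(x) - decay*x to steady state, starting from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (stimulus + alpha * hill(x, K, n) - decay * x)
    return x

# Probe the steady state from a low and from a high initial condition.
# For s = 0.0 and 0.2 the two disagree (bistability); for larger s only the high state remains.
stimuli = [0.0, 0.2, 0.4, 0.6, 0.8]
low_branch = [steady_state(s, x0=0.0) for s in stimuli]
high_branch = [steady_state(s, x0=5.0) for s in stimuli]
print("low start :", [round(v, 2) for v in low_branch])
print("high start:", [round(v, 2) for v in high_branch])

Only the qualitative behaviour matters here: the n = 4 curve is sigmoidal where the n = 1 curve is hyperbolic, and within the bistable range of stimulus the low and high starting conditions settle on different branches, which is the hysteresis described above.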
The G1/S switch The G1/S transition, more commonly known as the Start checkpoint in budding yeast (the restriction point in other organisms) regulates cell cycle commitment. At this checkpoint, cells either arrest before DNA replication (due to limiting nutrients or a pheromone signal), prolong G1 (size control), or begin replication and progress through the rest of the cell cycle. The G1/S regulatory network or regulon in budding yeast includes the G1 cyclins Cln1, Cln2 and Cln3, Cdc28 (Cdk1), the transcription factors SBF and MBF, and the transcriptional inhibitor Whi5. Cln3 interacts with Cdk1 to initiate the sequence of events by phosphorylating a large number of targets, including SBF, MBF and Whi5. Phosphorylation of Whi5 causes it to translocate out of the nucleus, preventing it from inhibiting SBF and MBF. Active SBF/MBF drive the G1/S transition by turning on the B-type cyclins and initiating DNA replication, bud formation and spindle body duplication. Moreover, SBF/MBF drives expression of Cln1 and Cln2, which can also interact with Cdk1 to promote phosphorylation of its targets. This G1/S switch was initially thought to function as a linear sequence of events starting with Cln3 and ending in S phase. However, the observation that any one of the Clns was sufficient to activate the regulon indicated that Cln1 and Cln2 might be able to engage positive feedback to activate their own transcription. This would result in a continuously accelerating cycle that could act as an irreversible bistable trigger. Skotheim et al. used single-cell measurements in budding yeast to show that this positive feedback does indeed occur. A small amount of Cln3 induces Cln1/2 expression and then the feedback loop takes over, leading to rapid and abrupt exit of Whi5 from the nucleus and consequently coherent expression of G1/S regulon genes. In the absence of coherent gene expression, cells take longer to exit G1 and a significant fraction even arrest before S phase, highlighting the importance of positive feedback in sharpening the G1/S switch. The G1/S cell cycle checkpoint controls the passage of eukaryotic cells from the first gap phase, G1, into the DNA synthesis phase, S. In this switch in mammalian cells, there are two cell cycle kinases that help to control the checkpoint: cell cycle kinases CDK4/6-cyclin D and CDK2-cyclin E. The transcription complex that includes Rb and E2F is important in controlling this checkpoint. In the first gap phase, the Rb-HDAC repressor complex binds to the E2F-DP1 transcription factors, therefore inhibiting the downstream transcription. The phosphorylation of Rb by CDK4/6 and CDK2 dissociates the Rb-repressor complex and serves as an on/off switch for the cell cycle. Once Rb is phosphorylated, the inhibition is released on the E2F transcriptional activity. This allows for the transcription of S phase genes encoding for proteins that amplify the G1 to S phase switch. Many different stimuli apply checkpoint controls including TGFb, DNA damage, contact inhibition, replicative senescence, and growth factor withdrawal. The first four act by inducing members of the INK4 or Kip/Cip families of cell cycle kinase inhibitors. TGFb inhibits the transcription of Cdc25A, a phosphatase that activates the cell cycle kinases, and growth factor withdrawal activates GSK3b, which phosphorylates cyclin D. This leads to its rapid ubiquitination. The G2/M switch G2 is commenced by E2F-mediated transcription of cyclin A, which forms the cyclin A-Cdk2 complex. 
In order to proceed into mitosis, the cyclin B-Cdk1 complex (first discovered as MPF or M-phase promoting factor; Cdk1 is also known as Cdc2 in fission yeast and Cdc28 in budding yeast) is activated by Cdc25, a protein phosphatase. As mitosis starts, the nuclear envelope disintegrates, chromosomes condense and become visible, and the cell prepares for division. Cyclin B-Cdk1 activation results in nuclear envelope breakdown, which is a characteristic of the initiation of mitosis. The cyclin B-Cdk1 complex participates in a regulatory circuit in which Cdk1 can phosphorylate and activate its activator, Cdc25 (positive feedback), and phosphorylate and inactivate its inactivator, the kinase Wee1 (double-negative feedback). This circuit could act as a bistable trigger with one stable steady state in G2 (Cdk1 and Cdc25 off, Wee1 on) and a second stable steady state in M phase (Cdk1 and Cdc25 active, Wee1 off). However, Wee1 is itself regulated by other factors, such as Cdr2. It was suggested and defended by Jin et al. in their series of experiments with the human HeLa cell line in 1998 that it is the spatial location of cyclin B within the cell that initiates mitosis. Known from previous experiments in both human cells and starfish oocytes, Jin et al. summarize that cyclin B1 is abundant in the cytoplasm during non-dividing phases of mitosis, but is identified in the nucleus, in complex with Cdk1, immediately before the cell enters mitosis. Other experimenters showed that cells would not divide if cyclin B remains in the cytoplasm. In order to further investigate the effect of spatial location of cyclin B on cell division and cycle control, Jin et al. tagged cyclin B with a nuclear localization signal (NLS) that would keep the cyclin within the nucleus. Initially, this NLS cyclin B did not induce the expected effect of accelerated mitotic entry. This result is due to the inhibition detailed in the figure below. Wee1, an inhibitor on the cyclin B-Cdk1 complex, is localized in the nucleus, and likely phosphorylating the NLS cyclin B, rendering it unable to perform as predicted. This postulation was confirmed when Jin et al. employed Cdc2AF, an unphosphorylatable mutant of Cdk1, and saw accelerated entry to cell division due to the nuclear localization of the cyclin B. Therefore, nuclear localization of cyclin B is necessary but not sufficient to trigger cell division. In investigation of cell cycle regulation, Jin et al. manipulated cells in order to evaluate the localization of cyclin B in cells with DNA damage. Through combination of DNA damage and nuclear localization of exogenous cyclin B, they were able to determine that cells would divide even with DNA damage if the cyclin B were forced to be expressed in the nucleus. This suggests that spatial localization of cyclin B may play a role as a checkpoint of mitosis. If the cells, under normal circumstances, don't divide when their genetic information is damaged, but will enter mitosis if endogenous cyclin B is expressed in the nucleus, it is likely that the translocation of the cyclin B to the cytoplasm is a mechanism that prevents immature mitotic entry. This hypothesis was further supported by Jin et al.’s analysis of cells arrested in G2 due to DNA damage. In these cells, Jin et al. observed high levels of cyclin B-Cdc2 complex activity in the cytoplasm. This is supporting evidence for the previously mentioned theory because it shows that the Cdc2 can activate the cyclin without immediate translocation to the nucleus. 
Additionally, the accumulation of cyclin B-Cdk1 complexes in the cytoplasm of cells that are not dividing due to DNA damage supports the theory that it is nuclear localization of cyclin B that initiates mitotic entry. To conclude, spatial localization of cyclin B plays a role in mitotic entry. Translocation of cyclin B from the cytoplasm to the nucleus is necessary for cell division, but not sufficient, as its inhibitors do not allow the cell to enter mitosis prematurely. In addition to the backup inhibition of the cyclin B-Cdk1 complex, premature cellular division is prevented by the translocation of the cyclin B itself. The cyclin B-Cdk1 complex will remain in the cytoplasm in cells with DNA damage, rather than translocate to the nucleus, keeping the cell from entering mitosis. The next question addressed by researchers in this field is by which specific mechanism this translocation is regulated. Santos et al. hypothesized that the translocation of cyclin B is regulated by a mechanism of positive feedback, similar to that which regulates the activation of the cyclin B-Cdk1 complex. They believed that the positive feedback loop involves the phosphorylation of the cyclin B and its translocation to the nucleus. To begin to investigate this, they first reconfirmed some of the results of the Jin et al. experiments, utilizing immunofluorescence to show cyclin B in the cytoplasm prior to division, and translocation to the nucleus to initiate mitosis, which they operationalized by timing events relative to nuclear envelope breakdown (NEB). Using nuclear cyclin that cannot be inactivated by Wee1 or Myt1, Santos et al. observed that active nuclear cyclin recruits more cyclin from the cytoplasm to be translocated to the nucleus. They confirmed this observation by employing a rapamycin treatment, iRap. iRap induces translocation of tagged cyclin B from the cytoplasm to the nucleus. Remarkably, Santos et al. saw that untagged cyclin B migrated with the cyclin B influenced by iRap. The untagged cyclin is insensitive to the treatment and would otherwise be expected to move independently of the treated cyclin. This supports the first part of the positive feedback loop, that nuclear localization of cyclin B, which leads to mitotic entry, promotes increased translocation of cytoplasmic cyclin B to the nucleus, further promoting the remaining cytoplasmic cyclin B to migrate to the nucleus, and so on. Santos et al. further hypothesize that phosphorylation of the cyclin B is another component of the positive feedback loop. They observed that the cyclin B naturally enters the nucleus before NEB. In contrast, mutated, unphosphorylatable cyclin B enters the nucleus during NEB. This is unexpected because it is characteristic of the cell cycle for the cyclin to translocate to the nucleus prior to NEB in order to induce cell cycle progression into mitotic division. Therefore, Santos et al. conclude that the phosphorylation of the cyclin B promotes the translocation to the nucleus. However, in addition, translocation to the nucleus promotes phosphorylation of the cyclin. It is noted by the authors that phosphorylation of cyclin B is nineteen times more favorable in the nucleus than in the cytoplasm, due to the smaller overall volume of the nucleus, allowing a faster phosphorylation rate. The increased translocation due to phosphorylation and increased phosphorylation due to translocation exemplify the positive feedback loop that resembles the one previously discovered, which activates the cyclin B-Cdk1 complex.
In conclusion, nuclear localization of cyclin B is necessary for cellular entry into mitosis. The translocation of the cyclin from the cytoplasm to the nucleus, which allows for cellular division, is regulated by a positive feedback loop. Active cyclin B translocates to the nucleus and promotes activation and translocation of additional units of cyclin residing in the nucleus. This phenomenon is enhanced when considering phosphorylation. Phosphorylation of cyclin B promotes translocation to the nucleus, and cyclin B in the nucleus is much more likely to be phosphorylated, so nuclear localization promotes cyclin B phosphorylation in return. Once cells are in mitosis, cyclin B-Cdk1 activates the anaphase-promoting complex (APC), which in turn inactivates cyclin B-Cdk1 by degrading cyclin B, eventually leading to exit from mitosis. Coupling the bistable Cdk1 response function to the negative feedback from the APC could generate what is known as a relaxation oscillator, with sharp spikes of Cdk1 activity triggering robust mitotic cycles. However, in a relaxation oscillator, the control parameter moves slowly relative to the system's response dynamics which may be an accurate representation of mitotic entry, but not necessarily mitotic exit. It is necessary to inactivate the cyclin B-Cdk1 complex in order to exit the mitotic stage of the cell cycle. The cells can then return to the first gap phase G1 and wait until the cycle proceeds yet again. In 2003 Pomerening et al. provided strong evidence for this hypothesis by demonstrating hysteresis and bistability in the activation of Cdk1 in the cytoplasmic extracts of Xenopus oocytes. They first demonstrated a discontinuous sharp response of Cdk1 to changing concentrations of non-destructible Cyclin B (to decouple the Cdk1 response network from APC-mediated negative feedback). However, such a response would be consistent with both a monostable, ultrasensitive transition and a bistable transition. To distinguish between these two possibilities, they measured the steady-state levels of active Cdk1 in response to changing cyclin levels, but in two separate experiments, one starting with an interphase extract and one starting with an extract already in mitosis. At intermediate concentrations of cyclin they found two steady-state concentrations of active Cdk1. Which of the two steady states was occupied depended on the history of the system, i.e.whether they started with interphase or mitotic extract, effectively demonstrating hysteresis and bistability. In the same year, Sha et al. independently reached the same conclusion revealing the hysteretic loop also using Xenopus laevis egg extracts. In this article, three predictions of the Novak-Tyson model were tested in an effort to conclude that hysteresis is the driving force for "cell-cycle transitions into and out of mitosis". The predictions of the Novak-Tyson model are generic to all saddle-node bifurcations. Saddle-node bifurcations are extremely useful bifurcations in an imperfect world because they help describe biological systems which are not perfect. The first prediction was that the threshold concentration of cyclin to enter mitosis is higher than the threshold concentration of cyclin to exit mitosis, and this was confirmed by supplementing cycling egg extracts with non-degradable cyclin B and measuring the activation and inactivation threshold after the addition of cycloheximide (CHX), which is a protein synthesis inhibitor. 
Furthermore, the second prediction of the Novak-Tyson model was also validated: unreplicated deoxyribonucleic acid, or DNA, increases the threshold concentration of cyclin that is required to enter mitosis. In order to arrive at this conclusion, cytostatic factor released extracts were supplemented with CHX, APH (a DNA polymerase inhibitor), or both, and non-degradable cyclin B was added. The third and last prediction that was tested and proven true in this article was that the rate of Cdc2 activation slows down near the activation threshold concentration of cyclin. These predictions and experiments demonstrate the toggle-like switching behavior that can be described by hysteresis in a dynamical system. Metaphase-anaphase switch In the transition from metaphase to anaphase, it is crucial that sister chromatids are properly and simultaneously separated to opposite ends of the cell. Separation of sister-chromatids is initially strongly inhibited to prevent premature separation in late mitosis, but this inhibition is relieved through destruction of the inhibitory elements by the anaphase-promoting complex (APC) once sister-chromatid bi-orientation is achieved. One of these inhibitory elements is securin, which prevents the destruction of cohesin, the complex that holds the sister-chromatids together, by binding the protease separase which targets Scc1, a subunit of the cohesin complex, for destruction. In this system, the phosphatase Cdc14 can remove an inhibitory phosphate from securin, thereby facilitating the destruction of securin by the APC, releasing separase. As shown by Uhlmann et al., during the attachment of chromosomes to the mitotic spindle the chromatids remain paired because cohesion between the sisters prevents separation. Cohesion is established during DNA replication and depends on cohesin, which is a multisubunit complex composed of Scc1, Scc3, Smc2, and Smc3. In yeast at the metaphase-to-anaphase transition, Scc1 dissociates from the chromosomes and the sister chromatids separate. This action is controlled by the Esp1 protein, which is tightly bound by the anaphase inhibitor Pds1 that is destroyed by the anaphase-promoting complex. In order to verify that Esp1 does play a role in regulating Scc1 chromosome association, cell strains were arrested in G1 with an alpha factor. These cells stayed in arrest during the development. Esp1-1 mutant cells were used and the experiment was repeated, and Scc1 successfully bound to the chromosomes and remained associated even after the synthesis was terminated. This was crucial in showing that with Esp1, Scc1 is hindered in its ability to become stably associated with chromosomes during G1, and Esp1 can in fact directly remove Scc1 from chromosomes. It has been shown by Holt et al. that separase activates Cdc14, which in turn acts on securin, thus creating a positive feedback loop that increases the sharpness of the metaphase to anaphase transition and coordination of sister-chromatid separation. Holt et al. probed the basis for the effect of positive feedback in securin phosphorylation by using mutant 'securin' strains of yeast, and testing how changes in the phosphoregulation of securin affects the synchrony of sister chromatid separation. Their results indicate that interfering with this positive securin-separase-cdc14 loop decreases sister chromatid separation synchrony. 
This positive feedback can hypothetically generate bistability in the transition to anaphase, causing the cell to make the irreversible decision to separate sister-chromatids. Mitotic exit Mitotic exit is an important transition point that signifies the end of mitosis and the onset of new G1 phase for a cell, and the cell needs to rely on specific control mechanisms to ensure that once it exits mitosis, it never returns to mitosis until it has gone through G1, S, and G2 phases and passed all the necessary checkpoints. Many factors including cyclins, cyclin-dependent kinases (CDKs), ubiquitin ligases, inhibitors of cyclin-dependent kinases, and reversible phosphorylations regulate mitotic exit to ensure that cell cycle events occur in correct order with the fewest errors. The end of mitosis is characterized by spindle breakdown, shortened kinetochore microtubules, and pronounced outgrowth of astral (non-kinetochore) microtubules. For a normal eukaryotic cell, mitotic exit is irreversible. Proteolytic degradation Many speculations were made with regard to the control mechanisms employed by a cell to promote the irreversibility of mitotic exit in a eukaryotic model organism, the budding yeast Saccharomyces cerevisiae. Proteolytic degradation of cell cycle regulators and corresponding effects on the levels of cyclin-dependent kinases were proposed as a mechanism that promotes eukaryotic cell cycle and metaphase-to-anaphase transition in particular. In this theory, anaphase promoting complex (APC), a class of ubiquitin ligase, facilitates degradation of mitotic cyclins (Clb2) and anaphase-inhibiting factors (PDS1, CUT2) to promote mitotic exit. APC ubiquitinates nine-amino acid motif known as the destruction box (D box) in the NH2-terminal domain of mitotic cyclins for degradation by proteasome. APC in association with Cdc20 (APC-Cdc20) ubiquitinates and targets mitotic cyclins (Clb2) for degradation at initial phase. Simultaneously, APC-Cdc20 mediates degradation of securins, which inhibit separases through binding, at anaphase onset. Released and active separase cleaves cohesin that held sister chromatids together, facilitating separation of sister chromatids and initiates mitotic exit by promoting release of Cdc14 from nucleolus. At later phase, downregulation of Cdk1 and activation of Cdc14, a Cdh1-activating phosphatase, promotes formation of APC in association with Cdh1 (APC-Cdh1) to degrade Clb2s. Cdc20 and Cdh1, which are the activators of APC, recruit substrates such as securin and B-type cyclins(Clb) for ubiquitination. Without Cdk1-Clb2 complexes to phosphorylate proteins that are involved in spindle dynamics such as Sli15, Ase1, and Ask1, spindle elongation and chromosomal segregation are promoted, facilitating mitotic exit. The importance of proteolytic degradation in eukaryotic cell cycle changed the view of cell division as a simple kinase cascade to a more complex process in which interactions among phosphorylation, ubiquitination, and proteolysis are necessary. However, experiments using budding yeast cells with cdc28-as1, an INM-PP1 (ATP analog)-sensitive Cdk allele, proved that destruction of B-type cyclins (Clb) is not necessary for triggering irreversible mitotic exit. 
Clb2 degradation did shorten the Cdk1-inhibition period required for triggering irreversible mitotic exit indicating that cyclin proteolysis contributes to the dynamic nature of the eukaryotic cell cycle due to slower timescale of its action but is unlikely to be the major determining factor in triggering irreversible cell cycle transitions. Sic1 levels Discoveries were made which indicated the importance of the level of the inhibitors of cyclin-dependent kinases in regulating eukaryotic cell cycle. In particular, the level of Sic1, a stoichiometric inhibitor of Clb-CDK complexes in budding yeast, was shown to be particularly important in irreversible G1-S transition by irreversibly activating S phase kinases. Sic1 level was shown to play a major role in triggering irreversible mitotic exit (M-G1 transition) as well as in G1-S transition. During mitosis, decreasing levels of Cdk1 leads to the activation of Cdc14, a phosphatase that counteracts Cdk1 via activation of Cdh1 and Swi5, a transcriptional activator of Sic1 proteins. While degradation of Sic1 to a certain low level triggered the onset of S phase, accumulation of Sic1 to a certain high level was required to trigger irreversible mitotic exit. Cdk1-inhibitors could induce mitotic exit even when degradation of B-type cyclins was blocked by expression of non-degradable Clbs or proteasome inhibitors. However, sister chromatids failed to segregate, and cells reverted to mitosis once the inhibitors were washed away, indicating that a threshold level of the inhibitors needs to be achieved to trigger irreversible mitotic exit independently of cyclin degradations. Despite different thresholds of Sic1 level that are required to trigger mitotic exit compared to G1-S transition, the level of Sic1 was shown to play a key role in regulating eukaryotic cell cycle by inhibiting the activity of CDKs. Dynamical systems approach Because eukaryotic cell cycle involves a variety of proteins and regulatory interactions, dynamical systems approach can be taken to simplify a complex biological circuit into a general framework for better analysis. Among the four possible input/output relationships, the relationship between Sic1 level and mitotic exit seems to show the characteristics of an irreversible bistable switch, driven by feedback between APC-Cdh1, Sic1, and Clb2-Cdk1. Bistability is known to control biological functions such as cell cycle control and cellular differentiation and play a key role in many cellular regulatory networks. Bistable input/output relationship is characterized by two stable states with two bifurcation points. Multiple outputs are possible for one specific input in the region of bistability, marked by two bifurcation points. In addition, the bistable relationship displays hysteresis: the final state/output depends on the history of the input as well as the current value of input because the system has a memory. One bifurcation point has a negative control parameter value (the bifurcation point is on the other side of the axis), resulting in disconnection between the two stable states and irreversibility of the transition from one state to the other. With regard to mitotic exit, the two stable states are defined by mitosis and G1 phase. Once Sic1 level (input) accumulates beyond the threshold, irreversible transition occurs from mitosis (stable state I) to G1 phase (stable state II). In the imperfect environment, the only bifurcation that remains intact is saddle-node bifurcation. 
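As a minimal worked example of the saddle-node behaviour invoked here (the standard textbook normal form, offered as an editorial illustration rather than a model of the Sic1 network itself), consider the one-dimensional system

\[
\frac{dx}{dt} = r + x^{2},
\]

which for \(r < 0\) has a stable fixed point at \(x = -\sqrt{-r}\) and an unstable one at \(x = +\sqrt{-r}\); as the control parameter \(r\) is increased through zero the two fixed points collide and annihilate, forcing the state to jump to whatever other attractor the full system provides. Two such folds, encountered at different values of the control parameter, produce exactly the hysteretic, history-dependent switching that the Sic1 discussion here attributes to the M-G1 transition.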
Saddle-node bifurcation does not break down (saddle-node is the expected generic behavior), while transcritical and pitchfork bifurcations break down in the presence of imperfections. Thus, the only one-dimensional bifurcation that can exist in the imperfect biological world is the saddle-node bifurcation. The bistable relation between the M-G1 transition and the Sic1 level can be represented as a diagram of two saddle-node bifurcations in which the system's behavior changes qualitatively with a small change in the control parameter, the amount of Sic1. Systems-level feedback Because the behavior of the cell cycle critically depends on the amount of Sic1 at the M-G1 transition state, the amount of Sic1 is tightly regulated by systems-level feedbacks. Because Cdk1-Clb2 inhibits Sic1 by phosphorylating Sic1 and making Sic1 available for degradation via ubiquitylation, APC-Cdh1-dependent degradation of Cdk1-Clb2 not only decreases the level of available Cdk1-Clb2 complexes but also increases the level of Sic1, which in turn further inhibits the function of Cdk1-Clb2. This activation of the double negative feedback loop is initiated by APC-Cdc20-dependent degradation of Cdk1-Clb2 and release of Cdc14 from the nucleolar protein Net1/Cfi1. The FEAR (Cdc14 early anaphase release) pathway facilitates Clb2-Cdk1-dependent phosphorylation of Net1, which transiently releases Cdc14 from Net1. The released Cdc14 and Clb2-Cdk1 complexes then associate with the spindle, which activates the mitotic exit network (MEN). MEN allows sustained release of Cdc14 from the nucleolus, and Cdc14 counters the activity of Clb2-Cdk1 by activating Cdh1 and stabilizing Sic1 through activation of the Sic1 transcriptional activator Swi5. Sic1 positively regulates itself by inhibiting Cdk1-Clb2 to release inhibition of Swi5, and Cdh1 also positively regulates itself by inhibiting Clb2-Cdk1 to release inhibition of MEN, which can activate Cdc14 and subsequently Cdh1 itself. The double-negative feedback loop, formed by APC-Cdh1 and Sic1, is required to maintain low Clb2-Cdk1 activity because Clb2 auto-activates its own synthesis by activating the transcription factor complex Fkh2–Mcm1–Ndd1. Implications The eukaryotic cell cycle consists of various checkpoints and feedback loops to ensure faithful and successful cell division. During mitosis, for example, when duplicated chromosomes are improperly attached to the mitotic spindle, spindle assembly checkpoint (SAC) proteins including Mad and Bub inhibit APC-Cdc20 to delay entry into anaphase and the degradation of B-type cyclins. In addition, when mitotic spindles are misaligned, MEN and subsequently Cdc14 are inhibited in a Bub2- and Bfa1-dependent manner to prevent degradation of mitotic cyclins and anaphase entry. Sic1 is a good example demonstrating how systems-level feedbacks interact to sense the environmental conditions and trigger cell cycle transitions. Even though the actual M-G1 transition is vastly complex, with numerous proteins and regulations involved, the dynamical systems approach allows simplification of this complex system into a bistable input/output relation with two saddle-node bifurcations, in which the output (mitotic exit) depends on a critical concentration of Sic1. Using one-dimensional analysis, it might be possible to explain many of the irreversible transition points in the eukaryotic cell cycle that are governed by systems-level control and feedback.
Other examples of irreversible transition points include Start (irreversible commitment to a new cell division cycle) that can be explained by irreversible bistable switch whose control parameter is tightly regulated by the systemic feedbacks involving Cln2, Whi5, and SBF. Relevant information Cdc25 Cell biology Cell cycle Cell cycle checkpoint Cell cycle mathematical model Mitosis Spindle checkpoint References External links Cells alive Cell Cycle and Cytokinesis - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology Cell Cycle Portal CCO The Cell-Cycle Ontology Science Creative Quarterly's overview of the cell cycle Cell cycle regulators Cellular processes
Biochemical switches in the cell cycle
[ "Chemistry", "Biology" ]
8,062
[ "Cellular processes", "Cell cycle regulators", "Signal transduction" ]
25,369,256
https://en.wikipedia.org/wiki/Peres%20metric
In mathematical physics, the Peres metric is defined by a proper time element that involves an arbitrary function f. If f is a harmonic function with respect to x and y, then the corresponding Peres metric satisfies the Einstein field equations in vacuum. Such a metric is often studied in the context of gravitational waves. The metric is named for Israeli physicist Asher Peres, who first defined it in 1959. See also Introduction to the mathematics of general relativity Stress–energy tensor Metric tensor (general relativity) References Metric tensors Spacetime Coordinate charts in general relativity General relativity Gravity
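The line element referred to at the start of this entry appears to have been lost in extraction; the form usually quoted for the Peres metric (reconstructed here from standard references as an editorial note, assuming the (+, −, −, −) signature convention) is

\[
ds^{2} = dt^{2} - dx^{2} - dy^{2} - dz^{2} - 2\, f(t + z,\, x,\, y)\,(dt + dz)^{2},
\]

and the vacuum condition stated above, that \(f\) be harmonic in \(x\) and \(y\), is the two-dimensional Laplace equation \(\partial^{2} f/\partial x^{2} + \partial^{2} f/\partial y^{2} = 0\).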
Peres metric
[ "Physics", "Mathematics", "Engineering" ]
115
[ "Tensors", "Vector spaces", "Coordinate systems", "Space (mathematics)", "General relativity", "Metric tensors", "Relativity stubs", "Theory of relativity", "Spacetime", "Coordinate charts in general relativity" ]
25,372,042
https://en.wikipedia.org/wiki/Linked%20timestamping
Linked timestamping is a type of trusted timestamping where issued time-stamps are related to each other. Description Linked timestamping creates time-stamp tokens which are dependent on each other, entangled in some authenticated data structure. Later modification of the issued time-stamps would invalidate this structure. The temporal order of issued time-stamps is also protected by this data structure, making backdating of the issued time-stamps impossible, even by the issuing server itself. The top of the authenticated data structure is generally published in some hard-to-modify and widely witnessed medium, like a printed newspaper or a public blockchain. There are no (long-term) private keys in use, avoiding PKI-related risks. Suitable candidates for the authenticated data structure include: Linear hash chain Merkle tree (binary hash tree) Skip list In the simplest linear hash chain-based time-stamping scheme, each issued token incorporates a hash of the previously issued token, so the tokens form a single chain ordered in time. The linking-based time-stamping authority (TSA) usually performs the following distinct functions: Aggregation For increased scalability the TSA might group together time-stamping requests which arrive within a short time-frame. These requests are aggregated together without retaining their temporal order and then assigned the same time value. Aggregation creates a cryptographic connection between all involved requests; the authenticating aggregate value will be used as input for the linking operation. Linking Linking creates a verifiable and ordered cryptographic link between the current and already issued time-stamp tokens. Publishing The TSA periodically publishes some links, so that all previously issued time-stamp tokens depend on the published link and so that it is practically impossible to forge the published values. By publishing widely witnessed links, the TSA creates unforgeable verification points for validating all previously issued time-stamps. Security Linked timestamping is inherently more secure than the usual public-key-signature-based time-stamping. All subsequent time-stamps "seal" previously issued ones - the hash chain (or other authenticated dictionary in use) can be built in only one way; modifying issued time-stamps is nearly as hard as finding a preimage for the cryptographic hash function used. Continuity of operation is observable by users; periodic publications in widely witnessed media provide extra transparency. Tampering with absolute time values could be detected by users, whose time-stamps are relatively comparable by system design. Absence of secret keys increases system trustworthiness. There are no keys to leak, and hash algorithms are considered more future-proof than modular-arithmetic-based algorithms, e.g. RSA. Linked timestamping scales well - hashing is much faster than public key cryptography. There is no need for specific cryptographic hardware with its limitations. The common technology for guaranteeing long-term attestation value of the issued time-stamps (and digitally signed data) is periodic over-time-stamping of the time-stamp token. Because there are no key-related risks, and because a reasonably chosen hash function has a plausible safety margin, this over-time-stamping period for a hash-linked token could be an order of magnitude longer than for a public-key-signed token. Research Foundations Stuart Haber and W. Scott Stornetta proposed in 1990 to link issued time-stamps together into a linear hash chain, using a collision-resistant hash function. The main rationale was to diminish TSA trust requirements.
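The linear linking idea credited to Haber and Stornetta can be sketched in a few lines of Python. This is a toy editorial illustration only; the token layout, encoding and genesis value are arbitrary assumptions, and it is neither their actual protocol nor any production TSA.

import hashlib, time

def h(data: bytes) -> bytes:
    # Collision-resistant hash used for linking; SHA-256 as a stand-in.
    return hashlib.sha256(data).digest()

class LinearLinkingTSA:
    """Toy time-stamping authority that links each issued token to the previous one."""
    def __init__(self):
        self.prev_link = b"\x00" * 32   # arbitrary genesis value
        self.tokens = []

    def issue(self, request_hash: bytes) -> dict:
        t = int(time.time())
        link = h(self.prev_link + request_hash + t.to_bytes(8, "big"))
        token = {"time": t, "request": request_hash, "prev": self.prev_link, "link": link}
        self.prev_link = link
        self.tokens.append(token)
        return token

def verify_chain(tokens, genesis=b"\x00" * 32) -> bool:
    # Recompute every link; any modification or reordering breaks verification.
    prev = genesis
    for tok in tokens:
        expected = h(prev + tok["request"] + tok["time"].to_bytes(8, "big"))
        if tok["prev"] != prev or tok["link"] != expected:
            return False
        prev = tok["link"]
    return True

tsa = LinearLinkingTSA()
for doc in [b"contract.pdf", b"invoice.csv", b"report.txt"]:
    tsa.issue(h(doc))
print(verify_chain(tsa.tokens))          # True
tsa.tokens[1]["time"] -= 3600            # attempt to backdate the second token
print(verify_chain(tsa.tokens))          # False

Periodically publishing the latest link value in a widely witnessed medium, as described under Publishing above, turns that value into a verification point for every earlier token in the chain.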
Tree-like schemes and operating in rounds were proposed by Benaloh and de Mare in 1991 and by Bayer, Haber and Stornetta in 1992. Benaloh and de Mare constructed a one-way accumulator in 1994 and proposed its use in time-stamping. When used for aggregation, a one-way accumulator requires only one constant-time computation for round membership verification. Surety started the first commercial linked timestamping service in January 1995. The linking scheme is described and its security is analyzed in a subsequent article by Haber and Stornetta. Buldas et al. continued with further optimization and formal analysis of binary tree and threaded tree based schemes. A skip-list based time-stamping system was implemented in 2005; the related algorithms are quite efficient. Provable security A security proof for hash-function based time-stamping schemes was presented by Buldas and Saarepera in 2004. There is an explicit upper bound for the number of time stamps issued during the aggregation period; it is suggested that it is probably impossible to prove the security without this explicit bound - the so-called black-box reductions will fail in this task. Considering that all known practically relevant and efficient security proofs are black-box, this negative result is quite strong. Next, in 2005 it was shown that bounded time-stamping schemes with a trusted audit party (who periodically reviews the list of all time-stamps issued during an aggregation period) can be made universally composable - they remain secure in arbitrary environments (compositions with other protocols and other instances of the time-stamping protocol itself). Buldas and Laur showed in 2007 that bounded time-stamping schemes are secure in a very strong sense - they satisfy the so-called "knowledge-binding" condition. The security guarantee offered by Buldas and Saarepera in 2004 is improved by diminishing the security loss coefficient. The hash functions used in the secure time-stamping schemes do not necessarily have to be collision-resistant or even one-way; secure time-stamping schemes are probably possible even in the presence of a universal collision-finding algorithm (i.e. a universal attacking program that is able to find collisions for any hash function). This suggests that it is possible to find even stronger proofs based on some other properties of the hash functions. In the aggregation scheme described above, the hash tree based time-stamping system works in successive rounds, with one aggregation tree per round. The capacity of the system in each round is determined by the tree size (at most 2^d requests per round, where d denotes the binary tree depth). Current security proofs work on the assumption that there is a hard limit on the aggregation tree size, possibly enforced by the subtree length restriction. Standards ISO 18014 part 3 covers 'Mechanisms producing linked tokens'. American National Standard for Financial Services, "Trusted Timestamp Management and Security" (ANSI ASC X9.95 Standard) from June 2005 covers linking-based and hybrid time-stamping schemes. There is no IETF RFC or standard draft about linking based time-stamping. RFC 4998 (Evidence Record Syntax) encompasses hash tree and time-stamp as an integrity guarantee for long-term archiving. References External links "Series of mini-lectures about cryptographic hash functions"; includes application in time-stamping and provable security; by A. Buldas, 2011. Computer security Time
Linked timestamping
[ "Physics", "Mathematics" ]
1,402
[ "Physical quantities", "Time", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
23,995,819
https://en.wikipedia.org/wiki/McStas
McStas is free and open-source (GNU General Public License) software simulator for neutron scattering experiments. McStas is an abbreviation for Monte carlo Simulation of triple axis spectrometers, but the software can be used to simulate all types of neutron scattering instruments. The software is based on both Monte Carlo methods and ray tracing. A special compiler translates a domain-specific language describing the neutron instrument geometry and component definitions (written in C) to a stand-alone C code. The basics of McStas was written in 1997 at Risø for simulation of their neutron experiments, that were based at the DR3 reactor that was shut down in year 2000. After the fusion of Risø with the Technical University of Denmark, McStas is currently developed at the Physics department of DTU and Institut Laue-Langevin, with involvement from the Niels Bohr Institute and Paul Scherrer Institute. The Copenhagen-based Data Management and Software Centre of the European Spallation Source is also expected to become a partner since many of the future instruments are being simulated using McStas. McXtrace, an equivalent simulation package using X-rays instead of neutrons, started being developed in 2009 and it is now freely available. Official partner sites are The Physics department at DTU The European Spallation Source The Institut Laue-Langevin The Niels Bohr Institute The Paul Scherrer Institute See also Neutron-acceptance diagram shading (NADS) VITESS, another neutron raytracing software package References External links Neutron scattering
McStas
[ "Physics", "Chemistry" ]
315
[ "Neutron scattering", "Scattering stubs", "Scattering", "Nuclear and atomic physics stubs", "Nuclear physics" ]
4,894,035
https://en.wikipedia.org/wiki/Gas%20dynamic%20laser
A gas dynamic laser (GDL) is a laser based on differences in relaxation velocities of molecular vibrational states. The lasing medium gas has such properties that an energetically lower vibrational state relaxes faster than a higher vibrational state, and so a population inversion is achieved in a particular time. It was invented by Edward Gerry and Arthur Kantrowitz at Avco Everett Research Laboratory in 1966. Pure gas dynamic lasers usually use a combustion chamber, supersonic expansion nozzle, and CO2, in a mixture with nitrogen or helium, as the laser medium. Gas dynamic lasers can be pumped by combustion or adiabatic expansion of gas. Any hot and compressed gas with appropriate vibrational structure could be utilized. The explosively pumped gas dynamic laser is a version of GDL pumped by expansion of explosion products. Hexanitrobenzene and/or tetranitromethane with metal powder is the preferred explosive. This device could have very high pulsed peak power output suitable for laser weapons. Function Hot compressed gas is generated. Gas expands through subsonic or supersonic expansion nozzle, the temperature of the gas becomes lower and according to Maxwell–Boltzmann distribution the gas isn't in thermodynamic equilibrium until the vibrational states relax. The gas flows through the tube of a particular length for a particular time. In this time lower vibrational state does relax but higher vibrational state doesn't. Thus population inversion is achieved. Gas flows through mirror area where stimulated emission takes place. Gas returns to equilibrium and becomes warm. It must be removed from the laser cavity or it will interfere with the thermodynamics and vibrational state relaxation of the freshly expanded gas. Application Almost any chemical laser uses gas-dynamic processes to increase its efficiency. High energy efficiency (as high as 30%) and very high power output make GDL suitable for some (especially military) applications. See also Gas laser Chemical laser Carbon-dioxide laser References Laser History - Gasdynamic LEOT Laser Tutorial - Course 3: Laser Technology - Module 9: CO2 Laser Systems Laser History - Airborne Star Wars Laser United States Patent 4099142 : Condensed explosive gas dynamic laser Chemical lasers Gas lasers Military lasers
Gas dynamic laser
[ "Chemistry" ]
456
[ "Chemical reaction engineering", "Chemical lasers" ]
4,894,039
https://en.wikipedia.org/wiki/Boundary%20layer%20thickness
This page describes some of the parameters used to characterize the thickness and shape of boundary layers formed by fluid flowing along a solid surface. The defining characteristic of boundary layer flow is that at the solid walls, the fluid's velocity is reduced to zero. The boundary layer refers to the thin transition layer between the wall and the bulk fluid flow. The boundary layer concept was originally developed by Ludwig Prandtl and is broadly classified into two types, bounded and unbounded. The differentiating property between bounded and unbounded boundary layers is whether the boundary layer is being substantially influenced by more than one wall. Each of the main types has a laminar, transitional, and turbulent sub-type. The two types of boundary layers use similar methods to describe the thickness and shape of the transition region with a couple of exceptions detailed in the Unbounded Boundary Layer Section. The characterizations detailed below consider steady flow but is easily extended to unsteady flow. The bounded boundary layer description Bounded boundary layers is a name used to designate fluid flow along an interior wall such that the other interior walls induce a pressure effect on the fluid flow along the wall under consideration. The defining characteristic of this type of boundary layer is that the velocity profile normal to the wall often smoothly asymptotes to a constant velocity value denoted as ue(x). The bounded boundary layer concept is depicted for steady flow entering the lower half of a thin flat plate 2-D channel of height H in Figure 1 (the flow and the plate extends in the positive/negative direction perpendicular to the x-y-plane). Examples of this type of boundary layer flow occur for fluid flow through most pipes, channels, and wind tunnels. The 2-D channel depicted in Figure 1 is stationary with fluid flowing along the interior wall with time-averaged velocity u(x,y) where x is the flow direction and y is the normal to the wall. The H/2 dashed line is added to acknowledge that this is an interior pipe or channel flow situation and that there is a top wall located above the pictured lower wall. Figure 1 depicts flow behavior for H values that are larger than the maximum boundary layer thickness but less than thickness at which the flow starts to behave as an exterior flow. If the wall-to-wall distance, H, is less than the viscous boundary layer thickness then the velocity profile, defined as u(x,y) at x for all y, takes on a parabolic profile in the y-direction and the boundary layer thickness is just H/2. At the solid walls of the plate the fluid has zero velocity (no-slip boundary condition), but as you move away from the wall, the velocity of the flow increases without peaking, and then approaches a constant mean velocity ue(x). This asymptotic velocity may or may not change along the wall depending on the wall geometry. The point where the velocity profile essentially reaches the asymptotic velocity is the boundary layer thickness. The boundary layer thickness is depicted as the curved dashed line originating at the channel entrance in Figure 1. It is impossible to define an exact location at which the velocity profile reaches the asymptotic velocity. As a result, a number of boundary layer thickness parameters, generally denoted as , are used to describe characteristic thickness scales in the boundary layer region. Also of interest is the velocity profile shape which is useful in differentiating laminar from turbulent boundary layer flows. 
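Before turning to the profile shape, it is convenient to collect the standard incompressible forms of the most common of these parameters, each of which is discussed in detail in the sections that follow (textbook notation; the flat-plate constants are approximate, and the upper integration limit is the channel half-height H/2 for the bounded case, or far into the free stream for a flat plate):

$$u(x,\delta_{99}) = 0.99\,u_e(x), \qquad \delta^*(x)=\int_0^{H/2}\left(1-\frac{u(x,y)}{u_e(x)}\right)dy, \qquad \theta(x)=\int_0^{H/2}\frac{u(x,y)}{u_e(x)}\left(1-\frac{u(x,y)}{u_e(x)}\right)dy, \qquad H_{12}=\frac{\delta^*}{\theta}.$$

For the laminar Blasius flat-plate solution these give approximately

$$\delta_{99}\approx\frac{5.0\,x}{\sqrt{Re_x}}, \qquad \delta^*\approx\frac{1.72\,x}{\sqrt{Re_x}}, \qquad \theta\approx\frac{0.664\,x}{\sqrt{Re_x}}, \qquad Re_x=\frac{u_0\,x}{\nu},$$

so that $H_{12}\approx 2.59$, the laminar value quoted in the Shape factor section; the corresponding rough estimate for the geometrically similar turbulent flat-plate case is $\delta\approx 0.37\,x/Re_x^{1/5}$.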
The profile shape refers to the y-behavior of the velocity profile as it transitions to ue(x). The 99% boundary layer thickness The boundary layer thickness, , is the distance normal to the wall to a point where the flow velocity has essentially reached the 'asymptotic' velocity, . Prior to the development of the Moment Method, the lack of an obvious method of defining the boundary layer thickness led much of the flow community in the later half of the 1900s to adopt the location , denoted as and given by as the boundary layer thickness. For laminar boundary layer flows along a flat plate channel that behave according to the Blasius solution conditions, the value is closely approximated by where is constant, and where is the Reynolds number, is the freestream velocity, is the asymptotic velocity, is the distance downstream from the start of the boundary layer, and is the kinematic viscosity. For turbulent boundary layers along a flat plate channel, the boundary layer thickness, , is given by This turbulent boundary layer thickness formula assumes 1) the flow is turbulent right from the start of the boundary layer and 2) the turbulent boundary layer behaves in a geometrically similar manner (i.e. the velocity profiles are geometrically similar along with the flow in the x-direction, differing only by scaling parameters in and ). Neither one of these assumptions is true for the general turbulent boundary layer case so care must be exercised in applying this formula. Displacement thickness The displacement thickness, or , is the normal distance to a reference plane representing the lower edge of a hypothetical inviscid fluid of uniform velocity that has the same flow rate as occurs in the real fluid with the boundary layer. The displacement thickness essentially modifies the shape of a body immersed in a fluid to allow, in principle, an inviscid solution if the displacement thicknesses were known a priori. The definition of the displacement thickness for compressible flow, based on mass flow rate, is where is the density. For incompressible flow, the density is constant so the definition based on volumetric flow rate becomes For turbulent boundary layer calculations, the time-averaged density and velocity are used. For laminar boundary layer flows along a flat plate that behave according to the Blasius solution conditions, the displacement thickness is where is constant. The displacement thickness is not directly related to the boundary layer thickness but is given approximately as . It has a prominent role in calculating the Shape Factor. It also shows up in various formulas in the Moment Method. Momentum thickness The momentum thickness, or , is the normal distance to a reference plane representing the lower edge of a hypothetical inviscid fluid of uniform velocity that has the same momentum flow rate as occurs in the real fluid with the boundary layer. The momentum thickness definition for compressible flow based on the mass flow rate is For incompressible flow, the density is constant so that the definition based on volumetric flow rate becomes where are the density and is the 'asymptotic' velocity. For turbulent boundary layer calculations, the time averaged density and velocity are used. For laminar boundary layer flows along a flat plate that behave according to the Blasius solution conditions, the momentum thickness is where is constant. The momentum thickness is not directly related to the boundary layer thickness but is given approximately as . 
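As a concrete illustration, the thickness measures just introduced and the shape factor can be evaluated by numerical integration of a sampled velocity profile. The sketch below (Python) uses a 1/7-power-law profile as a stand-in for a turbulent profile; the profile choice and the numerical values are illustrative assumptions:

```python
import numpy as np

def thickness_parameters(y, u, u_e):
    """Integral thickness measures from a sampled profile u(y) with edge velocity u_e."""
    delta_99 = y[np.argmax(u >= 0.99 * u_e)]                 # 99% thickness
    delta_star = np.trapz(1.0 - u / u_e, y)                  # displacement thickness
    theta = np.trapz((u / u_e) * (1.0 - u / u_e), y)         # momentum thickness
    return delta_99, delta_star, theta, delta_star / theta   # last value: shape factor

# Example: 1/7-power-law profile, a common rough model of a turbulent layer.
delta = 0.01                              # assumed boundary layer thickness [m]
u_e = 10.0                                # edge velocity [m/s]
y = np.linspace(1e-6, 2.0 * delta, 2000)  # wall-normal coordinate [m]
u = u_e * np.minimum(y / delta, 1.0) ** (1.0 / 7.0)

d99, d_star, th, H12 = thickness_parameters(y, u, u_e)
print(f"delta_99 = {d99:.4g} m, delta* = {d_star:.4g} m, theta = {th:.4g} m, H12 = {H12:.3f}")
```

For this profile the shape factor comes out near 1.29, in the turbulent range quoted in the Shape factor section below; the smallest of the three computed lengths is the momentum thickness.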
It has a prominent role in calculating the Shape Factor. A related parameter called the Energy Thickness is sometimes mentioned in reference to turbulent energy distribution but is rarely used. Shape factor A shape factor is used in boundary layer flow to help to differentiate laminar and turbulent flow. It also shows up in various approximate treatments of the boundary layer including the Thwaites method for laminar flows. The formal definition is given by where is the shape factor, is the displacement thickness and is the momentum thickness. Conventionally, = 2.59 (Blasius boundary layer) is typical of laminar flows, while = 1.3 - 1.4 is typical of turbulent flows near the laminar-turbulent transition. For turbulent flows near separation, 2.7. The dividing line defining laminar-transitional and transitional-turbulent values is dependent on a number of factors so it is not always a definitive parameter for differentiating laminar, transitional, or turbulent boundary layers. Moment method A relatively new method for describing the thickness and shape of the boundary layer uses the mathematical moment methodology which is commonly used to characterize statistical probability functions. The boundary layer moment method was developed from the observation that the plot of the second derivative of the Blasius boundary layer for laminar flow over a plate looks very much like a Gaussian distribution curve. The implication of the second derivative Gaussian-like shape is that the velocity profile shape for laminar flow is closely approximated as a twice integrated Gaussian function. The moment method is based on simple integrals of the velocity profile that use the entire profile, not just a few tail region data points as does . The moment method introduces four new parameters that help describe the thickness and shape of the boundary layer. These four parameters are the mean location, the boundary layer width, the velocity profile skewness, and the velocity profile excess. The skewness and excess are true shape parameters as opposed to the simple ratio parameters like the H12. Applying the moment method to the first and second derivatives of the velocity profile generates additional parameters that, for example, determine the location, shape, and thickness of the viscous forces in a turbulent boundary layer. A unique property of the moment method parameters is that it is possible to prove that many of these velocity thickness parameters are also similarity scaling parameters. That is, if similarity is present in a set of velocity profiles, then these thickness parameters must also be similarity length scaling parameters. It is straightforward to cast the properly scaled velocity profile and its first two derivatives into suitable integral kernels. The central moments based on the scaled velocity profiles are defined as where is the displacement thickness and the mean location, is given by There are some advantages to also include descriptions of moments of the boundary layer profile derivatives with respect to the height above the wall. Consider the first derivative velocity profile central moments given by where the first derivative mean location is the displacement thickness . Finally the second derivative velocity profile central moments are given by where the second derivative mean location, , is given by where is the viscosity and where is the wall shear stress. The mean location, , for this case is formally defined as ue(x) divided by the area under the second derivative curve. 
The above equations work for both laminar and turbulent boundary layers as long as the time-averaged velocity is used for the turbulent case. With the moments and the mean locations defined, the boundary layer thickness and shape can be described in terms of the boundary layer widths (variance), skewnesses, and excesses (excess kurtosis). Experimentally, it is found that the thickness defined as where , tracks the very well for turbulent boundary layer flows. Taking a cue from the boundary layer momentum balance equations, the second derivative boundary layer moments, track the thickness and shape of that portion of the boundary layer where the viscous forces are significant. Hence the moment method makes it possible to track and quantify the laminar boundary layer and the inner viscous region of turbulent boundary layers using moments whereas the boundary layer thickness and shape of the total turbulent boundary layer is tracked using and moments. Calculation of the 2nd derivative moments can be problematic since under certain conditions the second derivatives can become positive in the very near-wall region (in general, it is negative). This appears to be the case for interior flow with an adverse pressure gradient (APG). Integrand values do not change sign in standard probability framework so the application of the moment methodology to the second derivative case will result in biased moment measures. A simple fix is to exclude the problematic values and define a new set of moments for a truncated second derivative profile starting at the second derivative minimum. If the width, , is calculated using the minimum as the mean location, then the viscous boundary layer thickness, defined as the point where the second derivative profile becomes negligible above the wall, can be properly identified with this modified approach. For derivative moments whose integrands do not change sign, the moments can be calculated without the need to take derivatives by using integration by parts to reduce the moments to simply integrals based on the displacement thickness kernel given by For example, the second derivative value is and the first derivative skewness, , can be calculated as This parameter was shown to track the boundary layer shape changes that accompany the laminar to turbulent boundary layer transition. Numerical errors encountered in calculating the moments, especially the higher-order moments, are a serious concern. Small experimental or numerical errors can cause the nominally free stream portion of the integrands to blow up. There are certain numerical calculation recommendations that can be followed to mitigate these errors. The unbounded boundary layer description Unbounded boundary layers, as the name implies, are typically exterior boundary layer flows along walls (and some very large gap interior flows in channels and pipes). Although not widely appreciated, the defining characteristic of this type of flow is that the velocity profile goes through a peak near the viscous boundary layer edge and then slowly asymptotes to the free stream velocity u0. An example of this type of boundary layer flow is near-wall air flow over a wing in flight. The unbounded boundary layer concept is depicted for steady laminar flow along a flat plate in Figure 2. The lower dashed curve represents the location of the maximum velocity umax(x) and the upper dashed curve represents the location where u(x,y) essentially becomes u0, i.e. the boundary layer thickness location. 
For the very thin flat plate case, the peak is small resulting in the flat plate exterior boundary layer closely resembling the interior flow flat channel case. This has led much of the fluid flow literature to incorrectly treat the bounded and unbounded cases as equivalent. The problem with this equivalence thinking is that the maximum peak value can easily exceed 10-15% of u0 for flow along a wing in flight. The differences between the bounded and unbounded boundary layer was explored in a series of Air Force Reports. The unbounded boundary layer peak means that some of the velocity profile thickness and shape parameters that are used for interior bounded boundary layer flows need to be revised for this case. Among other differences, the laminar unbounded boundary layer case includes viscous and inertial dominated regions similar to turbulent boundary layer flows. Moment method For exterior unbounded boundary layer flows, it is necessary to modify the moment equations to achieve the desired goal of estimating the various boundary layer thickness locations. The peaking behavior of the velocity profile means the area normalization of the moments becomes problematic. To avoid this problem, it has been suggested that the unbounded boundary layer be divided into viscous and inertial regions and that the boundary layer thickness can then be calculated using separate moment integrals specific to that region. That is, the inner viscous region of laminar and turbulent unbounded boundary layer regions can be tracked using modified moments whereas the inertial boundary layer thickness can be tracked using modified and moments. The slow rate at which the peak asymptotes to the free stream velocity means that the calculated boundary layer thickness values are typically much larger than the bounded boundary layer case. The modified and moments for the inertial boundary layer region are created by: 1) replacing the lower integral limit by the location of the velocity peak designated by , 2) changing the upper integral limit to h where h is located deep in the free stream, and 3) changing the velocity scale from to . The displacement thickness in the modified moments must be calculated using the same integral limits as the modified moment integrals. By taking as the mean location, the modified 3-sigma boundary layer thickness becomes where is the modified width. The modified second derivative moments can be calculated using the same integrals as defined above but with replacing H/2 for the upper integral limit. To avoid numerical errors, certain calculation recommendations should be followed. The same concerns for the second derivative moments in regards to APG bounded boundary layers for the bounded case above also apply to the modified moments for the unbounded case. An example of the modified moments are shown for unbounded boundary layer flow along a wing section in Figure 3. This figure was generated from a 2-D simulation for laminar airflow over a NACA_0012 wing section. Included in this figure are the modified 3-sigma , the modified 3-sigma , and the locations. The modified ratio value is 311, the modified ratio value is ~2, and the value is 9% higher than the value. The large difference between the and compared to the value demonstrates the inadequacy of the boundary layer thickness. Furthermore, the large velocity peak demonstrates the problem with treating interior bounded boundary layers as equivalent to exterior unbounded boundary layers. 
δmax thickness The location of the velocity peak, denoted as is an obvious demarcation location for the unbounded boundary layer. The main appeal of this choice is that this location is approximately the dividing location between the viscous and inertial regions. For the laminar flow simulation along a wing, umax located at δmax is found to approximate the viscous boundary layer thickness given as + indicating the velocity peaks just above the viscous boundary layer thickness δv. For the inertial regions of both laminar and turbulent flows, is a convenient lower boundary for the moment integrals. If the width, , is calculated using as the mean location then the boundary layer thickness, defined as the point where the velocity essentially becomes u0 above the wall, can then be properly identified. The 99% boundary layer thickness A significant implication of the peaking behavior is that the 99% thickness, , is NOT recommended as a thickness parameter for the exterior flow, unbounded boundary layer since it no longer corresponds to a boundary layer location of consequence. It is only useful for unbounded laminar flow along a very thin flat plate at zero incidence angle to the flow direction since the peak for this case will be very small and the velocity profile will be closely approximated as the bounded boundary layer case. For thick plates-walls, non-zero incidence angles, or flow around most solid surfaces, the excess flow due to form drag results in a near-wall peak in the velocity profile making not useful. Displacement thickness, momentum thickness, and shape factor The displacement thickness, momentum thickness, and shape factor can, in principle, all be calculated using the same approach described above for the bounded boundary layer case. However, the peaked nature of the unbounded boundary layer means the inertial section of the displacement thickness and momentum thickness will tend to cancel the near wall portion. Hence, the displacement thickness and momentum thickness will behave differently for the bounded and unbounded cases. One option to make the unbounded displacement thickness and momentum thickness approximately behave as the bounded case is to use umax as the scaling parameter and δmax as the upper integral limit. Notes References Ludwig Prandtl (1904), “Über Flüssigkeitsbewegung bei sehr kleiner Reibung,” Verhandlungen des Dritten Internationalen Mathematiker-Kongresses in Heidelberg 1904, A. Krazer, ed., Teubner, Leipzig, 484–491(1905). Hermann Schlichting (1979), Boundary-Layer Theory, 7th ed., McGraw Hill, New York, U.S.A. Swanson, R. Charles and Langer, Stefan (2016), “Comparison of NACA 0012 Laminar Flow Solutions: Structured and Unstructured Grid Methods,” NASA/TM-2016-219003. Wang, Xia, William K. George and Luciano Castillo (2004), "Separation Criterion for Turbulent Boundary Layers Via Similarity Analysis," J. of Fluids Eng., vol. 126, pp. 297–304. Weyburne, David (2006). "A mathematical description of the fluid boundary layer," Applied Mathematics and Computation, vol. 175, pp.  1675–1684 Weyburne, David (2014). "New thickness and shape parameters for the boundary layer velocity profile," Experimental Thermal and Fluid Science, vol. 54, pp. 22–28 Weyburne, David (2017), "Inner/Outer Ratio Similarity Scaling for 2-D Wall-bounded Turbulent Flows," arXiv:1705.02875 [physics.flu-dyn]. Weyburne, David (2020a). "A Boundary Layer Model for Unbounded Flow Along a Wall," Air Force Tech Report: AFRL-RY-WP-TR-2020-0004,DTIC Accession # AD1091170. 
Weyburne, David (2020b). "The Unbounded and Bounded Boundary Layer Models for Flow Along a Wall," Air Force Tech Report: AFRL-RY-WP-TR-2020-0005, DTIC Accession # AD1094086. Weyburne, David (2020c). "A New Conceptual Model for Laminar Boundary Layer Flow," Air Force Tech Report: AFRL-RY-WP-TR-2020-0006, DTIC Accession # AD1091187. Whitfield, David (1978). "Integral Solution of Compressible Turbulent Boundary Layers Using Improved Velocity Profiles," AEDO-TR-78-42. Further reading Louis Rosenhead editor (1963) Laminar Boundary Layers, Clarendon Press Paco Lagerstrom (1996) Laminar Flow Theory, Princeton University Press Frank M. White, (2003) Fluid Mechanics, 5th edition, McGraw-Hill Boundary layers Aerodynamics
Boundary layer thickness
[ "Chemistry", "Engineering" ]
4,392
[ "Boundary layers", "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
4,896,789
https://en.wikipedia.org/wiki/Log%20analysis
In computer log management and intelligence, log analysis (or system and network log analysis) is an art and science seeking to make sense of computer-generated records (also called log or audit trail records). The process of creating such records is called data logging. Typical reasons why people perform log analysis are: Compliance with security policies Compliance with audit or regulation System troubleshooting Forensics (during investigations or in response to a subpoena) Security incident response Understanding online user behavior Logs are emitted by network devices, operating systems, applications and all manner of intelligent or programmable devices. A stream of messages in time sequence often comprises a log. Logs may be directed to files and stored on disk or directed as a network stream to a log collector. Log messages must usually be interpreted concerning the internal state of its source (e.g., application) and announce security-relevant or operations-relevant events (e.g., a user login, or a systems error). Logs are often created by software developers to aid in the debugging of the operation of an application or understanding how users are interacting with a system, such as a search engine. The syntax and semantics of data within log messages are usually application or vendor-specific. The terminology may also vary; for example, the authentication of a user to an application may be described as a log in, a logon, a user connection or an authentication event. Hence, log analysis must interpret messages within the context of an application, vendor, system or configuration to make useful comparisons to messages from different log sources. Log message format or content may not always be fully documented. A task of the log analyst is to induce the system to emit the full range of messages to understand the complete domain from which the messages must be interpreted. A log analyst may map varying terminology from different log sources into a uniform, normalized terminology so that reports and statistics can be derived from a heterogeneous environment. For example, log messages from Windows, Unix, network firewalls, and databases may be aggregated into a "normalized" report for the auditor. Different systems may signal different message priorities with a different vocabulary, such as "error" and "warning" vs. "err", "warn", and "critical". Hence, log analysis practices exist on the continuum from text retrieval to reverse engineering of software. Functions and technologies Pattern recognition is a function of selecting incoming messages and compare with a pattern book to filter or handle different ways. Normalization is the function of converting message parts to the same format (e.g. common date format or normalized IP address). Classification and tagging is ordering messages into different classes or tagging them with different keywords for later usage (e.g. filtering or display). Correlation analysis is a technology of collecting messages from different systems and finding all the messages belonging to one single event (e.g., messages generated by malicious activity on different systems: network devices, firewalls, servers, etc.). It is usually connected with alerting systems. Artificial Ignorance is a type of machine learning that is a process of discarding log entries that are known to be uninteresting. Artificial ignorance is a method to detect anomalies in a working system. 
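A minimal sketch of the normalization and artificial-ignorance functions described above, in Python (the regular expressions, the example log lines and the whitelist of known-uninteresting patterns are illustrative assumptions):

```python
import re

# Normalization: map variable fields (timestamps, IP addresses, numbers) to
# placeholders so that messages from different sources can be compared.
NORMALIZERS = [
    (re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def normalize(line: str) -> str:
    for pattern, placeholder in NORMALIZERS:
        line = pattern.sub(placeholder, line)
    return line.strip().lower()

# Artificial ignorance: discard entries whose normalized form is known to be
# uninteresting and surface only messages that have not been seen before.
KNOWN_BORING = {
    "<timestamp> cron[<num>]: job completed",
    "<timestamp> sshd[<num>]: session opened for user backup",
}

def interesting(lines):
    for raw in lines:
        if normalize(raw) not in KNOWN_BORING:
            yield raw

logs = [
    "2024-05-01 03:15:02 cron[4211]: job completed",
    "2024-05-01 03:15:09 sshd[915]: failed password for root from 203.0.113.7",
]
for line in interesting(logs):
    print("investigate:", line)
```

In a real deployment the whitelist would be learned from historical logs rather than hard-coded, and the counts of normalized messages would feed the reporting, classification and correlation functions listed above.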
Applied to log analysis, artificial ignorance means recognizing and ignoring the regular, common log messages that result from the normal operation of the system and are therefore not very interesting. New messages that have not appeared in the logs before, however, can signal important events and should therefore be investigated. In addition to such anomalies, the approach can also flag expected events that did not occur, for example a weekly system update that failed to run. Log analysis is often compared with other analytics tools such as application performance management (APM) and error monitoring. While much of their functionality clearly overlaps, the difference is rooted in process: APM emphasizes performance and is used mostly in production, whereas error monitoring is driven by developers rather than operations and is integrated into code in exception-handling blocks. See also Audit trail Data logger Log monitor Server log System monitor Web log analysis software List of web analytics software References Computer systems Web log analysis software
Log analysis
[ "Technology", "Engineering" ]
857
[ "Computer engineering", "Computer systems", "Web log analysis software", "Computer logging", "Computer science", "Computers" ]
4,898,962
https://en.wikipedia.org/wiki/Quantum%20point%20contact
A quantum point contact (QPC) is a narrow constriction between two wide electrically conducting regions, of a width comparable to the electronic wavelength (nano- to micrometer). The importance of QPC lies in the fact that they prove quantisation of ballistic conductance in mesoscopic systems. The conductance of a QPC is quantized in units of , the so-called conductance quantum. Quantum point contacts were first reported in 1988 by a Dutch team from Delft University of Technology and Philips Research and, independently, by a British team from the Cavendish Laboratory. They are based on earlier work by the British group which showed how split gates could be used to convert a two-dimensional electron gas into one-dimension, first in silicon and then in gallium arsenide. This quantisation is reminiscent of the quantisation of the Hall conductance, but is measured in the absence of a magnetic field. The zero-field conductance quantisation and the smooth transition to the quantum Hall effect on applying a magnetic field are essentially consequences of the equipartition of current among an integer number of propagating modes in the constriction. Fabrication There are several different ways of fabricating a quantum point contact. It can be realized in a break-junction by pulling apart a piece of conductor until it breaks. The breaking point forms the point contact. In a more controlled way, quantum point contacts are formed in a two-dimensional electron gas (2DEG), e.g. in GaAs/AlGaAs heterostructures. By applying a voltage to suitably shaped gate electrodes, the electron gas can be locally depleted and many different types of conducting regions can be created in the plane of the 2DEG, among them quantum dots and quantum point contacts. Another means of creating a QPC is by positioning the tip of a scanning tunneling microscope close to the surface of a conductor. Properties Geometrically, a quantum point contact is a constriction in the transverse direction which presents a resistance to the motion of electrons. Applying a voltage across the point contact induces a current to flow, the magnitude of this current is given by , where is the conductance of the contact. This formula resembles Ohm's law for macroscopic resistors. However, there is a fundamental difference here resulting from the small system size which requires a quantum mechanical analysis. It is most common to study QPC in two dimensional electron gases. This way the geometric constriction of the point contact turns the conductance through the opening to a one dimensional system. Moreover, it requires a quantum mechanical description of the system that results in the quantisation of conductance. Quantum mechanically, the current through the point contact is equipartitioned among the 1D subands, or transverse modes, in the constriction. It is important to state that the previous discussion does not take into account possible transitions among modes. The Landauer formula can actually be generalized to express this possible transitions , where is the transition matrix which incorporates non-zero probabilities of transmission from mode n to m. At low temperatures and voltages, unscattered and untrapped electrons contributing to the current have a certain energy/momentum/wavelength called Fermi energy/momentum/wavelength. 
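Written out, the relations just discussed take their standard form (conventional notation):

$$I = G\,V, \qquad G = \frac{2e^2}{h}\sum_{n,m}|t_{n,m}|^2 \;\longrightarrow\; N\,\frac{2e^2}{h} \quad\text{for ideal transmission},$$

where $e$ is the electron charge, $h$ the Planck constant, $N$ the number of occupied transverse modes and $t_{n,m}$ the transmission amplitude from mode $n$ into mode $m$. The prefactor $G_0 = 2e^2/h \approx 77.5\;\mu\mathrm{S}$ is the conductance quantum referred to in the opening paragraph.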
Much like in a waveguide, the transverse confinement in the quantum point contact results in a "quantization" of the transverse motion—the transverse motion cannot vary continuously, but has to be one of a series of discrete modes. The waveguide analogy is applicable as long as coherence is not lost through scattering, e.g., by a defect or trapping site. The electron wave can only pass through the constriction if it interferes constructively, which for a given width of constriction, only happens for a certain number of modes . The current carried by such a quantum state is the product of the velocity times the electron density. These two quantities by themselves differ from one mode to the other, but their product is mode independent. As a consequence, each state contributes the same amount of per spin direction to the total conductance . This is a fundamental result; the conductance does not take on arbitrary values but is quantized in multiples of the conductance quantum , which is expressed through the electron charge and the Planck constant . The integer number is determined by the width of the point contact and roughly equals the width divided by half the electron wavelength. As a function of the width of the point contact (or gate voltage in the case of GaAs/AlGaAs heterostructure devices), the conductance shows a staircase behavior as more and more modes (or channels) contribute to the electron transport. The step-height is given by . On increasing the temperature, one finds experimentally that the plateaux acquire a finite slope until they are no longer resolved. This is a consequence of the thermal smearing of the Fermi-Dirac distribution. The conductance steps should disappear for (here ∆E is the subband splitting at the Fermi level). This is confirmed both by experiment and by numerical calculations. An external magnetic field applied to the quantum point contact lifts the spin degeneracy and leads to half-integer steps in the conductance. In addition, the number of modes that contribute becomes smaller. For large magnetic fields, is independent of the width of the constriction, given by the theory of the quantum Hall effect. The 0.7 anomaly Anomalous features on the quantized conductance steps are often observed in transport measurements of quantum point contacts. A notable example is the plateau at , the so-called 0.7-structure, arising due to enhanced electron-electron interactions arising from a smeared van Hove singularity in the local 1D density of states in the vicinity of the charge constriction. Unlike the conductance steps, the 0.7-structure becomes more pronounced at higher temperature. 0.7-structure analogues are sometimes observed on higher conductance steps. Quasi-bound states arising from impurities, charge traps, and reflections within the constriction may also result in conductance structure close to the 1D limit. Applications Apart from studying fundamentals of charge transport in mesoscopic conductors, quantum point contacts can be used as extremely sensitive charge detectors. Since the conductance through the contact strongly depends on the size of the constriction, any potential fluctuation (for instance, created by other electrons) in the vicinity will influence the current through the QPC. It is possible to detect single electrons with such a scheme. In view of quantum computation in solid-state systems, QPCs can be used as readout devices for the state of a quantum bit (qubit). 
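The conductance staircase and its thermal smearing described under Properties can be illustrated with a few lines of numerics. The sketch below (Python) uses the simplest single-particle picture, in which each subband contributes an ideal step broadened only by the Fermi-Dirac distribution; the 1 meV subband spacing and the chosen temperatures are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

e = 1.602176634e-19      # elementary charge [C]
h = 6.62607015e-34       # Planck constant [J s]
kB = 1.380649e-23        # Boltzmann constant [J/K]
G0 = 2 * e**2 / h        # conductance quantum, about 77.5 microsiemens

def conductance(E_F, subband_energies, T):
    """Linear-response conductance of an ideal QPC: G0 * sum_n f(E_n - E_F).

    Each 1D subband with threshold E_n gives a step that is sharp as T -> 0
    and is smeared by the Fermi-Dirac distribution at finite temperature.
    """
    x = np.clip((subband_energies - E_F) / (kB * T), -50.0, 50.0)
    return G0 * np.sum(1.0 / (np.exp(x) + 1.0))

meV = 1e-3 * e
subbands = np.arange(1, 8) * meV           # equally spaced subband thresholds
E_F = np.linspace(0.0, 8 * meV, 400)       # Fermi level, swept by the gate voltage

for T in (0.05, 0.3, 1.0, 4.0):            # temperatures in kelvin
    G = [conductance(EF, subbands, T) / G0 for EF in E_F]
    plt.plot(E_F / meV, G, label=f"T = {T} K")

plt.xlabel("Fermi energy (meV), proportional to gate voltage")
plt.ylabel("G / (2e^2/h)")
plt.legend()
plt.show()
```

At the lowest temperature the plateaus at integer multiples of 2e^2/h are sharp; as kBT approaches the subband splitting the steps acquire a slope and wash out, as described above. The 0.7 anomaly is not reproduced by this single-particle model, since it originates from electron-electron interactions.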
In device physics, the configuration of QPCs is used for demonstrating a fully ballistic field-effect transistor. Another application of the device is its use as a switch. A nickel wire is brought close enough to a gold surface and then, by the use of a piezoelectric actuator, the distance between the wire and the surface can be changed and thus, the transport characteristics of the device change between electron tunneling and ballistic. References Further reading Quantum mechanics Nanoelectronics Quantum electronics Mesoscopic physics
Quantum point contact
[ "Physics", "Materials_science" ]
1,493
[ "Quantum electronics", "Theoretical physics", "Quantum mechanics", "Condensed matter physics", "Nanoelectronics", "Nanotechnology", "Mesoscopic physics" ]
4,900,609
https://en.wikipedia.org/wiki/Effective%20evolutionary%20time
The hypothesis of effective evolutionary time attempts to explain gradients, in particular latitudinal gradients, in species diversity. It was originally named "time hypothesis". Background Low (warm) latitudes contain significantly more species than high (cold) latitudes. This has been shown for many animal and plant groups, although exceptions exist (see latitudinal gradients in species diversity). An example of an exception is helminths of marine mammals, which have the greatest diversity in northern temperate seas, possibly because of small population densities of hosts in tropical seas that prevented the evolution of a rich helminth fauna, or because they originated in temperate seas and had more time for speciations there. It has become more and more apparent that species diversity is best correlated with environmental temperature and more generally environmental energy. These findings are the basis of the hypothesis of effective evolutionary time. Species have accumulated fastest in areas where temperatures are highest. Mutation rates and speed of selection due to faster physiological rates are highest, and generation times which also determine speed of selection, are smallest at high temperatures. This leads to a faster accumulation of species, which are absorbed into the abundantly available vacant niches, in the tropics. Vacant niches are available at all latitudes, and differences in the number of such niches can therefore not be the limiting factor for species richness. The hypothesis also incorporates a time factor: habitats with a long undisturbed evolutionary history will have greater diversity than habitats exposed to disturbances in evolutionary history. The hypothesis of effective evolutionary time offers a causal explanation of diversity gradients, although it is recognized that many other factors can also contribute to and modulate them. Historical aspects Some aspects of the hypothesis are based on earlier studies. Bernhard Rensch, for example, stated that evolutionary rates also depend on temperature: numbers of generation in poikilotherms, but sometimes also in homoiotherms (homoiothermic), are greater at higher temperatures and the effectiveness of selection is therefore greater. Ricklefs refers to this hypothesis as "hypothesis of evolutionary speed" or "higher speciation rates". Genera of Foraminifera in the Cretaceous and families of Brachiopoda in the Permian have greater evolutionary rates at low than at high latitudes. That mutation rates are greater at high temperatures has been known since the classical investigations of Nikolay Timofeev-Ressovsky et al. (1935), although few later studies have been conducted. Also, these findings were not applied to evolutionary problems. The hypothesis of effective evolutionary time differs from these earlier approaches as follows. It proposes that species diversity is a direct consequence of temperature-dependent processes and the time ecosystems have existed under more or less equal conditions. Since vacant niches into which new species can be absorbed are available at all latitudes, the consequence is accumulation of more species at low latitudes. All earlier approaches remained without basis without the assumption of vacant niches, as there is no evidence that niches are generally narrower in the tropics, i.e., an accumulation of species cannot be explained by subdivision of previously utilized niches (see also Rapoport's rule). 
The hypothesis, in contrast to most other hypotheses attempting to explain latitudinal or other gradients in diversity, does not rely on the assumption that different latitudes or habitats generally have different "ceilings" for species numbers, which are higher in the tropics than in cold environments. Such different ceilings are thought to be, for example, determined by heterogeneity or area of the habitat. But such factors, although not setting ceilings, may well modulate the gradients. Recent studies A considerable number of recent studies support the hypothesis. Thus, diversity of marine benthos, interrupted by some collapses and plateaus, has risen from the Cambrian to the Recent, and there is no evidence that saturation has been reached. Rates of diversification per time unit for birds and butterflies increase towards the tropics. Allen et al. found a general correlation between environmental temperature and species richness for North and Central American trees, for amphibians, fish, Prosobranchia and fish parasites. They showed that species richness can be predicted from the biochemical kinetics of metabolism, and concluded that evolutionary rates are determined by generation times and mutation rates both correlated with metabolic rates which have the same Boltzmann relation with temperature. They further concluded that these findings support the mechanisms for latitudinal gradients proposed by Rohde. Gillooly et al. (2002) described a general model also based on first principles of allometry and biochemical kinetics which makes predictions about generation times as a function of body size and temperature. Empirical findings support the predictions: in all cases that were investigated (birds, fish, amphibians, aquatic insects, zooplankton) generation times are negatively correlated with temperature. Brown et al.(2004) further developed these findings to a general metabolic theory of ecology. Indirect evidence points to increased mutation rates at higher temperatures, and the energy-speciation hypothesis is the best predictor for species richness of ants. Finally, computer simulations using the Chowdhury ecosystem model have shown that results correspond most closely to empirical data when the number of vacant niches is kept large. Rohde gives detailed discussions of these and other examples. Of particular importance is the study by Wright et al. (2006) which was specifically designed to test the hypothesis. It showed that molecular substitution rates of tropical woody plants are more than twice as large as those of temperate species, and that more effective genetic drift in smaller tropical populations cannot be responsible for the differences, leaving only direct temperature effects on mutation rates as an explanation. Gillman et al. (2009) examined 260 mammal species of 10 orders and 29 families and found that substitution rates in the cytochrome B gene were substantially faster in species at warm latitudes and elevations, compared with those from cold latitudes and elevations. A critical examination of the data showed that this cannot be attributed to gene drift or body mass differentials. The only possibilities left are a Red Queen effect or direct effects of thermal gradients (including possibly an effect of torpor/hibernation differentials). Rohde (1992, 1978) had already pointed out that “it may well be that mammalian diversity is entirely determined by the diversity of plants and poikilothermic animals further down in the hierarchy”, i.e., by a Red Queen effect. 
He also pointed out that exposure to irradiation including light is known to cause mutations in mammals, and that some homoiothermic animals have shorter generation times in the tropics, which - either separately or jointly - may explain the effect found by Gillman et al. Gillman et al. (2010) extended their earlier study on plants by determining whether the effect is also found within highly conserved DNA. They examined the 18S ribosomal gene in the same 45 pairs of plants. And indeed, the rate of evolution was 51% faster in the tropical than their temperate sister species. Furthermore, the substitution rate in 18S correlated positively with that in the more variable ITS. These result lend further very strong support to the hypothesis. Wright et al. (2010) tested the hypothesis on 188 species of amphibians belonging to 18 families, using mitochondrial RNA genes 12S and 16S, and found substantially faster substitution rates for species living in warmer habitats at both lower latitudes and lower elevations. Thus, the hypothesis has now been confirmed for several genes and for plants and animals. Vázquez, D.P. and Stevens, R.D. (2004) conducted a metanalysis of previous studies and found no evidence that niches are generally narrower in the tropics than at high latitudes. This can be explained only by the assumption that niche space was not and is not saturated, having the capacity to absorb new species without affecting the niche width of species already present, as predicted by the hypothesis. Depth gradients Species diversity in the deepsea has been largely underestimated until recently (e.g., Briggs 1994: total marine diversity less than 200,000 species). Although our knowledge is still very fragmentary, some recent studies appear to suggest much greater species numbers (e.g., Grassle and Maciolek 1992: 10 million macroinvertebrates in soft bottom sediments of the deepsea). Further studies must show whether this can be verified. A rich diversity in the deepsea can be explained by the hypothesis of effective evolutionary time: although temperatures are low, conditions have been more or less equal over large time spans, certainly much larger than in most or all surface waters. References Evolutionary ecology Biogeography Systems ecology
Effective evolutionary time
[ "Biology", "Environmental_science" ]
1,789
[ "Biogeography", "Environmental social science", "Systems ecology" ]
842,004
https://en.wikipedia.org/wiki/Piano%20nobile
Piano nobile (Italian for "noble floor" or "noble level", also sometimes referred to by the corresponding French term, bel étage) is the architectural term for the principal floor of a palazzo. This floor contains the main reception rooms and bedrooms of the house. The German term is Beletage (meaning "beautiful storey", from the French bel étage). Both date to the 17th century. Characteristics The piano nobile is usually the first floor (in European terminology; second floor in American terms), or sometimes the second storey, and contains the major rooms, located above the rusticated ground floor containing the minor rooms and service rooms. The reasons were that the rooms above the ground floor had finer views and avoided the dampness and odours of street level. That is especially true in Venice, where the piano nobile of many palazzi is especially obvious from the exterior by virtue of its larger windows, balconies and open loggias. Examples are Ca' Foscari, Ca' d'Oro, Ca' Vendramin Calergi and Palazzo Barbarigo. Larger windows than those on other floors are usually the most obvious feature of the piano nobile. In England and Italy, the piano nobile is often reached by an ornate outer staircase, which spared the floor's inhabitants the need to enter the house through the servants' floor below. Kedleston Hall is an example of this in England, as is Villa Capra "La Rotonda" in Italy. Most houses contained a secondary floor above the piano nobile, which held more intimate withdrawing rooms and bedrooms for private use by the family of the house when no honoured guests were present. Above that floor would often be an attic floor containing staff bedrooms. In Italy, especially in Venetian palazzi, the floor above the piano nobile is sometimes referred to as the "secondo piano nobile" (second principal floor), especially if its loggias and balconies reflect those below on a slightly smaller scale. In those instances, and occasionally in museums, the principal piano nobile is described as the primo piano nobile to differentiate it. The arrangement of floors continued throughout Europe as large houses continued to be built in the classical style; it was still employed at Buckingham Palace as recently as the mid-19th century. Holkham Hall, Osterley Park and Chiswick House are among the innumerable 18th-century English houses that employed the design. Bibliography Copplestone, Trewin (1963). World Architecture. Hamlyn. Dal Lago, Adalbert (1966). Ville Antiche. Milan: Fratelli Fabbri. Halliday, E. E. (1967). Cultural History of England. London: Thames and Hudson. Harris, John; de Bellaigue, Geoffrey; & Miller, Oliver (1968). Buckingham Palace. Hussey, Christopher (1955). English Country Houses: Early Georgian 1715–1760. London: Country Life. Jackson-Stops, Gervase (1990). The Country House in Perspective. Pavilion Books Ltd. Kaminski, Marion (1999). Art and Architecture of Venice. Könemann; London: Nelson. Architectural elements Floors Italian words and phrases
Piano nobile
[ "Technology", "Engineering" ]
674
[ "Structural engineering", "Building engineering", "Floors", "Architectural elements", "Components", "Architecture" ]
842,224
https://en.wikipedia.org/wiki/Elastomer
An elastomer is a polymer with viscoelasticity (i.e. both viscosity and elasticity) and with weak intermolecular forces, generally low Young's modulus (E) and high failure strain compared with other materials. The term, a portmanteau of elastic polymer, is often used interchangeably with rubber, although the latter is preferred when referring to vulcanisates. Each of the monomers which link to form the polymer is usually a compound of several elements among carbon, hydrogen, oxygen and silicon. Elastomers are amorphous polymers maintained above their glass transition temperature, so that considerable molecular reconformation is feasible without breaking of covalent bonds. At ambient temperatures, such rubbers are thus relatively compliant (E ≈ 3 MPa) and deformable. Rubber-like solids with elastic properties are called elastomers. Polymer chains are held together in these materials by relatively weak intermolecular bonds, which permit the polymers to stretch in response to macroscopic stresses. Elastomers are usually thermosets (requiring vulcanization) but may also be thermoplastic (see thermoplastic elastomer). The long polymer chains cross-link during curing (i.e., vulcanizing). The molecular structure of elastomers can be imagined as a 'spaghetti and meatball' structure, with the meatballs signifying cross-links. The elasticity is derived from the ability of the long chains to reconfigure themselves to distribute an applied stress. The covalent cross-linkages ensure that the elastomer will return to its original configuration when the stress is removed. Crosslinking most likely occurs in an equilibrated polymer without any solvent. The free energy expression derived from the neo-Hookean model of rubber elasticity gives the free energy change due to deformation per unit volume of the sample. The strand concentration, ν, is the number of network strands per unit volume and does not depend on the overall size and shape of the elastomer. The factor β relates the actual end-to-end distance of the polymer strands between crosslinks to that of strands obeying random-walk statistics. In the specific case of shear deformation, the elastomer, besides obeying the simplest model of rubber elasticity, is also taken to be incompressible. For pure shear the shear strain can be related to the extension ratios: pure shear is a two-dimensional stress state in which the third extension ratio equals 1, which simplifies the strain-energy function above. Differentiating the strain-energy function with respect to the shear strain then gives the shear stress as the shear modulus, G, times the shear strain. Shear stress is thus proportional to the shear strain even at large strains. Notice how a low shear modulus corresponds to a low deformation strain-energy density and vice versa. Shearing deformation in elastomers requires less energy to change shape than to change volume.
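Written out in the conventional textbook form (ideal-chain case, with the factor β above equal to one; ν is the strand concentration, k_B the Boltzmann constant and T the absolute temperature):

$$\frac{\Delta F}{V} = \tfrac{1}{2}\,\nu k_B T\left(\lambda_1^2+\lambda_2^2+\lambda_3^2-3\right), \qquad G=\nu k_B T.$$

For pure shear of an incompressible network, $\lambda_1=\lambda$, $\lambda_2=1/\lambda$, $\lambda_3=1$, and the shear strain is $\gamma=\lambda-1/\lambda$, so that

$$\frac{\Delta F}{V} = \tfrac{1}{2}\,G\left(\lambda^2+\lambda^{-2}-2\right) = \tfrac{1}{2}\,G\,\gamma^2, \qquad \sigma=\frac{\partial}{\partial\gamma}\!\left(\frac{\Delta F}{V}\right)=G\,\gamma.$$

The last expression is the statement in the text that the shear stress stays proportional to the shear strain even at large strains, with a low shear modulus corresponding to a low deformation strain-energy density.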
Examples Unsaturated rubbers that can be cured by sulfur vulcanization: Natural polyisoprene: cis-1,4-polyisoprene natural rubber (NR) and trans-1,4-polyisoprene gutta-percha Synthetic polyisoprene (IR for isoprene rubber) Polybutadiene (BR for butadiene rubber) Chloroprene rubber (CR), polychloroprene, neoprene Butyl rubber (copolymer of isobutene and isoprene, IIR) Halogenated butyl rubbers (chloro butyl rubber: CIIR; bromo butyl rubber: BIIR) Styrene-butadiene rubber (copolymer of styrene and butadiene, SBR) Nitrile rubber (copolymer of butadiene and acrylonitrile, NBR), also called Buna N rubbers Hydrogenated nitrile rubbers (HNBR) Therban and Zetpol Saturated rubbers that cannot be cured by sulfur vulcanization: EPM (ethylene propylene rubber, a copolymer of ethene and propene) and EPDM rubber (ethylene propylene diene rubber, a terpolymer of ethylene, propylene and a diene-component) Epichlorohydrin rubber (ECO) Acrylic rubber (ACM, ABR) Silicone rubber (SI, Q, VMQ) Fluorosilicone rubber (FVMQ) Fluoroelastomers (FKM, and FEPM) Viton, Tecnoflon, Fluorel, Aflas and Dai-El Perfluoroelastomers (FFKM) Tecnoflon PFR, Kalrez, Chemraz, Perlast Polyether block amides (PEBA) Chlorosulfonated polyethylene (CSM) Ethylene-vinyl acetate (EVA) Various other types of elastomers: Thermoplastic elastomers (TPE) The proteins resilin and elastin Polysulfide rubber Elastolefin, elastic fiber used in fabric production Poly(dichlorophosphazene), an "inorganic rubber" from hexachlorophosphazene polymerization See also Liquid elastomer molding Rubber elasticity References External links Efficient and eco-friendly polymerization of elastomers, By Andreas Diener, Product Manager at List AG Materials science Polymer physics
Elastomer
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,164
[ "Polymer physics", "Applied and interdisciplinary physics", "Synthetic materials", "Materials science", "Elastomers", "Polymer chemistry", "nan" ]
842,236
https://en.wikipedia.org/wiki/Torque%20density
Torque density is a measure of the torque-carrying capability of a mechanical component. It is the ratio of torque capability to volume and is expressed in units of torque per volume; for instance, a component able to transmit 2 kN·m of torque from a 0.02 m³ envelope has a torque density of 100 kN·m per cubic metre. Torque density is a system property since it depends on the design of each element of the component being examined and on their interconnection. While torque is a pseudovector, volume by definition exists only in three Euclidean dimensions and must always be positive; it can never be negative. Examples and uses The torque density of magnetic gearboxes, wind turbines, magnetic trains, and mechanical trains is used to compare the energy efficiency of machines; 150 kilonewton-metres per cubic metre per stage is considered the highest attainable as of 2024. Torque density is useful during the concept evaluation stage of mechanical designs, especially in power-train design problems. Typically, it will be one of many factors used to assign potential success measures to each concept. For example, in the upgrade of a drive train for a set of rolls in a rolling mill, the available space is often dictated by the configuration of the existing components. There may be several types of devices that can perform the function of an existing component that must be replaced. The relative torque densities of these devices may be an important determinant of which design is ultimately selected, although torque density will often compete with other factors such as cost, ease of maintenance, time to install, operating costs and potential failure modes. Units In SI units, torque density is expressed in joules per cubic metre or, equivalently, newton-metres per cubic metre. Although this unit is dimensionally equivalent to the pascal, the pascal is usually not used for this purpose. Small amounts can be expressed in newton-millimetres per cubic millimetre. In U.S. customary units, torque density is expressed in foot-pounds force per cubic foot, inch-pounds force per cubic inch, or ounce-force inches per cubic inch. See also Angular momentum Automobile design Density Mechanical engineering Newton-meter Pressure Renewable energy Superconductivity Sustainability References Mechanical engineering Density
Torque density
[ "Physics", "Mathematics", "Engineering" ]
404
[ "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Mass", "Density", "Mechanical engineering", "Wikipedia categories named after physical quantities", "Matter" ]
842,430
https://en.wikipedia.org/wiki/Inflaton
The inflaton field is a hypothetical scalar field which is conjectured to have driven cosmic inflation in the very early universe. The field, originally postulated by Alan Guth, provides a mechanism by which a period of rapid expansion from 10−35 to 10−34 seconds after the initial expansion can be generated, forming a universe not inconsistent with observed spatial isotropy and homogeneity. Cosmological inflation The basic model of inflation proceeds in three phases: Expanding vacuum state with high potential energy Phase transition to true vacuum Slow roll and reheating Expanding vacuum state with high potential energy A "vacuum" or "vacuum state" in quantum field theory is a state of quantum fields which is at locally minimal potential energy. Quantum particles are excitations which deviate from this minimal potential energy state, therefore a vacuum state has no particles in it. Depending on the specifics of a quantum field theory, it can have more than one vacuum state. Different vacua, despite all "being empty" (having no particles), will generally have different vacuum energy. Quantum field theory stipulates that the pressure of the vacuum energy is always negative and equal in magnitude to its energy density. Inflationary theory postulates that there is a vacuum state with very large vacuum energy, caused by a non-zero vacuum expectation value of the inflaton field. Any region of space in this state will rapidly expand. Even if initially it is not empty (contains some particles), very rapid exponential expansion dilutes any particles that might have previously been present to essentially zero density. Phase transition to true vacuum Inflationary theory further postulates that this "inflationary vacuum" state is not the state with globally lowest energy; rather, it is a "false vacuum", also known as a metastable state. For each observer at any chosen point of space, the false vacuum eventually tunnels into a state with the same potential energy, but which is not a vacuum (it is not at a local minimum of the potential energy—it can "decay"). This state can be seen as a true vacuum, filled with a large number of inflaton particles. However, the rate of expansion of the true vacuum does not change at that moment: Only its exponential character changes to much slower expansion of the FLRW metric. This ensures that expansion rate precisely matches the energy density. Slow roll and reheating In the true vacuum, inflaton particles decay, eventually giving rise to the observed Standard Model particles. The shape of the potential energy function near "tunnel exit" from false vacuum state must have a shallow slope, otherwise particle production would be confined to the boundary of expanding true vacuum bubble, which contradicts observation (the universe we see around us is not built of huge completely void bubbles). In other words, the quantum state should "roll to the bottom slowly". When complete, the decay of inflaton particles fills the space with hot and dense Big Bang plasma. Field quanta Just like every other quantum field, excitations of the inflaton field are expected to be quantized. The field quanta of the inflaton field are known as inflatons. Depending on the modeled potential energy density, the inflaton field's ground state might, or might not, be zero. The term inflaton follows the typical style of other quantum particles’ names – such as photon, gluon, boson, and fermion – deriving from the word inflation. The term was first used in a paper by . 
The nature of the inflaton field is currently not known. One of the obstacles to narrowing down its properties is that current quantum theory is not able to correctly predict the observed vacuum energy based on the particle content of a chosen theory (see vacuum catastrophe). Atkins (2012) suggested that it is possible that no new field is necessary – that a modified version of the Higgs field could function as an inflaton. Non-minimally coupled inflation Non-minimally coupled inflation is an inflationary model in which the constant that couples gravity to the inflaton field is not small. The coupling constant is usually represented by ξ (the Greek letter xi), which features in the action (constructed by modifying the Einstein–Hilbert action) through a term of the form ξφ²R, with ξ representing the strength of the interaction between R and φ, which respectively relate to the curvature of space and the magnitude of the inflaton field. See also References Inflation (cosmology) Hypothetical particles Dark energy
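For concreteness, one common way of writing the full non-minimally coupled action is sketched below in LaTeX. The ξφ²R term is the standard form of the coupling, but the normalization, the metric signature, and the presence of a separate Einstein–Hilbert term with the reduced Planck mass M_Pl are conventions that differ between references, so this should be read as an illustrative sketch rather than as this article's own equation.

```latex
% Illustrative form of the non-minimally coupled action (conventions vary):
S = \int \mathrm{d}^4x \, \sqrt{-g}
    \left[ \frac{1}{2} M_{\mathrm{Pl}}^{2} R
         + \frac{1}{2} \xi \phi^{2} R
         - \frac{1}{2} g^{\mu\nu} \partial_{\mu}\phi \, \partial_{\nu}\phi
         - V(\phi) \right]
```

Setting ξ = 0 recovers a minimally coupled scalar field in general relativity.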
Inflaton
[ "Physics", "Astronomy" ]
914
[ "Hypothetical particles", "Matter", "Unsolved problems in astronomy", "Physical quantities", "Concepts in astronomy", "Unsolved problems in physics", "Energy (physics)", "Dark energy", "Wikipedia categories named after physical quantities", "Physics beyond the Standard Model", "Subatomic particl...
842,484
https://en.wikipedia.org/wiki/Recirculating%20ball
Recirculating ball, also known as recirculating ball and nut or worm and sector, is a steering mechanism commonly found in older automobiles, off-road vehicles, and some trucks. Most newer cars use the more economical rack and pinion steering instead, but some upmarket manufacturers (such as BMW and Mercedes-Benz) held on to the design until well into the 1990s for the durability and strength inherent in the design. A few, including Chrysler, General Motors, Lada and Ineos, still use this technology in certain models including the Jeep Wrangler, the Ineos Grenadier Quartermaster and the Lada Niva. Mechanism The recirculating ball steering mechanism contains a worm gear inside a block with a threaded hole in it; this block has gear teeth cut into the outside to engage the sector shaft (also called a sector gear) which moves the Pitman arm. The steering wheel connects to a shaft, which rotates the worm gear inside of the block. Instead of twisting further into the block, the worm gear is fixed so that when it rotates, it moves the block, which transmits the motion through the gear to the Pitman arm, causing the roadwheels to turn. Bearing balls The worm gear is similar in design to a ball screw; the threads are filled with steel balls that recirculate through the gear and rack as it turns. The balls serve to reduce friction and wear in the gear, and reduce slop. Slop, when the gears come out of contact with each other, would be felt when changing the direction of the steering wheel, causing the wheel to feel loose. Power assistance Power steering in a recirculating-ball system works similarly to that in a rack-and-pinion system. Assistance is provided by supplying higher-pressure fluid to one side of the block. See also Burman and Sons Ltd - defunct manufacturer of recirculating ball steering gear List of auto parts References Mechanisms (engineering) Automotive steering technologies
Recirculating ball
[ "Engineering" ]
412
[ "Mechanical engineering", "Mechanisms (engineering)" ]
842,493
https://en.wikipedia.org/wiki/High-electron-mobility%20transistor
A high-electron-mobility transistor (HEMT or HEM FET), also known as heterostructure FET (HFET) or modulation-doped FET (MODFET), is a field-effect transistor incorporating a junction between two materials with different band gaps (i.e. a heterojunction) as the channel instead of a doped region (as is generally the case for a MOSFET). A commonly used material combination is GaAs with AlGaAs, though there is wide variation, dependent on the application of the device. Devices incorporating more indium generally show better high-frequency performance, while in recent years, gallium nitride HEMTs have attracted attention due to their high-power performance. Like other FETs, HEMTs can be used in integrated circuits as digital on-off switches. FETs can also be used as amplifiers for large amounts of current using a small voltage as a control signal. Both of these uses are made possible by the FET’s unique current–voltage characteristics. HEMT transistors are able to operate at higher frequencies than ordinary transistors, up to millimeter wave frequencies, and are used in high-frequency products such as cell phones, satellite television receivers, voltage converters, and radar equipment. They are widely used in satellite receivers, in low power amplifiers and in the defense industry. Applications The applications of HEMTs include microwave and millimeter wave communications, imaging, radar, radio astronomy, and power switching. They are found in many types of equipment ranging from cellphones, power supply adapters and DBS receivers to radio astronomy and electronic warfare systems such as radar systems. Numerous companies worldwide develop, manufacture, and sell HEMT-based devices in the form of discrete transistors, as 'monolithic microwave integrated circuits' (MMICs), or within power switching integrated circuits. HEMTs are suitable for applications where high gain and low noise at high frequencies are required, as they have shown current gain to frequencies greater than 600 GHz and power gain to frequencies greater than 1THz. Gallium nitride based HEMTs are used as power switching transistors for voltage converter applications due to their low on-state resistances, low switching losses, and high breakdown strength. These gallium nitride enhanced voltage converter applications include AC adapters, which benefit from smaller package sizes due to the power circuitry requiring smaller passive electronic components. History The invention of the high-electron-mobility transistor (HEMT) is usually attributed to physicist Takashi Mimura (三村 高志), while working at Fujitsu in Japan. The basis for the HEMT was the GaAs (gallium arsenide) MOSFET (metal–oxide–semiconductor field-effect transistor), which Mimura had been researching as an alternative to the standard silicon (Si) MOSFET since 1977. He conceived the HEMT in Spring 1979, when he read about a modulated-doped heterojunction superlattice developed at Bell Labs in the United States, by Ray Dingle, Arthur Gossard and Horst Störmer who filed a patent in April 1978. Mimura filed a patent disclosure for a HEMT in August 1979, and then a patent later that year. The first demonstration of a HEMT device, the D-HEMT, was presented by Mimura and Satoshi Hiyamizu in May 1980, and then they later demonstrated the first E-HEMT in August 1980. Independently, Daniel Delagebeaudeuf and Tranc Linh Nuyen, while working at Thomson-CSF in France, filed a patent for a similar type of field-effect transistor in March 1979. 
It also cites the Bell Labs patent as an influence. The first demonstration of an "inverted" HEMT was presented by Delagebeaudeuf and Nuyen in August 1980. One of the earliest mentions of a GaN-based HEMT is in the 1993 Applied Physics Letters article, by Khan et al. Later, in 2004, P.D. Ye and B. Yang et al demonstrated a GaN (gallium nitride) metal–oxide–semiconductor HEMT (MOS-HEMT). It used atomic layer deposition (ALD) aluminum oxide (Al2O3) film both as a gate dielectric and for surface passivation. Operation Field effect transistors whose operation relies on the formation of a two-dimensional electron gas (2DEG) are known as HEMTs. In HEMTS electric current flows between a drain and source element via the 2DEG, which is located at the interface between two layers of differing band gaps, termed the heterojunction. Some examples of previously explored heterojunction layer compositions (heterostructures) for HEMTs include AlGaN/GaN, AlGaAs/GaAs, InGaAs/GaAs, and Si/SiGe. Advantages The advantages of HEMTs over other transistor architectures, like the bipolar junction transistor and the MOSFET, are the higher operating temperatures, higher breakdown strengths, and lower specific on-state resistances, all in the case of GaN-based HEMTs compared to Si-based MOSFETs. Furthermore, InP-based HEMTs exhibit low noise performance and higher switching speeds. 2DEG channel creation The wide band element is doped with donor atoms; thus it has excess electrons in its conduction band. These electrons will diffuse to the adjacent narrow band material’s conduction band due to the availability of states with lower energy. The movement of electrons will cause a change in potential and thus an electric field between the materials. The electric field will push electrons back to the wide band element’s conduction band. The diffusion process continues until electron diffusion and electron drift balance each other, creating a junction at equilibrium similar to a p–n junction. Note that the undoped narrow band gap material now has excess majority charge carriers. The fact that the charge carriers are majority carriers yields high switching speeds, and the fact that the low band gap semiconductor is undoped means that there are no donor atoms to cause scattering and thus yields high mobility. In the case of GaAs HEMTs, they make use of high mobility electrons generated using the heterojunction of a highly doped wide-bandgap n-type donor-supply layer (AlGaAs in our example) and a non-doped narrow-bandgap channel layer with no dopant impurities (GaAs in this case). The electrons generated in the thin n-type AlGaAs layer drop completely into the GaAs layer to form a depleted AlGaAs layer, because the heterojunction created by different band-gap materials forms a quantum well (a steep canyon) in the conduction band on the GaAs side where the electrons can move quickly without colliding with any impurities because the GaAs layer is undoped, and from which they cannot escape. The effect of this is the creation of a very thin layer of highly mobile conducting electrons with very high concentration, giving the channel very low resistivity (or to put it another way, "high electron mobility"). Electrostatic mechanism Since GaAs has higher electron affinity, free electrons in the AlGaAs layer are transferred to the undoped GaAs layer where they form a two dimensional high mobility electron gas within 100 ångström (10 nm) of the interface. 
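As a rough illustration of how the 2DEG just described responds to gate bias, the sketch below uses the simple parallel-plate charge-control approximation n_s ≈ ε(V_g − V_th)/(q d) found in textbook treatments of HEMTs. This model, and the barrier thickness, relative permittivity, and threshold voltage used here, are assumptions for illustration only and are not taken from this article.

```python
# Rough charge-control estimate of the 2DEG sheet density in a HEMT
# (textbook parallel-plate approximation, not taken from the article;
#  the material and bias values below are illustrative assumptions).
EPS0 = 8.854e-12      # vacuum permittivity, F/m
Q_E  = 1.602e-19      # elementary charge, C

def sheet_density(v_gate, v_threshold, barrier_nm, eps_rel):
    """Approximate 2DEG sheet density n_s = eps * (Vg - Vth) / (q * d)."""
    d = barrier_nm * 1e-9
    n_s = eps_rel * EPS0 * (v_gate - v_threshold) / (Q_E * d)
    return max(n_s, 0.0)   # no 2DEG below threshold in this simple model

# Illustrative AlGaAs/GaAs-like numbers: 30 nm barrier, eps_r ~ 12
print(f"n_s ~ {sheet_density(0.5, -0.5, 30, 12.0):.2e} electrons/m^2")
```

With these numbers the estimate comes out around 2 × 10¹⁶ electrons per m² (about 2 × 10¹² per cm²), which is the right order of magnitude for such heterostructures.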
The n-type AlGaAs layer of the HEMT is depleted completely through two depletion mechanisms: Trapping of free electrons by surface states causes the surface depletion. Transfer of electrons into the undoped GaAs layer brings about the interface depletion. The Fermi level of the gate metal is matched to the pinning point, which is 1.2 eV below the conduction band. With the reduced AlGaAs layer thickness, the electrons supplied by donors in the AlGaAs layer are insufficient to pin the layer. As a result, band bending is moving upward and the two-dimensional electrons gas does not appear. When a positive voltage greater than the threshold voltage is applied to the gate, electrons accumulate at the interface and form a two-dimensional electron gas. Modulation doping in HEMTs An important aspect of HEMTs is that the band discontinuities across the conduction and valence bands can be modified separately. This allows the type of carriers in and out of the device to be controlled. As HEMTs require electrons to be the main carriers, a graded doping can be applied in one of the materials, thus making the conduction band discontinuity smaller and keeping the valence band discontinuity the same. This diffusion of carriers leads to the accumulation of electrons along the boundary of the two regions inside the narrow band gap material. The accumulation of electrons leads to a very high current in these devices. The term "modulation doping" refers to the fact that the dopants are spatially in a different region from the current carrying electrons. This technique was invented by Horst Störmer at Bell Labs. Manufacture MODFETs can be manufactured by epitaxial growth of a strained SiGe layer. In the strained layer, the germanium content increases linearly to around 40-50%. This concentration of germanium allows the formation of a quantum well structure with a high conduction band offset and a high density of very mobile charge carriers. The end result is a FET with ultra-high switching speeds and low noise. InGaAs/AlGaAs, AlGaN/InGaN, and other compounds are also used in place of SiGe. InP and GaN are starting to replace SiGe as the base material in MODFETs because of their better noise and power ratios. Versions of HEMTs By growth technology: pHEMT and mHEMT Ideally, the two different materials used for a heterojunction would have the same lattice constant (spacing between the atoms). In practice, the lattice constants are typically slightly different (e.g. AlGaAs on GaAs), resulting in crystal defects. As an analogy, imagine pushing together two plastic combs with a slightly different spacing. At regular intervals, you'll see two teeth clump together. In semiconductors, these discontinuities form deep-level traps and greatly reduce device performance. A HEMT where this rule is violated is called a pHEMT or pseudomorphic HEMT. This is achieved by using an extremely thin layer of one of the materials – so thin that the crystal lattice simply stretches to fit the other material. This technique allows the construction of transistors with larger bandgap differences than otherwise possible, giving them better performance. Another way to use materials of different lattice constants is to place a buffer layer between them. This is done in the mHEMT or metamorphic HEMT, an advancement of the pHEMT. The buffer layer is made of AlInAs, with the indium concentration graded so that it can match the lattice constant of both the GaAs substrate and the GaInAs channel. 
This brings the advantage that practically any Indium concentration in the channel can be realized, so the devices can be optimized for different applications (low indium concentration provides low noise; high indium concentration gives high gain). By electrical behaviour: eHEMT and dHEMT HEMTs made of semiconductor hetero-interfaces lacking interfacial net polarization charge, such as AlGaAs/GaAs, require positive gate voltage or appropriate donor-doping in the AlGaAs barrier to attract the electrons towards the gate, which forms the 2D electron gas and enables conduction of electron currents. This behaviour is similar to that of commonly used field-effect transistors in the enhancement mode, and such a device is called enhancement HEMT, or eHEMT. When a HEMT is built from AlGaN/GaN, higher power density and breakdown voltage can be achieved. Nitrides also have different crystal structure with lower symmetry, namely the wurtzite one, which has built-in electrical polarisation. Since this polarization differs between the GaN channel layer and AlGaN barrier layer, a sheet of uncompensated charge in the order of 0.01-0.03 C/m is formed. Due to the crystal orientation typically used for epitaxial growth ("gallium-faced") and the device geometry favorable for fabrication (gate on top), this charge sheet is positive, causing the 2D electron gas to be formed even if there is no doping. Such a transistor is normally on, and will turn off only if the gate is negatively biased - thus this kind of HEMT is known as depletion HEMT, or dHEMT. By sufficient doping of the barrier with acceptors (e.g. Mg), the built-in charge can be compensated to restore the more customary eHEMT operation, however high-density p-doping of nitrides is technologically challenging due to dopant diffusion into the channel. Induced HEMT In contrast to a modulation-doped HEMT, an induced high electron mobility transistor provides the flexibility to tune different electron densities with a top gate, since the charge carriers are "induced" to the 2DEG plane rather than created by dopants. The absence of a doped layer enhances the electron mobility significantly when compared to their modulation-doped counterparts. This level of cleanliness provides opportunities to perform research into the field of Quantum Billiard for quantum chaos studies, or applications in ultra stable and ultra sensitive electronic devices. References External links Modulation-doped FET Transistor types Microwave technology Terahertz technology Field-effect transistors French inventions Japanese inventions MOSFETs Vietnamese inventions
High-electron-mobility transistor
[ "Physics" ]
2,827
[ "Spectrum (physical sciences)", "Electromagnetic spectrum", "Terahertz technology" ]
842,495
https://en.wikipedia.org/wiki/Auxotrophy
Auxotrophy (αὐξάνω "to increase"; τροφή "nourishment") is the inability of an organism to synthesize a particular organic compound required for its growth (as defined by IUPAC). An auxotroph is an organism that displays this characteristic; auxotrophic is the corresponding adjective. Auxotrophy is the opposite of prototrophy, which is characterized by the ability to synthesize all the compounds needed for growth. Prototrophic cells (also referred to as the 'wild type') are self-sufficient producers of all required metabolites (e.g. amino acids, lipids, cofactors), while auxotrophs must be grown on a medium containing the metabolite that they cannot produce. For example, saying a cell is methionine auxotrophic means that it would need to be on a medium containing methionine or else it would not be able to replicate. In this example, this is because it is unable to produce its own methionine (methionine auxotroph). However, a prototroph, or a methionine prototrophic cell, would be able to function and replicate on a medium with or without methionine. Replica plating is a technique that transfers colonies from one plate to another in the same spot as the last plate so the different media plates can be compared side by side. It is used to compare the growth of the same colonies on different plates of media to determine which environments the bacterial colony can or cannot grow in (this gives insight into possible auxotrophic characteristics). The method of replica plating implemented by Joshua Lederberg and Esther Lederberg included auxotrophs that were temperature-sensitive; that is, their ability to synthesize was temperature-dependent. (Auxotrophs are usually not temperature-dependent. They can also depend on other factors.) It is also possible that an organism is auxotrophic for more than just one organic compound that it requires for growth. Applications Genetics In genetics, a strain is said to be auxotrophic if it carries a mutation that renders it unable to synthesize an essential compound. For example, a yeast mutant with an inactivated uracil synthesis pathway gene is a uracil auxotroph (e.g., if the yeast Orotidine 5'-phosphate decarboxylase gene is inactivated, the resultant strain is a uracil auxotroph). Such a strain is unable to synthesize uracil and will only be able to grow if uracil can be taken up from the environment. This is the opposite of a uracil prototroph, or in this case a wild-type strain, which can still grow in the absence of uracil. Auxotrophic genetic markers are often used in molecular genetics; they were famously used in Beadle and Tatum's Nobel Prize-winning work on the one gene-one enzyme hypothesis, connecting mutations of genes to protein mutations. This then allows for biosynthetic or biochemical pathway mapping that can help determine which enzyme or enzymes are mutated and dysfunctional in the auxotrophic strains of bacteria being studied. Researchers have used strains of E. coli auxotrophic for specific amino acids to introduce non-natural amino acid analogues into proteins. For instance, cells auxotrophic for the amino acid phenylalanine can be grown in media supplemented with an analogue such as para-azido phenylalanine. Many living things, including humans, are auxotrophic for large classes of compounds required for growth and must obtain these compounds through diet (see vitamin, essential nutrient, essential amino acid, essential fatty acid).
The complex pattern of evolution of vitamin auxotrophy across the eukaryotic tree of life is intimately connected with the interdependence between organisms. The Mutagenicity test (or Ames test) The Salmonella Mutagenesis test (Ames test) uses multiple strains of Salmonella typhimurium that are auxotrophic to histidine to test whether a given chemical can cause mutations by observing its auxotrophic property in response to an added chemical compound. The mutation a chemical substance or compound causes is measured by applying it to the bacteria on a plate containing histidine then moving the bacteria to a new plate without sufficient histidine for continual growth. If the substance does not mutate the genome of the bacteria from auxotrophic to histidine back to prototrophic to histidine, then the bacteria would not show growth on the new plate. So by comparing the ratio of the bacteria on the new plate to the old plate and the same ratio for the control group, it is possible to quantify how mutagenic a substance is, or rather, how likely it is to cause mutations in DNA. A chemical is considered positive for Ames test if it causes mutations increasing the observed reversion rate and negative if presents similar to the control group. There is a normal, but small, number of revertant colonies expected when an auxotrophic bacteria is plated on a media without the metabolite it needs because it could mutate back to prototrophy. The chances of this are low and therefore cause very small colonies to be formed. If a mutagenic substance is added, however, the number of revertants would be visibly higher than without the mutagenic substance. The Ames test, basically, is considered positive if a substance increases chance of mutation in the DNA of the bacteria enough to cause a quantifiable difference in the revertants of the mutagen plate and the control group plate. Negative Ames test means the possible mutagen DID not cause increase in revertants and positive Ames test signifies that the possible mutagen DID increase the chance of mutation. These mutagenic effects on bacteria are researched as a possible indicator of the same effects on larger organisms, like humans. It is suggested that if a mutation can arise in bacterial DNA under presence of a mutagen then the same effect would occur for larger organisms causing cancer. A negative Ames test result could suggest that the substance is not a mutagen and would not cause tumor formation in living organisms. However only few of the positive Ames Test resulting chemicals were considered insignificant when tested in larger organisms but the positive Ames test for bacteria still could not be conclusively linked to expression of cancer in larger organisms. While it can be a possible determinant of tumors for living organisms, humans, animals, and so on, more studies must be completed to come to a conclusion. Auxotrophy-based methods to incorporate unnatural amino acids into proteins and proteomes A large number of unnatural amino acids, which are similar to their canonical counterparts in shape, size and chemical properties, are introduced into the recombinant proteins by means of auxotrophic expression hosts. For example, methionine (Met) or tryptophan (Trp) auxotrophic Escherichia coli strains can be cultivated in a defined minimal medium. In this experimental setup it is possible to express recombinant proteins whose canonical Trp and Met residues are completely substituted with different medium-supplemented related analogs. 
This methodology leads to a new form of protein engineering, which is not performed by codon manipulation at the DNA level (e.g. oligonucleotide-directed mutagenesis), but by codon reassignments at the level of protein translation under efficient selective pressure. Therefore, the method is referred as selective pressure incorporation (SPI). No organism studied so far encodes other amino acids than the canonical twenty; two additional canonical amino acids (selenocysteine, pyrrolysine) are inserted into proteins by recoding translation termination signals. This boundary can be crossed by adaptive laboratory evolution of metabolically stable auxotrophic microbial strains. For example, the first clearly successful attempt to evolve Escherichia coli that can survive solely on the unnatural amino acid thieno[3,2-b]pyrrolyl) alanine as the only substitute for tryptophan was made in 2015. In popular culture The 1993 film Jurassic Park (based on the 1990 Michael Crichton novel of the same name) features dinosaurs that were genetically altered so that they could not produce the amino acid lysine. This was known as the "lysine contingency" and was supposed to prevent the cloned dinosaurs from surviving outside the park, forcing them to be dependent on lysine supplements provided by the park's veterinary staff. In reality, no animals are capable of producing lysine (it is an essential amino acid). See also Autotroph Bradytroph Footnotes External links "Regulation of endosomal clathrin and retromer-mediated endosome to Golgi retrograde transport by the J-domain protein RME-8" - The EMBO Journal "Pleiotropic effects of purine auxotrophy inRhizobium meliloti on cell surface molecules" - Springerlink "Auxotrophy and Organic Compounds in the Nutrition of Marine Phytoplankton" Molecular genetics
Auxotrophy
[ "Chemistry", "Biology" ]
1,919
[ "Molecular genetics", "Molecular biology" ]
842,569
https://en.wikipedia.org/wiki/System%20of%20systems
System of systems is a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems. Currently, systems of systems is a critical research discipline for which frames of reference, thought processes, quantitative analysis, tools, and design methods are incomplete. referred to system of systems engineering. Overview Commonly proposed descriptions—not necessarily definitions—of systems of systems, are outlined below in order of their appearance in the literature: Linking systems into joint system of systems allows for the interoperability and synergism of Command, Control, Computers, Communications and Information (C4I) and Intelligence, Surveillance and Reconnaissance (ISR) Systems: description in the field of information superiority in modern military. System of systems are large-scale concurrent and distributed systems the components of which are complex systems themselves: description in the field of communicating structures and information systems in private enterprise. System of systems education involves the integration of systems into system of systems that ultimately contribute to evolution of the social infrastructure: description in the field of education of engineers on the importance of systems and their integration. System of systems integration is a method to pursue development, integration, interoperability and optimization of systems to enhance performance in future battlefield scenarios: description in the field of information intensive systems integration in the military. Modern systems that comprise system of systems problems are not monolithic, rather they have five common characteristics: operational independence of the individual systems, managerial independence of the systems, geographical distribution, emergent behavior and evolutionary development: description in the field of evolutionary acquisition of complex adaptive systems in the military. Enterprise systems of systems engineering is focused on coupling traditional systems engineering activities with enterprise activities of strategic planning and investment analysis: description in the field of information intensive systems in private enterprise. System of systems problems are a collection of trans-domain networks of heterogeneous systems that are likely to exhibit operational and managerial independence, geographical distribution, and emergent and evolutionary behaviors that would not be apparent if the systems and their interactions are modeled separately: description in the field of National Transportation System, Integrated Military and Space Exploration. Taken together, all these descriptions suggest that a complete system of systems engineering framework is needed to improve decision support for system of systems problems. Specifically, an effective system of systems engineering framework is needed to help decision makers to determine whether related infrastructure, policy and/or technology considerations as an interrelated whole are good, bad or neutral over time. The need to solve system of systems problems is urgent not only because of the growing complexity of today's challenges, but also because such problems require large monetary and resource investments with multi-generational consequences. 
System-of-systems topics The system-of-systems approach While the individual systems constituting a system of systems can be very different and operate independently, their interactions typically expose and deliver important emergent properties. These emergent patterns have an evolving nature that stakeholders must recognize, analyze and understand. The system of systems approach does not advocate particular tools, methods or practices; instead, it promotes a new way of thinking for solving grand challenges where the interactions of technology, policy, and economics are the primary drivers. System of systems study is related to the general study of designing, complexity and systems engineering, but also brings to the fore the additional challenge of design. Systems of systems typically exhibit the behaviors of complex systems, but not all complex problems fall in the realm of systems of systems. Inherent to system of systems problems are several combinations of traits, not all of which are exhibited by every such problem: Operational Independence of Elements Managerial Independence of Elements Evolutionary Development Emergent Behavior Geographical Distribution of Elements Interdisciplinary Study Heterogeneity of Systems Networks of Systems The first five traits are known as Maier's criteria for identifying system of systems challenges. The remaining three traits have been proposed from the study of mathematical implications of modeling and analyzing system of systems challenges by Dr. Daniel DeLaurentis and his co-researchers at Purdue University. Research Current research into effective approaches to system of systems problems includes: Establishment of an effective frame of reference Crafting of a unifying lexicon Developing effective methodologies to visualize and communicate complex systems Distributed resource management Study of designing architecture Interoperability Data distribution policies: policy definition, design guidance and verification Formal modelling language with integrated tools platform Study of various modeling, simulation and analysis techniques network theory agent based modeling general systems theory probabilistic robust design (including uncertainty modeling/management) object-oriented simulation and programming multi-objective optimization Study of various numerical and visual tools for capturing the interaction of system requirements, concepts and technologies Applications Systems of systems, while still being investigated predominantly in the defense sector, is also seeing application in such fields as national air and auto transportation and space exploration. Other fields where it can be applied include health care, design of the Internet, software integration, and energy management and power systems. Social-ecological interpretations of resilience, where different levels of our world (e.g., the Earth system, the political system) are interpreted as interconnected or nested systems, take a systems-of-systems approach. An application in business can be found for supply chain resilience. Educational institutions and industry Collaboration among a wide array of organizations is helping to drive the development of defining system of systems problem class and methodology for modeling and analysis of system of systems problems. There are ongoing projects throughout many commercial entities, research institutions, academic programs, and government agencies. 
Major stakeholders in the development of this concept are: Universities working on system of systems problems, including Purdue University, the Georgia Institute of Technology, Old Dominion University, George Mason University, the University of New Mexico, the Massachusetts Institute of Technology, Naval Postgraduate School and Carnegie Mellon University. Corporations active in this area of research such as The MITRE Corporation, Airbus, BAE Systems, Northrop Grumman, Boeing, Raytheon, Thales Group, CAE Inc., Altair Engineering, Saber Astronautics and Lockheed Martin. Government agencies that perform and support research in systems of systems research and applications, such as DARPA, the U.S. Federal Aviation Administration, NASA and Department of Defense (DoD) For example, DoD recently established the National Centers for System of Systems Engineering to develop a formal methodology for system-of-systems engineering for applications in defense-related projects. In another example, according to the Exploration Systems Architecture Study, NASA established the Exploration Systems Mission Directorate (ESMD) organization to lead the development of a new exploration "system-of-systems" to accomplish the goals outlined by President G.W. Bush in the 2004 Vision for Space Exploration. A number of research projects and support actions, sponsored by the European Commission, were performed in the Seventh Framework Programme. These target Strategic Objective IST-2011.3.3 in the FP7 ICT Work Programme (New paradigms for embedded systems, monitoring and control towards complex systems engineering). This objective had a specific focus on the "design, development and engineering of System-of-Systems". These projects included: T-AREA-SoS (Trans-Atlantic Research and Education Agenda on Systems of Systems), which aims "to increase European competitiveness in, and improve the societal impact of, the development and management of large complex systems in a range of sectors through the creation of a commonly agreed EU-US Systems of Systems (SoS) research agenda". COMPASS (Comprehensive Modelling for Advanced Systems of Systems), aiming to provide a semantic foundation and open tools framework to allow complex SoSs to be successfully and cost-effectively engineered, using methods and tools that promote the construction and early analysis of models. DANSE (Designing for Adaptability and evolutioN in System of systems Engineering), which aims to develop "a new methodology to support evolving, adaptive and iterative System of Systems life-cycle models based on a formal semantics for SoS inter-operations and supported by novel tools for analysis, simulation, and optimisation". ROAD2SOS (Roadmaps for System-of-System Engineering), aiming to develop "strategic research and engineering roadmaps in Systems of Systems Engineering and related case studies". DYMASOS (DYnamic MAnagement of physically-coupled Systems Of Systems), aiming to develop theoretical approaches and engineering tools for dynamic management of SoS based on industrial use cases. AMADEOS (Architecture for Multi-criticality Agile Dependable Evolutionary Open System-of-Systems) aiming to bring time awareness and evolution into the design of System-of- Systems (SoS) with possible emergent behavior, to establish a sound conceptual model, a generic architectural framework and a design methodology. 
Ongoing European projects which are using a System of Systems approach include: Arctic PASSION (Pan-Arctic observing System of Systems: Implementing Observations for societal Needs; July 2021 - June 2025) is a Horizon 2020 research project with the key motivation of co-creating and implementing a coherent, integrated Arctic observing system: the Pan-Arctic Observing System of Systems - pan-AOSS. The project aims to overcome shortcomings in the present observing system by refining its operability, improving and extending pan-Arctic scientific and community-based monitoring and the integration with indigenous and local knowledge. COLOSSUS (Collaborative System of Systems Exploration of Aviation Products, Services and Business Models; Feb 2023 - Jan 2026) is a Horizon Europe research project for the development of a system-of-systems design methodology which for the first time will enable the combined optimization of aircraft, operations and business models. The project aims at establishing a transformative digital collaborative (TDC) framework to enable European aviation to conduct research, technology development, and innovation. The TDC framework will support the simulation, analysis, optimization and evaluation of complex products and services in real-world scenarios. See also Inheritance Software library Object-oriented programming Model-based systems engineering Enterprise systems engineering Complex adaptive system Systems architecture Process architecture Software architecture Enterprise architecture Ultra-large-scale systems Department of Defense Architecture Framework New Cybernetics References Further reading Yaneer Bar-Yam et al. (2004) "The Characteristics and Emerging Behaviors of System-of-Systems" in: NECSI: Complex Physical, Biological and Social Systems Project, January 7, 2004. Kenneth E. Boulding (1954) "General Systems Theory - The Skeleton of Science," Management Science, Vol. 2, No. 3, ABI/INFORM Global, pp. 197–208. Crossley, W.A., System-of-Systems:, Introduction of Purdue University Schools of Engineering's Signature Area. Mittal, S., Martin, J.L.R. (2013) Netcentric System of Systems Engineering with DEVS Unified Process, CRC Press, Boca Raton, FL DeLaurentis, D. "Understanding Transportation as a System of Systems Design Problem," 43rd AIAA Aerospace Sciences Meeting, Reno, Nevada, January 10–13, 2005. AIAA-2005-0123. J. Lewe, D. Mavris, Foundation for Study of Future Transportation Systems Through Agent-Based Simulation}, in: Proceedings of 24th International Congress of the Aeronautical Sciences (ICAS), Yokohama, Japan, August 2004. Session 8.1. Held, J.M.,The Modelling of Systems of Systems, PhD Thesis, University of Sydney, 2008 D. Luzeaux & J.R. Ruault, "Systems of Systems", ISTE Ltd and John Wiley & Sons Inc, 2010 D. Luzeaux, J.R. Ruault & J.L. Wippler, "Complex Systems and Systems of Systems Engineering", ISTE Ltd and John Wiley & Sons Inc, 2011 Popper, S., Bankes, S., Callaway, R., and DeLaurentis, D. (2004) System-of-Systems Symposium: Report on a Summer Conversation, July 21–22, 2004, Potomac Institute for Policy Studies, Arlington, VA. External links System of Systems - video IBM IEEE International Conference on System of Systems Engineering (SoSE) System of Systems Engineering Center of Excellence System of Systems, Systems Engineering Guide (USD AT&L Aug 2008) International Journal of System of Systems Engineering (IJSSE) Systems engineering Systems theory
System of systems
[ "Engineering" ]
2,496
[ "Systems engineering" ]
842,834
https://en.wikipedia.org/wiki/Alquist%20Priolo%20Special%20Studies%20Zone%20Act
The Alquist-Priolo Earthquake Fault Zoning Act was signed into California law on December 22, 1972, to mitigate the hazard of surface faulting to structures for human occupancy. The act in its current form has three main provisions: 1) It directs the state's California Geological Survey agency (then known as the California Division of Mines and Geology) to compile detailed maps of the surface traces of known active faults. These maps include both the best known location where faults cut the surface and a buffer zone around the known trace(s); 2) It requires property owners (or their real estate agents) to formally and legally disclose that their property lies within the zones defined on those maps before selling the property; and 3) It prohibits new construction of houses within these zones unless a comprehensive geologic investigation shows that the fault does not pose a hazard to the proposed structure. The act was one of several that changed building codes and practices to improve earthquake safety. These changes are intended to reduce the damage from future earthquakes. Background This state law was a direct result of the 1971 San Fernando earthquake (also called the 'Sylmar earthquake'), which was associated with extensive surface fault ruptures that damaged numerous homes, commercial buildings, and other structures. Surface rupture is the most easily avoided seismic hazard. In January 1972, Governor Ronald Reagan established the Governor’s Earthquake Council as a reaction to the Sylmar quake, whose recommendations led to the act. It went into effect on March 7, 1973. The law was amended a few years later to include a disclosure obligation for real estate licensees. The act was called the Alquist-Priolo State Special Studies Zone Act prior to 1994. The act was amended September 26, 1974; May 4, 1975; September 28, 1975; September 22, 1976; September 27, 1979; September 21, 1990; and July 29, 1991. Earthquake hazards Earthquakes happen when two blocks of the Earth's crust move relative to one another. The place where the blocks meet is called a fault, and faults tend to show up as relatively straight lines on maps. Any structure built directly on top of the fault will be torn in two when the blocks move. Constructing a building to withstand this sort of movement (often several feet in a matter of seconds) is not practical, so it is best to avoid building directly on top of an active fault. The law requires the California State Geologist to establish regulatory zones (known as Earthquake Fault Zones) around the surface traces of active faults and to issue appropriate maps. ("Earthquake Fault Zones" were called "Special Studies Zones" prior to January 1, 1994.) The maps are distributed to all affected cities, counties, and state agencies for their use in planning and controlling new or renewed construction. Local agencies must regulate most development projects within the zones. Limitations Buildings built before 1972 may still lie on top of active faults, and those buildings can remain where they were originally built, unless they undergo a major remodel where more than 50% of the building changes. When that happens, they are treated the same as new construction (a geologic investigation must be undertaken and the hazard mitigated before a building permit can be issued). Projects include all land divisions and most structures for human occupancy. Single family wood-frame and steel-frame dwellings up to two stories not part of a development of four units or more are exempt. 
However, local agencies can be more restrictive than state law requires. While the act mandates that owners disclose the fact that their property lies within the Alquist-Priolo zone when they sell it, there are no legal requirements to disclose the fact to renters living on the property. Renters should investigate the location of active faults on their own before signing a lease or rental agreement. Legally, the act only applies to structures for human occupancy (houses, apartments, condominiums, etc.). However, the official geologic maps delineating the fault zones are used to help place a variety of structures on safe ground. For example, the Belmont Learning Center, a large school complex in Downtown Los Angeles was complicated by the discovery of a surface fault on the property in 2002. The Los Angeles Unified School District was required to remove a building that had been built directly atop the fault prior to its discovery. The act only addresses the hazard of surface fault rupture and does not address other earthquake hazards. The Seismic Hazards Mapping Act, passed in 1990, addresses non-surface fault rupture earthquake hazards, including liquefaction and seismically induced landslides. The act only applies to faults which are "sufficiently active" and "well defined"- for example the 1994 Northridge earthquake occurred on a blind thrust fault not zoned by the act because of a lack of surface evidence. References California Department of Conservation, CGS, Special Publication 42, Revised 2018 External links California Geological Survey page on Alquist-Priolo California Geological Survey Special Publication 42 Text of the Alquist Priolo Act Alquist Priolo Earthquake Fault Zoning Act – California Geological Survey California statutes . Earthquake and seismic risk mitigation 1972 in American law 1972 in California California Geological Survey Disaster preparedness in the United States
Alquist Priolo Special Studies Zone Act
[ "Engineering" ]
1,058
[ "Structural engineering", "Earthquake and seismic risk mitigation" ]
843,211
https://en.wikipedia.org/wiki/Fracture%20mechanics
Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of analytical solid mechanics to calculate the driving force on a crack and those of experimental solid mechanics to characterize the material's resistance to fracture. Theoretically, the stress ahead of a sharp crack tip becomes infinite and cannot be used to describe the state around a crack. Fracture mechanics is used to characterise the loads on a crack, typically using a single parameter to describe the complete loading state at the crack tip. A number of different parameters have been developed. When the plastic zone at the tip of the crack is small relative to the crack length the stress state at the crack tip is the result of elastic forces within the material and is termed linear elastic fracture mechanics (LEFM) and can be characterised using the stress intensity factor . Although the load on a crack can be arbitrary, in 1957 G. Irwin found any state could be reduced to a combination of three independent stress intensity factors: Mode I – Opening mode (a tensile stress normal to the plane of the crack), Mode II – Sliding mode (a shear stress acting parallel to the plane of the crack and perpendicular to the crack front), and Mode III – Tearing mode (a shear stress acting parallel to the plane of the crack and parallel to the crack front). When the size of the plastic zone at the crack tip is too large, elastic-plastic fracture mechanics can be used with parameters such as the J-integral or the crack tip opening displacement. The characterising parameter describes the state of the crack tip which can then be related to experimental conditions to ensure similitude. Crack growth occurs when the parameters typically exceed certain critical values. Corrosion may cause a crack to slowly grow when the stress corrosion stress intensity threshold is exceeded. Similarly, small flaws may result in crack growth when subjected to cyclic loading. Known as fatigue, it was found that for long cracks, the rate of growth is largely governed by the range of the stress intensity experienced by the crack due to the applied loading. Fast fracture will occur when the stress intensity exceeds the fracture toughness of the material. The prediction of crack growth is at the heart of the damage tolerance mechanical design discipline. Motivation The processes of material manufacture, processing, machining, and forming may introduce flaws in a finished mechanical component. Arising from the manufacturing process, interior and surface flaws are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe (that is, do not grow) and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible to achieve through damage tolerance analysis the safe operation of a structure. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new. Fracture mechanics should attempt to provide quantitative answers to the following questions: What is the strength of the component as a function of crack size? What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size? How long does it take for a crack to grow from a certain initial size, for example the minimum detectable crack size, to the maximum permissible crack size? 
What is the service life of a structure when a certain pre-existing flaw size (e.g. a manufacturing defect) is assumed to exist? During the period available for crack detection how often should the structure be inspected for cracks? Linear elastic fracture mechanics Griffith's criterion Fracture mechanics was developed during World War I by English aeronautical engineer A. A. Griffith – thus the term Griffith crack – to explain the failure of brittle materials. Griffith's work was motivated by two contradictory facts: The stress needed to fracture bulk glass is around 100 MPa. The theoretical stress needed for breaking atomic bonds of glass is approximately 10,000 MPa. A theory was needed to reconcile these conflicting observations. Also, experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size-dependence of strength, was due to the presence of microscopic flaws in the bulk material. To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens. The artificial flaw was in the form of a surface crack which was much larger than other flaws in a specimen. The experiments showed that the product of the square root of the flaw length ($a$) and the stress at fracture ($\sigma_f$) was nearly constant, which is expressed by the equation: $\sigma_f \sqrt{a} \approx C$. An explanation of this relation in terms of linear elasticity theory is problematic. Linear elasticity theory predicts that stress (and hence the strain) at the tip of a sharp flaw in a linear elastic material is infinite. To avoid that problem, Griffith developed a thermodynamic approach to explain the relation that he observed. The growth of a crack, the extension of the surfaces on either side of the crack, requires an increase in the surface energy. Griffith found an expression for the constant $C$ in terms of the surface energy of the crack by solving the elasticity problem of a finite crack in an elastic plate. Briefly, the approach was: Compute the potential energy stored in a perfect specimen under a uniaxial tensile load. Fix the boundary so that the applied load does no work and then introduce a crack into the specimen. The crack relaxes the stress and hence reduces the elastic energy near the crack faces. On the other hand, the crack increases the total surface energy of the specimen. Compute the change in the free energy (surface energy − elastic energy) as a function of the crack length. Failure occurs when the free energy attains a peak value at a critical crack length, beyond which the free energy decreases as the crack length increases, i.e. by causing fracture. Using this procedure, Griffith found that $C = \sqrt{\frac{2 E \gamma}{\pi}}$ where $E$ is the Young's modulus of the material and $\gamma$ is the surface energy density of the material. Assuming $E = 62\,\text{GPa}$ and $\gamma = 1\,\text{J/m}^2$ gives excellent agreement of Griffith's predicted fracture stress with experimental results for glass. For the simple case of a thin rectangular plate with a crack perpendicular to the load, the energy release rate, $G$, becomes: $G = \frac{\pi \sigma^2 a}{E}$ where $\sigma$ is the applied stress, $a$ is half the crack length, and $E$ is the Young's modulus, which for the case of plane strain should be divided by the plate stiffness factor $(1 - \nu^2)$.
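The sketch below evaluates these relations in Python for a glass-like material, using the E and γ values quoted above; the flaw size is an illustrative assumption. At the predicted fracture stress the energy release rate comes out equal to 2γ, which is the content of the criterion.

```python
import math

# Griffith's criterion for a brittle plate with a centre crack of half-length a:
#   sigma_f = sqrt(2*E*gamma / (pi*a))      (fracture stress)
#   G       = pi * sigma**2 * a / E         (energy release rate, plane stress)
# The flaw size below is an illustrative assumption, not a measurement from
# the article; E and gamma match the glass-like values quoted above.
E     = 62e9     # Young's modulus, Pa
GAMMA = 1.0      # surface energy density, J/m^2

def griffith_stress(a):
    """Fracture stress (Pa) for a crack of half-length a (m)."""
    return math.sqrt(2.0 * E * GAMMA / (math.pi * a))

def energy_release_rate(sigma, a):
    """Energy release rate G (J/m^2) at applied stress sigma (Pa)."""
    return math.pi * sigma**2 * a / E

a = 1e-6  # a 1-micrometre flaw
sigma_f = griffith_stress(a)
print(f"fracture stress  ~ {sigma_f/1e6:.0f} MPa")
print(f"G at that stress ~ {energy_release_rate(sigma_f, a):.2f} J/m^2")
```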
The strain energy release rate can physically be understood as: the rate at which energy is absorbed by growth of the crack. However, we also have that: $G_c = \frac{\pi \sigma_f^2 a}{E}$. If $G \geq G_c$, this is the criterion for which the crack will begin to propagate. For materials highly deformed before crack propagation, the linear elastic fracture mechanics formulation is no longer applicable and an adapted model is necessary to describe the stress and displacement field close to crack tip, such as on fracture of soft materials. Irwin's modification Griffith's work was largely ignored by the engineering community until the early 1950s. The reasons for this appear to be (a) in the actual structural materials the level of energy needed to cause fracture is orders of magnitude higher than the corresponding surface energy, and (b) in structural materials there are always some inelastic deformations around the crack front that would make the assumption of linear elastic medium with infinite stresses at the crack tip highly unrealistic. Griffith's theory provides excellent agreement with experimental data for brittle materials such as glass. For ductile materials such as steel, although the relation $\sigma_f \sqrt{a} = C$ still holds, the surface energy (γ) predicted by Griffith's theory is usually unrealistically high. A group working under G. R. Irwin at the U.S. Naval Research Laboratory (NRL) during World War II realized that plasticity must play a significant role in the fracture of ductile materials. In ductile materials (and even in materials that appear to be brittle), a plastic zone develops at the tip of the crack. As the applied load increases, the plastic zone increases in size until the crack grows and the elastically strained material behind the crack tip unloads. The plastic loading and unloading cycle near the crack tip leads to the dissipation of energy as heat. Hence, a dissipative term has to be added to the energy balance relation devised by Griffith for brittle materials. In physical terms, additional energy is needed for crack growth in ductile materials as compared to brittle materials. Irwin's strategy was to partition the energy into two parts: the stored elastic strain energy which is released as a crack grows. This is the thermodynamic driving force for fracture. the dissipated energy which includes plastic dissipation and the surface energy (and any other dissipative forces that may be at work). The dissipated energy provides the thermodynamic resistance to fracture. Then the total energy is: $G = 2\gamma + G_p$ where $\gamma$ is the surface energy and $G_p$ is the plastic dissipation (and dissipation from other sources) per unit area of crack growth. The modified version of Griffith's energy criterion can then be written as $\sigma_f \sqrt{a} = \sqrt{\frac{E G}{\pi}}$. For brittle materials such as glass, the surface energy term dominates and $G \approx 2\gamma = 2\,\text{J/m}^2$. For ductile materials such as steel, the plastic dissipation term dominates and $G \approx G_p = 1000\,\text{J/m}^2$. For polymers close to the glass transition temperature, we have intermediate values of $G$ between 2 and 1000 $\text{J/m}^2$. Stress intensity factor Another significant achievement of Irwin and his colleagues was to find a method of calculating the amount of energy available for fracture in terms of the asymptotic stress and displacement fields around a crack front in a linear elastic solid. This asymptotic expression for the stress field in mode I loading is related to the stress intensity factor $K_I$ following: $\sigma_{ij} = \frac{K_I}{\sqrt{2 \pi r}}\, f_{ij}(\theta)$ where $\sigma_{ij}$ are the Cauchy stresses, $r$ is the distance from the crack tip, $\theta$ is the angle with respect to the plane of the crack, and $f_{ij}$ are functions that depend on the crack geometry and loading conditions.
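To make the 1/√r character of this field concrete, the short sketch below evaluates the opening stress on the crack plane (θ = 0, where the angular function equals one) at several distances ahead of the tip. The K_I value is an illustrative assumption, not a figure from this article.

```python
import math

# Evaluate the mode I near-tip field on the crack plane (theta = 0), where the
# angular functions reduce to 1, so sigma_yy ~ K_I / sqrt(2*pi*r).
# The K_I below is an illustrative value, not one taken from the article.
K_I = 30e6  # stress intensity factor, Pa*sqrt(m)  (= 30 MPa*sqrt(m))

def sigma_yy(r):
    """Asymptotic opening stress (Pa) at distance r (m) ahead of the tip."""
    return K_I / math.sqrt(2.0 * math.pi * r)

for r in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"r = {r:>7.0e} m  ->  sigma_yy ~ {sigma_yy(r)/1e6:8.1f} MPa")
```

Each factor-of-ten step closer to the tip raises the predicted stress by roughly a factor of three, which is the square-root singularity in action.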
Irwin called the quantity K the stress intensity factor. Since the quantity fij is dimensionless, the stress intensity factor can be expressed in units of MPa·√m. Stress intensity replaced strain energy release rate and a term called fracture toughness replaced surface weakness energy. Both of these terms are simply related to the energy terms that Griffith used: KI = √(GE) for plane stress and KI = √(GE/(1 − ν²)) for plane strain, where KI is the mode I stress intensity, Kc the fracture toughness, and ν is Poisson's ratio. Fracture occurs when KI ≥ Kc. For the special case of plane strain deformation, Kc becomes KIc and is considered a material property. The subscript I arises because of the different ways of loading a material to enable a crack to propagate. It refers to so-called "mode I" loading as opposed to mode II or III: The expression for KI will be different for geometries other than the center-cracked infinite plate, as discussed in the article on the stress intensity factor. Consequently, it is necessary to introduce a dimensionless correction factor, Y, in order to characterize the geometry. This correction factor, also often referred to as the geometric shape factor, is given by empirically determined series and accounts for the type and geometry of the crack or notch. We thus have: KI = Y σ √(πa), where Y is a function of the crack length and width of sheet given, for a sheet of finite width W containing a through-thickness crack of length 2a, by: Y(a/W) = √(sec(πa/W)). Strain energy release Irwin was the first to observe that if the size of the plastic zone around a crack is small compared to the size of the crack, the energy required to grow the crack will not be critically dependent on the state of stress (the plastic zone) at the crack tip. In other words, a purely elastic solution may be used to calculate the amount of energy available for fracture. The energy release rate for crack growth or strain energy release rate may then be calculated as the change in elastic strain energy per unit area of crack growth, i.e., G = [∂U/∂a]P = −[∂U/∂a]u, where U is the elastic energy of the system and a is the crack length. Either the load P or the displacement u is held constant while evaluating the above expressions. Irwin showed that for a mode I crack (opening mode) the strain energy release rate and the stress intensity factor are related by: G = GI = KI²/E for plane stress and G = GI = KI²(1 − ν²)/E for plane strain, where E is the Young's modulus, ν is Poisson's ratio, and KI is the stress intensity factor in mode I. Irwin also showed that the strain energy release rate of a planar crack in a linear elastic body can be expressed in terms of the mode I, mode II (sliding mode), and mode III (tearing mode) stress intensity factors for the most general loading conditions. Next, Irwin adopted the additional assumption that the size and shape of the energy dissipation zone remains approximately constant during brittle fracture. This assumption suggests that the energy needed to create a unit fracture surface is a constant that depends only on the material. This new material property was given the name fracture toughness and designated GIc. Today, it is the critical stress intensity factor KIc, found in the plane strain condition, which is accepted as the defining property in linear elastic fracture mechanics. Crack tip plastic zone In theory, the stress at the crack tip, where the radius is nearly zero, would tend to infinity. This would be considered a stress singularity, which is not possible in real-world applications.
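A short sketch of how the geometry-corrected expression KI = Y σ √(πa) is used in practice is given below: it scans crack sizes until the computed KI reaches an assumed fracture toughness. The secant finite-width correction used here is one commonly quoted approximation for a centre-cracked plate, and all numerical inputs are placeholders rather than values from the text.

```python
import math

def K_I(sigma, a, W):
    """Mode I stress intensity for a centre crack of half-length a in a plate of
    width W, using the secant finite-width correction as an illustrative Y(a/W)."""
    Y = math.sqrt(1.0 / math.cos(math.pi * a / W))
    return Y * sigma * math.sqrt(math.pi * a)

# Assumed inputs (placeholders, not values from the text):
sigma = 200e6    # applied stress, Pa
W = 0.10         # plate width, m
K_Ic = 50e6      # fracture toughness, Pa*sqrt(m)

# Scan crack sizes and report the first half-length that reaches K_Ic.
a = 1e-4
while K_I(sigma, a, W) < K_Ic and a < 0.45 * W:
    a += 1e-4
print(f"Estimated critical half crack length: {a*1e3:.1f} mm")
```

This kind of calculation is the basis of the damage-tolerance questions listed at the start of the section: given a load and a toughness, how large a flaw can be tolerated before fracture.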
For this reason, in numerical studies in the field of fracture mechanics, it is often appropriate to represent cracks as round tipped notches, with a geometry dependent region of stress concentration replacing the crack-tip singularity. In actuality, the stress concentration at the tip of a crack within real materials has been found to have a finite value but larger than the nominal stress applied to the specimen. Nevertheless, there must be some sort of mechanism or property of the material that prevents such a crack from propagating spontaneously. The assumption is, the plastic deformation at the crack tip effectively blunts the crack tip. This deformation depends primarily on the applied stress in the applicable direction (in most cases, this is the y-direction of a regular Cartesian coordinate system), the crack length, and the geometry of the specimen. To estimate how this plastic deformation zone extends from the crack tip, Irwin equated the yield strength of the material to the far-field stresses of the y-direction along the crack (x direction) and solved for the effective radius. From this relationship, and assuming that the crack is loaded to the critical stress intensity factor, Irwin developed the following expression for the idealized radius of the zone of plastic deformation at the crack tip: rp = KC²/(2πσY²). Models of ideal materials have shown that this zone of plasticity is centered at the crack tip. This equation gives the approximate ideal radius of the plastic zone deformation beyond the crack tip, which is useful to many structural scientists because it gives a good estimate of how the material behaves when subjected to stress. In the above equation, the parameters of the stress intensity factor and indicator of material toughness, KC, and the yield stress, σY, are of importance because they illustrate many things about the material and its properties, as well as about the plastic zone size. For example, if KC is high, then it can be deduced that the material is tough, and if σY is low, one knows that the material is more ductile. The ratio of these two parameters is important to the radius of the plastic zone. For instance, if σY is small, then the squared ratio of KC to σY is large, which results in a larger plastic radius. This implies that the material can plastically deform, and, therefore, is tough. This estimate of the size of the plastic zone beyond the crack tip can then be used to more accurately analyze how a material will behave in the presence of a crack. The same process as described above for a single event loading also applies to cyclic loading. If a crack is present in a specimen that undergoes cyclic loading, the specimen will plastically deform at the crack tip and delay the crack growth. In the event of an overload or excursion, this model changes slightly to accommodate the sudden increase in stress from that which the material previously experienced. At a sufficiently high load (overload), the crack grows out of the plastic zone that contained it and leaves behind the pocket of the original plastic deformation. Now, assuming that the overload stress is not sufficiently high as to completely fracture the specimen, the crack will undergo further plastic deformation around the new crack tip, enlarging the zone of residual plastic stresses. This process further toughens and prolongs the life of the material because the new plastic zone is larger than what it would be under the usual stress conditions. This allows the material to undergo more cycles of loading.
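The sketch below evaluates Irwin's plastic zone estimate rp = KC²/(2πσY²) for two assumed property pairs to show how the KC/σY ratio controls the zone size; the numbers are illustrative placeholders, not data from the text.

```python
import math

def irwin_plastic_zone_radius(K_C, sigma_Y):
    """Irwin's estimate of the plastic zone radius ahead of the crack tip:
    r_p = K_C^2 / (2 * pi * sigma_Y^2)."""
    return K_C**2 / (2 * math.pi * sigma_Y**2)

# Illustrative, assumed property pairs (toughness in Pa*sqrt(m), yield stress in Pa):
materials = {
    "high-toughness structural steel (assumed)": (150e6, 350e6),
    "high-strength aluminium alloy (assumed)":   (30e6,  500e6),
}
for name, (K_C, sigma_Y) in materials.items():
    r_p = irwin_plastic_zone_radius(K_C, sigma_Y)
    print(f"{name}: r_p ~ {r_p*1e3:.2f} mm")
# A large K_C / sigma_Y ratio gives a large plastic zone, i.e. more ductile,
# tougher behaviour; a small ratio gives a small zone and near-brittle response.
```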
This idea can be illustrated further by the graph of an aluminum specimen with a center crack undergoing overloading events. Limitations But a problem arose for the NRL researchers because naval materials, e.g., ship-plate steel, are not perfectly elastic but undergo significant plastic deformation at the tip of a crack. One basic assumption in Irwin's linear elastic fracture mechanics is small scale yielding, the condition that the size of the plastic zone is small compared to the crack length. However, this assumption is quite restrictive for certain types of failure in structural steels though such steels can be prone to brittle fracture, which has led to a number of catastrophic failures. Linear-elastic fracture mechanics is of limited practical use for structural steels, and fracture toughness testing can be expensive. Elastic–plastic fracture mechanics Most engineering materials show some nonlinear elastic and inelastic behavior under operating conditions that involve large loads. In such materials the assumptions of linear elastic fracture mechanics may not hold, that is, the plastic zone at a crack tip may have a size of the same order of magnitude as the crack size, and the size and shape of the plastic zone may change as the applied load is increased and also as the crack length increases. Therefore, a more general theory of crack growth is needed for elastic-plastic materials that can account for: the local conditions for initial crack growth which include the nucleation, growth, and coalescence of voids (decohesion) at a crack tip. a global energy balance criterion for further crack growth and unstable fracture. CTOD Historically, the first parameter for the determination of fracture toughness in the elasto-plastic region was the crack tip opening displacement (CTOD), or "opening at the apex of the crack", indicated δt. This parameter was determined by Wells during the studies of structural steels, which due to the high toughness could not be characterized with the linear elastic fracture mechanics model. He noted that, before the fracture happened, the walls of the crack were moving apart and that the crack tip, after fracture, ranged from acute to rounded off due to plastic deformation. In addition, the rounding of the crack tip was more pronounced in steels with superior toughness. There are a number of alternative definitions of CTOD. In the two most common definitions, CTOD is the displacement at the original crack tip and the 90 degree intercept. The latter definition was suggested by Rice and is commonly used to infer CTOD in finite element models. Note that these two definitions are equivalent if the crack tip blunts in a semicircle. Most laboratory measurements of CTOD have been made on edge-cracked specimens loaded in three-point bending. Early experiments used a flat paddle-shaped gage that was inserted into the crack; as the crack opened, the paddle gage rotated, and an electronic signal was sent to an x-y plotter. This method was inaccurate, however, because it was difficult to reach the crack tip with the paddle gage. Today, the displacement V at the crack mouth is measured, and the CTOD is inferred by assuming the specimen halves are rigid and rotate about a hinge point (the crack tip). R-curve An early attempt in the direction of elastic-plastic fracture mechanics was Irwin's crack extension resistance curve, crack growth resistance curve or R-curve. This curve acknowledges the fact that the resistance to fracture increases with growing crack size in elastic-plastic materials.
The R-curve is a plot of the total energy dissipation rate as a function of the crack size and can be used to examine the processes of slow stable crack growth and unstable fracture. However, the R-curve was not widely used in applications until the early 1970s. The main reasons appear to be that the R-curve depends on the geometry of the specimen and the crack driving force may be difficult to calculate. J-integral In the mid-1960s James R. Rice (then at Brown University) and G. P. Cherepanov independently developed a new toughness measure to describe the case where there is sufficient crack-tip deformation that the part no longer obeys the linear-elastic approximation. Rice's analysis, which assumes non-linear elastic (or monotonic deformation theory plastic) deformation ahead of the crack tip, is designated the J-integral. This analysis is limited to situations where plastic deformation at the crack tip does not extend to the furthest edge of the loaded part. It also demands that the assumed non-linear elastic behavior of the material is a reasonable approximation in shape and magnitude to the real material's load response. The elastic-plastic failure parameter is designated JIc and is conventionally converted to KIc using the equation below. Also note that the J integral approach reduces to the Griffith theory for linear-elastic behavior. The mathematical definition of the J-integral is as follows: J = ∫Γ (w dy − Ti ∂ui/∂x ds), where Γ is an arbitrary path clockwise around the apex of the crack, w = ∫σij dεij is the density of strain energy, Ti are the components of the traction vectors, ui are the components of the displacement vectors, ds is an incremental length along the path Γ, and σij and εij are the stress and strain tensors. Since engineers became accustomed to using KIc to characterise fracture toughness, a relation KJc = √(E* JIc) has been used to reduce JIc to it, where E* = E for plane stress and E* = E/(1 − ν²) for plane strain. Cohesive zone model When a significant region around a crack tip has undergone plastic deformation, other approaches can be used to determine the possibility of further crack extension and the direction of crack growth and branching. A simple technique that is easily incorporated into numerical calculations is the cohesive zone model method which is based on concepts proposed independently by Barenblatt and Dugdale in the early 1960s. The relationship between the Dugdale-Barenblatt models and Griffith's theory was first discussed by Willis in 1967. The equivalence of the two approaches in the context of brittle fracture was shown by Rice in 1968. Transition flaw size Let a material have a yield strength σY and a fracture toughness in mode I, KIc. Based on fracture mechanics, the material will fail at stress σfail = KIc/√(πa). Based on plasticity, the material will yield when σfail = σY. These curves intersect when a = KIc²/(πσY²). This value of a is called the transition flaw size at, and depends on the material properties of the structure. When a < at, the failure is governed by plastic yielding, and when a > at the failure is governed by fracture mechanics. The value of at for engineering alloys is 100 mm and for ceramics is 0.001 mm. If we assume that manufacturing processes can give rise to flaws in the order of micrometers, then, it can be seen that ceramics are more likely to fail by fracture, whereas engineering alloys would fail by plastic deformation. Concrete fracture analysis Concrete fracture analysis is part of fracture mechanics that studies crack propagation and related failure modes in concrete.
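Before moving on to concrete, a short sketch ties together two of the relations just given: converting a measured JIc to an equivalent stress intensity, and evaluating the transition flaw size at = KIc²/(πσY²). All numerical inputs are assumed, steel-like placeholders rather than values from the text.

```python
import math

def K_from_J(J_Ic, E, nu, plane_strain=True):
    """Convert a critical J value to an equivalent stress intensity:
    K = sqrt(E* J_Ic), with E* = E (plane stress) or E/(1 - nu^2) (plane strain)."""
    E_star = E / (1 - nu**2) if plane_strain else E
    return math.sqrt(E_star * J_Ic)

def transition_flaw_size(K_Ic, sigma_Y):
    """Crack size at which the fracture stress K_Ic/sqrt(pi*a) equals the yield stress."""
    return (K_Ic / sigma_Y)**2 / math.pi

# Assumed example values (placeholders):
E, nu = 200e9, 0.3          # steel-like elastic constants
J_Ic = 100e3                # critical J, J/m^2
sigma_Y = 400e6             # yield strength, Pa

K_Jc = K_from_J(J_Ic, E, nu)
print(f"K_Jc ~ {K_Jc/1e6:.0f} MPa*sqrt(m)")
print(f"transition flaw size ~ {transition_flaw_size(K_Jc, sigma_Y)*1e3:.1f} mm")
# Flaws smaller than the transition size drive failure by yielding;
# larger flaws drive failure by fracture.
```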
As it is widely used in construction, fracture analysis and modes of reinforcement are an important part of the study of concrete, and different concretes are characterized in part by their fracture properties. Common fractures include the cone-shaped fractures that form around anchors under tensile strength. Bažant (1983) proposed a crack band model for materials like concrete whose homogeneous nature changes randomly over a certain range. He also observed that in plain concrete, the size effect has a strong influence on the critical stress intensity factor, and proposed the relation = / √(1+{/}),Bažant, Z. P., and Pang, S.-D. (2006) "Mechanics based statistics of failure risk of quasibrittle structures and size effect on safety factors". Proc. Nat'l Acad. Sci., USA 103 (25), pp. 9434–9439 where = stress intensity factor, = tensile strength, = size of specimen, = maximum aggregate size, and = an empirical constant. Atomistic Fracture Mechanics Atomistic Fracture Mechanics (AFM) is a relatively new field that studies the behavior and properties of materials at the atomic scale when subjected to fracture. It integrates concepts from fracture mechanics with atomistic simulations to understand how cracks initiate, propagate, and interact with the microstructure of materials. By using techniques like Molecular Dynamics (MD) simulations, AFM can provide insights into the fundamental mechanisms of crack formation and growth, the role of atomic bonds, and the influence of material defects and impurities on fracture behavior. See also – Fracture mechanics and fatigue crack growth analysis software , a formulation of continuum mechanics that is oriented toward deformations with discontinuities, especially fractures References Further reading Buckley, C.P. "Material Failure", Lecture Notes (2005), University of Oxford. Davidge, R.W., Mechanical Behavior of Ceramics, Cambridge Solid State Science Series, (1979) Demaid, Adrian, Fail Safe, Open University (2004) Green, D., An Introduction to the Mechanical Properties of Ceramics, Cambridge Solid State Science Series, Eds. Clarke, D.R., Suresh, S., Ward, I.M. (1998) Lawn, B.R., Fracture of Brittle Solids, Cambridge Solid State Science Series, 2nd Edn. (1993) Farahmand, B., Bockrath, G., and Glassco, J. (1997) Fatigue and Fracture Mechanics of High-Risk Parts, Chapman & Hall. . Chen, X., Mai, Y.-W., Fracture Mechanics of Electromagnetic Materials: Nonlinear Field Theory and Applications, Imperial College Press, (2012) A.N. Gent, W.V. Mars, In: James E. Mark, Burak Erman and Mike Roland, Editor(s), Chapter 10 – Strength of Elastomers, The Science and Technology of Rubber, Fourth edition, Academic Press, Boston, 2013, pp. 473–516, , 10.1016/B978-0-12-394584-6.00010-8 Zehnder, Alan. Fracture Mechanics, SpringerLink, (2012). External links Nonlinear Fracture Mechanics Notes by Prof. John Hutchinson, Harvard University Notes on Fracture of Thin Films and Multilayers by Prof. John Hutchinson, Harvard University Fracture Mechanics by Piet Schreurs, TU Eindhoven, The Netherlands Glass physics Structural analysis
Fracture mechanics
[ "Physics", "Materials_science", "Engineering" ]
5,437
[ "Structural engineering", "Glass engineering and science", "Fracture mechanics", "Structural analysis", "Materials science", "Glass physics", "Condensed matter physics", "Mechanical engineering", "Aerospace engineering", "Materials degradation" ]
843,904
https://en.wikipedia.org/wiki/Tetrahedrane
Tetrahedrane is a hypothetical platonic hydrocarbon with chemical formula C4H4 and a tetrahedral structure. The molecule would be subject to considerable angle strain and has not been synthesized. However, a number of derivatives have been prepared. In a more general sense, the term tetrahedranes is used to describe a class of molecules and ions with related structure, e.g. white phosphorus. Organic tetrahedranes In 1978, Günther Maier prepared tetra-tert-butyl-tetrahedrane. The bulky tert-butyl (t-Bu) substituents envelop the tetrahedrane core. Maier suggested that bonds in the core are prevented from breaking because this would force the substituents closer together (corset effect) resulting in van der Waals strain. Tetrahedrane is one of the possible platonic hydrocarbons and has the IUPAC name tricyclo[1.1.0.02,4]butane. Unsubstituted tetrahedrane (C4H4) remains elusive, although it is predicted to be kinetically stable. One strategy that has been explored (but thus far failed) is reaction of propene with atomic carbon. Locking away a tetrahedrane molecule inside a fullerene has only been attempted in silico. Due to its bond strain and stoichiometry, tetranitrotetrahedrane has potential as a high-performance energetic material (explosive). Some properties have been calculated based on quantum chemical methods. Tetra-tert-butyltetrahedrane This compound was first synthesised starting from a cycloaddition of an alkyne with t-Bu substituted maleic anhydride, followed by rearrangement with carbon dioxide expulsion to a cyclopentadienone and its bromination, followed by addition of the fourth t-Bu group. Photochemical cheletropic elimination of carbon monoxide of the cyclopentadienone gives the target. Heating tetra-tert-butyltetrahedrane gives tetra-tert-butylcyclobutadiene. Though the synthesis appears short and simple, by Maier's own account, it took several years of careful observation and optimization to develop the correct conditions for the challenging reactions to take place. For instance, the synthesis of tetrakis(t-butyl)cyclopentadienone from the tris(t-butyl)bromocyclopentadienone (itself synthesized with much difficulty) required over 50 attempts before working conditions could be found. The synthesis was described as requiring "astonishing persistence and experimental skill" in one retrospective of the work. In a classic reference work on stereochemistry, the authors remark that "the relatively straightforward scheme shown [...] conceals both the limited availability of the starting material and the enormous amount of work required in establishing the proper conditions for each step." Eventually, a more scalable synthesis was conceived, in which the last step was the photolysis of a cyclopropenyl-substituted diazomethane, which affords the desired product through the intermediacy of tetrakis(tert-butyl)cyclobutadiene: This approach took advantage of the observation that the tetrahedrane and the cyclobutadiene could be interconverted (UV irradiation in the forward direction, heat in the reverse direction). Tetrakis(trimethylsilyl)tetrahedrane Tetrakis(trimethylsilyl)tetrahedrane can be prepared by treatment of the cyclobutadiene precursor with tris(pentafluorophenyl)borane and is far more stable than the tert-butyl analogue. The silicon–carbon bond is longer than a carbon–carbon bond, and therefore the corset effect is reduced.
Whereas the tert-butyl tetrahedrane melts at 135 °C concomitant with rearrangement to the cyclobutadiene, tetrakis(trimethylsilyl)tetrahedrane, which melts at 202 °C, is stable up to 300 °C, at which point it cracks to bis(trimethylsilyl)acetylene. The tetrahedrane skeleton is made up of banana bonds, and hence the carbon atoms are high in s-orbital character. From NMR, sp-hybridization can be deduced, normally reserved for triple bonds. As a consequence the bond lengths are unusually short, at 152 picometers. Reaction of methyllithium with tetrakis(trimethylsilyl)tetrahedrane yields tetrahedranyllithium. Coupling reactions with this lithium compound give extended structures. A bis(tetrahedrane) has also been reported. The connecting bond is even shorter, at 143.6 pm. An ordinary carbon–carbon bond has a length of 154 pm. Tetrahedranes with non-carbon cores Tetrasilatetrahedrane features a core of four silicon atoms. The standard silicon–silicon bond is much longer (235 pm) and the cage is again enveloped by a total of 16 trimethylsilyl groups, which confer stability. The silatetrahedrane can be reduced with potassium graphite to the tetrasilatetrahedranide potassium derivative. In this compound one of the silicon atoms of the cage has lost a silyl substituent and carries a negative charge. The potassium cation can be sequestered by a crown ether, and in the resulting complex potassium and the silyl anion are separated by a distance of 885 pm. One of the Si–Si bonds is now 272 pm and the tetravalent silicon atom of that bond has an inverted tetrahedral geometry. Furthermore, the four cage silicon atoms are equivalent on the NMR timescale due to migrations of the silyl substituents over the cage. The dimerization reaction observed for the carbon tetrahedrane compound has also been attempted for a tetrasilatetrahedrane. In this tetrahedrane the cage is protected by four so-called supersilyl groups in which a silicon atom has 3 tert-butyl substituents. The dimer does not materialize, but a reaction with iodine in benzene followed by reaction with the tri-tert-butylsilyl anion results in the formation of an eight-membered silicon cluster compound which can be described as a dumbbell (length 229 pm and with inversion of tetrahedral geometry) sandwiched between two almost-parallel rings. In eight-membered clusters of tin and germanium, elements in the same carbon group, the cluster atoms are located on the corners of a cube. Inorganic and organometallic tetrahedranes The tetrahedrane motif occurs broadly in chemistry. White phosphorus (P4) and yellow arsenic (As4) are examples. Several metal carbonyl clusters are referred to as tetrahedranes, e.g. tetrarhodium dodecacarbonyl. Metallatetrahedranes with a single metal (or phosphorus atom) capping a cyclopropyl trianion also exist. See also Dodecahedrane Prismane Pnictogen-substituted tetrahedranes References Cycloalkanes Cluster chemistry Hypothetical chemical compounds Tricyclic compounds Tetrahedra
Tetrahedrane
[ "Chemistry" ]
1,552
[ "Cluster chemistry", "Hypotheses in chemistry", "Theoretical chemistry", "Hypothetical chemical compounds", "Organometallic chemistry" ]
843,986
https://en.wikipedia.org/wiki/Annulene
Annulenes are monocyclic hydrocarbons that contain the maximum number of non-cumulated or conjugated double bonds ('mancude'). They have the general formula CnHn (when n is an even number) or CnHn+1 (when n is an odd number). The IUPAC accepts the use of 'annulene nomenclature' in naming carbocyclic ring systems with 7 or more carbon atoms, using the name '[n]annulene' for the mancude hydrocarbon with n carbon atoms in its ring, though in certain contexts (e.g., discussions of aromaticity for different ring sizes), smaller rings (n = 3 to 6) can also be informally referred to as annulenes. Using this form of nomenclature 1,3,5,7-cyclooctatetraene is [8]annulene and benzene is [6]annulene (and occasionally referred to as just 'annulene'). The discovery that [18]annulene possesses a number of key properties associated with other aromatic molecules was an important development in the understanding of aromaticity as a chemical concept. In the related annulynes, one double bond is replaced by a triple bond. Aromaticity Annulenes may be aromatic (benzene, [6]annulene and [18]annulene), non-aromatic ([8] and [10]annulene), or anti-aromatic (cyclobutadiene, [4]annulene). Cyclobutadiene is the only annulene with considerable antiaromaticity, since planarity is unavoidable. With [8]annulene, the molecule takes on a tub shape that allows it to avoid conjugation of double bonds. [10]Annulene is of the wrong size to achieve a planar structure: in a planar conformation, ring strain due to either steric hindrance of internal hydrogens (when some double bonds are trans) or bond angle distortion (when the double bonds are all cis) is unavoidable. Thus, it does not exhibit appreciable aromaticity. When the annulene is large enough, [18]annulene for example, there is enough room internally to accommodate hydrogen atoms without significant distortion of bond angles. [18]Annulene possesses several properties that qualify it as aromatic. However, none of the larger annulenes are as stable as benzene, as their reactivity more closely resembles that of a conjugated polyene than an aromatic hydrocarbon. In general, charged annulene species that attain a (4n + 2) π-electron count are aromatic, provided a planar conformation can be achieved. For instance, the cyclopropenyl cation (C3H3+), the cyclopentadienyl anion (C5H5−), and the tropylium cation (C7H7+) are all known aromatic species. Gallery See also Annulyne Circulene Fulvenes References External links NIST Chemistry WebBook - [18]annulene Structure of [14] and [18]annulene Physical organic chemistry
Annulene
[ "Chemistry" ]
631
[ "Physical organic chemistry" ]
844,290
https://en.wikipedia.org/wiki/Hydrogel
A hydrogel is a biphasic material, a mixture of porous and permeable solids and at least 10% of water or other interstitial fluid. The solid phase is a water insoluble three dimensional network of polymers, having absorbed a large amount of water or biological fluids. Hydrogels have several applications, especially in the biomedical area, such as in hydrogel dressing. Many hydrogels are synthetic, but some are derived from natural materials. The term "hydrogel" was coined in 1894. Chemistry Classification The crosslinks which bond the polymers of a hydrogel fall under two general categories: physical hydrogels and chemical hydrogels. Chemical hydrogels have covalent cross-linking bonds, whereas physical hydrogels have non-covalent bonds. Chemical hydrogels can result in strong reversible or irreversible gels due to the covalent bonding. Chemical hydrogels that contain reversible covalent cross-linking bonds, such as hydrogels of thiomers being cross-linked via disulfide bonds, are non-toxic and are used in numerous medicinal products. Physical hydrogels usually have high biocompatibility, are not toxic, and are also easily reversible by simply changing an external stimulus such as pH, ion concentration (alginate) or temperature (gelatine); they are also used for medical applications. Physical crosslinks consist of hydrogen bonds, hydrophobic interactions, and chain entanglements (among others). A hydrogel generated through the use of physical crosslinks is sometimes called a 'reversible' hydrogel. Chemical crosslinks consist of covalent bonds between polymer strands. Hydrogels generated in this manner are sometimes called 'permanent' hydrogels. Hydrogels are prepared using a variety of polymeric materials, which can be divided broadly into two categories according to their origin: natural or synthetic polymers. Natural polymers for hydrogel preparation include hyaluronic acid, chitosan, heparin, alginate, gelatin and fibrin. Common synthetic polymers include polyvinyl alcohol, polyethylene glycol, sodium polyacrylate, acrylate polymers and copolymers thereof. Whereas natural hydrogels are usually non-toxic, and often provide other advantages for medical use, such as biocompatibility, biodegradability, antibiotic/antifungal effect and improve regeneration of nearby tissue, their stability and strength is usually much lower than synthetic hydrogels. There are also synthetic hydrogels that can be used for medical applications, such as polyethylene glycol (PEG), polyacrylate, and polyvinylpyrrolidone (PVP). Preparation There are two suggested mechanisms behind physical hydrogel formation, the first one being the gelation of nanofibrous peptide assemblies, usually observed for oligopeptide precursors. The precursors self-assemble into fibers, tapes, tubes, or ribbons that entangle to form non-covalent cross-links. The second mechanism involves non-covalent interactions of cross-linked domains that are separated by water-soluble linkers, and this is usually observed in longer multi-domain structures. Tuning of the supramolecular interactions to produce a self-supporting network that does not precipitate, and is also able to immobilize water which is vital for to gel formation. Most oligopeptide hydrogels have a β-sheet structure, and assemble to form fibers, although α-helical peptides have also been reported. The typical mechanism of gelation involves the oligopeptide precursors self-assemble into fibers that become elongated, and entangle to form cross-linked gels. 
One notable method of initiating a polymerization reaction involves the use of light as a stimulus. In this method, photoinitiators, compounds that cleave from the absorption of photons, are added to the precursor solution which will become the hydrogel. When the precursor solution is exposed to a concentrated source of light, usually ultraviolet irradiation, the photoinitiators will cleave and form free radicals, which will begin a polymerization reaction that forms crosslinks between polymer strands. This reaction will cease if the light source is removed, allowing the amount of crosslinks formed in the hydrogel to be controlled. The properties of a hydrogel are highly dependent on the type and quantity of its crosslinks, making photopolymerization a popular choice for fine-tuning hydrogels. This technique has seen considerable use in cell and tissue engineering applications due to the ability to inject or mold a precursor solution loaded with cells into a wound site, then solidify it in situ. Physically crosslinked hydrogels can be prepared by different methods depending on the nature of the crosslink involved. Polyvinyl alcohol hydrogels are usually produced by the freeze-thaw technique. In this, the solution is frozen for a few hours, then thawed at room temperature, and the cycle is repeated until a strong and stable hydrogel is formed. Alginate hydrogels are formed by ionic interactions between alginate and double-charged cations. A salt, usually calcium chloride, is dissolved into an aqueous sodium alginate solution, that causes the calcium ions to create ionic bonds between alginate chains. Gelatin hydrogels are formed by temperature change. A water solution of gelatin forms an hydrogel at temperatures below 37–35 °C, as Van der Waals interactions between collagen fibers become stronger than thermal molecular vibrations. Peptide based hydrogels Peptide based hydrogels possess exceptional biocompatibility and biodegradability qualities, giving rise to their wide use of applications, particularly in biomedicine; as such, their physical properties can be fine-tuned in order to maximise their use. Methods to do this are: modulation of the amino acid sequence, pH, chirality, and increasing the number of aromatic residues. The order of amino acids within the sequence is crucial for gelation, as has been shown many times. In one example, a short peptide sequence Fmoc-Phe-Gly readily formed a hydrogel, whereas Fmoc-Gly-Phe failed to do so as a result of the two adjacent aromatic moieties being moved, hindering the aromatic interactions. Altering the pH can also have similar effects, an example involved the use of the naphthalene (Nap) modified dipeptides Nap-Gly-Ala, and Nap- Ala-Gly, where a drop in pH induced gelation of the former, but led to crystallisation of the latter. A controlled pH decrease method using glucono-δ-lactone (GdL), where the GdL is hydrolysed to gluconic acid in water is a recent strategy that has been developed as a way to form homogeneous and reproducible hydrogels. The hydrolysis is slow, which allows for a uniform pH change, and thus resulting in reproducible homogenous gels. In addition to this, the desired pH can be achieved by altering the amount of GdL added. The use of GdL has been used various times for the hydrogelation of Fmoc and Nap-dipeptides. In another direction, Morris et al reported the use of GdL as a 'molecular trigger' to predict and control the order of gelation. 
Chirality also plays an essential role in gel formation, and even changing the chirality of a single amino acid from its natural L-amino acid to its unnatural D-amino acid can significantly impact the gelation properties, with the natural forms not forming gels. Furthermore, aromatic interactions play a key role in hydrogel formation as a result of π–π stacking driving gelation, as shown by many studies. Other Hydrogels also possess a degree of flexibility very similar to natural tissue due to their significant water content. As responsive "smart materials", hydrogels can encapsulate chemical systems which upon stimulation by external factors such as a change of pH may cause specific compounds such as glucose to be liberated to the environment, in most cases by a gel–sol transition to the liquid state. Chemomechanical polymers are mostly also hydrogels, which upon stimulation change their volume and can serve as actuators or sensors. Mechanical properties Hydrogels have been investigated for diverse applications. By modifying the polymer concentration of a hydrogel (or conversely, the water concentration), the Young's modulus, shear modulus, and storage modulus can vary from 10 Pa to 3 MPa, a range of about five orders of magnitude. A similar effect can be seen by altering the crosslinking concentration. This much variability of the mechanical stiffness is why hydrogels are so appealing for biomedical applications, where it is vital for implants to match the mechanical properties of the surrounding tissues. Characterizing the mechanical properties of hydrogels can be difficult especially due to the differences in mechanical behavior that hydrogels have in comparison to other traditional engineering materials. In addition to their rubber elasticity and viscoelasticity, hydrogels have an additional time dependent deformation mechanism which is dependent on fluid flow called poroelasticity. These properties are extremely important to consider while performing mechanical experiments. Some common mechanical testing experiments for hydrogels are tension, compression (confined or unconfined), indentation, shear rheometry or dynamic mechanical analysis. Hydrogels have two main regimes of mechanical properties: rubber elasticity and viscoelasticity: Rubber elasticity In the unswollen state, hydrogels can be modelled as highly crosslinked chemical gels, in which the system can be described as one continuous polymer network. In this case: G = Np k T = ρRT/M̄c, where G is the shear modulus, k is the Boltzmann constant, T is temperature, Np is the number of polymer chains per unit volume, ρ is the density, R is the ideal gas constant, and M̄c is the (number) average molecular weight between two adjacent cross-linking points. M̄c can be calculated from the swell ratio, Q, which is relatively easy to test and measure. For the swollen state, a perfect gel network can be modeled as: G(swollen) = Np k T Q^(−1/3) = (ρRT/M̄c) Q^(−1/3). In a simple uniaxial extension or compression test, the true stress, σt, and engineering stress, σe, can be calculated as: σt = G(λ² − λ⁻¹) and σe = G(λ − λ⁻²), where λ is the stretch. Viscoelasticity For hydrogels, their elasticity comes from the solid polymer matrix while the viscosity originates from the polymer network mobility and the water and other components that make up the aqueous phase. Viscoelastic properties of a hydrogel are highly dependent on the nature of the applied mechanical motion. Thus, the time dependence of these applied forces is extremely important for evaluating the viscoelasticity of the material.
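A minimal numerical sketch of the rubber-elasticity relations above is given below: it evaluates the network shear modulus G = ρRT/M̄c and the uniaxial engineering stress G(λ − λ⁻²) at a few stretches. All inputs are assumed, illustrative values, not measurements from the text.

```python
# Rubber-elasticity sketch; all numbers are assumed placeholders.
R = 8.314          # ideal gas constant, J/(mol*K)
T = 298.0          # temperature, K
rho = 1.1e3        # polymer density, kg/m^3 (assumed)
M_c = 10.0         # average molar mass between crosslinks, kg/mol (assumed)

G = rho * R * T / M_c     # shear modulus of the unswollen network, Pa
print(f"G ~ {G/1e3:.0f} kPa")

def engineering_stress(G, stretch):
    """Uniaxial engineering stress for an ideal rubber network: G*(lambda - lambda^-2)."""
    return G * (stretch - stretch**-2)

for lam in (1.1, 1.5, 2.0):
    print(f"stretch {lam}: ~{engineering_stress(G, lam)/1e3:.0f} kPa")
```

With these placeholder values the modulus lands in the hundreds of kilopascals, i.e. within the 10 Pa to 3 MPa window quoted above, and lowering the crosslink density (raising M̄c) softens the gel accordingly.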
Physical models for viscoelasticity attempt to capture the elastic and viscous material properties of a material. In an elastic material, the stress is proportional to the strain while in a viscous material, the stress is proportional to the strain rate. The Maxwell model is one developed mathematical model for linear viscoelastic response. In this model, viscoelasticity is modeled analogous to an electrical circuit with a Hookean spring, that represents the Young's modulus, and a Newtonian dashpot that represents the viscosity. A material that exhibit properties described in this model is a Maxwell material. Another physical model used is called the Kelvin-Voigt Model and a material that follow this model is called a Kelvin–Voigt material. In order to describe the time-dependent creep and stress-relaxation behavior of hydrogel, a variety of physical lumped parameter models can be used. These modeling methods vary greatly and are extremely complex, so the empirical Prony Series description is commonly used to describe the viscoelastic behavior in hydrogels. In order to measure the time-dependent viscoelastic behavior of polymers dynamic mechanical analysis is often performed. Typically, in these measurements the one side of the hydrogel is subjected to a sinusoidal load in shear mode while the applied stress is measured with a stress transducer and the change in sample length is measured with a strain transducer. One notation used to model the sinusoidal response to the periodic stress or strain is: in which G' is the real (elastic or storage) modulus, G" is the imaginary (viscous or loss) modulus. Poroelasticity Poroelasticity is a characteristic of materials related to the migration of solvent through a porous material and the concurrent deformation that occurs. Poroelasticity in hydrated materials such as hydrogels occurs due to friction between the polymer and water as the water moves through the porous matrix upon compression. This causes a decrease in water pressure, which adds additional stress upon compression. Similar to viscoelasticity, this behavior is time dependent, thus poroelasticity is dependent on compression rate: a hydrogel shows softness upon slow compression, but fast compression makes the hydrogel stiffer. This phenomenon is due to the friction between the water and the porous matrix is proportional to the flow of water, which in turn is dependent on compression rate. Thus, a common way to measure poroelasticity is to do compression tests at varying compression rates. Pore size is an important factor in influencing poroelasticity. The Kozeny–Carman equation has been used to predict pore size by relating the pressure drop to the difference in stress between two compression rates. Poroelasticity is described by several coupled equations, thus there are few mechanical tests that relate directly to the poroelastic behavior of the material, thus more complicated tests such as indentation testing, numerical or computational models are utilized. Numerical or computational methods attempt to simulate the three dimensional permeability of the hydrogel network. Toughness and hysteresis The toughness of a hydrogel refers to the ability of the hydrogel to withstand deformation or mechanical stress without fracturing or breaking apart. A hydrogel with high toughness can maintain its structural integrity and functionality under higher stress. 
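To illustrate the oscillatory (DMA-style) response described above, the sketch below evaluates the storage and loss moduli of a single Maxwell element, the simplest of the lumped-parameter models mentioned; the modulus and relaxation time are assumed values, and a real hydrogel would generally need several such elements (a Prony series) to be described well.

```python
import math

def maxwell_moduli(G, tau, omega):
    """Storage (G') and loss (G'') moduli of a single Maxwell element
    (spring of modulus G in series with a dashpot of relaxation time tau)
    under sinusoidal shear at angular frequency omega."""
    x = (omega * tau)**2
    G_storage = G * x / (1 + x)
    G_loss = G * (omega * tau) / (1 + x)
    return G_storage, G_loss

G, tau = 10e3, 1.0   # assumed modulus (Pa) and relaxation time (s)
for omega in (0.01, 1.0, 100.0):   # rad/s
    Gp, Gpp = maxwell_moduli(G, tau, omega)
    print(f"omega = {omega:6.2f} rad/s: G' ~ {Gp:8.1f} Pa, G'' ~ {Gpp:8.1f} Pa")
# Slow oscillation: mostly viscous response (G'' > G');
# fast oscillation: mostly elastic response (G' approaches G).
```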
Several factors contribute to the toughness of a hydrogel including composition, crosslink density, polymer chain structure, and hydration level. The toughness of a hydrogel is highly dependent on what polymer(s) and crosslinker(s) make up its matrix as certain polymers possess higher toughness and certain crosslinking covalent bonds are inherently stronger. Additionally, higher crosslinking density generally leads to increased toughness by restricting polymer chain mobility and enhancing resistance to deformation. The structure of the polymer chains is also a factor in that, longer chain lengths and higher molecular weight leads to a greater number of entanglements and higher toughness. A good balance (equilibrium) in the hydration of a hydrogel leads is important because too low hydration causes poor flexibility and toughness within the hydrogel, but too high of water content can cause excessive swelling, weakening the mechanical properties of the hydrogel. The hysteresis of a hydrogel refers to the phenomenon where there is a delay in the deformation and recovery of a hydrogel when it is subjected to mechanical stress and relieved of that stress. This occurs because the polymer chains within a hydrogel rearrange, and the water molecules are displaced, and energy is stored as it deforms in mechanical extension or compression. When the mechanical stress is removed, the hydrogel begins to recover its original shape, but there may be a delay in the recovery process due to factors like viscoelasticity, internal friction, etc. This leads to a difference between the stress-strain curve during loading and unloading. Hysteresis within a hydrogel is influenced by several factors including composition, crosslink density, polymer chain structure, and temperature. The toughness and hysteresis of a hydrogel are especially important in the context of biomedical applications such as tissue engineering and drug delivery, as the hydrogel may need to withstand mechanical forces within the body, but also maintain mechanical performance and stability over time. Most typical hydrogels, both natural and synthetic, have a positive correlation between toughness and hysteresis, meaning that the higher the toughness, the longer the hydrogel takes to recover its original shape and vice versa. This is largely due to sacrificial bonds being the source of toughness within many of these hydrogels. Sacrificial bonds are non-covalent interactions such as hydrogen bonds, ionic interactions, and hydrophobic interactions, that can break and reform under mechanical stress. The reforming of these bonds takes time, especially when there are more of them, which leads to an increase in hysteresis. However, there is currently research focused on the development of highly entangled hydrogels, which instead rely on the long chain length of the polymers and their entanglement to limit the deformation of the hydrogel, thereby increasing the toughness without increasing hysteresis as there is no need for the reformation of the bonds. Environmental response The most commonly seen environmental sensitivity in hydrogels is a response to temperature. Many polymers/hydrogels exhibit a temperature dependent phase transition, which can be classified as either an upper critical solution temperature (UCST) or lower critical solution temperature (LCST). 
UCST polymers increase in their water-solubility at higher temperatures, which lead to UCST hydrogels transitioning from a gel (solid) to a solution (liquid) as the temperature is increased (similar to the melting point behavior of pure materials). This phenomenon also causes UCST hydrogels to expand (increase their swell ratio) as temperature increases while they are below their UCST. However, polymers with LCSTs display an inverse (or negative) temperature-dependence, where their water-solubility decreases at higher temperatures. LCST hydrogels transition from a liquid solution to a solid gel as the temperature is increased, and they also shrink (decrease their swell ratio) as the temperature increases while they are above their LCST. Applications can dictate for diverse thermal responses. For example, in the biomedical field, LCST hydrogels are being investigated as drug delivery systems due to being injectable (liquid) at room temp and then solidifying into a rigid gel upon exposure to the higher temperatures of the human body. There are many other stimuli that hydrogels can be responsive to, including: pH, glucose, electrical signals, light, pressure, ions, antigens, and more. Additives The mechanical properties of hydrogels can be fine-tuned in many ways beginning with attention to their hydrophobic properties. Another method of modifying the strength or elasticity of hydrogels is to graft or surface coat them onto a stronger/stiffer support, or by making superporous hydrogel (SPH) composites, in which a cross-linkable matrix swelling additive is added. Other additives, such as nanoparticles and microparticles, have been shown to significantly modify the stiffness and gelation temperature of certain hydrogels used in biomedical applications. Processing techniques While a hydrogel's mechanical properties can be tuned and modified through crosslink concentration and additives, these properties can also be enhanced or optimized for various applications through specific processing techniques. These techniques include electro-spinning, 3D/4D printing, self-assembly, and freeze-casting. One unique processing technique is through the formation of multi-layered hydrogels to create a spatially-varying matrix composition and by extension, mechanical properties. This can be done by polymerizing the hydrogel matrixes in a layer by layer fashion via UV polymerization. This technique can be useful in creating hydrogels that mimic articular cartilage, enabling a material with three separate zones of distinct mechanical properties. Another emerging technique to optimize hydrogel mechanical properties is by taking advantage of the Hofmeister series. Due to this phenomenon, through the addition of salt solution, the polymer chains of a hydrogel aggregate and crystallize, which increases the toughness of the hydrogel. This method, called "salting out", has been applied to poly(vinyl alcohol) hydrogels by adding a sodium sulfate salt solution. Some of these processing techniques can be used synergistically with each other to yield optimal mechanical properties. Directional freezing or freeze-casting is another method in which a directional temperature gradient is applied to the hydrogel is another way to form materials with anisotropic mechanical properties. Utilizing both the freeze-casting and salting-out processing techniques on poly(vinyl alcohol) hydrogels to induce hierarchical morphologies and anisotropic mechanical properties. 
Directional freezing of the hydrogels helps to align and coalesce the polymer chains, creating anisotropic array honeycomb tube-like structures while salting out the hydrogel yielded out a nano-fibril network on the surface of these honeycomb tube-like structures. While maintaining a water content of over 70%, these hydrogels' toughness values are well above those of water-free polymers such as polydimethylsiloxane (PDMS), Kevlar, and synthetic rubber. The values also surpass the toughness of natural tendon and spider silk. Applications Soft contact lenses The dominant material for contact lenses are acrylate-siloxane hydrogels. They have replaced hard contact lenses. One of their most attractive properties is oxygen permeability, which is required since the cornea lacks vasculature. Research Scalephobicity and antifouling Coatings for gas evolution reaction electrodes for efficient bubble detachment Breast implants Contact lenses (silicone hydrogels, polyacrylamides, polymacon) Water sustainability: Hydrogels have emerged as promising materials platforms for solar-powered water purification, water disinfection, and atmospheric water generator. Disposable diapers where they absorb urine, or in sanitary napkins Dressings for healing of burn or other hard-to-heal wounds. Wound gels are excellent for helping to create or maintain a moist environment. EEG and ECG medical electrodes using hydrogels composed of cross-linked polymers (polyethylene oxide, polyAMPS and polyvinylpyrrolidone) Encapsulation of quantum dots Environmentally sensitive hydrogels (also known as 'smart gels' or 'intelligent gels'). These hydrogels have the ability to sense changes of pH, temperature, or the concentration of metabolite and release their load as result of such a change. Fibers Glue Granules for holding soil moisture in arid areas Air bubble-repellent (superaerophobicity). Can improve the performance and stability of electrodes for water electrolysis. Culturing cells: Hydrogel-coated wells have been used for cell culture. Biosensors: Hydrogels that are responsive to specific molecules, such as glucose or antigens, can be used as biosensors, as well as in DDS. Cell carrier: Injectable hydrogels can be used to carry drugs or cells for applications in tissue regeneration or 3D bioprinting. Hydrogels with reversible chemistry are required to allow for fluidization during injection/printing followed by self-healing of the original hydrogel structure. Investigate cell biomechanical functions combined with holotomography microscopy Provide absorption, desloughing and debriding of necrotic and fibrotic tissue Tissue engineering scaffolds. When used as scaffolds, hydrogels may contain human cells to repair tissue. They mimic 3D microenvironment of cells. Materials include agarose, methylcellulose, hyaluronan, elastin-like polypeptides, and other naturally derived polymers. Sustained-release drug delivery systems. Ionic strength, pH and temperature can be used as a triggering factor to control the release of the drug. The swelling behavior exhibited by charged hydrogels can be used as a valuable tool for investigating interactions between charged polymers and various species, including multivalent ions, peptides, and proteins. This response arises due to fluctuating osmotic swelling forces resulting from the exchange of counterions within the gel matrix. 
Particularly significant is its application in assessing the binding of peptide drugs to biopolymers within the body, as the swelling response of the gel can provide insights into these interactions. Window coating/replacement: Hydrogels are under consideration for reducing infrared light absorption by 75%. Another approach reduced interior temperature using a temperature-responsive hydrogel. Thermodynamic electricity generation: When combined with ions allows for heat dissipation for electronic devices and batteries and converting the heat exchange to an electrical charge. Water gel explosives Controlled release of agrochemicals (pesticides and fertilizer) Talin shock absorbing materials - protein-based hydrogels that can absorb supersonic impacts Computational tasks, including emergent memory. Biomaterials Implanted or injected hydrogels have the potential to support tissue regeneration by mechanical tissue support, localized drug or cell delivery, local cell recruitement or immunomodulation, or encapsulation of nanoparticles for local photothermal therapy or brachytherapy. Polymeric drug delivery systems have overcome challenges due to their biodegradability, biocompatibility, and anti-toxicity. Materials such as collagen, chitosan, cellulose, and poly (lactic-co-glycolic acid) have been implemented extensively for drug delivery to organs such as eye, nose, kidneys, lungs, intestines, skin and brain. Future work is focused on reducing toxicity, improving biocompatibility, expanding assembly techniques Hydrogels have been considered as vehicles for drug delivery. They can also be made to mimic animal mucosal tissues to be used for testing mucoadhesive properties. They have been examined for use as reservoirs in topical drug delivery; particularly ionic drugs, delivered by iontophoresis. References Further reading Colloidal chemistry Gels Water chemistry Soft matter
Hydrogel
[ "Physics", "Chemistry", "Materials_science" ]
5,391
[ "Colloidal chemistry", "Soft matter", "Colloids", "Surface science", "Condensed matter physics", "nan", "Gels" ]
845,000
https://en.wikipedia.org/wiki/Plastic%20welding
Plastic welding is welding for semi-finished plastic materials, and is described in ISO 472 as a process of uniting softened surfaces of materials, generally with the aid of heat (except for solvent welding). Welding of thermoplastics is accomplished in three sequential stages, namely surface preparation, application of heat and pressure, and cooling. Numerous welding methods have been developed for the joining of semi-finished plastic materials. Based on the mechanism of heat generation at the welding interface, welding methods for thermoplastics can be classified as external and internal heating methods, as shown in Fig 1. Production of a good quality weld does not only depend on the welding methods, but also weldability of base materials. Therefore, the evaluation of weldability is of higher importance than the welding operation (see rheological weldability) for plastics. Welding techniques A number of techniques are used for welding of semi-finished plastic products as given below: Hot gas welding Hot gas welding, also known as hot air welding, is a plastic welding technique using heat. A specially designed heat gun, called a hot air welder, produces a jet of hot air that softens both the parts to be joined and a plastic filler rod, all of which must be of the same or a very similar plastic. (Welding PVC to acrylic is an exception to this rule.) Hot air/gas welding is a common fabrication technique for manufacturing smaller items such as chemical tanks, water tanks, heat exchangers, and plumbing fittings. In the case of webs and films a filler rod may not be used. Two sheets of plastic are heated via a hot gas (or a heating element) and then rolled together. This is a quick welding process and can be performed continuously. Welding rod A plastic welding rod, also known as a thermoplastic welding rod, is a rod with circular or triangular cross-section used to bind two pieces of plastic together. They are available in a wide range of colors to match the base material's color. Spooled plastic welding rod is known as "spline". An important aspect of plastic welding rod design and manufacture is the porosity of the material. A high porosity will lead to air bubbles (known as voids) in the rods, which decrease the quality of the welding. The highest quality of plastic welding rods are therefore those with zero porosity, which are called voidless. Heat sealing Heat sealing is the process of sealing one thermoplastic to another similar thermoplastic using heat and pressure. The direct contact method of heat sealing utilizes a constantly heated die or sealing bar to apply heat to a specific contact area or path to seal or weld the thermoplastics together. A variety of heat sealers is available to join thermoplastic materials such as plastic films: Hot bar sealer, Impulse sealer, etc. Heat sealing is used for many applications, including heat seal connectors, thermally activated adhesives, and film or foil sealing. Common applications for the heat sealing process: Heat seal connectors are used to join LCDs to PCBs in many consumer electronics, as well as in medical and telecommunication devices. Heat sealing of products with thermal adhesives is used to hold clear display screens onto consumer electronic products and for other sealed thermo-plastic assemblies or devices where heat staking or ultrasonic welding is not an option due to part design requirements or other assembly considerations. 
Heat sealing also is used in the manufacturing of bloodtest film and filter media for the blood, virus and many other test strip devices used in the medical field today. Laminate foils and films often are heat sealed over the top of thermoplastic medical trays, Microtiter (microwell) plates, bottles and containers to seal and/or prevent contamination for medical test devices, sample collection trays and containers used for food products. Medical and the Food Industries manufacturing Bag or flexible containers use heat sealing for either perimeter welding of the plastic material of the bags and/or for sealing ports and tubes into the bags. Freehand welding With freehand welding, the jet of hot air (or inert gas) from the welder is placed on the weld area and the tip of the weld rod at the same time. As the rod softens, it is pushed into the joint and fuses to the parts. This process is slower than most others, but it can be used in almost any situation. Speed tip welding With speed welding, the plastic welder, similar to a soldering iron in appearance and wattage, is fitted with a feed tube for the plastic weld rod. The speed tip heats the rod and the substrate, while at the same time it presses the molten weld rod into position. A bead of softened plastic is laid into the joint, and the parts and weld rod fuse. With some types of plastic such as polypropylene, the melted welding rod must be "mixed" with the semi-melted base material being fabricated or repaired. These welding techniques have been improved over time and have been utilized for over 50 years by professional plastic fabricators and repairers internationally. Speed tip welding method is a much faster welding technique and with practice can be used in tight corners. A version of the speed tip "gun" is essentially a soldering iron with a broad, flat tip that can be used to melt the weld joint and filler material to create a bond. Extrusion welding Extrusion welding allows the application of bigger welds in a single weld pass. It is the preferred technique for joining material over 6 mm thick. Welding rod is drawn into a miniature hand held plastic extruder, plasticized, and forced out of the extruder against the parts being joined, which are softened with a jet of hot air to allow bonding to take place. Contact welding This is the same as spot welding except that heat is supplied with thermal conduction of the pincher tips instead of electrical conduction. Two plastic parts are brought together where heated tips pinch them, melting and joining the parts in the process. Hot plate welding Related to contact welding, this technique is used to weld larger parts, or parts that have a complex weld joint geometry. The two parts to be welded are placed in the tooling attached to the two opposing platens of a press. A hot plate, with a shape that matches the weld joint geometry of the parts to be welded, is moved in position between the two parts. The two opposing platens move the parts into contact with the hot plate until the heat softens the interfaces to the melting point of the plastic. When this condition is achieved the hot plate is removed, and the parts are pressed together and held until the weld joint cools and re-solidifies to create a permanent bond. Hot-plate welding equipment is typically controlled pneumatically, hydraulically, or electrically with servo motors. 
This process is used to weld automotive under hood components, automotive interior trim components, medical filtration devices, consumer appliance components, and other car interior components. Non-contact / IR welding Similar to hot plate welding, non-contact welding uses an infrared heat source to melt the weld interface rather than a hot plate. This method avoids the potential for material sticking to the hot plate, but is more expensive and more difficult to achieve consistent welds, particularly on geometrically complex parts. High frequency welding High Frequency welding, also known as Dielectric Sealing or Radio Frequency (RF) Heat Sealing, is a very mature technology that has been around since the 1940s. High frequency electromagnetic waves in the range of radio frequencies can heat certain polymers up to soften the plastics for joining. Heated plastics under pressure weld together. Heat is generated within the polymer by the rapid reorientation of some chemical dipoles of the polymer, which means that the heating can be localized, and the process can be continuous. Only certain polymers which contain dipoles can be heated by RF waves, in particular polymers with high loss power. Among these, PVC, polyamides (PA) and acetates are commonly welded with this technology. In practice, two pieces of material are placed on a table press that applies pressure to both surface areas. Dies are used to direct the welding process. When the press comes together, high frequency waves (usually 27.120 MHz) are passed through the small area between the die and the table where the weld takes place. This high frequency (radio frequency) heats the plastic which welds under pressure, taking the shape of the die. RF welding is fast and relatively easy to perform, produces a limited degradation of the polymer even welding thick layers, does not create fumes, requires a moderate amount of energy and can produce water-, air-, and bacteria-proof welds. Welding parameters are welding power, (heating and cooling) time and pressure, while temperature is generally not controlled directly. Auxiliary materials can also be used to solve some welding problems. This type of welding is used to connect polymer films used in a variety of industries where a strong consistent leak-proof seal is required. In the fabrics industry, RF is most often used to weld PVC and polyurethane (PU) coated fabrics. Other materials commonly welded using this technology are nylon, PET, PEVA, EVA and some ABS plastics. Exercise caution when welding urethane as it has been known to give off toxic cyanide gasses when melting. Induction welding When an electrical insulator, like a plastic, is embedded with a material having high electrical conductivity, like metals or carbon fibers, induction welding can be performed. The welding apparatus contains an induction coil that is energised with a radio-frequency electric current. This generates an electromagnetic field that acts on either an electrically conductive or a ferromagnetic workpiece. In an electrically conductive workpiece, the main heating effect is resistive heating, which is due to induced currents called eddy currents. Induction welding of carbon fiber reinforced thermoplastic materials is a technology commonly used in for instance the aerospace industry. In a ferromagnetic workpiece, plastics can be induction-welded by formulating them with metallic or ferromagnetic compounds, called susceptors. 
These susceptors absorb electromagnetic energy from an induction coil, become hot, and lose their heat energy to the surrounding material by thermal conduction. Injection welding Injection welding is similar/identical to extrusion welding, except, using certain tips on the handheld welder, one can insert the tip into plastic defect holes of various sizes and patch them from the inside out. The advantage is that no access is needed to the rear of the defect hole. The alternative is a patch, except that the patch can not be sanded flush with the original surrounding plastic to the same thickness. PE and PP are most suitable for this type of process. The Drader injectiweld is an example of such tool. Ultrasonic welding In ultrasonic welding, high frequency (15 kHz to 40 kHz) low amplitude vibration is used to create heat by way of friction between the materials to be joined. The interface of the two parts is specially designed to concentrate the energy for the maximum weld strength. Ultrasonic can be used on almost all plastic material. It is the fastest heat sealing technology available. Friction welding In friction welding, the two parts to be assembled are rubbed together at a lower frequency (typically 100–300 Hz) and higher amplitude (typically ) than ultrasonic welding. The friction caused by the motion combined with the clamping pressure between the two parts creates the heat which begins to melt the contact areas between the two parts. At this point, the plasticized materials begin to form layers that intertwine with one another, which therefore results in a strong weld. At the completion of the vibration motion, the parts remain held together until the weld joint cools and the melted plastic re-solidifies. The friction movement can be linear or orbital, and the joint design of the two parts has to allow this movement. Spin welding Spin welding is a particular form of frictional welding. With this process, one component with a round weld joint is held stationary, while a mating component is rotated at high speed and pressed against the stationary component. The rotational friction between the two components generates heat. Once the joining surfaces reach a semi-molten state, the spinning component is stopped abruptly. Force on the two components is maintained until the weld joint cools and re-solidifies. This is a common way of producing low- and medium-duty plastic wheels, e.g., for toys, shopping carts, recycling bins, etc. This process is also used to weld various port openings into automotive under hood components. Laser welding This technique requires one part to be transmissive to a laser beam and either the other part absorptive or a coating at the interface to be absorptive to the beam. The two parts are put under pressure while the laser beam moves along the joining line. The beam passes through the first part and is absorbed by the other one or the coating to generate enough heat to soften the interface creating a permanent weld. Semiconductor diode lasers are typically used in plastic welding. Wavelengths in the range of 808 nm to 980 nm can be used to join various plastic material combinations. Power levels from less than 1W to 100W are needed depending on the materials, thickness and desired process speed. 
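The trade-off named above between laser power, material thickness and process speed can be made concrete with a rough, back-of-the-envelope sketch. The Python fragment below applies the Beer–Lambert law to estimate how much of the beam survives the transmissive upper part and then computes the line energy deposited along the seam; every numerical value (power, speed, thickness, absorption coefficient) is a hypothetical placeholder rather than data for any particular polymer or welding system.

import math

# Hypothetical example values -- not data for any real material or machine.
laser_power_w = 30.0            # laser output power (W)
travel_speed_mm_s = 20.0        # scan speed along the joint (mm/s)
upper_thickness_mm = 2.0        # thickness of the transmissive upper part (mm)
upper_absorption_per_mm = 0.05  # assumed absorption coefficient of the upper part (1/mm)

# Beer-Lambert estimate of the fraction of beam power that reaches the
# absorptive joint interface after passing through the upper layer.
transmitted_fraction = math.exp(-upper_absorption_per_mm * upper_thickness_mm)
power_at_interface_w = laser_power_w * transmitted_fraction

# Line energy: energy deposited per millimetre of weld seam.
line_energy_j_per_mm = power_at_interface_w / travel_speed_mm_s

print(f"fraction reaching interface: {transmitted_fraction:.2f}")
print(f"power at interface: {power_at_interface_w:.1f} W")
print(f"line energy: {line_energy_j_per_mm:.2f} J/mm")

In practice the required line energy also depends on beam spot size, clamping and the absorptivity of the lower part, so an estimate like this only bounds the parameter window that process trials must then confirm.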
Diode laser systems have the following advantages in joining of plastic materials: Cleaner than adhesive bonding No micro-nozzles to get clogged No liquid or fumes to affect surface finish No consumables Higher throughput Can access work-piece in challenging geometry High level of process control Requirements for high strength joints include adequate transmission through upper layer, absorption by lower layer, materials compatibility (wetting), good joint design (clamping pressure, joint area), and lower power density. Some materials that can be joined include polypropylene, polycarbonate, acrylic, nylon, and ABS. Specific applications include sealing, welding, or joining of: catheter bags, medical containers, automobile remote control keys, heart pacemaker casings, syringe tamper evident joints, headlight or tail-light assemblies, pump housings, and cellular phone parts. Transparent laser plastic welding New fiber laser technology allows for the output of longer laser wavelengths, with the best results typically around 2,000 nm, significantly longer than the average 808 nm to 1064 nm diode laser used for traditional laser plastic welding. Because these longer wavelengths are more readily absorbed by thermoplastics than the infrared radiation of traditional plastic welding, it is possible to weld two clear polymers without any colorants or absorbing additives. Common applications will mostly fall in the medical industry for devices like catheters and microfluidic devices. The heavy use of transparent plastics, especially flexible polymers like TPU, TPE and PVC, in the medical device industry makes transparent laser welding a natural fit. Also, the process requires no laser absorbing additives or colorants making testing and meeting biocompatibility requirements significantly easier. Solvent welding In solvent welding, a solvent is applied which can temporarily dissolve the polymer at room temperature. When this occurs, the polymer chains are free to move in the liquid and can mingle with other similarly dissolved chains in the other component. Given sufficient time, the solvent will permeate through the polymer and out into the environment, so that the chains lose their mobility. This leaves a solid mass of entangled polymer chains which constitutes a solvent weld. This technique is commonly used for connecting PVC and ABS pipe, as in household plumbing. The "gluing" together of plastic (polycarbonate, polystyrene or ABS) models is also a solvent welding process. Dichloromethane (methylene chloride) can solvent weld polycarbonate and polymethylmethacrylate. It is a primary ingredient in some solvent cements. ABS plastic is typically welded with acetone based solvents which are often sold as paint thinners or in smaller containers as nail polish remover. Solvent welding is a common method in plastics fabrication and used by manufacturers of in-store displays, brochure holders, presentation cases and dust covers. Another popular use of solvents in the hobby segment is model building from injection molded kits for scale models of aircraft, ships and cars which predominantly use polystyrene plastic. Testing of plastic welds In order to test plastic welds, there are several requirements for both the inspector as well as the test method. Furthermore, there are two different types of testing weld quality. These two types are destructive and non-destructive testing. 
Destructive testing serves to qualify and quantify the weld joint whereas nondestructive testing serves to identify anomalies, discontinuities, cracks, and/or crevices. As the names of these two tests implies, destructive testing will destroy the part that is being tested while nondestructive testing enables the test piece to be used afterwards. There are several methods available in each of these types. This section outlines some requirements of testing plastic welds as well as the different types of destructive and non-destructive methods that are applicable to plastic welding and go over some of the advantages and disadvantages. Testing requirements Some standards like the American Welding Society (AWS) require the individuals who are conducting the inspection or test to have a certain level of qualification. For example, AWS G1.6 is the Specification for the Qualification of Plastic Welding Inspectors for Hot Gas, Hot Gas Extrusion, and Heated Tool Butt Thermoplastic Welds. This particular standard dictates that in order to inspect the plastic welds, the inspector needs one of 3 different qualification levels. These levels are the Associate Plastics Welding Inspector (APWI), Plastics Welding Inspector (PWI), and Senior Plastics Welding Inspector (SPWI). Each of these levels have different responsibilities. For example, the APWI has to have direct supervision of a PWI or SPWI in order to conduct the inspection or prepare a report. These three different levels of certification also have different capability requirements, education requirements, and examination requirements. Additionally, they must be able to maintain that qualification every 3 years. Destructive testing Bend testing The bend test uses a ram to bend the test coupon to a desired degree. This test setup is shown in Figure 2. A list of the minimum bend angles and ram displacements for different plastic materials can be found in the DVS Standards, DVS2203-1 and DVS2203-5. Some of the ram speeds, bend angle, and displacement information from DVS2203-1 are shown in Table 1 and Table 2. Some of the main advantages of the bend test are it provides qualitative data for tensile, compressive, and shear strain. These results typically lead to a higher confidence level in the quality of the weld joint and process. In contrast, some of the disadvantages are it requires multiple test pieces. It is typically recommended to use a minimum of 6 different test samples. Another disadvantage is that it does not provide specific values for evaluating the joint design. Moreover, large amounts of effort may need to go into preparing the part for testing. This could cause an increase in cost and schedule depending on the complexity of the part. Lastly, like all destructive tests, the part and/or weld seam is destroyed and cannot be used. Tensile testing When conducting the tensile test, a test piece is pulled until it breaks. This test is quantitative and will provide the ultimate tensile strength, strain, as well as the energy to failure if it has extensometers attached to the sample. Additionally, the results from a tensile test cannot be transferable to that of a creep test. The rate at which the specimen is pulled depends on the material. Additionally, the shape of the specimen is also critical. DVS2203-5 and AWS G1.6 are great sources for providing these details. Examples of the shapes are shown in Figure 3 through Figure 5. Additionally, the testing speed per material is shown in Table 3. 
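As a rough illustration of how the quantities named above (ultimate tensile strength, strain and energy to failure) are extracted from raw tensile data, the following Python sketch processes a small force-displacement record; the specimen dimensions and load readings are invented for the example and are not taken from DVS 2203-5, AWS G1.6 or any real test.

import numpy as np

# Hypothetical specimen geometry and test record (example numbers only).
width_mm, thickness_mm = 10.0, 4.0
area_mm2 = width_mm * thickness_mm
gauge_length_mm = 50.0

displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])     # extension
force_n = np.array([0.0, 400.0, 760.0, 980.0, 1050.0, 900.0])  # measured load

stress_mpa = force_n / area_mm2            # engineering stress (N/mm^2 = MPa)
strain = displacement_mm / gauge_length_mm

ultimate_tensile_strength = stress_mpa.max()
strain_at_break = strain[-1]
# Energy to failure approximated as the area under the force-displacement
# curve (trapezoidal rule); N*mm = mJ, so divide by 1000 for joules.
energy_to_failure_j = np.trapz(force_n, displacement_mm) / 1000.0

print(f"UTS: {ultimate_tensile_strength:.1f} MPa")
print(f"strain at break: {strain_at_break:.3f}")
print(f"energy to failure: {energy_to_failure_j:.2f} J")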
One advantage of the tensile test is that it provides quantitative data of the weld for both weld seam and the base material. Additionally, the tensile test is easy to conduct. A major disadvantage of this testing is the amount of preparation required to conduct the test. Another disadvantage is that it does not provide the long-term weld performance. Additionally, since this is also a type of destructive test, the part is destroyed in order to collect this data. Impact testing Also known as the Tensile Impact Test, the Impact Test uses a specimen that is clamped into a pendulum. The test specimen looks like the one shown in Figure 4. The pendulum swings down and strikes the specimen against an anvil breaking the specimen. This test enables the impact energy to be determined for the weld seam and base material. Additionally, the permanent fracture elongation can be calculated by measuring the post-test specimen length. The main advantage of this test is that quantitative data is obtained. Another advantage is that it is easy to set up. The disadvantages are that it too can have a great deal of preparation in order to conduct this test. Also, like the tensile test, there is not a long term weld performance determined, and the part is destroyed. Creep test There are two types of creep tests, the Tensile Creep Test and the Creep Rupture Test. Both creep tests look at the long-term weld performance of the test specimen. These tests are typically conducted in a medium at a constant temperature and constant stress. This test requires a minimum of 6 specimens in order to obtain enough data to conduct a statistical analysis. This test is advantageous in that it provides quantitative data on the long-term weld performance; however, it has its disadvantages as well. There is a lot effort that needs to go into preparing the samples and recording where exactly the specimen came from and the removal method used. This is critical because how the specimen is removed from the host part can greatly influence the test results. Also, there has to be strict control of the test environment. A deviation in the medium's temperature can cause the creep rupture time to vary drastically. In some cases, a temperature change of 1 degree Celsius affected the creep rupture time by 13%. Lastly, this test is again a destructive test, so the host part will be destroyed by conducting this type of test. Non-destructive testing Visual examination Visual inspection, just like the name implies, is a visual investigation of the weldment. The inspector is typically looking for visual indications such as discolorations, weld defects, discontinuities, porosity, notches, scratches, etc. Typically visual inspection is broken down into different categories or groups for the qualifying inspection criteria. These groupings may vary among standards and each group has a certain level of imperfections that they consider acceptable. There are 5 tables and a chart found in DVS Standard DVS2202-1 that show different types of defects found by visual examination and their permissible acceptance criteria. Visual inspection is very advantageous in the fact that it is quick, easy, inexpensive, and requires very simple tools and gauges in order to conduct. Because it is so quick, it is typically required to have a weld pass visual inspection prior to being able to have any additional nondestructive test conducted to the specimen. In contrast, the inspection needs to be completed by someone who has a lot of experience and skill. 
Additionally, this type of test will not give any data into the quality of the weld seam. Because of the low cost, if a part is suspected to have issues, follow on testing can be conducted without much initial investment. X-ray testing X-ray testing of plastics is similar to that of metal weldments, but uses much lower radiation intensity due to the plastics having a lower density than metals. The x-ray testing is used to find imperfections that are below the surface. These imperfections include porosity, solid inclusions, voids, crazes, etc. The x-ray transmits radiation through the tested object onto a film or camera. This film or camera will produce an image. The varying densities of the object will show up as different shades in the image thus showing where the defects are located. One of the advantages of X-ray is that it provides a way to quickly show the flaws both on the surface and inside the weld joint. Additionally, the X-ray can be used on a wide range of materials. They can be used to create a record for the future. One of the disadvantages of X-ray is that it is costly and labor-intensive. Another is that it cannot be used in the evaluation of the weld seam quality or optimize the process parameters. Additionally, if the discontinuity is not aligned properly with the radiation beam, it can be difficult to detect. A fourth disadvantage is that access to both sides of the component being measured is required. Lastly, it presents a health risk due to the radiation that is transmitted during the X-ray process. Ultrasonic testing Ultrasonic testing utilizes high frequency sound waves passing through the weld. The waves are reflected or refracted if they hit an indication. The reflected or refracted wave will have a different amount of time it requires to travel from the transmitter to the receiver than it will if an indication was not present. This change in time is how the flaws are detected. The first advantage that ultrasonic testing provides is that it allows for a relatively quick detection of the flaws inside of the weld joint. This test method also can detect flaws deep inside the part. Additionally, it can be conducted with access from only one side of the part. In contrast, there are several disadvantages of using ultrasonic testing. The first is that it cannot be used to optimize the process parameters or evaluate the seam quality of the weld. Secondly, it is costly and labor-intensive. It also requires experienced technicians to conduct the test. Lastly, there are material limitations with plastics due to transmission limitations of the ultrasonic waves through some of the plastics. The image in Figure 6 shows an example of ultrasonic testing. High voltage leak testing High voltage testing is also known as spark testing. This type of testing utilizes electrically conductive medium to coat the weld. After the weld is coated, the weld is exposed to a high voltage probe. This test shows an indication of a leak in the weld when an arc is observed through the weld. This type of testing is advantageous in the fact that it allows for quick detection of the flaws inside the weld joint and that you only have to have access to one side of the weld. One disadvantage with this type of testing is that there is not a way to evaluate the weld seam quality. Additionally, the weld has to be coated with conductive material. Leak-tightness testing Leak-Tightness Testing or Leak Testing utilizes either liquid or gas to pressurize a part. 
This type of testing is typically conducted on tubes, containers, and vessels. Another way to leak-test one of these structures is to apply a vacuum to it. One of the advantages is that it is a quick, simple way for the weld flaw to be detected. Additionally, it can be used on multiple materials and part shapes. On the other hand, it has a few disadvantages. Firstly, there is not a way to evaluate the weld seam quality. Secondly, it has an explosion hazard associated with it if over-pressurization occurs during testing. Lastly, it is limited to tubular structures. See also Butanone Electrofusion Heat sealer Rheological weldability for semi-finished polymer parts Thermoplastic staking References Further reading J. Alex Neumann and Frank J. Bockoff, "Welding of Plastics", 1959, Reinhold Publishing. Safety in the use of Radiofrequency Dielectric Heaters and Sealers, Michael J. Troughton, "Handbook of Plastics Joining, A Practical Guide", 2nd ed., 2008, Tres, Paul A., "Designing Plastic Parts for Assembly", 6th ed., 2006, Grewell, David A., Benatar, Avraham, Park, Joon Bu, "Plastics and Composites Welding Handbook", 2003, Packaging machinery
Plastic welding
[ "Engineering" ]
5,873
[ "Packaging machinery", "Industrial machinery" ]
845,208
https://en.wikipedia.org/wiki/Seal%20%28mechanical%29
A seal is a device or material that helps join systems, mechanisms or other materials together by preventing leakage (e.g. in a pumping system), containing pressure, or excluding contamination. The effectiveness of a seal is dependent on adhesion in the case of sealants and compression in the case of gaskets. Seals are installed in pumps across a wide range of industries, including chemical processing, water supply, paper production and food processing. A stationary seal may also be referred to as a 'packing'. Seal types: Induction sealing or cap sealing Adhesive, sealant Bodok seal, a specialized gas sealing washer for medical applications Bonded seal, also known as Dowty seal or Dowty washer. A type of washer with integral gasket, widely used to provide a seal at the entry point of a screw or bolt Bridgman seal, a piston sealing mechanism that creates a high pressure reservoir from a lower pressure source Split seal, a mechanical seal supplied in segments so that it can be installed and replaced without dismantling the equipment, which simplifies installation and maintenance compared with conventional one-piece seals Bung Compression seal fitting Diaphragm seal Ferrofluidic seal Gasket or Mechanical packing Flange gasket O-ring O-ring boss seal Piston ring Glass-to-metal seal Glass-ceramic-to-metal seals Heat seal Hose coupling, various types of hose couplings Hermetic seal Hydrostatic seal Hydrodynamic seal Inflatable seal Seals that inflate and deflate in three basic directions of operation: the axial direction, the radial-in direction, and the radial-out direction. Each of these inflation directions has its own set of performance parameters for measurements such as the height of inflation and the center-line bend radius that the seal can negotiate. Inflatable seals can be used for numerous applications with difficult sealing issues. Labyrinth seal A seal which creates a tortuous path for the liquid to flow through Lid (container) Rotating face mechanical seal Face seal Plug Radial shaft seal Trap (plumbing) (siphon trap) Stuffing box (mechanical packing) Wiper seal Dry gas seal See also Leakage (chemistry) References Materials Plumbing
Seal (mechanical)
[ "Physics", "Engineering" ]
457
[ "Seals (mechanical)", "Plumbing", "Construction", "Materials", "Mechanical engineering", "Mechanical engineering stubs", "Matter" ]
845,216
https://en.wikipedia.org/wiki/XyMTeX
ΧyMTeΧ is a macro package for TeX which renders high-quality chemical structure diagrams. Using the typesetting system, the name is set in a stylized logo form. It was originally written by Shinsaku Fujita. Molecules are defined by TeX markup. Example The following code produces the image for corticosterone below. \documentclass{letter} \usepackage{epic,carom} \pagestyle{empty} \begin{document} \begin{picture}(1000,500) \put(0,0){\steroid[d]{3D==O;{{10}}==\lmoiety{H$_{3}$C};{{13}}==\lmoiety{H$_{3}$C};{{11}}==HO}} \put(684,606){\sixunitv{}{2D==O;1==OH}{cdef}} \end{picture} \end{document} See also PPCHTeX (PPCHTeX) Molecule editor List of TeX extensions References External links Shonan Institute of Chemoinformatics and Mathematical Chemistry XyMTeX for Drawing Chemical Structures — Download of XyMTeX Version 5.01 (the latest version: 2013-09-01) and its manuals. The Comprehensive TeX Archive Network (CTAN) The TeX Catalogue Online, Entry for XyMTeX, CTAN Edition (Version 4.06) WikiTeX now includes support for XyMTeX directly in Wiki articles. UPDATE 1: The original WikiTeX link (above) is not working; it returns a 403 Forbidden error (tested 2022-12-08). UPDATE 2: A WikiTeX official website consisting of a single-page presentation still exists at WikiTeX.org, but the download and repository links are not working (tested 2022-12-08). OPTIONAL 1: SourceForge: project version 1.1 beta 3 (last update 2013-03-21) OPTIONAL 2: GitHub: forked from SourceForge (above link) on 2018-09-27 TeX Users Group (TUG) The PracTeX Journal LaTeX Tools for Life Scientists (BioTeXniques?) — An article that discusses XyMTeX. Free TeX software Chemistry software Science software
XyMTeX
[ "Chemistry", "Technology" ]
537
[ "Chemistry software", "Theoretical chemistry stubs", "Computational chemistry stubs", "Computational chemistry", "Digital typography stubs", "nan", "Computing stubs", "Physical chemistry stubs" ]
18,739,787
https://en.wikipedia.org/wiki/Nassif%20Ghoussoub
Nassif A. Ghoussoub is a Canadian mathematician working in the fields of non-linear analysis and partial differential equations. He is a Professor of Mathematics and a Distinguished University Scholar at the University of British Columbia. Early life and education Ghoussoub was born to Lebanese parents in West Africa, in what is now Mali. He completed his doctorat 3ème cycle (PhD) in 1975, and a Doctorat d'Etat in 1979 at the Pierre and Marie Curie University, where his advisors were Gustave Choquet and Antoine Brunel. Career Ghoussoub completed his post-doctoral fellowship at the Ohio State University during 1976–77. He then joined the University of British Columbia, where he currently holds the position of Professor of Mathematics and Distinguished University Scholar. Ghoussoub is known for his work in functional analysis, non-linear analysis, and partial differential equations. He was vice-president of the Canadian Mathematical Society from 1994 to 1996, the founding director of the Pacific Institute for the Mathematical Sciences (PIMS) for the period 1996–2003, the co-editor-in-chief of the Canadian Journal of Mathematics during 1993–2002, a co-founder of the MITACS Network of Centres of Excellence, and is the founder and scientific director (2001 - 2020) of the Banff International Research Station (BIRS). In 1994, Ghoussoub became a fellow of the Royal Society of Canada, and in 2012, a fellow of the American Mathematical Society. Ghoussoub has received multiple awards and distinctions, including the Coxeter-James prize in 1990, and the Jeffrey-Williams prize in 2007. He holds honorary doctorates from the Université Paris-Dauphine (France), and the University of Victoria (Canada). He was awarded the Queen Elizabeth II Diamond Jubilee Medal in 2012, and appointed to the Order of Canada in 2015, with the grade of officer for contributions to mathematics, research, and education. In 2018, Ghoussoub was elected a faculty representative on the University of British Columbia's Board of Governors. He will serve until February 29, 2020. Ghoussoub has previously served two consecutive terms in this role from 2008 to 2014. Ghoussoub's scholarly work has been cited over 5,900 times and has an h-index of 40. 
Awards Coxeter-James Prize, Canadian Mathematical Society (1990) Killam Senior Research Fellowship, UBC (1992) Fellow of the Royal Society of Canada (1994) Distinguished University Scholar, UBC (2003) Doctorat Honoris Causa, Paris Dauphine University Jeffery–Williams Prize, Canadian mathematical Society (2007) Faculty of Science Achievement Award for outstanding service and leadership, UBC (2007) David Borwein Distinguished Career Award, Canadian Mathematical Society (2010) Fellow of the American Mathematical Society (2012) Queen Elizabeth II Diamond Jubilee Medal (2012) Honorary Doctor of Science-University of Victoria (June 2015) Officer of the Order of Canada (December 2015) Inaugural fellow of the Canadian Mathematical Society, 2018 Bibliography Selected Academic Publications Books See also Banff International Research Station References External links Nassif Ghoussoub's homepage Piece of Mind, Nassif's personal blog A biography Living people 20th-century Canadian mathematicians 21st-century Canadian mathematicians Canadian people of Lebanese descent Mathematical analysts Academic staff of the University of British Columbia Pierre and Marie Curie University alumni Fellows of the Royal Society of Canada Fellows of the American Mathematical Society Fellows of the Canadian Mathematical Society Functional analysts Partial differential equation theorists Officers of the Order of Canada 1953 births
Nassif Ghoussoub
[ "Mathematics" ]
738
[ "Mathematical analysis", "Mathematical analysts" ]
19,874,353
https://en.wikipedia.org/wiki/Surface-extended%20X-ray%20absorption%20fine%20structure
Surface-extended X-ray absorption fine structure (SEXAFS) is the surface-sensitive equivalent of the EXAFS technique. This technique involves the illumination of the sample by high-intensity X-ray beams from a synchrotron and monitoring the photoabsorption by detecting the intensity of emitted Auger electrons as a function of the incident photon energy. Surface sensitivity is achieved by the interpretation of data depending on the intensity of the Auger electrons (which have an escape depth of ~1–2 nm) instead of looking at the relative absorption of the X-rays as in the parent method, EXAFS. The photon energies are tuned through the characteristic energy for the onset of core level excitation for surface atoms. The core holes thus created can then be filled by nonradiative decay of a higher-lying electron and communication of energy to yet another electron, which can then escape from the surface (Auger emission). The photoabsorption can therefore be monitored by direct detection of these Auger electrons or of the total photoelectron yield. The absorption coefficient versus incident photon energy contains oscillations which are due to the interference of the backscattered photoelectron waves with the outward-propagating waves. The period of these oscillations depends on the type of the backscattering atom and its distance from the central atom. Thus, this technique enables the investigation of interatomic distances for adsorbates and their coordination chemistry. This technique benefits from long-range order not being required, which is sometimes a limitation of other conventional techniques such as LEED (which requires ordered regions of roughly 10 nm). This method also largely eliminates the background from the signal. It also benefits because it can probe different species in the sample by simply tuning the X-ray photon energy to the absorption edge of that species. Joachim Stöhr played a major role in the initial development of this technique. Experimental setup Synchrotron radiation sources Normally, SEXAFS work is done using synchrotron radiation, as it provides highly collimated, plane-polarized and precisely pulsed X-ray beams with fluxes of 10^12 to 10^14 photons/sec/mrad/mA, which greatly improves the signal-to-noise ratio over that obtainable from conventional sources. A bright X-ray source illuminates the sample and the transmission is measured through the absorption coefficient, μ ∝ ln(Io/I), where I is the transmitted and Io the incident intensity of the X-rays. This is then plotted against the incident X-ray photon energy. Electron detectors In SEXAFS, an electron detector and a high-vacuum chamber are required to measure the Auger yields instead of the intensity of the transmitted X-ray waves. The detector can be either an energy analyzer, as in the case of Auger measurements, or an electron multiplier, as in the case of total or partial secondary electron yield. The energy analyzer gives rise to better resolution while the electron multiplier has larger solid angle acceptance. 
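The statement above, that the period of the fine-structure oscillations encodes the distance to the back-scattering atoms, can be illustrated with a minimal numerical sketch. The Python fragment below models a single shell of neighbours as an oscillation of the form sin(2kR) with a toy Debye-Waller damping, then Fourier transforms it to show a peak near the assumed distance R; scattering amplitudes and phase shifts are deliberately ignored, and all parameter values are made up, so this is only a cartoon of the analysis, not a real SEXAFS data reduction.

import numpy as np

R = 2.5                                # assumed neighbour distance (angstrom)
k = np.linspace(2.0, 12.0, 1024)       # photoelectron wave vector (1/angstrom)
chi = np.sin(2.0 * k * R) * np.exp(-2.0 * 0.005 * k**2)   # toy damped oscillation

# Fourier transform with respect to 2k, so the conjugate variable is a distance r.
r_grid = np.linspace(0.5, 6.0, 600)
ft_mag = np.abs(np.trapz(chi[None, :] * np.exp(2j * k[None, :] * r_grid[:, None]),
                         k, axis=1))

peak_r = r_grid[np.argmax(ft_mag)]
print(f"FT magnitude peaks near r = {peak_r:.2f} angstrom (input R was {R})")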
Signal-to-noise ratio The signal-to-noise ratio of the measurement is governed by the following quantities: In, the nonradiative (Auger) contribution in electron counts/sec; Ib, the background contribution in electron counts/sec; μA, the absorption by the SEXAFS-producing element; μT, the total absorption by all the elements; Io, the incident intensity; n, the attenuation length; Ω/(4π), the solid-angle acceptance of the detector; and εn, the nonradiative yield, which is the probability that the electron will not decay radiatively and will actually be emitted as an Auger electron. Physics Basics The absorption of an X-ray photon by the atom excites a core-level electron, thus generating a core hole. This generates a spherical electron wave with the excited atom as the center. The wave propagates outwards, gets scattered off the neighbouring atoms and is turned back towards the central ionized atom. The oscillatory component of the photoabsorption originates from the coupling of this reflected wave to the initial state via the dipole operator Mfs (see the golden-rule expression below). The Fourier transform of the oscillations gives information about the spacing of the neighboring atoms and their chemical environment. This phase information is carried over to the oscillations in the Auger signal because the transition time in Auger emission is of the same order of magnitude as the average time for a photoelectron in the energy range of interest. Thus, with a proper choice of the absorption edge and characteristic Auger transition, measurement of the variation of the intensity in a particular Auger line as a function of incident photon energy is a measure of the photoabsorption cross section. This excitation also triggers various decay mechanisms. These can be of radiative (fluorescence) or nonradiative (Auger and Coster–Kronig) nature. The intensity ratio between the Auger electron and X-ray emissions depends on the atomic number Z. The yield of the Auger electrons decreases with increasing Z. Theory of EXAFS The cross section of photoabsorption is given by Fermi's golden rule, which, in the dipole approximation, can be written as σ(ω) = (4π²e²ω/c) Σf |⟨f| ε·r |i⟩|² δ(Ef − Ei − ħω), where the initial state, i with energy Ei, consists of the atomic core, the Fermi sea and the incident radiation field, and the final state, ƒ with energy Eƒ (larger than the Fermi level), consists of a core hole and an excited electron. Here ε is the polarization vector of the electric field, e the electron charge, and ħω the X-ray photon energy. The photoabsorption signal contains a peak as the core-level excitation is approached. It is followed by an oscillatory component which originates from the coupling of that part of the electron wave which, upon scattering by the medium, is turned back towards the central ionized atom, where it couples to the initial state via the dipole operator, Mi. 
Assuming single scattering and the small-atom approximation for kRj >> 1, where Rj is the distance from the central excited atom to the jth shell of neighbors and k = [2m(ħω − ħωT + Vo)]^(1/2)/ħ is the photoelectron wave vector, where ħωT is the absorption edge energy and Vo is the inner potential of the solid associated with exchange and correlation, the following expression for the oscillatory component of the photoabsorption cross section (for K-shell excitation) is obtained: χ(k) = Σj (Wj/k) |fj(π, k)| exp(−γRj) exp(−2σj²k²) sin(2kRj + φj(k)), where the atomic scattering factor in a partial wave expansion with partial-wave phase shifts δl is given by f(θ, k) = (1/k) Σl (2l + 1) exp(iδl) sin(δl) Pl(cos θ), Pl(x) is the lth Legendre polynomial, φj(k) is the total scattering phase shift, γ is an attenuation coefficient, exp(−2σj²k²) is a Debye–Waller factor and the weight Wj is given in terms of the number Nj of atoms in the jth shell and their distance as Wj = Nj/Rj². The above equation for χ(k) forms the basis of a direct, Fourier-transform method of analysis which has been successfully applied to the analysis of EXAFS data. Incorporation of EXAFS-Auger The number of electrons arriving at the detector with an energy of the characteristic WαXY Auger line (where Wα is the absorption-edge core level of element α, to which the incident X-ray line has been tuned) can be written as the sum of a background signal NB(ħω) and the Auger signal NA(ħω) of interest. The Auger signal is proportional to the photoabsorption cross section and depends on: the probability that an excited atom will decay via the WαXY Auger transition; ρα(z), the atomic concentration of the element α at depth z; λ(WαXY), the mean free path for a WαXY Auger electron; θ, the angle that the escaping Auger electron makes with the surface normal; and κ, the photon emission probability, which is dictated by the atomic number. As the photoabsorption probability is the only term that is dependent on the photon energy, the oscillations in it as a function of energy give rise to similar oscillations in the measured Auger signal. Notes References Stöhr, J. (1988) "SEXAFS: Everything you always wanted to know about SEXAFS but were afraid to ask", in X-Ray Absorption: Principles, Applications, Techniques of EXAFS, SEXAFS and XANES, eds. D. Koningsberger and R. Prins, Wiley, 1988 External links Details about SEXAFS X-ray absorption spectroscopy
Surface-extended X-ray absorption fine structure
[ "Chemistry", "Materials_science", "Engineering" ]
1,757
[ "X-ray absorption spectroscopy", "Materials science", "Laboratory techniques in condensed matter physics" ]
19,878,629
https://en.wikipedia.org/wiki/High-level%20synthesis
High-level synthesis (HLS), sometimes referred to as C synthesis, electronic system-level (ESL) synthesis, algorithmic synthesis, or behavioral synthesis, is an automated design process that takes an abstract behavioral specification of a digital system and finds a register-transfer level structure that realizes the given behavior. Synthesis begins with a high-level specification of the problem, where behavior is generally decoupled from low-level circuit mechanics such as clock-level timing. Early HLS explored a variety of input specification languages, although recent research and commercial applications generally accept synthesizable subsets of ANSI C/C++/SystemC/MATLAB. The code is analyzed, architecturally constrained, and scheduled to transcompile from a transaction-level model (TLM) into a register-transfer level (RTL) design in a hardware description language (HDL), which is in turn commonly synthesized to the gate level by the use of a logic synthesis tool. The goal of HLS is to let hardware designers efficiently build and verify hardware, by giving them better control over optimization of their design architecture, and through the nature of allowing the designer to describe the design at a higher level of abstraction while the tool does the RTL implementation. Verification of the RTL is an important part of the process. Hardware can be designed at varying levels of abstraction. The commonly used levels of abstraction are gate level, register-transfer level (RTL), and algorithmic level. While logic synthesis uses an RTL description of the design, high-level synthesis works at a higher level of abstraction, starting with an algorithmic description in a high-level language such as SystemC and ANSI C/C++. The designer typically develops the module functionality and the interconnect protocol. The high-level synthesis tools handle the micro-architecture and transform untimed or partially timed functional code into fully timed RTL implementations, automatically creating cycle-by-cycle detail for hardware implementation. The (RTL) implementations are then used directly in a conventional logic synthesis flow to create a gate-level implementation. History Early academic work extracted scheduling, allocation, and binding as the basic steps for high-level-synthesis. Scheduling partitions the algorithm in control steps that are used to define the states in the finite-state machine. Each control step contains one small section of the algorithm that can be performed in a single clock cycle in the hardware. Allocation and binding maps the instructions and variables to the hardware components, multiplexers, registers and wires of the data path. First generation behavioral synthesis was introduced by Synopsys in 1994 as Behavioral Compiler and used Verilog or VHDL as input languages. The abstraction level used was partially timed (clocked) processes. Tools based on behavioral Verilog or VHDL were not widely adopted in part because neither languages nor the partially timed abstraction were well suited to modeling behavior at a high level. 10 years later, in early 2004, Synopsys end-of-lifed Behavioral Compiler. In 1998, Forte Design Systems introduced its Cynthesizer tool which used SystemC as an entry language instead of Verilog or VHDL. Cynthesizer was adopted by many Japanese companies in 2000 as Japan had a very mature SystemC user community. The first high-level synthesis tapeout was achieved in 2001 by Sony using Cynthesizer. Adoption in the United States started in earnest in 2008. 
In 2006, an efficient and scalable "SDC modulo scheduling" technique was developed on control and data flow graphs and was later extended to pipeline scheduling. This technique uses the integer linear programming formulation. But it shows that the underlying constraint matrix is totally unimodular (after approximating the resource constraints). Thus, the problem can be solved in polynomial time optimally using a linear programming solver in polynomial time. This work was inducted to the FPGA and Reconfigurable Computing Hall of Fame 2022. The SDC scheduling algorithm was implemented in the xPilot HLS system developed at UCLA, and later licensed to the AutoESL Design Technologies, a spin-off from UCLA. AutoESL was acquired by Xilinx (now part of AMD) in 2011, and the HLS tool developed by AutoESL became the base of Xilinx HLS solutions, Vivado HLS and Vitis HLS, widely used for FPGA designs. Source input The most common source inputs for high-level synthesis are based on standard languages such as ANSI C/C++, SystemC and MATLAB. High-level synthesis typically also includes a bit-accurate executable specification as input, since to derive an efficient hardware implementation, additional information is needed on what is an acceptable Mean-Square Error or Bit-Error Rate etc. For example, if the designer starts with an FIR filter written using the "double" floating type, before he can derive an efficient hardware implementation, they need to perform numerical refinement to arrive at a fixed-point implementation. The refinement requires additional information on the level of quantization noise that can be tolerated, the valid input ranges etc. This bit-accurate specification makes the high level synthesis source specification functionally complete. Normally the tools infer from the high level code a Finite State Machine and a Datapath that implement arithmetic operations. Process stages The high-level synthesis process consists of a number of activities. Various high-level synthesis tools perform these activities in different orders using different algorithms. Some high-level synthesis tools combine some of these activities or perform them iteratively to converge on the desired solution. Lexical processing Algorithm optimization Control/Dataflow analysis Library processing Resource allocation Scheduling Functional unit binding Register binding Output processing Input Rebundling Functionality In general, an algorithm can be performed over many clock cycles with few hardware resources, or over fewer clock cycles using a larger number of ALUs, registers and memories. Correspondingly, from one algorithmic description, a variety of hardware microarchitectures can be generated by an HLS compiler according to the directives given to the tool. This is the same trade off of execution speed for hardware complexity as seen when a given program is run on conventional processors of differing performance, yet all running at roughly the same clock frequency. Architectural constraints Synthesis constraints for the architecture can automatically be applied based on the design analysis. These constraints can be broken into Hierarchy Interface Memory Loop Low-level timing constraints Iteration Interface synthesis Interface Synthesis refers to the ability to accept pure C/C++ description as its input, then use automated interface synthesis technology to control the timing and communications protocol on the design interface. 
This enables interface analysis and exploration of a full range of hardware interface options such as streaming, single- or dual-port RAM plus various handshaking mechanisms. With interface synthesis the designer does not embed interface protocols in the source description. Examples might be: direct connection, one line, 2 line handshake, FIFO. Vendors Data reported on recent Survey Dynamatic from EPFL/ETH Zurich MATLAB HDL Coder from Mathworks HLS-QSP from CircuitSutra Technologies C-to-Silicon from Cadence Design Systems Concurrent Acceleration from Concurrent EDA Symphony C Compiler from Synopsys QuickPlay from PLDA PowerOpt from ChipVision Cynthesizer from Forte Design Systems (now Stratus HLS from Cadence Design Systems) Catapult C from Calypto Design Systems, part of Mentor Graphics as of 2015, September 16. In November 2016 Siemens announced plans to acquire Mentor Graphics, Mentor Graphics became styled as "Mentor, a Siemens Business". In January 2021, the legal merger of Mentor Graphics with Siemens was completed - merging into the Siemens Industry Software Inc legal entity. Mentor Graphics' name was changed to Siemens EDA, a division of Siemens Digital Industries Software. PipelineC CyberWorkBench from NEC Mega Hardware C2R from CebaTech CoDeveloper from Impulse Accelerated Technologies HercuLeS by Nikolaos Kavvadias Program In/Code Out (PICO) from Synfora, acquired by Synopsys in June 2010 xPilot from University of California, Los Angeles Vsyn from vsyn.ru ngDesign from SynFlow See also C to HDL Electronic design automation (EDA) Electronic system-level (ESL) Logic synthesis High-level verification (HLV) SystemVerilog Hardware acceleration References Further reading Jason Cong, Jason Lau, Gai Liu, Stephen Neuendorffer, Peichen Pan, Kees Vissers, Zhiru Zhang.  FPGA HLS Today: Successes, Challenges, and Opportunities. ACM Transactions on Reconfigurable Technology and Systems, Volume 15, Issue 4, Article No. 5, pp 1–42, December 2022, https://doi.org/10.1145/3530775. covers the use of C/C++, SystemC, TML and even UML External links Vivado HLS course on Youtube Deepchip Discussion Forum Electronic design automation Hardware acceleration
High-level synthesis
[ "Technology" ]
1,857
[ "Hardware acceleration", "Computer systems" ]
19,879,858
https://en.wikipedia.org/wiki/Switched%20reluctance%20motor
The switched reluctance motor (SRM) is a type of reluctance motor. Unlike brushed DC motors, power is delivered to windings in the stator (case) rather than the rotor. This simplifies mechanical design because power does not have to be delivered to the moving rotor, which eliminates the need for a commutator. However it complicates the electrical design, because a switching system must deliver power to the different windings and limit torque ripple. Sources disagree on whether it is a type of stepper motor. The simplest SRM has the lowest construction cost of any electric motor. Industrial motors may have some cost reduction due to the lack of rotor windings or permanent magnets. Common uses include applications where the rotor must remain stationary for long periods, and in potentially explosive environments such as mining, because no commutation is involved. The windings in an SRM are electrically isolated from each other, producing higher fault tolerance than induction motors. The optimal drive waveform is not a pure sinusoid, due to the non-linear torque relative to rotor displacement, and the windings' highly position-dependent inductance. History The first patent was by W. H. Taylor in 1838 in the United States. The principles for SR drives were described around 1970, and enhanced by Peter Lawrenson and others from 1980 onwards. At the time, some experts viewed the technology as unfeasible, and practical application has been limited, partly because of control issues and unsuitable applications, and because low production numbers result in higher cost. Operating principle The SRM has wound field coils as in a DC motor for the stator windings. The rotor however has no magnets or coils attached. It is a solid salient-pole rotor (having projecting magnetic poles) made of soft magnetic material, typically laminated steel. When power is applied to a stator winding, the rotor's magnetic reluctance creates a force that attempts to align a rotor pole with the nearest stator pole. In order to maintain rotation, an electronic control system switches on the windings of successive stator poles in sequence so that the magnetic field of the stator "leads" the rotor pole, pulling it forward. Rather than using a mechanical commutator to switch the winding current as in traditional motors, the switched-reluctance motor uses an electronic position sensor to determine the angle of the rotor shaft and solid state electronics to switch the stator windings, which enables dynamic control of pulse timing and shaping. This differs from the apparently similar induction motor which also energizes windings in a rotating phased sequence. In an SRM the rotor magnetization is fixed, meaning the salient 'North' poles remains so as the motor rotates. In contrast, an induction motor has slip, meaning it rotates at slower than the magnetic field in the stator. SRM's absence of slip makes it possible to know the rotor position exactly, allowing the motor to be stepped slowly, even to the point of being stopped completely. Simple switching If the poles A0 and A1 are energised then the rotor will align itself with these poles. Once this has occurred it is possible for the stator poles to be de-energised before the stator poles of B0 and B1 are energized. The rotor is now positioned at the stator poles b. This sequence continues through c before arriving back at the start. This sequence can also be reversed to achieve motion in the opposite direction. 
High loads and/or high de/acceleration can destabilize this sequence, causing a step to be missed, such that the rotor jumps to wrong angle, perhaps going back one step instead of forward three. Quadrature A much more stable system can be found by using a "quadrature" sequence in which up to two coils are energised at any time. First, stator poles A0 and A1 are energized. Then stator poles B0 and B1 are energized which, pulls the rotor so that it is aligned in between A and B. Following this A's stator poles are de-energized and the rotor continues on to be aligned with B. The sequence continues through BC, C and CA to complete a full rotation. This sequence can be reversed to achieve motion in the opposite direction. More steps between positions with identical magnetisation, so the onset of missed steps occurs at higher speeds or loads. In addition to more stable operation, this approach leads to a duty cycle of each phase of 1/2, rather than 1/3 as in the simpler sequence. Control The control system is responsible for giving the required sequential pulses to the power circuitry. It is possible to do this using electro-mechanical means such as commutators or analog or digital timing circuits. Many controllers incorporate programmable logic controllers (PLCs) rather than electromechanical components. A microcontroller can enable precise phase activation timing. It also enables a soft start function in software form, in order to reduce the amount of required hardware. A feedback loop enhances the control system. Power circuitry The most common approach to powering an SRM is to use an asymmetric bridge converter. The switching frequency can be 10 times lower than for AC motors. The phases in an asymmetric bridge converter correspond to the motor phases. If both of the power switches on either side of the phase are turned on, then that corresponding phase is actuated. Once the current has risen above the set value, the switch turns off. The energy now stored within the winding maintains the current in the same direction, the so-called back EMF (BEMF). This BEMF is fed back through the diodes to the capacitor for re-use, thus improving efficiency. This basic circuitry may be altered so that fewer components are required although the circuit performs the same action. This efficient circuit is known as the (n+1) switch and diode configuration. A capacitor, in either configuration, is used for storing BEMF for re-use and to suppress electrical and acoustic noise by limiting fluctuations in the supply voltage. If a phase is disconnected, an SR motor may continue to operate at lower torque, unlike an AC induction motor which turns off. Applications SRMs are used in some appliances, in linear form for wave energy conversion, magnetic levitation trains, or industrial sewing machines. The same electromechanical design can be used in a generator. The load is switched to the coils in sequence to synchronize the current flow with the rotation. Such generators can be run at much higher speeds than conventional types as the armature can be made as one piece of magnetisable material, as a slotted cylinder. In this case the abbreviation SRM is extended to mean Switched Reluctance Machine, (along with SRG, Switched Reluctance Generator). A topology that is both motor and generator is useful for starting the prime mover, as it saves a dedicated starter motor. 
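The phase-switching sequences described above translate almost directly into a control lookup table. The Python sketch below maps a measured rotor angle to the stator phase or phase pair to energise, following the six-step quadrature-style sequence; the sector width, the phase names and the simple reversal rule are illustrative placeholders, not the commutation table of any real drive, which would also account for pole counts, overlap angles and current control.

# Simplified commutation sketch for a three-phase switched reluctance motor.
QUADRATURE_SEQUENCE = [
    ("A",),       # phase A alone
    ("A", "B"),   # A and B together hold the rotor between the two pole pairs
    ("B",),
    ("B", "C"),
    ("C",),
    ("C", "A"),
]

def active_phases(rotor_angle_deg, reverse=False):
    """Return the phase(s) to energise for a given rotor angle in degrees."""
    sector_width = 360.0 / len(QUADRATURE_SEQUENCE)
    sector = int((rotor_angle_deg % 360.0) // sector_width)
    if reverse:
        sector = (len(QUADRATURE_SEQUENCE) - 1) - sector
    return QUADRATURE_SEQUENCE[sector]

# Example: sweep one electrical revolution in 30-degree steps.
for angle in range(0, 360, 30):
    print(angle, active_phases(angle))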
References External links Switched Reluctance Motor Drives Real-Time Simulation and Control of Reluctance Motor Drives for High Speed Operation with Reduced Torque Ripple Torrey – Switched reluctance generators and their control DOI: 10.1109/41.982243 Asadi – Development and application of an advanced switched reluctance generator drive SR database archive Adam Biernat: Electrical Machines in the Power Engineering and Automatics (Warsaw Polytechnic) Synchronous Reluctance Motor Introduction Concepts Electric motors
Switched reluctance motor
[ "Technology", "Engineering" ]
1,533
[ "Electrical engineering", "Engines", "Electric motors" ]
19,880,157
https://en.wikipedia.org/wiki/Diaminopimelic%20acid
Diaminopimelic acid (DAP) is an amino acid, representing an epsilon-carboxy derivative of lysine. meso-α,ε-Diaminopimelic acid is the last intermediate in the biosynthesis of lysine and undergoes decarboxylation by diaminopimelate decarboxylase to give the final product. DAP is a characteristic of certain cell walls of some bacteria. DAP is often found in the peptide linkages of NAM-NAG chains that make up the cell wall of gram-negative bacteria. When provided, they exhibit normal growth. When in deficiency, they still grow but with the inability to make new cell wall peptidoglycan. This is also the attachment point for Braun's lipoprotein. See also Aspartate-semialdehyde dehydrogenase, an enzyme involved in DAP synthesis Peptidoglycan Pimelic acid Images References Alpha-Amino acids Dicarboxylic acids Non-proteinogenic amino acids
Diaminopimelic acid
[ "Chemistry" ]
216
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
19,882,883
https://en.wikipedia.org/wiki/Small%20cancellation%20theory
In the mathematical subject of group theory, small cancellation theory studies groups given by group presentations satisfying small cancellation conditions, that is where defining relations have "small overlaps" with each other. Small cancellation conditions imply algebraic, geometric and algorithmic properties of the group. Finitely presented groups satisfying sufficiently strong small cancellation conditions are word hyperbolic and have word problem solvable by Dehn's algorithm. Small cancellation methods are also used for constructing Tarski monsters, and for solutions of Burnside's problem. History Some ideas underlying the small cancellation theory go back to the work of Max Dehn in the 1910s. Dehn proved that fundamental groups of closed orientable surfaces of genus at least two have word problem solvable by what is now called Dehn's algorithm. His proof involved drawing the Cayley graph of such a group in the hyperbolic plane and performing curvature estimates via the Gauss–Bonnet theorem for a closed loop in the Cayley graph to conclude that such a loop must contain a large portion (more than a half) of a defining relation. A 1949 paper of Tartakovskii was an immediate precursor for small cancellation theory: this paper provided a solution of the word problem for a class of groups satisfying a complicated set of combinatorial conditions, where small cancellation type assumptions played a key role. The standard version of small cancellation theory, as it is used today, was developed by Martin Greendlinger in a series of papers in the early 1960s, who primarily dealt with the "metric" small cancellation conditions. In particular, Greendlinger proved that finitely presented groups satisfying the C′(1/6) small cancellation condition have word problem solvable by Dehn's algorithm. The theory was further refined and formalized in the subsequent work of Lyndon, Schupp and Lyndon-Schupp, who also treated the case of non-metric small cancellation conditions and developed a version of small cancellation theory for amalgamated free products and HNN-extensions. Small cancellation theory was further generalized by Alexander Ol'shanskii who developed a "graded" version of the theory where the set of defining relations comes equipped with a filtration and where a defining relator of a particular grade is allowed to have a large overlap with a defining relator of a higher grade. Ol'shanskii used graded small cancellation theory to construct various "monster" groups, including the Tarski monster and also to give a new proof that free Burnside groups of large odd exponent are infinite (this result was originally proved by Adian and Novikov in 1968 using more combinatorial methods). Small cancellation theory supplied a basic set of examples and ideas for the theory of word-hyperbolic groups that was put forward by Gromov in a seminal 1987 monograph "Hyperbolic groups". Main definitions The exposition below largely follows Ch. V of the book of Lyndon and Schupp. Pieces Let ⟨X | R⟩ (∗) be a group presentation where R ⊆ F(X) is a set of freely reduced and cyclically reduced words in the free group F(X) such that R is symmetrized, that is, closed under taking cyclic permutations and inverses. A nontrivial freely reduced word u in F(X) is called a piece with respect to (∗) if there exist two distinct elements r1, r2 in R that have u as a maximal common initial segment. 
Note that if G = ⟨X | S⟩ is a group presentation where the set of defining relators S is not symmetrized, we can always take the symmetrized closure R of S, where R consists of all cyclic permutations of elements of S and S−1. Then R is symmetrized and ⟨X | R⟩ is also a presentation of G. Metric small cancellation conditions Let 0 < λ < 1. Presentation (∗) as above is said to satisfy the C′(λ) small cancellation condition if whenever u is a piece with respect to (∗) and u is a subword of some r ∈ R, then |u| < λ|r|. Here |v| is the length of a word v. The condition C′(λ) is sometimes called a metric small cancellation condition. Non-metric small cancellation conditions Let p ≥ 3 be an integer. A group presentation (∗) as above is said to satisfy the C(p) small cancellation condition if whenever r ∈ R and r = u1u2...um where the ui are pieces and where the above product is freely reduced as written, then m ≥ p. That is, no defining relator can be written as a reduced product of fewer than p pieces. Let q ≥ 3 be an integer. A group presentation (∗) as above is said to satisfy the T(q) small cancellation condition if whenever 3 ≤ t < q and r1,...,rt in R are such that r1 ≠ r2−1,..., rt ≠ r1−1 then at least one of the products r1r2,...,rt−1rt, rtr1 is freely reduced as written. Geometrically, condition T(q) essentially means that if D is a reduced van Kampen diagram over (∗) then every interior vertex of D of degree at least three actually has degree at least q. Examples Let ⟨a, b | aba−1b−1⟩ be the standard presentation of the free abelian group of rank two. Then for the symmetrized closure of this presentation the only pieces are words of length 1. This symmetrized form satisfies the C(4)–T(4) small cancellation conditions and the C′(λ) condition for any 1 > λ > 1/4. Let ⟨a1, b1, ..., ak, bk | [a1, b1][a2, b2]...[ak, bk]⟩, where k ≥ 2, be the standard presentation of the fundamental group of a closed orientable surface of genus k. Then for the symmetrization of this presentation the only pieces are words of length 1 and this symmetrization satisfies the C′(1/7) and C(8) small cancellation conditions. Let . Then, up to inversion, every piece for the symmetrized version of this presentation has the form biabj or bi, where 0 ≤ i,j ≤ 100. This symmetrization satisfies the C′(1/20) small cancellation condition. If a symmetrized presentation satisfies the C′(1/m) condition then it also satisfies the C(m) condition. Let r ∈ F(X) be a nontrivial cyclically reduced word which is not a proper power in F(X) and let n ≥ 2. Then the symmetrized closure of the presentation ⟨X | rn⟩ satisfies the C(2n) and C′(1/n) small cancellation conditions. Basic results of small cancellation theory Greendlinger's lemma The main result regarding the metric small cancellation condition is the following statement (see Theorem 4.4 in Ch. V of Lyndon and Schupp) which is usually called Greendlinger's lemma: Let (∗) be a group presentation as above satisfying the C′(λ) small cancellation condition where 0 ≤ λ ≤ 1/6. Let w ∈ F(X) be a nontrivial freely reduced word such that w = 1 in G. Then there is a subword v of w and a defining relator r ∈ R such that v is also a subword of r and such that |v| > (1 − 3λ)|r|. Note that the assumption λ ≤ 1/6 implies that (1 − 3λ) ≥ 1/2, so that w contains a subword comprising more than a half of some defining relator. Greendlinger's lemma is obtained as a corollary of the following geometric statement: Under the assumptions of Greendlinger's lemma, let D be a reduced van Kampen diagram over (∗) with a cyclically reduced boundary label such that D contains at least two regions. 
Then there exist two distinct regions D1 and D2 in D such that for j = 1,2 the region Dj intersects the boundary cycle ∂D of D in a simple arc whose length is bigger than (1 − 3λ)|∂Dj|. This result in turn is proved by considering a dual diagram for D. There one defines a combinatorial notion of curvature (which, by the small cancellation assumptions, is negative at every interior vertex), and one then obtains a combinatorial version of the Gauss–Bonnet theorem. Greendlinger's lemma is proved as a consequence of this analysis and in this way the proof evokes the ideas of the original proof of Dehn for the case of surface groups. Dehn's algorithm For any symmetrized group presentation (∗), the following abstract procedure is called Dehn's algorithm: Given a freely reduced word w on X±1, construct a sequence of freely reduced words w = w0, w1, w2,..., as follows. Suppose wj is already constructed. If it is the empty word, terminate the algorithm. Otherwise check if wj contains a subword v such that v is also a subword of some defining relator r = vu ∈ R such that |v| > |r|/2. If no, terminate the algorithm with output wj. If yes, replace v by u−1 in wj, then freely reduce, denote the resulting freely reduced word by wj+1 and go to the next step of the algorithm. Note that we always have |w0| > |w1| > |w2| >... which implies that the process must terminate in at most |w| steps. Moreover, all the words wj represent the same element of G as does w and hence if the process terminates with the empty word, then w represents the identity element of G. One says that for a symmetrized presentation (∗) Dehn's algorithm solves the word problem in G if the converse is also true, that is if for any freely reduced word w in F(X) this word represents the identity element of G if and only if Dehn's algorithm, starting from w, terminates in the empty word. Greendlinger's lemma implies that for a C′(1/6) presentation Dehn's algorithm solves the word problem. If a C′(1/6) presentation (∗) is finite (that is both X and R are finite), then Dehn's algorithm is an actual non-deterministic algorithm in the sense of recursion theory. However, even if (∗) is an infinite C′(1/6) presentation, Dehn's algorithm, understood as an abstract procedure, still correctly decides whether or not a word in the generators X±1 represents the identity element of G. Asphericity Let (∗) be a C′(1/6) or, more generally, C(6) presentation where no r ∈ R is a proper power in F(X); then G is aspherical in the following sense. Consider a minimal subset S of R such that the symmetrized closure of S is equal to R. Thus if r and s are distinct elements of S then r is not a cyclic permutation of s±1, and ⟨X | S⟩ is another presentation for G. Let Y be the presentation complex for this presentation. Then (see and Theorem 13.3 in ), under the above assumptions on (∗), Y is a classifying space for G, that is G = π1(Y) and the universal cover of Y is contractible. In particular, this implies that G is torsion-free and has cohomological dimension two. More general curvature More generally, it is possible to define various sorts of local "curvature" on any van Kampen diagram to be - very roughly - the average excess of vertices plus faces minus edges (which, by Euler's formula, must total 2) and, by showing, in a particular group, that this is always non-positive (or – even better – negative) internally, show that the curvature must all be on or near the boundary and thereby try to obtain a solution of the word problem. Furthermore, one can restrict attention to diagrams that do not contain any of a set of "regions" such that there is a "smaller" region with the same boundary. 
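The abstract Dehn's algorithm formulated above can be turned into a short program. The following is a minimal, purely illustrative sketch under the same assumed string encoding as before (generators lowercase, inverses uppercase), with the symmetrized relator set R given explicitly as a finite collection of strings:

```python
def dehn_reduce(w, R):
    """Run the abstract Dehn's algorithm described above on the freely reduced
    word w over X and X^(-1), with R a finite symmetrized set of relators.
    Returns the terminal word; the empty string means w = 1 in G, and for a
    C'(1/6) presentation the converse also holds (Greendlinger's lemma)."""

    def invert(word):
        return "".join(c.swapcase() for c in reversed(word))

    def free_reduce(word):
        out = []
        for c in word:
            if out and out[-1] == c.swapcase():
                out.pop()                      # cancel adjacent x x^(-1)
            else:
                out.append(c)
        return "".join(out)

    w = free_reduce(w)
    progress = True
    while w and progress:
        progress = False
        for r in R:
            # look for a factorisation r = v u with |v| > |r|/2 and v a subword of w;
            # because R is closed under cyclic permutations, prefixes of its
            # elements account for all relevant subwords of relators
            for k in range(len(r), len(r) // 2, -1):
                v, u = r[:k], r[k:]
                if v in w:
                    # r = v u = 1 in G, hence v = u^(-1); the substitution shortens w
                    w = free_reduce(w.replace(v, invert(u), 1))
                    progress = True
                    break
            if progress:
                break
    return w
```

Every substitution strictly shortens the word, so the loop terminates in at most |w| passes, mirroring the observation |w0| > |w1| > |w2| > ... above.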
Other basic properties of small cancellation groups Let (∗) be a C′(1/6) presentation. Then an element g in G has order n > 1 if and only if there is a relator r in R of the form r = sn in F(X) such that g is conjugate to s in G. In particular, if all elements of R are not proper powers in F(X) then G is torsion-free. If (∗) is a finite C′(1/6) presentation, the group G is word-hyperbolic. If R and S are finite symmetrized subsets of F(X) with equal normal closures in F(X) such that both presentations ⟨X | R⟩ and ⟨X | S⟩ satisfy the C′(1/6) condition then R = S. If a finite presentation (∗) satisfies one of C′(1/6), C′(1/4)–T(4), C(6), C(4)–T(4), C(3)–T(6) then the group G has solvable word problem and solvable conjugacy problem. Applications Examples of applications of small cancellation theory include: Solution of the conjugacy problem for groups of alternating knots (see and Chapter V, Theorem 8.5 in ), via showing that for such knots augmented knot groups admit C(4)–T(4) presentations. Finitely presented C′(1/6) small cancellation groups are basic examples of word-hyperbolic groups. One of the equivalent characterizations of word-hyperbolic groups is as those admitting finite presentations where Dehn's algorithm solves the word problem. Finitely presented groups given by finite C(4)–T(4) presentations where every piece has length one are basic examples of CAT(0) groups: for such a presentation the universal cover of the presentation complex is a CAT(0) square complex. Early applications of small cancellation theory involve obtaining various embeddability results. Examples include a 1974 paper of Sacerdote and Schupp with a proof that every one-relator group with at least three generators is SQ-universal and a 1976 paper of Schupp with a proof that every countable group can be embedded into a simple group generated by an element of order two and an element of order three. The so-called Rips construction, due to Eliyahu Rips, provides a rich source of counter-examples regarding various subgroup properties of word-hyperbolic groups: Given an arbitrary finitely presented group Q, the construction produces a short exact sequence 1 → K → G → Q → 1 where K is two-generated and where G is torsion-free and given by a finite C′(1/6)–presentation (and thus G is word-hyperbolic). The construction yields proofs of unsolvability of several algorithmic problems for word-hyperbolic groups, including the subgroup membership problem, the generation problem and the rank problem. Also, with a few exceptions, the group K in the Rips construction is not finitely presentable. This implies that there exist word-hyperbolic groups that are not coherent, that is, which contain subgroups that are finitely generated but not finitely presentable. 
Bowditch used infinite small cancellation presentations to prove that there exist continuum many quasi-isometry types of two-generator groups. Thomas and Velickovic used small cancellation theory to construct a finitely generated group with two non-homeomorphic asymptotic cones, thus answering a question of Gromov. McCammond and Wise showed how to overcome difficulties posed by the Rips construction and produce large classes of small cancellation groups that are coherent (that is where all finitely generated subgroups are finitely presented) and, moreover, locally quasiconvex (that is where all finitely generated subgroups are quasiconvex). Small cancellation methods play a key role in the study of various models of "generic" or "random" finitely presented groups (see ). In particular, for a fixed number m ≥ 2 of generators and a fixed number t ≥ 1 of defining relations and for any λ < 1, a random m-generator t-relator group generically (that is, with probability tending to 1 as the lengths of the defining relators tend to infinity) satisfies the C′(λ) small cancellation condition. Even if the number of defining relations t is not fixed but grows as (2m − 1)εn (where ε ≥ 0 is the fixed density parameter in Gromov's density model of "random" groups, and where n is the length of the defining relations), then an ε-random group satisfies the C′(1/6) condition provided ε < 1/12. Gromov used a version of small cancellation theory with respect to a graph to prove the existence of a finitely presented group that "contains" (in the appropriate sense) an infinite sequence of expanders and therefore does not admit a uniform embedding into a Hilbert space. This result provides a direction (the only one available so far) for looking for counter-examples to the Novikov conjecture. Osin used a generalization of small cancellation theory to obtain an analog of Thurston's hyperbolic Dehn surgery theorem for relatively hyperbolic groups. Generalizations A version of small cancellation theory for quotient groups of amalgamated free products and HNN extensions was developed in the paper of Sacerdote and Schupp and then in the book of Lyndon and Schupp. Rips and Ol'shanskii developed a "stratified" version of small cancellation theory where the set of relators is filtered as an ascending union of strata (each stratum satisfying a small cancellation condition) and for a relator r from some stratum and a relator s from a higher stratum their overlap is required to be small with respect to |s| but is allowed to be large with respect to |r|. This theory allowed Ol'shanskii to construct various "monster" groups including the Tarski monster and to give a new proof that free Burnside groups of large odd exponent are infinite. Ol'shanskii and Delzant later on developed versions of small cancellation theory for quotients of word-hyperbolic groups. McCammond provided a higher-dimensional version of small cancellation theory. McCammond and Wise pushed substantially further the basic results of the standard small cancellation theory (such as Greendlinger's lemma) regarding the geometry of van Kampen diagrams over small cancellation presentations. Gromov used a version of small cancellation theory with respect to a graph to prove the existence of a finitely presented group that "contains" (in the appropriate sense) an infinite sequence of expanders and therefore does not admit a uniform embedding into a Hilbert space. Osin gave a version of small cancellation theory for quotients of relatively hyperbolic groups and used it to obtain a relatively hyperbolic generalization of Thurston's hyperbolic Dehn surgery theorem. 
Basic references Roger Lyndon and Paul Schupp, Combinatorial group theory. Reprint of the 1977 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. . Alexander Yu. Olʹshanskii, Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991. . Ralph Strebel, Appendix. Small cancellation groups. Sur les groupes hyperboliques d'après Mikhael Gromov (Bern, 1988), pp. 227–273, Progress in Mathematics, 83, Birkhäuser Boston, Boston, Massachusetts, 1990. . Milé Krajčevski, Tilings of the plane, hyperbolic groups and small cancellation conditions. Memoirs of the American Mathematical Society, vol. 154 (2001), no. 733. See also Geometric group theory Word-hyperbolic group Tarski monster group Burnside problem Finitely presented group Word problem for groups Van Kampen diagram Notes Group theory Geometric group theory Combinatorics on words
Small cancellation theory
[ "Physics", "Mathematics" ]
4,325
[ "Geometric group theory", "Group actions", "Combinatorics", "Group theory", "Fields of abstract algebra", "Combinatorics on words", "Symmetry" ]
19,886,507
https://en.wikipedia.org/wiki/Lev%20Lipatov
Lev Nikolaevich Lipatov (2 May 1940, in Leningrad – 4 September 2017, in Dubna) was a Russian physicist, well known for his contributions to nuclear physics and particle physics. He was the head of the Theoretical Physics Division at the Petersburg Nuclear Physics Institute of the Russian Academy of Sciences in Gatchina and an Academician of the Russian Academy of Sciences. For a long period he worked with Vladimir Gribov, laying the basis for a field-theory description of deep inelastic scattering and annihilation (the Gribov–Lipatov evolution equations, later known as DGLAP, 1972). He wrote significant papers on the Pomeranchuk singularity in quantum chromodynamics (1977), which resulted in the derivation of the BFKL (Balitsky–Fadin–Kuraev–Lipatov) evolution equation, contributed to the study of critical phenomena (Lipatov's semiclassical approximation) and the theory of tunnelling, and studied the renormalon contribution to effective couplings. He discovered the connection between high-energy scattering and exactly solvable models (1994). Awards High Energy and Particle Physics Prize (2015) Pomeranchuk Prize (2001) See also BFKL pomeron Relativistic Heavy Ion Collider Renormalon References External links Diakonov QCD scattering: from DGLAP to BFKL, CERN Courier, July 2010 1940 births 2017 deaths Russian nuclear physicists Full Members of the Russian Academy of Sciences Particle physicists Scientists from Saint Petersburg
Lev Lipatov
[ "Physics" ]
323
[ "Particle physicists", "Particle physics" ]
19,887,440
https://en.wikipedia.org/wiki/Indium%20gallium%20zinc%20oxide
Indium gallium zinc oxide (IGZO) is a semiconducting material, consisting of indium (In), gallium (Ga), zinc (Zn) and oxygen (O). IGZO thin-film transistors (TFT) are used in the TFT backplane of flat-panel displays (FPDs). IGZO-TFT was developed by Hideo Hosono's group at Tokyo Institute of Technology and Japan Science and Technology Agency (JST) in 2003 (crystalline IGZO-TFT) and in 2004 (amorphous IGZO-TFT). IGZO-TFT has 20–50 times the electron mobility of amorphous silicon, which has often been used in liquid-crystal displays (LCDs) and e-papers. As a result, IGZO-TFT can improve the speed, resolution and size of flat-panel displays. It is currently used for the thin-film transistors of organic light-emitting diode (OLED) TV displays. IGZO-TFT and its applications are patented by JST. They have been licensed to Samsung Electronics (in 2011) and Sharp (in 2012). In 2012, Sharp was the first to start production of LCD panels incorporating IGZO-TFT. Sharp uses IGZO-TFT for smartphones, tablets, and 32" LCDs. In these, the aperture ratio of the LCD is improved by up to 20%. Power consumption is improved by LCD idling stop technology, which is possible due to the high mobility and low off-state current of IGZO-TFT. Sharp has started to release high pixel-density panels for notebook applications. IGZO-TFT is also employed in the 14" 3,200x1,800 LCD of an ultrabook PC supplied by Fujitsu, in the touchscreen variant of the Razer Blade 14" gaming laptop, and in a 55" OLED TV supplied by LG Electronics. IGZO's advantage over zinc oxide is that it can be deposited as a uniform amorphous phase while retaining the high carrier mobility common to oxide semiconductors. The transistors are slightly photo-sensitive, but the effect becomes significant only in the deep violet to ultra-violet (photon energy above 3 eV) range, offering the possibility of a fully transparent transistor. The current impediment to large-scale IGZO manufacturing is the synthesis method. The most widely used technique for transparent conducting oxide (TCO) synthesis is pulsed laser deposition (PLD). In PLD, a laser is used to focus on nano-sized spots on solid elemental targets. Laser pulse frequencies are varied between the targets in ratios to control the composition of the film. IGZO can be deposited onto substrates such as quartz, single-crystal silicon, or even plastic due to its ability for low-temperature deposition. The substrates are placed in a PLD vacuum chamber, in which the oxygen pressure is controlled in order to ensure favorable electrical properties. After synthesis, the film is annealed, or gradually exposed to air to adjust to the atmosphere. While PLD is a useful and versatile synthesis technique, it requires expensive equipment and plenty of time for each sample to adjust to regular atmospheric conditions. This is not ideal for industrial manufacturing. Solution processing is a more cost-effective alternative. Specifically, combustion synthesis techniques can be used. Kim et al. used a metal nitrate solution with an oxidizer to create an exothermic reaction. The combustion precursors are commonly applied by spin coating, with In and Ga solution layers deposited and then annealed on a hot plate at temperatures roughly between 200 and 400 degrees C, depending on the target composition. The films can be annealed in air, which is a large advantage over PLD. References Japanese inventions Liquid crystal displays Semiconductor fabrication materials Oxides
Indium gallium zinc oxide
[ "Chemistry" ]
806
[ "Oxides", "Salts" ]
8,379,672
https://en.wikipedia.org/wiki/Kratos%20MS%2050
The Kratos MS 50, or EI 50, is a mass spectrometer based on electron ionization (EI). The EI 50, used for relatively small molecules (as opposed to methods such as MALDI), ionizes molecules via electron ionization (normally under 70 electronvolt conditions) and then accelerates the resulting ions through an electric potential. Mass analysis is performed by a magnet, which deflects the ion beam; the deflection depends on the ion's momentum-to-charge ratio. Because the EI 50 accelerates all ions through the same potential, so that they enter the magnet with the same kinetic energy per unit charge, the deflection is uniquely determined by the ion's mass-to-charge ratio. References Mass spectrometry
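The deflection argument above can be made explicit using the standard magnetic-sector relations; the following is a generic sketch of that physics, not a quoted specification of the MS 50 itself:

```latex
% acceleration of an ion of mass m and charge q through potential V:
%   qV = \tfrac{1}{2} m v^{2}
% circular deflection of radius r in a magnetic field B:
%   qvB = \frac{m v^{2}}{r} \quad\Longrightarrow\quad r = \frac{m v}{q B}
% eliminating the velocity v gives the mass-to-charge ratio transmitted
% at a given field strength and deflection radius:
\frac{m}{q} \;=\; \frac{B^{2} r^{2}}{2 V}
```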
Kratos MS 50
[ "Physics", "Chemistry", "Astronomy" ]
130
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Astronomy stubs", "Mass spectrometry", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs", "Matter" ]
8,380,514
https://en.wikipedia.org/wiki/Thermometric%20titration
A thermometric titration is one of a number of instrumental titration techniques where endpoints can be located accurately and precisely without a subjective interpretation on the part of the analyst as to their location. Enthalpy change is arguably the most fundamental and universal property of chemical reactions, so the observation of temperature change is a natural choice in monitoring their progress. It is not a new technique, with possibly the first recognizable thermometric titration method reported early in the 20th century (Bell and Cowell, 1913). In spite of its attractive features, and in spite of the considerable research that has been conducted in the field and the large body of applications that have been developed, it has until now been an under-utilized technique in the critical area of industrial process and quality control. Automated potentiometric titration systems have predominated in this area since the 1970s. With the advent of cheap computers able to handle the powerful thermometric titration software, development has now reached the stage where easy to use automated thermometric titration systems can in many cases offer a superior alternative to potentiometric titrimetry. Comparison between potentiometric and thermometric titrations Potentiometric titrimetry has been the predominant automated titrimetric technique since the 1970s, so it is worthwhile considering the basic differences between it and thermometric titrimetry. Potentiometrically-sensed titrations rely on a free energy change in the reaction system. Measurement of a free energy dependent term is necessary. ΔG0 = -RT lnK (1) Where: ΔG0 = change in free energy R = universal gas constant T = temperature in kelvins (K) or degrees Rankine (°R) K = equilibrium constant at temperature T ln is the natural logarithm function In order for a reaction to be amenable to potentiometric titrimetry, the free energy change must be sufficient for an appropriate sensor to respond with a significant inflection (or "kink") in the titration curve where sensor response is plotted against the amount of titrant delivered. However, free energy is just one of three related parameters in describing any chemical reaction: ΔH0 = ΔG0 + TΔS0 (2) where: ΔH0 = change in enthalpy ΔG0 = change in free energy ΔS0 = change in entropy T = temperature in K For any reaction where the free energy is not opposed by the entropy change, the enthalpy change will be significantly greater than the free energy. Thus a titration based on a change in temperature (which permits observation of the enthalpy change) will show a greater inflection than will curves obtained from sensors reacting to free energy changes alone. Thermometric titrations In the thermometric titration, titrant is added at a known constant rate to a titrand until the completion of the reaction is indicated by a change in temperature. The endpoint is determined by an inflection in the curve generated by the output of a temperature measuring device. Consider the titration reaction: aA + bB = pP (3) Where: A = the titrant, and a = the corresponding number of moles reacting B = the analyte, and b = the corresponding number of moles reacting P = the product, and p = the corresponding number of moles produced At completion, the reaction produces a molar heat of reaction ΔHr which is shown as a measurable temperature change ΔT. 
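For a rough sense of the magnitudes involved, a back-of-envelope sketch can relate ΔHr to ΔT, assuming (as the next paragraph qualifies) that all of the reaction heat stays in the solution and that the solution has a water-like heat capacity; the numbers in the usage comment are illustrative only.

```python
def estimated_temperature_rise(moles_analyte, delta_h_kj_per_mol, solution_mass_g,
                               specific_heat_j_per_g_k=4.18):
    """Rough estimate of the temperature change produced by a titration reaction,
    assuming all of the reaction heat stays in the solution (no losses to the
    vessel or surroundings)."""
    heat_j = abs(delta_h_kj_per_mol) * 1000.0 * moles_analyte
    return heat_j / (solution_mass_g * specific_heat_j_per_g_k)

# e.g. 1 mmol of strong acid neutralised (about -56 kJ/mol) in roughly 50 g of solution:
# estimated_temperature_rise(1e-3, -56, 50)  ->  about 0.27 K
```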
In an ideal system, where no losses or gains of heat due to environmental influences are involved, the progress of the reaction is observed as a constant increase or decrease of temperature depending respectively on whether ΔHr is negative (indicating an exothermic reaction) or positive (indicating an endothermic reaction). In this context, environmental influences may include (in order of importance): Heat losses or gains from outside the system via the vessel walls and cover; Differences in the temperature between the titrant and the titrand; Evaporative losses from the surface of the rapidly mixed fluid; Heats of solution when the titrant solvent is mixed with the analyte solvent; Heat introduced by the mechanical action of stirring (minor influence); and Heat produced by the thermistor itself (very minor influence). If the equilibrium for the reaction lies far to the right (i.e. a stoichiometric equilibrium has been achieved), then when all analyte has been reacted by the titrant continuing addition of titrant will be revealed by a sharp break in the temperature/volume curve. Figures 1a and 1b illustrate idealized examples. The shape of experimentally obtained thermometric titration plots will vary from such idealized examples, and some of the environmental influences listed above may have impacts. Curvature at the endpoint might be observed. This can be due to insensitivity of the sensor or where thermal equilibrium at the endpoint is slow to occur. It can also occur where the reaction between titrant and titrand does not proceed to stoichiometric completion. The determinant of the degree to which a reaction will proceed to completion is the free energy change. If this is favourable, then the reaction will proceed to be completion and be essentially stoichiometric. In this case, the sharpness of the endpoint is dependent on the magnitude of the enthalpy change. If it is unfavourable, the endpoint will be rounded regardless of the magnitude of the enthalpy change. Reactions where non-stoichiometric equilibria are evident can be used to obtain satisfactory results using a thermometric titration approach. If the portions of the titration curve both prior to and after the endpoint are reasonably linear, then the intersection of tangents to these lines will accurately locate the endpoint. This is illustrated in Figure 2. Consider the reaction for the equation aA + bB = pP which is non-stoichiometric at equilibrium. Let A represent the titrant, and B the titrand. At the beginning of the titration, the titrand B is strongly in excess, and the reaction is pushed towards completion. Under these conditions, for a constant rate of titrant addition the temperature increase is constant and the curve is essentially linear until the endpoint is approached. In a similar manner, when the titrant is in excess past the endpoint, a linear temperature response can also be anticipated. Thus intersection of tangents will reveal the true endpoint. An actual thermometric titration plot for the determination of a strong base with a strong acid is illustrated in Figure 3. The most practical sensor for measuring temperature change in titrating solutions has been found to be the thermistor. Thermistors are small solid state devices which exhibit relatively large changes in electrical resistance for small changes in temperature. They are manufactured from sintered mixed metal oxides, with lead wires enabling connection to electrical circuitry. 
The thermistor is encapsulated in a suitable electrically insulating medium with satisfactory heat transfer characteristics and acceptable chemical resistance. Typically for thermistors used for chemical analysis the encapsulating medium is glass, although thermistors encapsulated in epoxy resin may be used in circumstances where either chemical attack (e.g., by acidic fluoride-containing solutions) or severe mechanical stress is anticipated. The thermistor is supported by suitable electronic circuitry to maximize sensitivity to minute changes in solution temperature. The circuitry in the Metrohm 859 Titrotherm thermometric titration interface module is capable of resolving temperature changes as low as 10−5 K. A critical element in modern automated thermometric titrimetry is the ability to locate the endpoint with a high degree of reproducibility. It is clearly impractical and insufficient for modern demands of accuracy and precision to estimate the inflection by intersection of tangents. This is done conveniently by derivatization of the temperature curve. The second derivative essentially locates the intersection of tangents to the temperature curve immediately pre- and post- the breakpoint. Thermistors respond quickly to small changes in temperature such as temperature gradients in the mixed titration solution, and thus the signal can exhibit a small amount of noise. Prior to derivatization it is therefore necessary to digitally smooth (or "filter") the temperature curve in order to obtain sharp, symmetrical second derivative "peaks" which will accurately locate the correct inflection point. This is illustrated in Figure 5. The degree of digital smoothing is optimized for each determination, and is stored as a method parameter for application every time a titration for that particular analysis is run. Because enthalpy change is a universal characteristic of chemical reactions, thermometric endpoint sensing can be applied to a wide range of titration types, e.g. Acid/base Redox Complexometric (EDTA) and Precipitation Further, since the sensor is not required to interact with the titration solution electrochemically, titrations in non-conducting media can be performed, as can titrations using reactions for which no convenient or cost-effective potentiometric sensor is available. Thermometric titrations generally demand rapid reaction kinetics in order to obtain sharp reproducible endpoints. Where reaction kinetics are slow, and direct titrations between titrant and titrand are not possible, indirect or back-titrations often can be devised to solve the problem. Catalytically enhanced endpoints can be used in some instances where the temperature change at the endpoint is very small and endpoints would not be detected satisfactorily by the titration software. The suitability of a particular chemical reaction as a candidate for a thermometric titration procedure can generally be predicted on the basis of the estimated amount of analyte present in the sample and the enthalpy of the reaction. However, other parameters such as the kinetics of the reaction, the sample matrix itself, heats of dilution and losses of heat to the environment can affect the outcome. A properly designed experimental program is the most reliable way of determining the viability of a thermometric titration approach. Successful applications for thermometric titrations are generally where titrant-titrand reaction kinetics are fast, and chemical equilibria are stoichiometric or nearly so. 
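As an illustration of the endpoint-location step described above (digital smoothing of the thermistor signal followed by a second derivative), a minimal sketch using standard numerical libraries might look like the following; the smoothing window and polynomial order are placeholder values, not parameters prescribed by any particular instrument or titration software:

```python
import numpy as np
from scipy.signal import savgol_filter

def locate_endpoint(volume, temperature, window=21, polyorder=3):
    """Locate a thermometric endpoint: smooth the temperature curve, take its
    second derivative with respect to titrant volume, and return the volume at
    the largest-magnitude inflection."""
    volume = np.asarray(volume, dtype=float)
    temperature = np.asarray(temperature, dtype=float)

    # digital smoothing ("filtering") of the raw thermistor signal
    smoothed = savgol_filter(temperature, window_length=window, polyorder=polyorder)

    # first and second derivatives of temperature with respect to volume
    dT = np.gradient(smoothed, volume)
    d2T = np.gradient(dT, volume)

    # the endpoint corresponds to the sharpest break in the temperature curve,
    # i.e. the extremum of the second derivative
    return volume[int(np.argmax(np.abs(d2T)))]
```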
Where thermometric titration determinations may be recommended The analyst wishes to simplify the conduct of a variety of titrations by using one sensor for all. For example, a laboratory might routinely conduct acid/base, redox, complexometric, sulfate and chloride titrations. A single thermometric sensor in conjunction with an autosampler will enable all titrations to be performed in the same carousel load without having to change titration sensors. After preparation of the samples and placing in the carousel, the analyst assigns the appropriate thermometric method to the beaker position in the carousel. The titration environment is considered unsuitable for conventional titration sensors. For example, glass membrane pH electrodes must be kept adequately hydrated for proper operation. The use of such electrodes in substantially non-aqueous media as in the determination of trace acids in lipids and lubricating oils can lead to loss of performance as the membrane fouls and dehydrates, and/or if the reference junction is partly or completely blocked. It is often necessary to keep a number of electrodes cycling through a rejuvenation program in order to keep up with an analytical workload. Thermometric sensors have no electrochemical interaction with the titrating solution, and therefore can be used on a continuous basis with essentially no maintenance. Similarly, the potentiometric titration of sulfate with barium chloride in various industrial samples can lead to rapid degradation of the indicating barium ion selective electrode. A thermometric titration methodology which cannot be emulated using other types of titration sensors will deliver superior results, or results otherwise unobtainable by other techniques. Examples are the determination of fluoride by titration with boric acid, the analysis of orthophosphate by titration with magnesium ions, and the direct titration of aluminium with fluoride ions. Apparatus and setup for automated thermometric titrimetry A suitable setup for automated thermometric titrimetry comprises the following: Precision fluid dispensing devices – "burettes" – for adding titrants and dosing of other reagents Thermistor-based thermometric sensor Titration vessel Stirring device, capable of highly efficient stirring of vessel contents without splashing Computer with thermometric titration operating system Thermometric titration interface module – this regulates the data flow between the burettes, sensors and the computer Figure 6 illustrates a modern automated thermometric titration system based on the Metrohm 859 Titrotherm interface module with Thermoprobe sensor, Metrohm 800 Dosino dispensing devices and a computer running the operational software. Figure 7 is a schematic of the relationship between components in an automated thermometric titration system. A = dosing device B = thermometric sensor C = stirring device D = thermometric titration interface module E = computer Types of thermometric titration Applications for thermometric titrimetry are drawn from the major groupings, namely: Acid–base titration Redox titration Precipitation titration Complexometric titration Because the sensor does not interact electrically or electrochemically with the solution, electrical conductance of the titrating medium is not a pre-requisite for a determination. Titrations may be carried out in completely non-conducting, non-polar media if required. 
Further, titrations may be carried out in turbid solutions or even suspensions of solids, and titrations where precipitates are reaction products can be contemplated. The range of possible thermometric titration applications far exceeds the actual experience of this writer, and the reader will be referred to the appropriate literature in some instances. Acid–base titrations Determination of fully dissociated acids and bases. The heat of neutralization of a fully dissociated acid with a fully dissociated base is approximately –56 kJ/mol. The reaction is thus strongly exothermic, and is an excellent basis for a wide range of analyses in industry. An advantage for the industrial analyst is that the use of stronger titrants (1 to 2 mol/L) permits a reduction in the amount of sample preparation, and samples can often be directly and accurately dispensed into the titration vessel prior to titration. Titration of weak acids Weakly dissociated acids yield sharp thermometric endpoints when titrated with a strong base. For instance, bicarbonate can be unequivocally determined in the presence of carbonate by titrating with hydroxyl (ΔH0r = −40.9 kJ/mol). Titration of acid mixtures Mixtures of complex acids can be resolved by thermometric titration with standard NaOH in aqueous solution. In a mixture of nitric, acetic and phosphoric acids used in the fabrication of semi-conductors, three endpoints could be predicted on the basis of the dissociation constants of the acids: The key to determining the amount of each acid present in the mixture is the ability to obtain an accurate value for the amount of phosphoric acid present, as revealed by titration of the third proton of H3PO4. Figure 10 illustrates a titration plot of this mixture, showing 3 sharp endpoints. Titration of complex alkaline solutions The thermometric titrimetric analysis of sodium aluminate liquor ("Bayer liquor") in the production of alumina from bauxite is accomplished in an automated two-titration sequence. This is an adaptation of a classic thermometric titration application (VanDalen and Ward, 1973). In the first titration, tartrate solution is added to an aliquot of liquor to complex aluminate, releasing one mole of hydroxyl for each mole of aluminate present. This is titrated acidimetrically along with "free" hydroxyl present and the carbonate content (as a second endpoint). The second titration is preceded by the automatic addition of fluoride solution. The alumina-tartrate complex is broken in favour of the formation of an aluminium fluoride complex and the concomitant release of three moles of hydroxyl for each mole of aluminium present, which are then titrated acidimetrically. The whole determination can be completed in less than 5 minutes. Non-aqueous acid–base titrations Non-aqueous acid–base titrations can be carried out advantageously by thermometric means. Acid leach solutions from some copper mines can contain large quantities of Fe(III) as well as Cu(II). The "free acid" (sulfuric acid) content of these leach solutions is a critical process parameter. While thermometric titrimetry can determine the free acid content with modest amounts of Fe(III), in some solutions the Fe(III) content is so high as to cause serious interference. Complexation with necessarily large amounts of oxalate is undesirable due to the toxicity of the reagent. A thermometric titration was devised by diluting the aliquot with propan-2-ol and titrating with standard KOH in propan-2-ol. 
Most of the metal content precipitated prior to the commencement of the titration, and a clear, sharp endpoint for the sulfuric acid content was obtained. Catalyzed endpoint thermometric acid–base titrations The determination of trace acids in organic matrices is a common analytical task assigned to titrimetry. Examples are Total Acid Number (TAN) in mineral and lubricating oils and Free Fatty Acids (FFA) in edible fats and oils. Automated potentiometric titration procedures have been granted standard method status, for example by ASTM for TAN and AOAC for FFA. The methodology is similar in both instances. The sample is dissolved in a suitable solvent mixture, say a hydrocarbon and an alcohol, which also must contain a small amount of water. The water is intended to enhance the electrical conductivity of the solution. The trace acids are titrated with standard base in an alcohol. The sample environment is essentially hostile to the pH electrode used to sense the titration. The electrode must be taken out of service on a regular basis to rehydrate the glass sensing membrane, which is also in danger of fouling by the oily sample solution. A recent thermometric titrimetric procedure for the determination of FFA developed by Carneiro et al. (2002) has been shown to be particularly amenable to automation. It is fast, highly precise, and results agree very well with those obtained by the official AOAC method. The temperature change for the titration of very weak acids such as oleic acid by 0.1 mol/L KOH in propan-2-ol is too small to yield an accurate endpoint. In this procedure, a small amount of paraformaldehyde as a fine powder is added to the titrand before the titration. At the endpoint, the first excess of hydroxyl ions catalyzes the depolymerization of paraformaldehyde. The reaction is strongly endothermic and yields a sharp inflection. The titration plot is illustrated in Figure 13. The speed of this titration coupled with its precision and accuracy makes it ideal for the analysis of FFA in biodiesel feedstocks and product. Redox titrations Titrations with permanganate and dichromate Redox reactions are normally strongly exothermic, and can make excellent candidates for thermometric titrations. In the classical determination of ferrous ion with permanganate, the reaction enthalpy is more than double that of a strong acid/strong base titration: ΔH0r = −123.9 kJ/mol of Fe. The determination of hydrogen peroxide by permanganate titration is even more strongly exothermic at ΔH0r = −149.6 kJ/mol H2O2. Titrations with thiosulfate In the determination of hypochlorite (for example in commercial bleach formulations), a direct titration with thiosulfate can be employed without recourse to an iodometric finish. ClO− + H2O + 2e− ↔ Cl− + 2OH− 2S2O32− ↔ S4O62− + 2e− 2S2O32− + ClO− + H2O ↔ S4O62− + Cl− + 2OH− Thermometric iodometric titrations employing thiosulfate as a titrant are also practical, for example in the determination of Cu(II). In this instance, it has been found advantageous to incorporate the potassium iodide reagent with the thiosulfate titrant in such proportions that iodine is released into solution just prior to its reduction by thiosulfate. This minimizes iodine losses during the course of the titration. Titrations with hypochlorite While relatively unstable and requiring frequent standardization, sodium hypochlorite has been used in a very rapid thermometric titration method for the determination of ammonium ion. 
This is an alternative to the classical approach of ammonia distillation from basic solution and consequent acid–base titration. The thermometric titration is carried out in bicarbonate solution containing bromide ion (Brown et al., 1969). Complexometric (EDTA) titrations Thermometric titrations employing sodium salts of ethylenediaminetetra-acetic acid (EDTA) have been demonstrated for the determination of a range of metal ions. Reaction enthalpies are modest, so titrations are normally carried out with titrant concentrations of 1 mol/L. This necessitates the use of the tetra-sodium salt of EDTA rather than the more common di-sodium salt which is saturated at a concentration of only approximately 0.25 mol/L. An excellent application is the sequential determination of calcium and magnesium. Although calcium reacts exothermically with EDTA (heat of chelation ~-23.4 kJ/mol), magnesium reacts endothermically with a heat of chelation of ~+20.1 kJ/mol. This is illustrated in the titration plot of EDTA with calcium and magnesium in sea water (Figure 14). Following the solution temperature curve, the breakpoint for the calcium content (red-tagged endpoint) is followed by a region of modest temperature rise due to competition between the heats of dilution of the titrant with the solution, and the endothermic reaction of Mg2+ and EDTA. The breakpoint for the consumption of Mg2+ (blue-tagged endpoint) by EDTA is revealed by upswing in temperature caused purely by the heat of dilution. Direct EDTA titrations with metal ions are possible when reaction kinetics are fast, for example zinc, copper, calcium and magnesium. However, with slower reaction kinetics such as those exhibited by cobalt and nickel, back-titrations are used. Titrations for cobalt and nickel are carried out in an ammoniacal environment; buffered with ammonia:ammonium chloride solution. An excess of EDTA is added, and is back-titrated with Cu(II) solution. It is postulated that the breakpoint is revealed by the difference in reaction enthalpies between the formation of the Cu-EDTA complex, and that for the formation of the Cu-amine complex. A catalyzed endpoint procedure to determine trace amounts of metal ions in solution (down to approximately 10 mg/L) employs 0.01 mol/L EDTA. This has been applied to the determination of low level Cu(II) in specialized plating baths, and to the determination of total hardness in water. The reaction enthalpies of EDTA with most metal ions are often quite low, and typically titrant concentrations around 1 mol/L are employed with commensurately high amounts of titrand in order to obtain sharp, reproducible endpoints. Using a catalytically indicated endpoint, very low EDTA titrant concentrations can be used. A back-titration is used. An excess of EDTA solution is added. The excess of EDTA is back-titrated with a suitable metal ion such as Mn2+ or Cu2+. At the endpoint, the first excess of metal ion catalyzes a strongly exothermic reaction between a polyhydric phenol (such as resorcinol) and hydrogen peroxide. Precipitation titrations Thermometric titrimetry is particularly suited to the determination of a range of analytes where a precipitate is formed by reaction with the titrant. In some cases, an alternative to traditional potentiometric titration practice can be offered. In other cases, reaction chemistries may be employed for which there is no satisfactory equivalent in potentiometric titrimetry. 
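Whatever the reaction type, converting a located endpoint volume into an analyte concentration is simple stoichiometry on the generic reaction aA + bB = pP introduced earlier (A the titrant, B the analyte). A minimal sketch, assuming the reaction runs essentially to completion, with an illustrative chloride example based on the silver nitrate titration discussed below:

```python
def analyte_concentration(c_titrant_mol_per_l, v_endpoint_ml, v_sample_ml, a=1, b=1):
    """Analyte concentration (mol/L) from the endpoint volume of a titration
    a A + b B -> products, with A the titrant and B the analyte."""
    moles_titrant = c_titrant_mol_per_l * v_endpoint_ml / 1000.0
    moles_analyte = moles_titrant * b / a
    return moles_analyte / (v_sample_ml / 1000.0)

# e.g. 0.1 mol/L AgNO3, 12.5 mL to the endpoint, 25 mL sample:
# analyte_concentration(0.1, 12.5, 25.0)  ->  0.05 mol/L chloride
```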
Titrations with silver nitrate Thermometric titrations of silver nitrate with halides and cyanide are all possible. The reaction of silver nitrate with chloride is strongly exothermic. For instance, the reaction enthalpy of Ag+ with Cl− is a relatively high −61.2 kJ/mol. This permits convenient determination of chloride with commonly available standard 0.1 mol/L AgNO3. Endpoints are very sharp, and with care, chloride concentrations down to 15 mg/L can be analyzed. Bromide and chloride may be determined in admixture. Titration of sulfate Sulfate may be rapidly and easily titrated thermometrically using standard solutions of Ba2+ as titrant. Industrially, the procedure has been applied to the determination of sulfate in brine (including electrolysis brines), in nickel refining solutions and particularly for sulfate in wet process phosphoric acid, where it has proven to be quite popular. The procedure can also be used to assist in the analysis of complex acid mixtures containing sulfuric acid where resorting to titration in non-aqueous media is not feasible. The reaction enthalpy for the formation of barium sulfate is a modest −18.8 kJ/mol. This can place a restriction on the lower limit of sulfate in a sample which can be analyzed. Titration of aluminium with fluoride Thermometric titrimetry offers a rapid, highly precise method for the determination of aluminium in solution. A solution of aluminium is conditioned with acetate buffer and an excess of sodium and potassium ions. Titration with sodium or potassium fluoride yields the exothermic precipitation of an insoluble alumino-fluoride salt. Al3+ + Na+ + 2K+ + 6F− ↔ K2NaAlF6↓ Because 6 moles of fluoride react with one mole of aluminium, the titration is particularly precise, and a coefficient of variation (CV) of 0.03 has been achieved in the analysis of alum. When aluminium ion (say as aluminium nitrate) is employed as the titrant, fluoride can be determined using the same chemistry. This titration is useful in the determination of fluoride in complex acid mixtures used as etchants in the semi-conductor industry. Titration of total orthophosphate Orthophosphate ion can be conveniently thermometrically titrated with magnesium ions in the presence of ammonium ion. An aliquot of sample is buffered to approximately pH 10 with an NH3/NH4Cl solution. The reaction: Mg2+ + NH4+ + PO43− ↔ MgNH4PO4↓ is exothermic. CVs of under 0.1 have been achieved in test applications. The procedure is suitable for the determination of orthophosphate in fertilizers and other products. Titration of nickel Nickel can be titrated thermometrically using di-sodium dimethylglyoximate as titrant. The chemistry is analogous to the classic gravimetric procedure, but the time taken for a determination can be reduced from many hours to a few minutes. Potential interferences need to be considered. Titration of anionic and cationic surfactants Anionic and cationic surfactants can be determined thermometrically by titrating one type against the other. For instance, benzalkonium chloride (a quaternary-type cationic surfactant) may be determined in cleaners and algaecides for swimming pools and spas by titrating with a standard solution of sodium dodecyl sulfate. Alternatively, anionic surfactants such as sodium lauryl sulfate can be titrated with cetyl pyridinium chloride. Titration of non-ionic surfactants When an excess of Ba2+ is added to a non-ionic surfactant of the alkyl propylene oxide derivative type, a pseudo-cationic complex is formed. 
This may be titrated with standard sodium tetraphenylborate. Two moles of tetraphenylborate react with one mole of the Ba2+/non-ionic surfactant complex. Miscellaneous aqueous titrations Titration of fluoride with boric acid Acidic solutions of fluoride (including hydrofluoric acid) can be determined by a simple thermometric titration with boric acid. B(OH)3 + 3F− + 3H+ ↔ BF3 + 3H2O The titration plot illustrated in Figure 19 shows that the endpoint is quite rounded, suggesting that the reaction might not proceed to stoichiometric equilibrium. However, since the regions of the temperature curve immediately before and after the endpoint are quite linear, the second derivative of this curve (representing the intersection of tangents) will accurately locate the endpoint. Indeed, excellent precision can be obtained with this titration, with a CV of less than 0.1. Determination of formaldehyde Formaldehyde can be determined in electroless copper plating solutions by the addition of an excess of sodium sulfite solution and titrating the liberated hydroxyl ion with standard acid. H2C=O + SO32− + H2O → [HO-CH2-SO3]− + OH− References J. M. Bell and C. F. Cowell. J. Am. Chem. Soc. 35, 49-54 (1913) E. VanDalen and L. G. Ward. Thermometric titration determination of hydroxide and alumina in Bayer process solutions. Anal. Chem. 45 (13) 2248-2251 (1973) M. J. D. Carneiro, M. A. Feres Júnior, and O. E. S. Godinho. Determination of the acidity of oils using paraformaldehyde as a thermometric end-point indicator. J. Braz. Chem. Soc. 13 (5) 692-694 (2002) Bibliography Bark, L. S. and Bark, S. M.; (1969). Thermometric titrimetry. International Series of Monographs in Analytical Chemistry Vol 33 Pergamon Press (Oxford) Library of Congress Catalog Card No. 68-57883 Barthel, J.; (1975) Thermometric titrations. John Wiley & Sons, New York. Library of Congress Catalog Card No. 75-17503 Eatough, D. J.; Christensen, J. J. & Izatt R. M.; (1974) Experiments in thermometric titrimetry and titration calorimetry. Brigham Young University Press, Provo, Utah. Library of Congress Catalog Card No. 74-13074 Grime, J. K.; (1985) Analytical solution calorimetry. John Wiley & Sons, New York. Library of Congress Catalog Card No. 84-28424 Vaughan, G.A.; (1973) Thermometric and enthalpimetric titrimetry. Van Nostrand Reinhold Company (London) Library of Congress Catalog Card No. 79-186764 External links Basics of Thermometric Titration IUPAC Definition of Thermometric Titration Metrohm Thermometric Titration Monograph Titration
Thermometric titration
[ "Chemistry" ]
6,944
[ "Instrumental analysis", "Titration" ]
8,381,628
https://en.wikipedia.org/wiki/Timeline%20of%20biotechnology
The historical application of biotechnology throughout time is provided below in chronological order. These discoveries, inventions and modifications are evidence of the application of biotechnology since before the common era and describe notable events in the research, development and regulation of biotechnology. Before Common Era 6000 BCE – Yogurt and cheese made with lactic acid-producing bacteria by various people. 5000 BCE – Chinese discover fermentation through beer making. 4500 BCE – Egyptians bake leavened bread using yeast. 500 BCE – Moldy soybean curds used as an antibiotic. 300 BCE – The Greeks practice crop rotation for maximum soil fertility. 100 AD – Chinese use chrysanthemum as a natural insecticide. Pre-20th century 1663 – First recorded description of living cells by Robert Hooke. 1677 – Antonie van Leeuwenhoek discovers and describes bacteria and protozoa. 1798 – Edward Jenner uses first viral vaccine to inoculate a child from smallpox. 1802 – The first recorded use of the word biology. 1824 – Henri Dutrochet discovers that tissues are composed of living cells. 1838 – Protein discovered, named and recorded by Gerardus Johannes Mulder and Jöns Jacob Berzelius. 1862 – Louis Pasteur discovers the bacterial origin of fermentation. 1863 – Gregor Mendel discovers the laws of inheritance. 1864 – invents first centrifuge to separate cream from milk. 1869 – Friedrich Miescher identifies DNA in the sperm of a trout. 1871 – Felix Hoppe-Seyler discovers invertase, which is still used for making artificial sweeteners. 1877 – Robert Koch develops a technique for staining bacteria for identification. 1878 – Walther Flemming discovers chromatin leading to the discovery of chromosomes. 1881 – Louis Pasteur develops vaccines against bacteria that cause cholera and anthrax in chickens. 1885 – Louis Pasteur and Emile Roux develop the first rabies vaccine and use it on Joseph Meister. 20th century 1919 – Károly Ereky, a Hungarian agricultural engineer, first uses the word biotechnology. In 1919, a pivotal milestone was reached with the production of citric acid by Aspergillus niger, marking the inception of the first aerobic fermentation process. This breakthrough spurred the development of technologies to ensure the supply of sterile air at a large scale, paving the way for future advancements in industrial fermentation processes. 1928 – Alexander Fleming notices that a certain mold could stop the duplication of bacteria, leading to the first antibiotic: penicillin. 1933 – Hybrid corn is commercialized. 1942 – Penicillin is mass-produced in microbes for the first time. 1950 – The first synthetic antibiotic is created. 1951 – Artificial insemination of livestock is accomplished using frozen semen. 1952 – L.V. Radushkevich and V.M. Lukyanovich publish clear images of 50 nanometer diameter tubes made of carbon, in the Soviet Journal of Physical Chemistry. 1953 – James D. Watson and Francis Crick describe the structure of DNA. 1958 – The term bionics is coined by Jack E. Steele. 1964 – The first commercial myoelectric arm is developed by the Central Prosthetic Research Institute of the USSR and distributed by the Hangar Limb Factory of the UK. 1972 – The DNA composition of chimpanzees and gorillas is discovered to be 99% similar to that of humans. 1973 – Stanley Norman Cohen and Herbert Boyer perform the first successful recombinant DNA experiment, using bacterial genes. 1974 – Scientists invent the first biocement for industrial applications. 
1975 – Method for producing monoclonal antibodies developed by Köhler and César Milstein. 1978 – North Carolina scientists Clyde Hutchison and Marshall Edgell show it is possible to introduce specific mutations at specific sites in a DNA molecule. 1980 – The U.S. patent for gene cloning is awarded to Cohen and Boyer. 1982 – Humulin, Genentech's human insulin drug produced by genetically engineered bacteria for the treatment of diabetes, is the first biotech drug to be approved by the Food and Drug Administration. 1983 – The Polymerase Chain Reaction (PCR) technique is conceived. 1990 – First federally approved gene therapy treatment is performed successfully on a young girl who suffered from an immune disorder. 1994 – The United States Food and Drug Administration approves the first GM food: the "Flavr Savr" tomato. 1997 – British scientists, led by Ian Wilmut from the Roslin Institute, report cloning Dolly the sheep using DNA from two adult sheep cells. 1999 – Discovery of the gene responsible for developing cystic fibrosis. 2000 – Completion of a "rough draft" of the human genome in the Human Genome Project. 21st century 2001 – Celera Genomics and the Human Genome Project create a draft of the human genome sequence. It is published by Science and Nature Magazine. 2002 – Rice becomes the first crop to have its genome decoded. 2003 – The Human Genome Project is completed, providing information on the locations and sequence of human genes on all 46 chromosomes. 2004 – Addgene launches. 2008 – Japanese astronomers launch the first Medical Experiment Module called "Kibō", to be used on the International Space Station. 2009 – Cedars-Sinai Heart Institute uses modified SAN heart genes to create the first viral pacemaker in guinea pigs, now known as iSANs. 2010 – Over the past two decades, a considerable focus has been directed toward creating sustainable alternatives for petroleum-based fuels, chemicals, and materials. Major players in the chemical industry, such as BASF, DSM, BP, and Total, have initiated significant projects and collaborations in metabolic engineering. Additionally, various startups have emerged with the goal of pioneering new bio-based processes for sustainable chemicals. Despite advancements in establishing large-scale processes, the overall impact on transitioning the chemical industry from petroleum-based to bio-based has been limited. For instance, efforts to engineer microbial production of succinic acid have faced challenges, leading to the termination or minimal-scale production of related research and commercial activities. Out of the chemicals listed by the US Department of Energy, only lactic acid and itaconic acid have achieved industrial-scale production. Lactic acid, added to the list in 2010 after large-scale production was established, currently holds a market value exceeding US$2.5 billion, primarily used in the production of polylactate. 2012 – Thirty-one-year-old Zac Vawter successfully uses a nervous system-controlled bionic leg to climb the Chicago Willis Tower. 2018 – The Joint Centre of Excellence by Imperial College and the UK National Physical Laboratory focuses on advancing industry collaboration to transform high-value manufacturing into high-value products. Noteworthy progress includes the adoption of SBOL by ACS Synthetic Biology in 2016 and ongoing efforts, such as engagement with the BioRoboost project, aiming for international standards with partners from the US, China, Japan, and Singapore. 
2019 – Scientists report, for the first time, the use of the CRISPR technology to edit human genes to treat cancer patients with whom standard treatments were not successful. The progression of commercial applications in synthetic biology is notably swift, propelled predominantly by investments directed towards start-up enterprises and small to medium-sized enterprises (SMEs) engaged in the dissemination of tools, services, and products to the market. This is exemplified by the informational resource titled 'Synthetic Biology UK — A Decade of Rapid Progress,' disseminated online in July 2019, which furnishes a demonstrative compilation of instances rooted in the United Kingdom. 2019 – In a study researchers describe a new method of genetic engineering superior to previous methods like CRISPR they call "prime editing". 2020 27 January – Scientists demonstrate a "Trojan horse" designer-nanoparticle that makes blood cells eat away – from the inside out – portions of atherosclerotic plaque that cause heart attacks and are the current most common cause of death globally. 5 February – Scientists develop a CRISPR-Cas12a-based gene editing system that can probe and control several genes at once and can implement logic gating to e.g. detect cancer cells and execute therapeutic immunomodulatory responses. 6 February – Scientists report that preliminary results from a phase I trial using CRISPR-Cas9 gene editing of T cells in patients with refractory cancer demonstrates that, according to their study, such CRISPR-based therapies can be safe and feasible. 4 March – Scientists report that they have developed a way to 3D bioprint graphene oxide with a protein. They demonstrate that this novel bioink can be used to recreate vascular-like structures. This may be used in the development of safer and more efficient drugs. 4 March – Scientists report to have used CRISPR-Cas9 gene editing inside a human's body for the first time. They aim to restore vision for a patient with inherited Leber congenital amaurosis and state that it may take up to a month to see whether the procedure was successful. In an hour-long surgery study approved by government regulators doctors inject three drops of fluid containing viruses under the patient's retina. In earlier tests in human tissue, mice and monkeys scientists were able to correct half of the cells with the disease-causing mutation, which was more than what is needed to restore vision. Unlike germline editing these DNA modifications aren't inheritable. 9 March – Scientists show that CRISPR-Cas12b is a third promising CRISPR editing tool, next to Cas9 and Cas12a, for plant genome engineering. 14 March – Scientists report in a preprint to have developed a CRISPR-based strategy, called PAC-MAN (Prophylactic Antiviral Crispr in huMAN cells), that can find and destroy viruses in vitro. However, they weren't able to test PAC-MAN on the actual SARS-CoV-2, use a targeting-mechanism that uses only a very limited RNA-region, haven't developed a system to deliver it into human cells and would need a lot of time until another version of it or a potential successor system might pass clinical trials. In the study published as a preprint they write that the CRISPR-Cas13d-based system could be used prophylactically as well as therapeutically and that it could be implemented rapidly to manage new pandemic coronavirus strains – and potentially any virus – as it could be tailored to other RNA-targets quickly, only requiring a small change. The paper was published on 29 April 2020. 
16 March – Researchers report that they have developed a new kind of CRISPR-Cas13d screening platform for effective guide RNA design to target RNA. They used their model to predict optimized Cas13 guide RNAs for all protein-coding RNA-transcripts of the human genome's DNA. Their technology could be used in molecular biology and in medical applications such as for better targeting of virus RNA or human RNA. Targeting human RNA after it has been transcribed from DNA, rather than DNA, would allow for more temporary effects than permanent changes to human genomes. The technology is made available to researchers through an interactive website and free and open source software and is accompanied by a guide on how to create guide RNAs to target the SARS-CoV-2 RNA genome. 16 March – Scientists present new multiplexed CRISPR technology, called CHyMErA (Cas Hybrid for Multiplexed Editing and Screening Applications), that can be used to analyse which or how genes act together by simultaneously removing multiple genes or gene-fragments using both Cas9 and Cas12a. 10 April – Scientists report to have achieved wireless control of adrenal hormone secretion in genetically unmodified rats through the use of injectable, magnetic nanoparticles (MNPs) and remotely applied alternating magnetic fields heats them up. Their findings may aid research of physiological and psychological impacts of stress and related treatments and present an alternative strategy for modulating peripheral organ function than problematic implantable devices. 14 April – Researchers report to have developed a predictive algorithm which can show in visualizations how combinations of genetic mutations can make proteins highly effective or ineffective in organisms – including for viral evolution for viruses like SARS-CoV-2. 15 April – Scientists describe and visualize the atomical structure and mechanical action of the bacteria-killing bacteriocin R2 pyocin and construct engineered versions with different behaviours than the naturally occurring version. Their findings may aid the engineering of nanomachines such as for targeted antibiotics. 20 April – Researchers demonstrate a diffusive memristor fabricated from protein nanowires of the bacterium Geobacter sulfurreducens which functions at substantially lower voltages than previously described ones and may allow the construction of artificial neurons which function at voltages of biological action potentials. The nanowires have a range of advantages over silicon nanowires and the memristors may be used to directly process biosensing signals, for neuromorphic computing and/or direct communication with biological neurons. 27 April – Scientists report to have genetically engineered plants to glow much brighter than previously possible by inserting genes of the bioluminescent mushroom Neonothopanus nambi. The glow is self-sustained, works by converting plants' caffeic acid into luciferin and, unlike for bacterial bioluminescence genes used earlier, has a high light output that is visible to the naked eye. 8 May – Researchers report to have developed artificial chloroplasts – the photosynthetic structures inside plant cells. They combined thylakoids, which are used for photosynthesis, from spinach with a bacterial enzyme and an artificial metabolic module of 16 enzymes, which can convert carbon dioxide more efficiently than plants can alone, into cell-sized droplets. According to the study this demonstrates how natural and synthetic biological modules can be matched for new functional systems. 
11 May – Researchers report the development of synthetic red blood cells that, for the first time, have all of the natural cells' known broad properties and abilities. Furthermore, methods to load functional cargos such as hemoglobin, drugs, magnetic nanoparticles, and ATP biosensors may enable additional non-native functionalities. 12 June – Scientists announce preliminary results from a small trial – the first to use CRISPR-Cas9 gene editing to treat inherited genetic disorders – demonstrating successful treatment of beta thalassaemia and sickle cell disease. 8 July – Mitochondria are gene-edited for the first time, using a new kind of CRISPR-free base editor (DdCBE), by a team of researchers. 8 July – A team of RIKEN researchers report that they succeeded in using a genetically altered variant of R. sulfidophilum to produce spidroins, the main proteins in spider silk. 10 July – Scientists report that after mice exercise, their livers secrete the protein GPLD1, which is also elevated in elderly humans who exercise regularly; that this is associated with improved cognitive function in aged mice; and that increasing the amount of GPLD1 produced by the mouse liver could yield many benefits of regular exercise for the brain. 17 July – Scientists report that yeast cells of the same genetic material and within the same environment age in two distinct ways, describe a biomolecular mechanism that can determine which process dominates during aging, and genetically engineer a novel aging route with substantially extended lifespan. 24 July – Scientists report the development of a machine learning-based process that uses genome databases to design novel proteins. They used inverse statistical physics to learn the patterns of amino acid conservation and co-evolution and to identify design rules. 8 September – Scientists report that suppressing the activin type 2 receptor-signalling proteins myostatin and activin A via the activin A/myostatin inhibitor ACVR2B – tested preliminarily in humans as ACE-031 in the 2010s – can protect against both muscle and bone loss in mice. The mice were sent to the International Space Station and could largely maintain their muscle weights – about twice those of wild type due to genetic engineering for targeted deletion of the myostatin gene – under microgravity. 18 September – Researchers report the development of two active guide RNA-only elements that, according to their study, may enable halting or deleting gene drives introduced into populations in the wild with CRISPR-Cas9 gene editing. The paper's senior author cautions that the two neutralizing systems they demonstrated in cage trials "should not be used with a false sense of security for field-implemented gene drives". 28 September – Biotechnologists report the genetically engineered refinement and mechanistic description of the synergistic enzymes – PETase, first discovered in 2016, and MHETase of Ideonella sakaiensis – for faster depolymerization of PET and also of PEF, which may be useful for depollution, recycling and upcycling of mixed plastics along with other approaches. 7 October – The 2020 Nobel Prize in Chemistry is awarded to Emmanuelle Charpentier and Jennifer A. Doudna for their work on genome editing. 10 November – Scientists show, in an experiment with different gravity environments on the ISS, that microorganisms could be employed to mine useful elements from basalt rocks via bioleaching in space. 
18 November – Researchers report that CRISPR/Cas9, using a lipid nanoparticle delivery system, has been used to treat cancer effectively in a living animal for the first time. 25 November – Scientists report the development of micro-droplets for algal cells or synergistic algal-bacterial multicellular spheroid microbial reactors capable of producing oxygen as well as hydrogen via photosynthesis in daylight under air, which may be useful as a hydrogen economy biotechnology. 30 November – An artificial intelligence company demonstrates an AI algorithm-based approach for protein folding, one of the biggest problems in biology that achieves a protein structure prediction accuracy of over 90% in tests of the biennial CASP assessment with AlphaFold 2. 2 December – The world's first regulatory approval for a cultivated meat product is awarded by the Government of Singapore. The chicken meat was grown in a bioreactor in a fluid of amino acids, sugar, and salt. The chicken nuggets food products are ~70% lab-grown meat, while the remainder is made from mung bean proteins and other ingredients. The company pledged to strive for price parity with premium "restaurant" chicken servings. 11 December – Scientists report that they have rebuilt a human thymus using stem cells and a bioengineered scaffold. 2021 Scientists report the use of CRISPR/Cas9 genome editing to produce a tenfold increase in super-bug targeting formicamycin antibiotics. Scientists use novel lipid nanoparticles to deliver CRISPR genome editing into the livers of mice, resulting in a 57% reduction of LDL cholesterol levels. Researchers describe a CRISPR-dCas9 epigenome editing method for a potential treatment of chronic pain, an analgesia that represses Nav1.7 and showed therapeutic potential in three mouse models of pain. Scientists report the discovery of unknown species of bacteria of Methylobacterium, tentatively named Methylobacterium ajmalii, associated with three new strains, designated IF7SW-B2T, IIF1SW-B5, and IIF4SW-B5, on the ISS. These potentially have ecological significance in closed microgravity systems. A study finds that, despite suboptimal implementation, the snapshot mass-testing for COVID-19 of ~80% of Slovakia's population at the end of October 2020 was highly efficacious, decreasing observed prevalence by 58% within one week and 70% compared to a hypothetical scenario of no snapshot-mass-testing. The extensive worldwide pollution risks due to the use of pesticides are estimated with a new environmental model. Scientists present a tool for epigenome editing, CRISPRoff, that can heritably silence the gene expression of "most genes" and allows for reversible modifications. Scientists report the, controversial, first creation of human-monkey hybrid embryos – some survived for 19 days. A malaria vaccine with 77% efficacy after 1 year – and first to meet the WHO's goal of 75% efficacy – is reported by the University of Oxford. CRISPR gene editing is demonstrated to decrease LDL cholesterol in vivo in Macaca fascicularis by 60%. Researchers partially restore eyesight of a patient with Retinitis pigmentosa using eye-injected viral vectors for genes encoding the light-sensing channelrhodopsin protein ChrimsonR found in glowing algae, and light stimulation of them via engineered goggles that transform visual information of the environment. 
Scientists develop a light-responsive days-lasting modulator of circadian rhythms of tissues via Ck1 inhibition which may be useful for chronobiology research and repair of organs that are "out of sync". Biologists report the development of a new updated classification system for cell nuclei and find a way of transmuting one cell type into that of another. Researchers report the development of a plant proteins-based biodegradable packaging alternative to plastic based on research about molecularly similar spider silk known for its high strength. The first, small clinical trial of CRISPR gene editing in which a – lipid nanoparticle formulated – CRISPR (with mCas9) gene editing therapeutic is injected in vivo into bloodstream of humans concludes with promising results. Researchers report the development of embedded biosensors for pathogenic signatures – such as of SARS-CoV-2 – that are wearable such as face masks. Scientists report that solar-energy-driven production of microbial foods from direct air capture substantially outperforms agricultural cultivation of staple crops in terms of land use. Researchers report that a mix of microorganisms from cow stomachs could break down three types of plastics. Researchers report promising results of ongoing testing and development of an engineered monoclonal antibodies based female contraception. Researchers demonstrate that probiotics can help coral reefs mitigate heat stress, indicating that such could make them more resilient to climate change and mitigate coral bleaching. Researchers present a bioprinting method to produce steak-like cultured meat, composed of three types of bovine cell fibers. Bioengineers report the development of a viable CRISPR-Cas gene-editing system, "CasMINI", that is about twice as compact as the commonly used Cas9 and Cas12a. Media outlets report that the world's first cultured coffee product has been created, still awaiting regulatory approval for near-term commercialization. It was also reported that another biotechnology company produced and sold "molecular coffee" without clear details of the molecular composition or similarity to cultured coffee except having compounds that are in green coffee and that a third company is working on the development of a similar product made from extracted molecules. Such products, for which multiple companies' R&D have acquired substantial funding, may have equal or highly similar effects, composition and taste as natural products but use less water, generate less carbon emissions, require less and relocated labor and cause no deforestation. Researchers report the world's first artificial synthesis of starch. The material essential for many products and the most common carbohydrate in human diets was made from CO2 in a cell-free process and could reduce land, pesticide and water use as well as greenhouse gas emissions while increasing food security. Media outlets report that in Japan the first CRISPR-edited food has gone on public sale. Tomatoes were genetically modified for around five times the normal amount of possibly calming GABA. CRISPR was first applied in tomatoes in 2014. Biomedical researchers demonstrate a switchable Yamanaka factors-reprogramming-based approach for regeneration of damaged heart without tumor-formation with success in mice if the intervention is done immediately before or after a heart attack. The World Health Organization endorses the first malaria vaccine – the antiparasitic RTS,S. 
A new eco-friendly way of extracting and separating rare earth elements is described, using a bacteria-derived protein called lanmodulin, which binds easily to the metals. Medical researchers announce the first successful xenotransplantation, performed on 25 September, of a genetically engineered pig kidney – along with the pig thymus gland, to help the immune system recognize the organ as part of the body – into a brain-dead human, with no immediate signs of rejection, moving the practice closer to clinical trials in living patients waiting for kidney transplants. Researchers report the development of chewing gums that could mitigate COVID-19 spread. The ingredients – CTB-ACE2 proteins grown in plants – bind to the virus. Bionanoengineers report a novel therapy for spinal cord injury – an injectable gel of nanofibers containing moving molecules that trigger cellular repair signaling and mimic the matrix around cells. The therapy enabled paralyzed mice to walk again. Biochemists report one of the first supercomputational approaches for the development of new antibiotic derivatives against antimicrobial resistance. Scientists report the development of an mRNA vaccine that directs the body to build 19 proteins found in tick saliva; by enabling quick development of erythema (itchy redness) at the bite site, it protects guinea pigs against Lyme disease transmitted by ticks. Sri Lanka announces that it will lift its import ban on pesticides and herbicides; the reversal is explained by the absence of accompanying changes to widely applied practices and education systems as well as by economic conditions and, by extension, food security concerns, protests and high food costs. The effort to achieve the first transition to a completely organic farming nation was also challenged by effects of the COVID-19 pandemic. A team of scientists reports a new form of biological reproduction in xenobots, <1 mm constructs that are made up of, and immersed among, frog cells. A method of DNA data storage with 100 times the density of previous techniques is announced. A stem cell-based treatment for Type 1 diabetes is announced. Scientists demonstrate that lab-grown brain cells integrated into digital systems can carry out goal-directed tasks with measurable performance. In particular, the cells played a simulated game of Pong (presented via electrophysiological stimulation), which they learned faster than known machine intelligence systems, albeit to a lower skill level than both AI and humans. Moreover, the study suggests it provides the "first empirical evidence" of differences in information-processing capacity between neurons from different species. Such technologies are referred to as organoid intelligence (OI). Researchers report the development of face masks whose filters, when taken out and sprayed with a fluorescent dye containing antibodies from ostrich eggs, glow under ultraviolet light if they contain SARS-CoV-2. Scientists report the development of a genome editing system, called "twin prime editing", which surpasses the original prime editing system reported in 2019 in that it allows editing large sequences of DNA, addressing the method's key drawback. An mRNA vaccine against HIV with promising results in tests with mice and primates is reported. A vaccine to remove senescent cells, a key driver of the aging process, is demonstrated in mice by researchers from Japan. 
Scientists call for accelerated efforts in the development of broadly protective vaccines, especially a universal coronavirus vaccine that durably protects not just against all SARS-CoV-2 variants but also other coronaviruses, including already identified animal coronaviruses with pandemic potential. Researchers report the development of DNA-based "nanoantennas" that attach to proteins and produce a signal via fluorescence when these perform their biological functions, in particular for distinct conformational changes. The first CRISPR-gene-edited seafood and second set of CRISPR-edited food has gone on public sale in Japan: two fish of which one species grows to twice the size of natural specimens due to disruption of leptin, which controls appetite, and the other grows to 1.2 the natural size with the same amount of food due to disabled myostatin, which inhibits muscle growth. 2022 Scientists report the development of sensors to gather and identify DNA of animals from air (airborne eDNA). A team reports the fastest ever sequencing of a human genome, accomplished in just five hours and two minutes. A chip with molecular circuit components in single-molecule (bio)sensors is demonstrated. Bionanotechnologists report the development of a viable biosensor, , that can detect levels of diverse water pollutants. Researchers report the development of 3D-printed nano-"skyscraper" electrodes that house cyanobacteria for extracting substantially more sustainable bioenergy from their photosynthesis than before. Genetic engineers report field test results that show CRISPR-based gene knockout of KRN2 in maize and OsKRN2 in rice increased grain yields by ~10% and ~8% and did not find any negative effects. Publication of research reporting the sequencing of the remaining gap of the Human genome. Researchers report that CRISPR-Cas9 gene editing has been used to boost vitamin D in tomatoes. Scientists report the first 3D-printed lab-grown wood. It is unclear if it could ever be used on a commercial scale (e.g. with sufficient production efficiency and quality). Researchers report a robotic finger covered in a type of manufactured living human skin. Researchers report the controlled growth of diverse foods in the dark as a potential way to increase energy efficiency of food production and reduce its environmental impacts. News outlets report about the development of algae biopanels by a company for sustainable energy generation with unclear viability after other researchers built the self-powered house prototype in 2013. Researchers report the development of deep learning software that can design proteins that contain prespecified functional sites. Researchers introduce and demonstrate it by repurposing dead spiders as robotic grippers by activating their gripping arms via applying pressurized air. DeepMind announces that its AlphaFold program has uncovered the structures of more than 200 million folded proteins, essentially all of those known to science. The creation of artificial neurons that can receive and release dopamine (chemical signals rather than electrical signals) and communicate with natural rat muscle and brain cells is reported, with potential for use in BCIs/prosthetics. Multiple gene editing of soybean is shown to improve photosynthesis and boost yields by 20%. 
First report of Synthetic embryos grown exclusively from mouse embryonic stem cells, without sperm or eggs or a uterus, with natural-like development and some surviving until day 8.5 where early organogenesis, including formation of foundations of a brain, occurs. They grew in vitro and subsequently ex utero in an artificial womb developed the year before by the same group. Scientists elaborate a need for an evidence-based reform of regulation of genetically modified crops (moving from regulation based on characteristics of the development-process to characteristics of the product) in a paywalled article. Researchers report the development of remote controlled cyborg cockroaches if moving to sunlight for recharging. A novel synthetic biology-based process for recycling of plastics mixtures is presented. Emulate researchers assess advantages of using liver-chips predicting drug-induced liver injury which could reduce the high costs and time needed in drug development workflows/pipelines, sometimes described as the pharmaceutical industry's "productivity crisis". In a paywalled article, American scientists propose policy-based measures to reduce large risks from life sciences research – such as pandemics through accident or misapplication. Risk management measures may include novel international guidelines, effective oversight, improvement of US policies to influence policies globally, and identification of gaps in biosecurity policies along with potential approaches to address them. News reports about the development in China of an edible, plant-based ink derived from food waste, which could be used in 3D printing of scaffolds to reduce the cost of cultured meat. Medical applications Some of these items may also have potential nonmedical applications and vice versa. The first successful xenogeneic heart transplant, from a genetically modified pig to a human patient, is reported. Microbiologists demonstrate an individually adjusted phage-antibiotic combination as an antimicrobial resistance treatment, calling for scaling up the research and further development of this approach. Scientists regrow the missing legs of adult frogs, which are naturally unable to regenerate limbs, within 1.5 years using a five-drug mixture applied for 24 hours via a silicone wearable bioreactor. Scientists report the detection of anomalous unknown-host SARS-CoV-2 lineages with RT-qPCR-based wastewater surveillance. Researchers demonstrate a spinal cord stimulator that enables patients with spinal cord injury to walk again via epidural electrical stimulation (EES) with substantial neurorehabilitation-progress during the first day. On the same day, a separate team reports the first engineered functional human (motor-)neuronal networks derived from iPSCs from the patient for implantation to regenerate injured spinal cord showing success in tests with mice. A new therapy called is reported by scientists in South Korea, which uses CRISPR-Cas9 to kill cancer cells without harming normal tissues. A new compact CRISPR gene editing tool better suited for therapeutic (temporary) RNA editing than Cas13 is reported, Cas7-11, – of which an early version was used for in vitro editing in 2021. The world's smallest remote-controlled walking robot, measuring just half a millimetre wide, is demonstrated. Potential applications include the clearing of blocked arteries. Success of record-long (3 days rather than usually <12 hours) of human transplant organ preservation with machine perfusion of a liver is reported. 
It could possibly be extended to 10 days and avoids the substantial cell damage caused by low-temperature preservation methods. On the same day, a separate study reports new cryoprotectant solvents, tested with cells, that could allow low-temperature methods to preserve organs for much longer with substantially reduced damage. The first success of a clinical trial of a 3D-bioprinted transplant made from the patient's own cells – an external ear to treat microtia – is reported. Researchers describe a new light-activated 'photoimmunotherapy' for brain cancer in vitro. They believe it could join surgery, chemotherapy, radiotherapy and immunotherapy as a fifth major form of cancer treatment. Researchers, health organizations and regulators are discussing, and in part recommending, COVID-19 vaccine boosters that mix the original vaccine formulation with Omicron-adapted components – such as spike proteins of a specific Omicron subvariant – to better prepare the immune system to recognize a wide variety of variants amid substantial and ongoing immune evasion by Omicron. A new CRISPR gene editing/repair alternative to fully active Cas9 is reported – Cas9-derived nickases that mediate homologous chromosome-templated repair, applicable to organisms whose matching chromosome carries the desired gene(s), which is reported to be more effective than Cas9 and to cause fewer off-target edits. Progress towards a pan-coronavirus vaccine is announced, following tests on mice. Antibodies targeting the S2 subunit of SARS-CoV-2's spike protein are found to neutralise multiple coronavirus variants. Scientists report an organ perfusion system that can restore multiple vital (pig) organs at the cellular level one hour after death (during which the body had warm ischaemia), after reporting a similar system for reviving (pig) brains hours after death in 2019. This could be used to preserve donor organs or for revival in medical emergencies. Lab-made cartilage gel based on a synthetic hydrogel composite is found to have greater strength and wear resistance than natural cartilage, which could enable the durable resurfacing of damaged articulating joints. A bioengineered cornea made from pig skin is shown to restore vision to blind people. It can be mass-produced and stored for up to two years, unlike donated human corneas, which are scarce and must be used within two weeks. A weak spot in the spike protein of SARS-CoV-2 is described by researchers, which an antibody fragment called VH Ab6 can attach to, potentially neutralising all major variants of the virus. On 11 August, researchers report a single antibody, SP1-77, that could potentially neutralize all known variants of the virus via a novel mechanism: not by preventing the virus from binding to ACE2 receptors, but by blocking it from fusing with host cells' membranes. A university reports the first successful transplantation of an organoid into a human, first announced on 7 July, with the underlying study being published in February. Researchers report the development of a highly effective CRISPR-Cas9 genome editing method without expensive viral vectors, enabling e.g. novel anti-cancer CAR-T cell therapies. Wastewater surveillance, which substantially expanded during the COVID-19 pandemic, is used to detect monkeypox, with one team of researchers describing their qualitative detection method. A new malaria vaccine developed by the University of Oxford is shown to be ~80% effective at preventing the disease. 
A study adds to the accumulating research indicating postexposure antiviral TIPs could be an effective countermeasure that reduces COVID-19 transmission. India and China approve first nasal COVID-19 vaccines which may (as boosters) also reduce transmission (sterilizing immunity). Nanoengineers report the development of biocompatible microalgae hybrid microrobots for active drug-delivery in the lungs and the gastrointestinal tract (GT). The microrobots are related to medical nanobots and proved effective in tests with mice. A separate team reports the development of 'RoboCap', a robotic drug delivery capsule that enhances drug absorption by tunneling through the mucus layer in the GT. A magnetical guidance system with engineered bacterial microbots for 'precision targeting' is demonstrated to be effective for fighting cancer in mice. The first clinical trial of laboratory-grown red blood cells transfused into people begins. A new CRISPR-Cas9 gene editing tool for large edits without problematic double-stranded breaks is demonstrated, . Researchers report the development of a blood test, , for Alzheimer's screening via levels of toxic amyloid beta oligomers with sensitivity and specificity of apparently 99%. A separate study reports another well-performing blood test to detect Alzheimer's disease via biomarker brain-derived tau. 2023 Cellular bioengineers report the development of nonreplicating bacterial 'cyborg cells' (similar to artificial cells) using a novel approach, assembling a synthetic hydrogel polymer network as an artificial cytoskeleton inside the bacteria. The cells can resist stressors that would kill natural cells and e.g. invade cancer cells or potentially act as biosensors. News outlets report on a study (Nov 22) demonstrating locust antennae implanted as biosensors into (bio-hybrid) robots for AI-interpreted machine olfaction. Scientists review safety-by-design technology- and policy-based approaches to ensure biosafety and biosecurity to prevent engineered pathogen pandemics, such as sequence screening and biocontainment systems, some of which already implemented and part of regulations to some degree. Researchers report the development of a biocomposite 3D printing ink, BactoInk, containing calcium carbonate-producing microorganisms which could be used for restoration, artificial reefs and potentially bone-repair. The growing of electrodes in the living tissue of zebrafish (including in the brain) and medicinal leeches is demonstrated, using an injectable gel and the animals' own endogenous molecules to trigger the formation. The researchers claim their breakthrough enables "a new paradigm in bioelectronics." Scientists coalesce recent developments using human brain organoids into a new field they term organoid intelligence (OI), seeking to harness OI for computing – as a novel type of AI – in an ethically responsible way. Networks of such miniature tissues could become functional using stimulus-response training or organoid-computer interfaces – to potentially become "more powerful than silicon-based computing" for a range of tasks – and could also be used for research of various pathophysiologies, brain development, human learning, memory and intelligence, and new therapeutic approaches against brain diseases. Biological organoid intelligence, 'Brainoware', is demonstrated to solve in a preprint, with implications for bioethics and potential bottlenecks and limits of nonbio-AI. A bacterial hydrogenase enzyme, Huc, for biohydrogen energy from the air is reported. 
A study reports a bacterial new PVC injection system-based way of protein delivery, one of the biggest unsolved problems of gene editing. Researchers demonstrate functional integration of a magnetically steered microbot containing neurons, 'Mag-Neurobot', in a mouse "organotypic hippocampal slice" (OHS) as physical (semi-)artificial neurons. Neuroengineers demonstrate induction of a torpor-like state in mice via ultrasound stimulation. Researchers report in a preprint the CRISPR alternative Fanzor naturally present in eukaryotes with several potential advantages over CRISPR in genome editing, notably smaller size and higher selectiveness. A separate team further demonstrates the potential of this class of genome editors. A new method to deliver drugs into the inner ear is demonstrated with a gene-therapy against hearing loss in mice. Researchers demonstrate encoding and storing data – small images – as DNA without new DNA synthesis by recording light exposure into bacterial DNA via optogenetic circuits. The 'biological camera' extends chemical and electrical interface techniques. Scientists use CRISPR gene-editing to reduce the lignin content in poplar trees by as much as 50%, offering a potentially more sustainable method of fiber production. Researchers report a production method for spider silk fibers from gene-edited transgenic silkworms for a sustainable alternative material six times stronger than Kevlar. In 10 studies, researchers of the report yeast with a half-synthetic genome. Researchers report the discovery of nearly 200 functionally diverse natural machineries for CRISPR gene editing. Researchers demonstrate multicellular microbots grown from a human cell, "anthrobots", that can move around in tissues in vitro. Notable innovations: a large language model (ProGen) that can generate functional protein sequences with a predictable function, with the input including tags specifying protein properties, a deep-learning model (ZFDesign) for zinc finger design for any genomic target for gene- and epigenetic-editing, a second biotech company commercializes sustainable MS mycelium protein after Quorn in 1983, a biodegradable and biorecyclable glass, nonalcoholic first powdered beer (Dryest Beer), a phase-change materials embedded in wood-based energy-saving building material, cultivated meat from extinct mammoths as demonstration of potential, first yeast-based cow-free dairy (Remilk), a method for fat tissue cultured meat, an engineered probiotic against alcohol-induced damage, exogenously administered bioengineered sensors that amplify urinary cancer biomarkers for detection, an open source automated experimentation science platform (BacterAI) for predicting microbial metabolism , an open source transfer learning-based system (Geneformer) for predicting how networks of interconnected human genes control or affect the function of cells, first approval for two cultured meat products in the U.S. and two of the first worldwide, transgenic soya beans containing pig protein (Piggy Sooy) are reported, a performant open source AI software for protein design (RFdiffusion) is introduced, a viable real-time pathogen air quality (pAQ) sensor is demonstrated, a CRISPR-free base editing system without guide RNA that enables also editing chloroplast and mitochondrial genomes with precision (CyDENT), genetically engineered marine microorganism for breaking down PET in salt water, taste-tested bioreactor-grown cultured coffee. 
Medical applications Researchers demonstrate the use of ants as biosensors to detect cancer via urine, a mouse-tested engineered probiotic against autoimmunity in the brain as occurs in multiple sclerosis, mouse-tested engineered bacteria to detect cancer DNA, and 3D printing of hair follicles on lab-grown skin. AI in drug development successes The world's first COVID-19 drug designed by generative AI is cleared for human testing, with clinical trials expected to begin in China. The new drug, ISM3312, is developed by Insilico Medicine. A new AI algorithm developed by Baidu is shown to boost the antibody response of COVID-19 mRNA vaccines by 128 times. AI is used to develop an experimental antibiotic called abaucin, which is shown to be effective against A. baumannii. AI is used to find senolytics. A science writer provides an overview of "the nascent industry of AI-designed drugs". A new class of antibiotic candidates, able to kill methicillin-resistant Staphylococcus aureus (MRSA), is identified using explainable deep learning. The first successful transplant of a functional cryopreserved mammalian kidney is reported. The study demonstrates a "nanowarming" technique for vitrification that allows up to 100 days of preservation of transplant organs. 2024 Notable innovations: rice grains are demonstrated as scaffolds containing cultured animal cells; precision fermentation-derived beta-lactoglobulin is released as a substitute for whey protein amid the growth of a nascent animal-free dairy industry. See also Bioeconomy Bioelectronics Biotechnology risk Working animal Synthetic biology Environmental impact of pesticides#Alternatives Bioethics#Issues Bioinformatics CRISPR gene editing#Recent events Nanobiotechnology Timeline of sustainable energy research 2020–present#Bioenergy and biotechnology Timeline of biology and organic chemistry#1990–present Timeline of the history of genetics Medical Artificial intelligence in healthcare Diagnostic microbiology Gene therapy#2020s List of emerging technologies#Medical Regeneration in humans Timeline of human vaccines Timeline of medicine and medical technology#2000–2022 Timeline of senescence research References Biology timelines Genetics-related lists Medicine timelines Technology timelines
Timeline of biotechnology
[ "Biology" ]
9,604
[ "History of biotechnology" ]
8,384,010
https://en.wikipedia.org/wiki/List%20of%20fluid%20mechanics%20journals
This is a list of scientific journals related to the field of fluid mechanics. See also List of scientific journals List of physics journals List of materials science journals Fluid mechanics Fluid mechanics
List of fluid mechanics journals
[ "Chemistry", "Engineering" ]
36
[ "Fluid dynamics journals", "Civil engineering", "Fluid mechanics", "Fluid dynamics" ]
8,384,818
https://en.wikipedia.org/wiki/Moving%20shock
In fluid dynamics, a moving shock is a shock wave that is travelling through a fluid (often gaseous) medium with a velocity relative to the velocity of the fluid already making up the medium. As such, the normal shock relations require modification to calculate the properties before and after the moving shock. Knowledge of moving shocks is important for studying the phenomena surrounding detonation, among other applications. Theory To derive the theoretical equations for a moving shock, one may start by denoting the region in front of the shock as subscript 1, with the subscript 2 defining the region behind the shock. This is shown in the figure, with the shock wave propagating to the right. The velocity of the gas is denoted by u, pressure by p, and the local speed of sound by a. The speed of the shock wave relative to the gas is W, making the total velocity equal to u1 + W. Next, suppose a reference frame is fixed to the shock so it appears stationary while the gas in regions 1 and 2 moves with a velocity relative to it. Redefining region 1 as x and region 2 as y leads to the following shock-relative velocities: ux = (u1 + W) − u1 = W and uy = (u1 + W) − u2. With these shock-relative velocities, the properties of the regions before and after the shock can be defined, introducing the temperature as T, the density as ρ, and the Mach number as M: px = p1, Tx = T1, ρx = ρ1, Mx = ux/ax = W/a1, and py = p2, Ty = T2, ρy = ρ2, My = uy/ay = (u1 + W − u2)/a2. Introducing the heat capacity ratio as γ, the speed of sound, density, and pressure ratios can be derived from the stationary normal-shock relations evaluated at Mx: py/px = 1 + (2γ/(γ + 1))(Mx² − 1), ρy/ρx = ((γ + 1)Mx²)/((γ − 1)Mx² + 2), and ay/ax = √(Ty/Tx) = √((py/px)(ρx/ρy)). One must keep in mind that the above equations are for a shock wave moving towards the right. For a shock moving towards the left, the x and y subscripts must be switched and the sign convention for the gas velocities is reversed. See also Shock wave Oblique shock Normal shock Gas dynamics Compressible flow Bow shock (aerodynamics) Prandtl-Meyer expansion fan References External links NASA Beginner's Guide to Compressible Aerodynamics Fluid dynamics Aerodynamics Shock waves
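As a minimal numerical sketch of the relations above, assuming a calorically perfect gas with γ = 1.4 and gas constant R = 287 J/(kg·K), the state behind a right-moving shock can be evaluated in Python; the function name and the sea-level example values are illustrative choices only.

import math

def moving_shock_right(p1, T1, u1, W, gamma=1.4, R=287.0):
    """State behind a right-moving shock via a shock-fixed frame analysis.

    p1 [Pa], T1 [K], u1 [m/s]: gas state ahead of the shock (region 1).
    W [m/s]: shock speed relative to the gas ahead of it.
    Returns (p2, T2, rho2, u2) for the gas behind the shock (region 2).
    """
    a1 = math.sqrt(gamma * R * T1)       # speed of sound ahead of the shock
    rho1 = p1 / (R * T1)
    Mx = W / a1                          # shock-relative Mach number of region 1
    # Normal-shock (Rankine-Hugoniot) ratios evaluated at Mx
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (Mx**2 - 1.0)
    rho_ratio = (gamma + 1.0) * Mx**2 / ((gamma - 1.0) * Mx**2 + 2.0)
    p2 = p_ratio * p1
    rho2 = rho_ratio * rho1
    T2 = p2 / (rho2 * R)
    # Mass conservation in the shock frame gives uy = ux / rho_ratio with ux = W,
    # so the lab-frame gas velocity behind the shock is u2 = u1 + W * (1 - 1/rho_ratio).
    u2 = u1 + W * (1.0 - 1.0 / rho_ratio)
    return p2, T2, rho2, u2

# Example: a shock moving at 700 m/s into still air at sea-level conditions.
print(moving_shock_right(p1=101_325.0, T1=288.15, u1=0.0, W=700.0))

For this assumed case the pressure ratio across the shock comes out at roughly 4.8 and the induced gas velocity behind it at roughly 445 m/s.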
Moving shock
[ "Physics", "Chemistry", "Engineering" ]
394
[ "Physical phenomena", "Shock waves", "Chemical engineering", "Aerodynamics", "Waves", "Aerospace engineering", "Piping", "Fluid dynamics" ]
8,387,306
https://en.wikipedia.org/wiki/Heat%20capacity%20rate
The heat capacity rate is heat transfer terminology used in thermodynamics and different forms of engineering denoting the quantity of heat a flowing fluid of a certain mass flow rate is able to absorb or release per unit temperature change per unit time. It is typically denoted as C, listed from empirical data experimentally determined in various reference works, and is typically stated as a comparison between a hot and a cold fluid, Ch and Cc either graphically, or as a linearized equation. It is an important quantity in heat exchanger technology common to either heating or cooling systems and needs, and the solution of many real world problems such as the design of disparate items as different as a microprocessor and an internal combustion engine. Basis A hot fluid's heat capacity rate can be much greater than, equal to, or much less than the heat capacity rate of the same fluid when cold. In practice, it is most important in specifying heat-exchanger systems, wherein one fluid usually of dissimilar nature is used to cool another fluid such as the hot gases or steam cooled in a power plant by a heat sink from a water source—a case of dissimilar fluids, or for specifying the minimal cooling needs of heat transfer across boundaries, such as in air cooling. As the ability of a fluid to resist change in temperature itself changes as heat transfer occurs changing its net average instantaneous temperature, it is a quantity of interest in designs which have to compensate for the fact that it varies continuously in a dynamic system. While itself varying, such change must be taken into account when designing a system for overall behavior to stimuli or likely environmental conditions, and in particular the worst-case conditions encountered under the high stresses imposed near the limits of operability— for example, an air-cooled engine in a desert climate on a very hot day. If the hot fluid had a much larger heat capacity rate, then when hot and cold fluids went through a heat exchanger, the hot fluid would have a very small change in temperature while the cold fluid would heat up a significant amount. If the cool fluid has a much lower heat capacity rate, that is desirable. If they were equal, they would both change more or less temperature equally, assuming equal mass-flow per unit time through a heat exchanger. In practice, a cooling fluid which has both a higher specific heat capacity and a lower heat capacity rate is desirable, accounting for the pervasiveness of water cooling solutions in technology—the polar nature of the water molecule creates some distinct sub-atomic behaviors favorable in practice. where C = heat capacity rate of the fluid of interest in W⋅K−1, dm/dt = mass flow rate of the fluid of interest and cp = specific heat of the fluid of interest. See also Heat specific heat Heat capacity Heat capacity ratio Heat equation Heat transfer coefficient Latent heat Specific heat capacity Specific melting heat Temperature Thermodynamics Thermodynamic (absolute) temperature Thermodynamic equations Volumetric heat capacity References Fundamentals of Heat and Mass Transfer (6th edition) Incorpera, DeWitt, Bergmann, and Lavine Heat transfer Physical quantities Temporal rates
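As a minimal illustration of the defining relation C = (dm/dt) · cp set out in the variable list above, the following Python sketch compares the heat capacity rates of a hot gas stream and a cold water stream; the flow rates, specific heats and heat duty are assumed values chosen only for demonstration.

def heat_capacity_rate(mass_flow_rate, specific_heat):
    """Heat capacity rate C = (dm/dt) * cp, in W/K, from kg/s and J/(kg*K)."""
    return mass_flow_rate * specific_heat

# Illustrative streams: hot exhaust gas cooled by liquid water.
C_hot = heat_capacity_rate(mass_flow_rate=0.5, specific_heat=1_100.0)   # assumed gas-side cp
C_cold = heat_capacity_rate(mass_flow_rate=0.2, specific_heat=4_186.0)  # cp of liquid water

# For a given heat duty Q, each stream's temperature change is dT = Q / C,
# so the stream with the smaller heat capacity rate changes temperature the most.
Q = 20_000.0  # W, assumed heat duty
print(C_hot, C_cold, Q / C_hot, Q / C_cold)

With these assumed numbers the gas stream (C ≈ 550 W/K) changes temperature by about 36 K while the water stream (C ≈ 837 W/K) changes by only about 24 K, which is the asymmetry between hot and cold fluids described above.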
Heat capacity rate
[ "Physics", "Chemistry", "Mathematics" ]
647
[ "Temporal quantities", "Transport phenomena", "Physical phenomena", "Heat transfer", "Physical quantities", "Quantity", "Temporal rates", "Thermodynamics", "Physical properties" ]
1,919,357
https://en.wikipedia.org/wiki/Constituent%20quark
A constituent quark is a current quark with a notional "covering" induced by the renormalization group. In the low-energy limit of QCD, a description by means of perturbation theory is not possible: the coupling is strong, the regime of asymptotic freedom is left behind, and collective interactions between valence quarks and sea quarks become highly significant. Part of the effect of the virtual quarks and virtual gluons in the "sea" can be assigned to each valence quark well enough that the term "constituent quark" can serve as an effective description of the low-energy system. Constituent quarks behave like "dressed" current quarks, i.e. current quarks surrounded by a cloud of virtual quarks and gluons. This cloud ultimately accounts for the large constituent-quark masses. Definition Constituent quarks are valence quarks for which the correlations needed to describe hadrons in terms of gluons and sea quarks are absorbed into effective masses of those valence quarks. The effective quark mass is called the constituent quark mass. Hadrons consist of "glued" constituent quarks. Binding energy The quantum chromodynamic binding energy of a valence quark in a hadron is the amount of energy required to make the hadron spontaneously emit a meson containing the valence quark. This is the same as the constituent-quark mass. Note that quoted values of constituent-quark masses are model-dependent. References Quarks
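As a rough illustrative estimate (the numbers are approximate and, as noted above, model-dependent), dividing the proton mass evenly among its three valence quarks gives a constituent mass of about m_p / 3 ≈ (938 MeV/c²) / 3 ≈ 313 MeV/c², roughly two orders of magnitude larger than the few-MeV current-quark masses of the up and down quarks; the difference is carried by the surrounding cloud of virtual gluons and sea quarks.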
Constituent quark
[ "Physics" ]
324
[ "Particle physics stubs", "Particle physics" ]
1,919,367
https://en.wikipedia.org/wiki/QCD%20matter
Quark matter or QCD matter (quantum chromodynamic) refers to any of a number of hypothetical phases of matter whose degrees of freedom include quarks and gluons, of which the prominent example is quark-gluon plasma. Several series of conferences in 2019, 2020, and 2021 were devoted to this topic. Quarks are liberated into quark matter at extremely high temperatures and/or densities, and some of them are still only theoretical as they require conditions so extreme that they cannot be produced in any laboratory, especially not at equilibrium conditions. Under these extreme conditions, the familiar structure of matter, where the basic constituents are nuclei (consisting of nucleons which are bound states of quarks) and electrons, is disrupted. In quark matter it is more appropriate to treat the quarks themselves as the basic degrees of freedom. In the standard model of particle physics, the strong force is described by the theory of QCD. At ordinary temperatures or densities this force just confines the quarks into composite particles (hadrons) of size around 10−15 m = 1 femtometer = 1 fm (corresponding to the QCD energy scale ΛQCD ≈ 200 MeV) and its effects are not noticeable at longer distances. However, when the temperature reaches the QCD energy scale (T of order 1012 kelvins) or the density rises to the point where the average inter-quark separation is less than 1 fm (quark chemical potential μ around 400 MeV), the hadrons are melted into their constituent quarks, and the strong interaction becomes the dominant feature of the physics. Such phases are called quark matter or QCD matter. The strength of the color force makes the properties of quark matter unlike gas or plasma, instead leading to a state of matter more reminiscent of a liquid. At high densities, quark matter is a Fermi liquid, but is predicted to exhibit color superconductivity at high densities and temperatures below 1012 K. Occurrence Natural occurrence According to the Big Bang theory, in the early universe at high temperatures when the universe was only a few tens of microseconds old, the phase of matter took the form of a hot phase of quark matter called the quark–gluon plasma (QGP). Compact stars (neutron stars). A neutron star is much cooler than 1012 K, but gravitational collapse has compressed it to such high densities, that it is reasonable to surmise that quark matter may exist in the core. Compact stars composed mostly or entirely of quark matter are called quark stars or strange stars. QCD matter may exist within the collapsar of a gamma-ray burst, where temperatures as high as 6.7 × 1013 K may be generated. At this time no star with properties expected of these objects has been observed, although some evidence has been provided for quark matter in the cores of large neutron stars. Strangelets. These are theoretically postulated (but as yet unobserved) lumps of strange matter comprising nearly equal amounts of up, down and strange quarks. Strangelets are supposed to be present in the galactic flux of high energy particles and should therefore theoretically be detectable in cosmic rays here on Earth, but no strangelet has been detected with certainty. Cosmic ray impacts. Cosmic rays comprise a lot of different particles, including highly accelerated atomic nuclei, particularly that of iron. Laboratory experiments suggests that the inevitable interaction with heavy noble gas nuclei in the upper atmosphere would lead to quark–gluon plasma formation. 
Quark matter with baryon number over about 300 may be more stable than nuclear matter. This form of baryonic matter could possibly form a continent of stability. Laboratory experiments Even though quark-gluon plasma can only occur under quite extreme conditions of temperature and/or pressure, it is being actively studied at particle colliders, such as the Large Hadron Collider LHC at CERN and the Relativistic Heavy Ion Collider RHIC at Brookhaven National Laboratory. In these collisions, the plasma only occurs for a very short time before it spontaneously disintegrates. The plasma's physical characteristics are studied by detecting the debris emanating from the collision region with large particle detectors Heavy-ion collisions at very high energies can produce small short-lived regions of space whose energy density is comparable to that of the 20-micro-second-old universe. This has been achieved by colliding heavy nuclei such as lead nuclei at high speeds, and a first time claim of formation of quark–gluon plasma came from the SPS accelerator at CERN in February 2000. This work has been continued at more powerful accelerators, such as RHIC in the US, and as of 2010 at the European LHC at CERN located in the border area of Switzerland and France. There is good evidence that the quark–gluon plasma has also been produced at RHIC. Thermodynamics The context for understanding the thermodynamics of quark matter is the standard model of particle physics, which contains six different flavors of quarks, as well as leptons like electrons and neutrinos. These interact via the strong interaction, electromagnetism, and also the weak interaction which allows one flavor of quark to turn into another. Electromagnetic interactions occur between particles that carry electrical charge; strong interactions occur between particles that carry color charge. The correct thermodynamic treatment of quark matter depends on the physical context. For large quantities that exist for long periods of time (the "thermodynamic limit"), we must take into account the fact that the only conserved charges in the standard model are quark number (equivalent to baryon number), electric charge, the eight color charges, and lepton number. Each of these can have an associated chemical potential. However, large volumes of matter must be electrically and color-neutral, which determines the electric and color charge chemical potentials. This leaves a three-dimensional phase space, parameterized by quark chemical potential, lepton chemical potential, and temperature. In compact stars quark matter would occupy cubic kilometers and exist for millions of years, so the thermodynamic limit is appropriate. However, the neutrinos escape, violating lepton number, so the phase space for quark matter in compact stars only has two dimensions, temperature (T) and quark number chemical potential μ. A strangelet is not in the thermodynamic limit of large volume, so it is like an exotic nucleus: it may carry electric charge. A heavy-ion collision is in neither the thermodynamic limit of large volumes nor long times. Putting aside questions of whether it is sufficiently equilibrated for thermodynamics to be applicable, there is certainly not enough time for weak interactions to occur, so flavor is conserved, and there are independent chemical potentials for all six quark flavors. 
The initial conditions (the impact parameter of the collision, the number of up and down quarks in the colliding nuclei, and the fact that they contain no quarks of other flavors) determine the chemical potentials. (Reference for this section:). Phase diagram The phase diagram of quark matter is not well known, either experimentally or theoretically. A commonly conjectured form of the phase diagram is shown in the figure to the right. It is applicable to matter in a compact star, where the only relevant thermodynamic potentials are quark chemical potential μ and temperature T. For guidance it also shows the typical values of μ and T in heavy-ion collisions and in the early universe. For readers who are not familiar with the concept of a chemical potential, it is helpful to think of μ as a measure of the imbalance between quarks and antiquarks in the system. Higher μ means a stronger bias favoring quarks over antiquarks. At low temperatures there are no antiquarks, and then higher μ generally means a higher density of quarks. Ordinary atomic matter as we know it is really a mixed phase, droplets of nuclear matter (nuclei) surrounded by vacuum, which exists at the low-temperature phase boundary between vacuum and nuclear matter, at μ = 310 MeV and T close to zero. If we increase the quark density (i.e. increase μ) keeping the temperature low, we move into a phase of more and more compressed nuclear matter. Following this path corresponds to burrowing more and more deeply into a neutron star. Eventually, at an unknown critical value of μ, there is a transition to quark matter. At ultra-high densities we expect to find the color-flavor-locked (CFL) phase of color-superconducting quark matter. At intermediate densities we expect some other phases (labelled "non-CFL quark liquid" in the figure) whose nature is presently unknown. They might be other forms of color-superconducting quark matter, or something different. Now, imagine starting at the bottom left corner of the phase diagram, in the vacuum where μ = T = 0. If we heat up the system without introducing any preference for quarks over antiquarks, this corresponds to moving vertically upwards along the T axis. At first, quarks are still confined and we create a gas of hadrons (pions, mostly). Then around T = 150 MeV there is a crossover to the quark gluon plasma: thermal fluctuations break up the pions, and we find a gas of quarks, antiquarks, and gluons, as well as lighter particles such as photons, electrons, positrons, etc. Following this path corresponds to travelling far back in time (so to say), to the state of the universe shortly after the big bang (where there was a very tiny preference for quarks over antiquarks). The line that rises up from the nuclear/quark matter transition and then bends back towards the T axis, with its end marked by a star, is the conjectured boundary between confined and unconfined phases. Until recently it was also believed to be a boundary between phases where chiral symmetry is broken (low temperature and density) and phases where it is unbroken (high temperature and density). It is now known that the CFL phase exhibits chiral symmetry breaking, and other quark matter phases may also break chiral symmetry, so it is not clear whether this is really a chiral transition line. The line ends at the "chiral critical point", marked by a star in this figure, which is a special temperature and density at which striking physical phenomena, analogous to critical opalescence, are expected. 
A complete description of the phase diagram requires a full understanding of dense, strongly interacting hadronic matter and strongly interacting quark matter from some underlying theory, e.g. quantum chromodynamics (QCD). However, because such a description requires the proper understanding of QCD in its non-perturbative regime, which is still far from being completely understood, any theoretical advance remains very challenging. Theoretical challenges: calculation techniques The phase structure of quark matter remains mostly conjectural because it is difficult to perform calculations predicting the properties of quark matter. The reason is that QCD, the theory describing the dominant interaction between quarks, is strongly coupled at the densities and temperatures of greatest physical interest, and hence it is very hard to obtain any predictions from it. Here are brief descriptions of some of the standard approaches. Lattice gauge theory The only first-principles calculational tool currently available is lattice QCD, i.e. brute-force computer calculations. Because of a technical obstacle known as the fermion sign problem, this method can only be used at low density and high temperature (μ < T), and it predicts that the crossover to the quark–gluon plasma will occur around T = 150 MeV. However, it cannot be used to investigate the interesting color-superconducting phase structure at high density and low temperature. Weak coupling theory Because QCD is asymptotically free, it becomes weakly coupled at unrealistically high densities, and diagrammatic methods can be used. Such methods show that the CFL phase occurs at very high density. At high temperatures, however, diagrammatic methods are still not under full control. Models To obtain a rough idea of what phases might occur, one can use a model that has some of the same properties as QCD, but is easier to manipulate. Many physicists use Nambu–Jona-Lasinio models, which contain no gluons, and replace the strong interaction with a four-fermion interaction. Mean-field methods are commonly used to analyse the phases. Another approach is the bag model, in which the effects of confinement are simulated by an additive energy density that penalizes unconfined quark matter. Effective theories Many physicists simply give up on a microscopic approach, and make informed guesses of the expected phases (perhaps based on NJL model results). For each phase, they then write down an effective theory for the low-energy excitations, in terms of a small number of parameters, and use it to make predictions that could allow those parameters to be fixed by experimental observations. Other approaches There are other methods that are sometimes used to shed light on QCD, but for various reasons have not yet yielded useful results in studying quark matter. 1/N expansion Treat the number of colors N, which is actually 3, as a large number, and expand in powers of 1/N. It turns out that at high density the higher-order corrections are large, and the expansion gives misleading results. Supersymmetry Adding scalar quarks (squarks) and fermionic gluons (gluinos) to the theory makes it more tractable, but the thermodynamics of quark matter depends crucially on the fact that only fermions can carry quark number, and on the number of degrees of freedom in general.
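To give a feel for how the simplest of these model approaches works in practice, the sketch below uses a bag-model-style estimate: a free gas of massless quarks at zero temperature, whose pressure must exceed an assumed bag constant B before unconfined quark matter has positive pressure. Interactions, quark masses and color superconductivity are ignored, and the bag constant values are only assumed illustrative inputs, so the resulting numbers are rough indications of scale rather than predictions.

```python
# Back-of-the-envelope bag-model-style estimate (natural units, hbar = c = 1, energies in MeV).
# This is only a sketch of the "bag model" idea mentioned above, not a realistic calculation:
# quarks are treated as a free, massless Fermi gas at T = 0 and interactions are ignored.
import math

def quark_pressure(mu, flavors=2, colors=3, spins=2):
    """Pressure (in MeV^4) of a free gas of massless quarks at quark chemical potential mu (MeV)."""
    g = flavors * colors * spins            # internal degrees of freedom
    return g * mu**4 / (24.0 * math.pi**2)

def critical_mu(bag_constant_quarter, flavors=2):
    """Quark chemical potential (MeV) at which the gas pressure equals the bag constant B.
    bag_constant_quarter is B^(1/4) in MeV; the values used below are assumed, illustrative inputs."""
    B = bag_constant_quarter**4
    g = flavors * 3 * 2
    return (24.0 * math.pi**2 * B / g) ** 0.25

for B14 in (145.0, 200.0):
    mu_c = critical_mu(B14)
    print(f"B^(1/4) = {B14:5.1f} MeV -> quark mu_c ~ {mu_c:5.0f} MeV "
          f"(baryon chemical potential ~ {3*mu_c:6.0f} MeV)")
```

For B^(1/4) around 145 MeV this toy estimate gives a critical quark chemical potential of roughly 300 MeV, the same order of magnitude as the nuclear-matter scale quoted in the phase-diagram discussion above, which is exactly the kind of rough guidance such models are used for.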
Experimental challenges Experimentally, it is hard to map the phase diagram of quark matter because it has proven difficult to reach high enough temperatures and densities in laboratory experiments, which rely on collisions of relativistic heavy ions as their experimental tool. However, these collisions ultimately will provide information about the crossover from hadronic matter to QGP. It has been suggested that observations of compact stars may also constrain information about the high-density, low-temperature region. Models of the cooling, spin-down, and precession of these stars offer information about the relevant properties of their interior. As observations become more precise, physicists hope to learn more. One of the natural subjects for future research is the search for the exact location of the chiral critical point. Some ambitious lattice QCD calculations may have found evidence for it, and future calculations will clarify the situation. Heavy-ion collisions might be able to measure its position experimentally, but this will require scanning across a range of values of μ and T. Evidence In 2020, evidence was provided that the cores of neutron stars with mass ~2M⊙ were likely composed of quark matter. The result was based on neutron-star tidal deformability during a neutron star merger as measured by gravitational-wave observatories, leading to an estimate of star radius, combined with calculations of the equation of state relating the pressure and energy density of the star's core. The evidence was strongly suggestive but did not conclusively prove the existence of quark matter. See also Sources and further reading Aronson, S. and Ludlam, T.: "Hunting the quark gluon plasma", U.S. Dept. of Energy (2005) Letessier, Jean: Hadrons and quark-gluon plasma, Cambridge monographs on particle physics, nuclear physics, and cosmology (Vol. 18), Cambridge University Press (2002) References External links Virtual Journal on QCD Matter RHIC finds Exotic Antimatter Quantum chromodynamics Phases of matter Unsolved problems in physics
QCD matter
[ "Physics", "Chemistry" ]
3,363
[ "Quark matter", "Phases of matter", "Unsolved problems in physics", "Astrophysics", "Nuclear physics", "Matter" ]
1,919,614
https://en.wikipedia.org/wiki/Gas%20leak
A gas leak refers to a leak of natural gas or another gaseous product from a pipeline or other containment into any area where the gas should not be present. Gas leaks can be hazardous to health as well as the environment. Even a small leak into a building or other confined space may gradually build up an explosive or lethal gas concentration. Natural gas leaks and the escape of refrigerant gas into the atmosphere are especially harmful, because of their global warming potential and ozone depletion potential. Leaks of gases associated with industrial operations and equipment are also generally known as fugitive emissions. Natural gas leaks from fossil fuel extraction and use are known as fugitive gas emissions. Such unintended leaks should not be confused with similar intentional types of gas release, such as gas venting emissions, which are controlled releases often practiced as a part of routine operations, and "emergency pressure releases", which prevent equipment damage and safeguard life. Gas leaks should also not be confused with "gas seepage" from the earth or oceans, whether natural or due to human activity. Fire and explosion safety Pure natural gas is colorless and odorless and is composed primarily of methane. Unpleasant scents in the form of traces of mercaptans are usually added, to assist in identifying leaks. This odor may be perceived as rotting eggs or a faintly unpleasant skunk smell. Persons detecting the odor must evacuate the area and abstain from using open flames or operating electrical equipment, to reduce the risk of fire and explosion. As a result of the Pipeline Safety Improvement Act of 2002 passed in the United States, federal safety standards require companies providing natural gas to conduct safety inspections for gas leaks in homes and other buildings receiving natural gas. The gas company is required to inspect gas meters and inside gas piping from the point of entry into the building to the outlet side of the gas meter for gas leaks. This may require entry into private homes by the natural gas companies to check for hazardous conditions. Harm to vegetation Gas leaks can damage or kill plants. In addition to leaks from natural gas pipes, methane and other gases migrating from landfill garbage disposal sites can also cause chlorosis and necrosis in grass, weeds, or trees. In some cases, leaking gas may migrate a considerable distance from the source of the leak to an affected tree. Harm to animals Methane is an asphyxiant gas which can reduce the normal oxygen concentration in breathing air. Small animals and birds are also more sensitive to toxic gases like carbon monoxide that are sometimes present with natural gas. The expression "canary in a coal mine" derives from the historical practice of using a canary as an animal sentinel to detect dangerously high concentrations of naturally occurring coal gas. Greenhouse gas emissions Methane, the primary constituent of natural gas, is up to 120 times as potent a greenhouse gas as carbon dioxide. Thus, the release of unburned natural gas produces much stronger effects than the carbon dioxide that would have been released if the gas had been burned as intended. Leak grades In the United States, most state and federal agencies have adopted the Gas Piping and Technology Committee (GPTC) standards for grading natural gas leaks. A Grade 1 leak is a leak that represents an existing or probable hazard to persons or property, and requires immediate repair or continuous action until the conditions are no longer hazardous.
Examples of a Grade 1 leak are: Any leak which, in the judgment of operating personnel at the scene, is regarded as an immediate hazard. Escaping gas that has ignited. Any indication of gas which has migrated into or under a building, or into a foreign sub-structure. Any reading at the outside wall of a building, or where gas would likely migrate to an outside wall of a building. Any reading of 80% LEL, or greater, in a confined space. Any reading of 80% LEL, or greater, in small substructures (other than gas associated substructures) from which gas would likely migrate to the outside wall of a building. Any leak that can be seen, heard, or felt, and which is in a location that may endanger the general public or property. A Grade 2 leak is a leak that is recognized as being non-hazardous at the time of detection, but justifies scheduled repair based on probable future hazard. Examples of a Grade 2 leak are: Leaks Requiring Action Ahead of Ground Freezing or Other Adverse Changes in Venting Conditions: Any leak which, under frozen or other adverse soil conditions, would likely migrate to the outside wall of a building. Leaks requiring action within six months: Any reading of 40% LEL, or greater, under a sidewalk in a wall-to-wall paved area that does not qualify as a Grade 1 leak. Any reading of 100% LEL, or greater, under a street in a wall-to-wall paved area that has significant gas migration and does not qualify as a Grade 1 leak. Any reading less than 80% LEL in small substructures (other than gas associated substructures) from which gas would likely migrate, creating a probable future hazard. Any reading between 20% LEL and 80% LEL in a confined space. Any reading on a pipeline operating at 30 percent specified minimum yield strength (SMYS) or greater, in a class 3 or 4 location, which does not qualify as a Grade 1 leak. Any reading of 80% LEL, or greater, in gas associated substructures. Any leak which, in the judgment of operating personnel at the scene, is of sufficient magnitude to justify scheduled repair. A Grade 3 leak is non-hazardous at the time of detection and can be reasonably expected to remain non-hazardous. Examples of a Grade 3 leak are: Any reading of less than 80% LEL in small gas associated substructures. Any reading under a street in areas without wall-to-wall paving where it is unlikely the gas could migrate to the outside wall of a building. Any reading of less than 20% LEL in a confined space. Studies In 2012, Boston University professor Nathan Phillips and his students drove along all of Boston's roads with a gas sensor, identifying 3300 leaks. The Conservation Law Foundation produced a map showing around 4000 leaks reported to the Massachusetts Department of Public Utilities. In July 2014, the Environmental Defense Fund released an interactive online map based on gas sensors attached to three mapping cars which already were being driven along Boston streets to update Google Earth Street View. This survey differed from the previous studies in that an estimate of leak severity was produced, rather than just leak detection. This map should help the gas utility to prioritize leak repairs, as well as raising public awareness of the problem. In 2017, Rhode Island released an estimated 15.7 million metric tons of greenhouse gases, about a third of which came from leaks in natural gas pipes. This figure, published in 2019, was calculated based on an assumed leakage rate of 2.7% (as that is the rate of leakage in the nearby city of Boston).
The study's authors estimated that fixing the leaks would incur an annual cost of $1.6 billion to $4 billion. In 2021, University of Geoscience (Beijing) affiliates Jian Rui Feng and Wen-men Gai, along with Ya-bin Yan, chief engineer of the Guangzhou Metro Group Co., published a case study that used virtual computations to model a subway in Guangzhou, China, together with the evacuation plans and actions that could mitigate the risk gas leaks pose to people. The study found that activating air vents, reducing the number of people who initially needed to evacuate, and improving each person's ability to identify the risk all reduced the danger that a gas leak would pose within the subway. Regulation Massachusetts Legislation passed in 2014 requires gas suppliers to make greater efforts to control some of the 20,000 documented leaks in the US state of Massachusetts. The new law requires grade 1 and 2 leaks to be repaired if the street above a gas pipe is dug up, and requires priority be given to leaks near schools. It provides a mechanism for increased revenue from ratepayers (up to 1.5% without further approval) to cover the cost of repairs and replacement of leak-prone materials (like cast iron and non-cathodically protected steel) on an accelerated basis. The law sets a target of 20 years for replacement of pipes made from leak-prone materials if feasible given the revenue cap; Columbia Gas of Massachusetts (formerly named "Bay State Gas"), Berkshire Gas, Liberty Utilities, National Grid, and Unitil say they will meet this target, but NSTAR says it will take 25 years to complete. Leaks, statistics on leak-prone materials, and financial statements are reported annually to the Department of Public Utilities, which also has responsibility for rate-setting. Additional proposals not included in the law would have required grade 3 leaks to be repaired during road construction, and priority for leaks which are killing trees or which were near hospitals or churches. An attorney for the Conservation Law Foundation stated that the leaks were worth $38.8 million in lost natural gas, which also contributes 4% of the state's greenhouse gas emissions. A federal study prompted by US Senator Edward J. Markey concluded that Massachusetts consumers paid approximately $1.5 billion from 2000 to 2011 for gas which leaked and benefited no one. Markey has also backed legislation that would implement similar requirements at the national level, along with financing provisions for repairs. History Catastrophic gas leaks, such as the Bhopal disaster, are well-recognized as problems, but the more-subtle effects of chronic low-level leaks have been slower to gain recognition. Other contexts In work with dangerous gases (such as in a lab or industrial setting), a gas leak may require hazmat emergency response, especially if the leaked material is flammable, explosive, corrosive, or toxic. For instance, the transportation of natural gas can be susceptible to leaks, which themselves have explosive potential; the Latvian natural gas system, for example, has been the subject of a report on the classification and potential risks of gas leakage as well as actionable responses. There is also the safety of the public to consider, such as analyzing techniques to help ensure the safety and ease of evacuation plans.
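To make the %LEL thresholds in the grading examples above concrete, the following sketch classifies a confined-space reading into one of the three grades. It is illustrative only: actual grading under the GPTC guidance depends on location, migration, paving, pipeline stress and operator judgment, not just a single reading.

```python
# Illustrative-only classifier for the confined-space examples in the GPTC-style
# leak grades described above. Real grading requires field judgment and the full
# criteria (location, migration, paving, pipeline stress, etc.), not just a reading.
def grade_confined_space_reading(percent_lel: float) -> int:
    """Return the leak grade suggested by a gas reading (in % of the lower explosive
    limit, LEL) taken in a confined space, following the example thresholds above."""
    if percent_lel >= 80:
        return 1   # existing or probable hazard: repair immediately / act continuously
    if percent_lel >= 20:
        return 2   # non-hazardous now, but schedule repair (probable future hazard)
    return 3       # non-hazardous and expected to remain so

if __name__ == "__main__":
    for reading in (5, 20, 45, 80, 95):
        print(f"{reading:3d}% LEL in a confined space -> Grade {grade_confined_space_reading(reading)}")
```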
See also Gas detector List of pipeline accidents in the United States Merrimack Valley gas explosions 2022 Nord Stream pipeline sabotage References External links naturalgaswatch.org (advocacy blog) City Maps of Gas Leaks reported by utilities in Massachusetts Somerville and Cambridge gas leaks surveyed by mobile detection vehicle Occupational safety and health Gases Natural gas safety
Gas leak
[ "Physics", "Chemistry" ]
2,137
[ "Matter", "Natural gas safety", "Phases of matter", "Natural gas technology", "Statistical mechanics", "Gases" ]
1,920,694
https://en.wikipedia.org/wiki/Volume%20element
In mathematics, a volume element provides a means for integrating a function with respect to volume in various coordinate systems such as spherical coordinates and cylindrical coordinates. Thus a volume element is an expression of the form dV = ρ(u1, u2, u3) du1 du2 du3, where the ui are the coordinates, so that the volume of any set B can be computed by Volume(B) = ∫B ρ(u1, u2, u3) du1 du2 du3. For example, in spherical coordinates (with u1 the radius and u2 the polar angle) dV = u1² sin u2 du1 du2 du3, and so ρ(u1, u2, u3) = u1² sin u2. The notion of a volume element is not limited to three dimensions: in two dimensions it is often known as the area element, and in this setting it is useful for doing surface integrals. Under changes of coordinates, the volume element changes by the absolute value of the Jacobian determinant of the coordinate transformation (by the change of variables formula). This fact allows volume elements to be defined as a kind of measure on a manifold. On an orientable differentiable manifold, a volume element typically arises from a volume form: a top degree differential form. On a non-orientable manifold, the volume element is typically the absolute value of a (locally defined) volume form: it defines a 1-density. Volume element in Euclidean space In Euclidean space, the volume element is given by the product of the differentials of the Cartesian coordinates: dV = dx dy dz. In different coordinate systems of the form x = x(u1, u2, u3), y = y(u1, u2, u3), z = z(u1, u2, u3), the volume element changes by the Jacobian (determinant) of the coordinate change: dV = |∂(x, y, z)/∂(u1, u2, u3)| du1 du2 du3. For example, in spherical coordinates (mathematical convention) x = ρ cos θ sin φ, y = ρ sin θ sin φ, z = ρ cos φ, the Jacobian determinant has absolute value |∂(x, y, z)/∂(ρ, θ, φ)| = ρ² sin φ, so that dV = ρ² sin φ dρ dθ dφ. This can be seen as a special case of the fact that differential forms transform through a pullback F* as F*(u dy1 ∧ ... ∧ dyn) = (u ∘ F) det(∂Fj/∂xi) dx1 ∧ ... ∧ dxn. Volume element of a linear subspace Consider the linear subspace of the n-dimensional Euclidean space Rn that is spanned by a collection of linearly independent vectors X1, ..., Xk. To find the volume element of the subspace, it is useful to know the fact from linear algebra that the volume of the parallelepiped spanned by the Xi is the square root of the determinant of the Gramian matrix of the Xi: √det(Xi · Xj). Any point p in the subspace can be given coordinates (u1, ..., uk) such that p = u1 X1 + ... + uk Xk. At a point p, if we form a small parallelepiped with sides dui, then the volume of that parallelepiped is the square root of the determinant of the Gramian matrix of the vectors dui Xi, namely √det(Xi · Xj) du1 ... duk. This therefore defines the volume form in the linear subspace. Volume element of manifolds On an oriented Riemannian manifold of dimension n, the volume element is a volume form equal to the Hodge dual of the unit constant function, f(x) = 1: ω = ⋆1. Equivalently, the volume element is precisely the Levi-Civita tensor. In coordinates, ω = √|det g| dx1 ∧ ... ∧ dxn, where det g is the determinant of the metric tensor g written in the coordinate system. Area element of a surface A simple example of a volume element can be explored by considering a two-dimensional surface embedded in n-dimensional Euclidean space. Such a volume element is sometimes called an area element. Consider a subset U ⊂ R² and a mapping function φ : U → Rn, thus defining a surface embedded in Rn. In two dimensions, volume is just area, and a volume element gives a way to determine the area of parts of the surface. Thus a volume element is an expression of the form f(u1, u2) du1 du2 that allows one to compute the area of a set B lying on the surface by computing the integral Area(B) = ∫B f(u1, u2) du1 du2. Here we will find the volume element on the surface that defines area in the usual sense. The Jacobian matrix of the mapping is λij = ∂φi/∂uj, with index i running from 1 to n, and j running from 1 to 2.
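The coordinate-change formula above can be checked symbolically. The sketch below (assuming the SymPy library is available; the variable names are illustrative) computes the Jacobian determinant for the spherical parametrization, and also verifies the area element of a sphere of radius r that is worked out at the end of this article. The surface case then continues below with the metric induced by this Jacobian.

```python
# A minimal SymPy check of the volume and area elements discussed in this article.
# Assumes SymPy is installed; variable names are illustrative.
import sympy as sp

rho, theta, phi, r, u1, u2 = sp.symbols('rho theta phi r u1 u2', positive=True)

# Spherical coordinates (mathematical convention used above):
x = rho * sp.cos(theta) * sp.sin(phi)
y = rho * sp.sin(theta) * sp.sin(phi)
z = rho * sp.cos(phi)

# Jacobian determinant of (x, y, z) with respect to (rho, theta, phi)
J = sp.Matrix([x, y, z]).jacobian([rho, theta, phi])
print(sp.simplify(J.det()))
# -> -rho**2*sin(phi); its absolute value rho^2 sin(phi) is the factor in
#    dV = rho^2 sin(phi) drho dtheta dphi (the sign only reflects the ordering of variables).

# Area element of a sphere of radius r, parametrized by (u1, u2):
surf = sp.Matrix([r*sp.cos(u1)*sp.sin(u2), r*sp.sin(u1)*sp.sin(u2), r*sp.cos(u2)])
lam = surf.jacobian([u1, u2])        # 3x2 Jacobian of the embedding
g = lam.T * lam                      # induced (pullback) metric on the parameter domain
area_element = sp.sqrt(g.det())      # sqrt(r**4*sin(u2)**2) = r^2 |sin(u2)|
# On the parameter range 0 <= u2 <= pi, |sin(u2)| = sin(u2), so the area element is
# r^2 sin(u2) du1 du2; integrating it over the whole domain recovers the sphere's area:
print(sp.integrate(r**2*sp.sin(u2), (u1, 0, 2*sp.pi), (u2, 0, sp.pi)))   # -> 4*pi*r**2
```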
The Euclidean metric in the n-dimensional space induces a metric g on the set U, with matrix elements gij = Σk λki λkj = (λT λ)ij, that is, the dot products of the tangent vectors ∂φ/∂ui and ∂φ/∂uj. The determinant of the metric is given by det g = det(λT λ). For a regular surface, this determinant is non-vanishing; equivalently, the Jacobian matrix has rank 2. Now consider a change of coordinates on U, given by a diffeomorphism f : U → U, so that the coordinates (u1, u2) are given in terms of (v1, v2) by (u1, u2) = f(v1, v2). The Jacobian matrix of this transformation is given by Fij = ∂fi/∂vj. In the new coordinates, we have ∂φi/∂vj = Σk (∂φi/∂uk)(∂fk/∂vj), so that the Jacobian matrix of the embedding becomes λF, and so the metric transforms as FT g F, where FT g F is the pullback metric in the v coordinate system. The determinant is det(FT g F) = (det F)² det g. Given the above construction, it should now be straightforward to understand how the volume element is invariant under an orientation-preserving change of coordinates. In two dimensions, the volume is just the area. The area of a subset B ⊂ U is given by the integral Area(B) = ∫B √(det g) du1 du2 = ∫B √(det g) |det F| dv1 dv2 = ∫B √(det(FT g F)) dv1 dv2. Thus, in either coordinate system, the volume element takes the same expression: the expression of the volume element is invariant under a change of coordinates. Note that there was nothing particular to two dimensions in the above presentation; the above trivially generalizes to arbitrary dimensions. Example: Sphere For example, consider the sphere with radius r centered at the origin in R3. This can be parametrized using spherical coordinates with the map φ(u1, u2) = (r cos u1 sin u2, r sin u1 sin u2, r cos u2). Then the induced metric is g = diag(r² sin² u2, r²), and the area element is √(det g) du1 du2 = r² sin u2 du1 du2. See also Volume integral Surface integral Line integral Line element References Measure theory Integral calculus Multivariable calculus
Volume element
[ "Mathematics" ]
955
[ "Multivariable calculus", "Integral calculus", "Calculus" ]
1,920,736
https://en.wikipedia.org/wiki/Claisen%20rearrangement
The Claisen rearrangement is a powerful carbon–carbon bond-forming chemical reaction discovered by Rainer Ludwig Claisen. The heating of an allyl vinyl ether will initiate a [3,3]-sigmatropic rearrangement to give a γ,δ-unsaturated carbonyl compound, driven by the exergonically favored formation of the carbonyl C=O bond. Mechanism The Claisen rearrangement is an exothermic, concerted (bond cleavage and recombination) pericyclic reaction. Woodward–Hoffmann rules show a suprafacial, stereospecific reaction pathway. The kinetics are first order, and the whole transformation proceeds through a highly ordered cyclic transition state and is intramolecular. Crossover experiments eliminate the possibility of the rearrangement occurring via an intermolecular reaction mechanism and are consistent with an intramolecular process. There are substantial solvent effects observed in the Claisen rearrangement, where polar solvents tend to accelerate the reaction to a greater extent. Hydrogen-bonding solvents gave the highest rate constants. For example, ethanol/water solvent mixtures give rate constants 10-fold higher than sulfolane. Trivalent organoaluminium reagents, such as trimethylaluminium, have been shown to accelerate this reaction. Variations Aromatic Claisen rearrangement The first reported Claisen rearrangement is the [3,3]-sigmatropic rearrangement of an allyl phenyl ether to intermediate 1, which quickly tautomerizes to a 2-allylphenol. The Claisen rearrangement can occur in domino fashion with a Cope rearrangement, in which case the allyl group appears to attack the para position on the ring. Meta-substitution affects the regioselectivity of this rearrangement. For example, electron withdrawing groups (such as bromide) at the meta-position direct the rearrangement to the ortho-position (71% ortho product), while electron donating groups (such as methoxy) direct rearrangement to the para-position (69% para product). Additionally, presence of ortho substituents exclusively leads to para-substituted rearrangement products. If an aldehyde or carboxylic acid occupies the ortho or para positions, the allyl side-chain displaces the group, releasing it as carbon monoxide or carbon dioxide, respectively. Bellus–Claisen rearrangement The Bellus–Claisen rearrangement is the reaction of allylic ethers, amines, and thioethers with ketenes to give γ,δ-unsaturated esters, amides, and thioesters. This transformation was serendipitously observed by Bellus in 1979 during the synthesis of an intermediate of a pyrethroid insecticide. Halogen-substituted ketenes (R1, R2) are often used in this reaction for their high electrophilicity. Numerous reductive methods for the removal of the resulting α-haloesters, amides and thioesters have been developed. The Bellus–Claisen offers synthetic chemists a unique opportunity for ring expansion strategies. Eschenmoser–Claisen rearrangement The Eschenmoser–Claisen rearrangement proceeds by heating allylic alcohols in the presence of N,N-dimethylacetamide dimethyl acetal to form a γ,δ-unsaturated amide. This was developed by Albert Eschenmoser in 1964. The Eschenmoser–Claisen rearrangement was used as a key step in the total synthesis of morphine. Mechanism: Ireland–Claisen rearrangement The Ireland–Claisen rearrangement is the reaction of an allylic carboxylate with a strong base (such as lithium diisopropylamide) to give a γ,δ-unsaturated carboxylic acid.
The rearrangement proceeds via a silylketene acetal, which is formed by trapping the lithium enolate with chlorotrimethylsilane. Like the Bellus–Claisen (above), the Ireland–Claisen rearrangement can take place at room temperature and above. The E- and Z-configured silylketene acetals lead to anti and syn rearranged products, respectively. There are numerous examples of enantioselective Ireland–Claisen rearrangements found in the literature, including chiral boron reagents and the use of chiral auxiliaries. Johnson–Claisen rearrangement The Johnson–Claisen rearrangement is the reaction of an allylic alcohol with an orthoester to yield a γ,δ-unsaturated ester. Weak acids, such as propionic acid, have been used to catalyze this reaction. This rearrangement often requires high temperatures (100–200 °C) and can take anywhere from 10 to 120 hours to complete. However, microwave-assisted heating in the presence of KSF-clay or propionic acid has demonstrated dramatic increases in reaction rate and yields. Mechanism: Kazmaier–Claisen rearrangement The Kazmaier–Claisen rearrangement is the reaction of an unsaturated amino acid ester with a strong base (such as lithium diisopropylamide) and a metal salt at –78 °C to give a chelated enolate as an intermediate. While different metal salts can be used to form the enolate, the use of zinc chloride results in the highest yield and gives the best stereospecificity. The enolate species rearranges at –20 °C to form an amino acid with an allylic side chain in the α-position. This method was described by Uli Kazmaier in 1993. Photo-Claisen rearrangement The Claisen rearrangement of aryl ethers can also be performed as a photochemical reaction. In addition to the traditional ortho product obtained under thermal conditions (the [3,3] rearrangement product), the photochemical variation also gives the para product ([3,5] product), alternate isomers of the allyl group (for example, [1,3] and [1,5] products), and simple loss of the ether group, and can even rearrange alkyl ethers in addition to allyl ethers. The photochemical reaction occurs via a stepwise process of radical-cleavage followed by bond-formation rather than as a concerted pericyclic reaction, which therefore allows the opportunity for the greater variety of possible substrates and product isomers. The [1,3] and [1,5] results of the photo-Claisen rearrangement are analogous to the photo-Fries rearrangement of aryl esters and related acyl compounds. Hetero-Claisens Aza–Claisen An iminium can serve as one of the pi-bonded moieties in the rearrangement. Chen–Mapp reaction The Chen–Mapp reaction, also known as the [3,3]-phosphorimidate rearrangement or Staudinger–Claisen reaction, installs a phosphite in the place of an alcohol and takes advantage of the Staudinger reduction to convert this to an allylic amine. The subsequent Claisen is driven by the fact that a P=O double bond is more energetically favorable than a P=N double bond. Overman rearrangement The Overman rearrangement (named after Larry Overman) is a Claisen rearrangement of allylic trichloroacetimidates to allylic trichloroacetamides. The Overman rearrangement is applicable to the synthesis of vicinal diamino compounds from 1,2-vicinal allylic diols. Zwitterionic Claisen rearrangement Unlike typical Claisen rearrangements, which require heating, zwitterionic Claisen rearrangements take place at or below room temperature. The acyl ammonium ions are highly selective for Z-enolates under mild conditions.
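As a small computational aside, the parent transformation described at the start of this article (allyl vinyl ether rearranging to 4-pentenal) is an isomerization, which can be verified with a cheminformatics toolkit. The sketch below assumes the RDKit library is available and only checks bookkeeping (identical molecular formula and exact mass); it does not model the pericyclic mechanism itself.

```python
# A bookkeeping check that the Claisen rearrangement of allyl vinyl ether to
# 4-pentenal is an isomerization (assumes the RDKit toolkit is installed).
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

reactant = Chem.MolFromSmiles("C=CCOC=C")    # allyl vinyl ether
product = Chem.MolFromSmiles("C=CCCC=O")     # pent-4-enal (a gamma,delta-unsaturated aldehyde)

for name, mol in (("allyl vinyl ether", reactant), ("4-pentenal", product)):
    formula = rdMolDescriptors.CalcMolFormula(mol)
    mass = Descriptors.ExactMolWt(mol)
    print(f"{name:18s} {formula:8s} exact mass = {mass:.4f}")
# Both print C5H8O with the same exact mass: the atoms are merely reconnected,
# consistent with the concerted [3,3]-sigmatropic bond reorganization described above.
```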
In nature The enzyme chorismate mutase (EC 5.4.99.5) catalyzes the Claisen rearrangement of chorismate to prephenate, an intermediate in the biosynthetic pathway towards the synthesis of phenylalanine and tyrosine. History Discovered in 1912, the Claisen rearrangement is the first recorded example of a [3,3]-sigmatropic rearrangement. See also Carroll rearrangement Cope rearrangement References Rearrangement reactions Name reactions Substitution reactions Carbon-carbon bond forming reactions
Claisen rearrangement
[ "Chemistry" ]
1,855
[ "Name reactions", "Carbon-carbon bond forming reactions", "Rearrangement reactions", "Organic reactions" ]
1,924,100
https://en.wikipedia.org/wiki/Fire%20ecology
Fire ecology is a scientific discipline concerned with the effects of fire on natural ecosystems. Many ecosystems, particularly prairie, savanna, chaparral and coniferous forests, have evolved with fire as an essential contributor to habitat vitality and renewal. Many plant species in fire-affected environments use fire to germinate, establish, or to reproduce. Wildfire suppression not only endangers these species, but also the animals that depend upon them. Wildfire suppression campaigns in the United States have historically molded public opinion to believe that wildfires are harmful to nature. Ecological research has shown, however, that fire is an integral component in the function and biodiversity of many natural habitats, and that the organisms within these communities have adapted to withstand, and even to exploit, natural wildfire. More generally, fire is now regarded as a 'natural disturbance', similar to flooding, windstorms, and landslides, that has driven the evolution of species and controls the characteristics of ecosystems. Fire suppression, in combination with other human-caused environmental changes, may have resulted in unforeseen consequences for natural ecosystems. Some large wildfires in the United States have been blamed on years of fire suppression and the continuing expansion of people into fire-adapted ecosystems as well as climate change. Land managers are faced with tough questions regarding how to restore a natural fire regime, but allowing wildfires to burn is likely the least expensive and most effective method in many situations. History Fire has played a major role in shaping the world's vegetation. The biological process of photosynthesis began to concentrate the atmospheric oxygen needed for combustion during the Devonian, approximately 350 million years ago. Then, approximately 125 million years ago, fire began to influence the habitat of land plants. In the 20th century, ecologist Charles Cooper made a plea for fire as an ecosystem process. Fire components A fire regime describes the characteristics of fire and how it interacts with a particular ecosystem. Fire "severity" is a term that ecologists use to refer to the impact that a fire has on an ecosystem. It is usually studied using tools such as remote sensing, which can provide estimates of burned area, severity, and fire risk for an area. Ecologists can define this in many ways, but one way is through an estimate of plant mortality. Fires can burn at three elevation levels. Ground fires will burn through soil that is rich in organic matter. Surface fires will burn through living and dead plant material at ground level. Crown fires will burn through the tops of shrubs and trees. Ecosystems generally experience a mix of all three. Fires will often break out during a dry season, but in some areas wildfires also commonly occur during times of year when lightning is prevalent. The frequency over a span of years at which fire will occur at a particular location is a measure of how common wildfires are in a given ecosystem. It is either defined as the average interval between fires at a given site, or the average interval between fires in an equivalent specified area. Defined as the energy released per unit length of fireline (kW m−1), wildfire intensity can be estimated either as the product of the linear spread rate (m s−1), the low heat of combustion (kJ kg−1), and the combusted fuel mass per unit area (kg m−2), or it can be estimated from the flame length.
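The intensity definition above can be turned into a short calculation. In the sketch below, the heat-of-combustion, fuel-load and spread-rate values are assumed, illustrative inputs, and the flame-length relation is one commonly quoted empirical fit attributed to Byram rather than a universal law.

```python
# Fireline intensity I = H * w * r (the product definition referenced above):
#   H : low heat of combustion of the fuel (kJ/kg)
#   w : combusted fuel mass per unit area (kg/m^2)
#   r : linear rate of spread (m/s)
# The product has units kJ/(s*m), i.e. kW per metre of fire front.
def fireline_intensity(heat_kj_per_kg: float, fuel_kg_per_m2: float, spread_m_per_s: float) -> float:
    return heat_kj_per_kg * fuel_kg_per_m2 * spread_m_per_s

def flame_length_m(intensity_kw_per_m: float) -> float:
    """One commonly quoted empirical fit (attributed to Byram): L ~ 0.0775 * I**0.46, L in metres."""
    return 0.0775 * intensity_kw_per_m ** 0.46

# Example with assumed, illustrative numbers: a surface fire consuming 1.5 kg/m^2 of fuel
# with a heat of combustion of about 18,000 kJ/kg, spreading at 0.03 m/s (roughly 110 m/h).
I = fireline_intensity(18_000, 1.5, 0.03)
print(f"Intensity ~ {I:.0f} kW/m, estimated flame length ~ {flame_length_m(I):.1f} m")
```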
Abiotic responses Fires can affect soils through heating and combustion processes. Depending on the temperatures of the soils during the combustion process, different effects will happen- from evaporation of water at the lower temperature ranges, to the combustion of soil organic matter and the formation of pyrogenic organic matter, such as charcoal. Fires can cause changes in soil nutrients through a variety of mechanisms, which include oxidation, volatilization, erosion, and leaching by water, but the event must usually be of high temperatures for significant loss of nutrients to occur. However, the quantity of bioavailable nutrients in the soil usually increases due to the ash that is generated, as compared to the slow release of nutrients by decomposition. Rock spalling (or thermal exfoliation) accelerates weathering of rock and potentially the release of some nutrients. Increase in the pH of the soil following a fire is commonly observed, most likely due to the formation of calcium carbonate, and the subsequent decomposition of this calcium carbonate to calcium oxide when temperatures get even higher. It could also be due to the increased cation content in the soil due to the ash, which temporarily increases soil pH. Microbial activity in the soil might also increase due to the heating of soil and increased nutrient content in the soil, though studies have also found complete loss of microbes on the top layer of soil after a fire. Overall, soils become more basic (higher pH) following fires because of acid combustion. By driving novel chemical reactions at high temperatures, fire can even alter the texture and structure of soils by affecting the clay content and the soil's porosity. Removal of vegetation following a fire can cause several effects on the soil, such as increasing the temperatures of the soil during the day due to increased solar radiation on the soil surface, and greater cooling due to loss of radiative heat at night. Less plant matter to intercept rain will allow more to reach the soil surface, and with fewer plants to absorb the water, the amount of water content in the soils might increase. However, ash can be water repellent when dry, and therefore water content and availability might not actually increase. Biotic responses and adaptations Plants Plants have evolved many adaptations to cope with fire. Of these adaptations, one of the best-known is likely pyriscence, where maturation and release of seeds is triggered, in whole or in part, by fire or smoke; this behaviour is often erroneously called serotiny, although this term truly denotes the much broader category of seed release activated by any stimulus. All pyriscent plants are serotinous, but not all serotinous plants are pyriscent (some are necriscent, hygriscent, xeriscent, soliscent, or some combination thereof). On the other hand, germination of seed activated by trigger is not to be confused with pyriscence; it is known as physiological dormancy. In chaparral communities in Southern California, for example, some plants have leaves coated in flammable oils that encourage an intense fire. This heat causes their fire-activated seeds to germinate (an example of dormancy) and the young plants can then capitalize on the lack of competition in a burnt landscape. Other plants have smoke-activated seeds, or fire-activated buds. The cones of the Lodgepole pine (Pinus contorta) are, conversely, pyriscent: they are sealed with a resin that a fire melts away, releasing the seeds. 
Many plant species, including the shade-intolerant giant sequoia (Sequoiadendron giganteum), require fire to make gaps in the vegetation canopy that will let in light, allowing their seedlings to compete with the more shade-tolerant seedlings of other species, and so establish themselves. Because their stationary nature precludes any fire avoidance, plant species may only be fire-intolerant, fire-tolerant or fire-resistant. Fire intolerance Fire-intolerant plant species tend to be highly flammable and are destroyed completely by fire. Some of these plants and their seeds may simply fade from the community after a fire and not return; others have adapted to ensure that their offspring survives into the next generation. "Obligate seeders" are plants with large, fire-activated seed banks that germinate, grow, and mature rapidly following a fire, in order to reproduce and renew the seed bank before the next fire. Seeds may contain the receptor protein KAI2, that is activated by the growth hormones karrikin released by the fire. Fire tolerance Fire-tolerant species are able to withstand a degree of burning and continue growing despite damage from fire. These plants are sometimes referred to as "resprouters". Ecologists have shown that some species of resprouters store extra energy in their roots to aid recovery and re-growth following a fire. For example, after an Australian bushfire, the Mountain Grey Gum tree (Eucalyptus cypellocarpa) starts producing a mass of shoots of leaves from the base of the tree all the way up the trunk towards the top, making it look like a black stick completely covered with young, green leaves. Fire resistance Fire-resistant plants suffer little damage during a characteristic fire regime. These include large trees whose flammable parts are high above surface fires. Mature ponderosa pine (Pinus ponderosa) is an example of a tree species that suffers little to no crown damage during a low severity fire because it sheds its lower, vulnerable branches as it matures. Animals, birds and microbes Like plants, animals display a range of abilities to cope with fire, but they differ from most plants in that they must avoid the actual fire to survive. Although birds may be vulnerable when nesting, they are generally able to escape a fire; indeed they often profit from being able to take prey fleeing from a fire and to recolonize burned areas quickly afterwards. In fact, many wildlife species globally are dependent on recurring fires in fire-dependent ecosystems to create and maintain habitat. Some anthropological and ethno-ornithological evidence suggests that certain species of fire-foraging raptors may engage in intentional fire propagation to flush out prey. Mammals are often capable of fleeing a fire, or seeking cover if they can burrow. Amphibians and reptiles may avoid flames by burrowing into the ground or using the burrows of other animals. Amphibians in particular are able to take refuge in water or very wet mud. Some arthropods also take shelter during a fire, although the heat and smoke may actually attract some of them, to their peril. Microbial organisms in the soil vary in their heat tolerance but are more likely to be able to survive a fire the deeper they are in the soil. A low fire intensity, a quick passing of the flames and a dry soil will also help. An increase in available nutrients after the fire has passed may result in larger microbial communities than before the fire. 
The generally greater heat tolerance of bacteria relative to fungi makes it possible for soil microbial population diversity to change following a fire, depending on the severity of the fire, the depth of the microbes in the soil, and the presence of plant cover. Certain species of fungi, such as Cylindrocarpon destructans, appear to be unaffected by combustion contaminants, which can inhibit re-population of burnt soil by other microorganisms, and therefore have a higher chance of surviving fire disturbance and then recolonizing and out-competing other fungal species afterwards. Fire and ecological succession Fire behavior is different in every ecosystem and the organisms in those ecosystems have adapted accordingly. One sweeping generality is that in all ecosystems, fire creates a mosaic of different habitat patches, with areas ranging from those having just been burned to those that have been untouched by fire for many years. This is a form of ecological succession in which a freshly burned site will progress through continuous and directional phases of colonization following the destruction caused by the fire. Ecologists usually characterize succession through the changes in vegetation that successively arise. After a fire, the first species to re-colonize will be those whose seeds are already present in the soil, or those whose seeds are able to travel into the burned area quickly. These are generally fast-growing herbaceous plants that require light and are intolerant of shading. As time passes, more slowly growing, shade-tolerant woody species will suppress some of the herbaceous plants. Conifers are often early successional species, while broad leaf trees frequently replace them in the absence of fire. Hence, many conifer forests are themselves dependent upon recurring fire. Both natural and human fires affect all ecosystems from peatlands to shrublands to forests and tropical landscapes. This impacts the way that the ecosystem is structured and functions. Though there have always been wildfires naturally, the frequency of wildfires has increased at a rapid rate in recent years. This is largely due to decreases in precipitation, increases in temperature, and increases in human ignitions. Different species of plants, animals, and microbes specialize in exploiting different stages in this process of succession, and by creating these different types of patches, fire allows a greater number of species to exist within a landscape. Soil characteristics will be a factor in determining the specific nature of a fire-adapted ecosystem, as will climate and topography. Different frequencies of fire also result in different successional pathways; short intervals between fires often eliminate tree species due to the time required to rebuild a seed bank, resulting in replacement by lighter-seeded species like grasses and forbs. Examples of fire in different ecosystems Forests Mild to moderate fires burn in the forest understory, removing small trees and herbaceous groundcover. High-severity fires will burn into the crowns of the trees and kill most of the dominant vegetation. Crown fires may require support from ground fuels to maintain the fire in the forest canopy (passive crown fires), or the fire may burn in the canopy independently of any ground fuel support (an active crown fire). High-severity fire creates complex early seral forest habitat, or snag forest with high levels of biodiversity.
When a forest burns frequently and thus has less plant litter build-up, below-ground soil temperatures rise only slightly and will not be lethal to roots that lie deep in the soil. Although other characteristics of a forest will influence the impact of fire upon it, factors such as climate and topography play an important role in determining fire severity and fire extent. Fires spread most widely during drought years, are most severe on upper slopes and are influenced by the type of vegetation that is growing. Forests in British Columbia In Canada, forests cover about 10% of the land area and yet harbor 70% of the country’s bird and terrestrial mammal species. Natural fire regimes are important in maintaining a diverse assemblage of vertebrate species in up to twelve different forest types in British Columbia. Different species have adapted to exploit the different stages of succession, regrowth and habitat change that occurs following an episode of burning, such as downed trees and debris. The characteristics of the initial fire, such as its size and intensity, cause the habitat to evolve differentially afterwards and influence how vertebrate species are able to use the burned areas. The change in forest fire intensity over time has been studied for the period since 1600 in an area of central British Columbia and is consistent with fire suppression since regulation was introduced. Shrublands Shrub fires typically concentrate in the canopy and spread continuously if the shrubs are close enough together. Shrublands are typically dry and are prone to accumulations of highly volatile fuels, especially on hillsides. Fires will follow the path of least moisture and the greatest amount of dead fuel material. Surface and below-ground soil temperatures during a burn are generally higher than those of forest fires because the centers of combustion lie closer to the ground, although this can vary greatly. Common plants in shrubland or chaparral include manzanita, chamise and coyote brush. California shrublands California shrubland, commonly known as chaparral, is a widespread plant community of low growing species, typically on arid sloping areas of the California Coast Ranges or western foothills of the Sierra Nevada. There are a number of common shrubs and tree shrub forms in this association, including salal, toyon, coffeeberry and Western poison oak. Regeneration following a fire is usually a major factor in the association of these species. South African Fynbos shrublands Fynbos shrublands occur in a small belt across South Africa. The plant species in this ecosystem are highly diverse, yet the majority of these species are obligate seeders, that is, a fire will cause germination of the seeds and the plants will begin a new life-cycle because of it. These plants may have coevolved into obligate seeders as a response to fire and nutrient-poor soils. Because fire is common in this ecosystem and the soil has limited nutrients, it is most efficient for plants to produce many seeds and then die in the next fire. Investing a lot of energy in roots to survive the next fire when those roots will be able to extract little extra benefit from the nutrient-poor soil would be less efficient. It is possible that the rapid generation time that these obligate seeders display has led to more rapid evolution and speciation in this ecosystem, resulting in its highly diverse plant community. 
Grasslands Grasslands burn more readily than forest and shrub ecosystems, with the fire moving through the stems and leaves of herbaceous plants and only lightly heating the underlying soil, even in cases of high intensity. In most grassland ecosystems, fire is the primary mode of decomposition, making it crucial in the recycling of nutrients. In some grassland systems, fire only became the primary mode of decomposition after the disappearance of large migratory herds of browsing or grazing megafauna driven by predator pressure. In the absence of functional communities of large migratory herds of herbivorous megafauna and attendant predators, overuse of fire to maintain grassland ecosystems may lead to excessive oxidation, loss of carbon, and desertification in susceptible climates. Some grassland ecosystems respond poorly to fire. North American grasslands In North America fire-adapted invasive grasses such as Bromus tectorum contribute to increased fire frequency which exerts selective pressure against native species. This is a concern for grasslands in the Western United States. In less arid grassland presettlement fires worked in concert with grazing to create a healthy grassland ecosystem as indicated by the accumulation of soil organic matter significantly altered by fire. The tallgrass prairie ecosystem in the Flint Hills of eastern Kansas and Oklahoma is responding positively to the current use of fire in combination with grazing. South African savanna In the savanna of South Africa, recently burned areas have new growth that provides palatable and nutritious forage compared to older, tougher grasses. This new forage attracts large herbivores from areas of unburned and grazed grassland that has been kept short by constant grazing. On these unburned "lawns", only those plant species adapted to heavy grazing are able to persist; but the distraction provided by the newly burned areas allows grazing-intolerant grasses to grow back into the lawns that have been temporarily abandoned, so allowing these species to persist within that ecosystem. Longleaf pine savannas Much of the southeastern United States was once open longleaf pine forest with a rich understory of grasses, sedges, carnivorous plants and orchids. These ecosystems had the highest fire frequency of any habitat, once per decade or less. Without fire, deciduous forest trees invade, and their shade eliminates both the pines and the understory. Some of the typical plants associated with fire include yellow pitcher plant and rose pogonia. The abundance and diversity of such plants is closely related to fire frequency. Rare animals such as gopher tortoises and indigo snakes also depend upon these open grasslands and flatwoods. Hence, the restoration of fire is a priority to maintain species composition and biological diversity. Fire in wetlands Many kinds of wetlands are also influenced by fire. This usually occurs during periods of drought. In landscapes with peat soils, such as bogs, the peat substrate itself may burn, leaving holes that refill with water as new ponds. Fires that are less intense will remove accumulated litter and allow other wetland plants to regenerate from buried seeds, or from rhizomes. Wetlands that are influenced by fire include coastal marshes, wet prairies, peat bogs, floodplains, prairie marshes and flatwoods. 
Since wetlands can store large amounts of carbon in peat, the fire frequency of vast northern peatlands is linked to processes controlling the carbon dioxide levels of the atmosphere, and to the phenomenon of global warming. Dissolved organic carbon (DOC) is abundant in wetlands and plays a critical role in their ecology. In the Florida Everglades, a significant portion of the DOC is "dissolved charcoal", indicating that fire can play a critical role in wetland ecosystems. Fire suppression Fire serves many important functions within fire-adapted ecosystems. Fire plays an important role in nutrient cycling, diversity maintenance and habitat structure. The suppression of fire can lead to unforeseen changes in ecosystems that often adversely affect the plants, animals and humans that depend upon that habitat. Wildfires that deviate from a historical fire regime because of fire suppression are called "uncharacteristic fires". Chaparral communities In 2003, southern California witnessed powerful chaparral wildfires. Hundreds of homes and hundreds of thousands of acres of land went up in flames. Extreme fire weather (low humidity, low fuel moisture and high winds) and the accumulation of dead plant material from eight years of drought contributed to a catastrophic outcome. Although some have maintained that fire suppression contributed to an unnatural buildup of fuel loads, a detailed analysis of historical fire data has shown that this may not have been the case. Fire suppression activities had failed to exclude fire from the southern California chaparral. Research showing differences in fire size and frequency between southern California and Baja has been used to imply that the larger fires north of the border are the result of fire suppression, but this opinion has been challenged by numerous investigators and ecologists. One consequence of the fires in 2003 has been the increased density of invasive and non-native plant species that have quickly colonized burned areas, especially those that had already been burned in the previous 15 years. Because shrubs in these communities are adapted to a particular historical fire regime, altered fire regimes may change the selective pressures on plants and favor invasive and non-native species that are better able to exploit the novel post-fire conditions. Fish impacts The Boise National Forest is a US national forest located north and east of the city of Boise, Idaho. Following several uncharacteristically large wildfires, an immediately negative impact on fish populations was observed, posing particular danger to small and isolated fish populations. In the long term, however, fire appears to rejuvenate fish habitats by causing hydraulic changes that increase flooding and lead to silt removal and the deposition of a favorable habitat substrate. This leads to larger post-fire populations of the fish that are able to recolonize these improved areas. Fire as a management tool Restoration ecology is the name given to an attempt to reverse or mitigate some of the changes that humans have caused to an ecosystem. Controlled burning is one tool that is currently receiving considerable attention as a means of restoration and management. Applying fire to an ecosystem may create habitats for species that have been negatively impacted by fire suppression, or fire may be used as a way of controlling invasive species without resorting to herbicides or pesticides.
However, there is debate as to what land managers should aim to restore their ecosystems to, especially as to whether it be pre-human or pre-European conditions. Native American use of fire, along with natural fire, historically maintained the diversity of the savannas of North America. The Great Plains shortgrass prairie A combination of heavy livestock grazing and fire-suppression has drastically altered the structure, composition, and diversity of the shortgrass prairie ecosystem on the Great Plains, allowing woody species to dominate many areas and promoting fire-intolerant invasive species. In semi-arid ecosystems where the decomposition of woody material is slow, fire is crucial for returning nutrients to the soil and allowing the grasslands to maintain their high productivity. Although fire can occur during the growing or the dormant seasons, managed fire during the dormant season is most effective at increasing the grass and forb cover, biodiversity and plant nutrient uptake in shortgrass prairies. Managers must also take into account, however, how invasive and non-native species respond to fire if they want to restore the integrity of a native ecosystem. For example, fire can only control the invasive spotted knapweed (Centaurea maculosa) on the Michigan tallgrass prairie in the summer, because this is the time in the knapweed's life cycle that is most important to its reproductive growth. Mixed conifer forests in the US Sierra Nevada Mixed conifer forests in the United States Sierra Nevada used to have fire return intervals that ranged from 5 years up to 300 years, depending on the locale. Lower elevations tended to have more frequent fire return intervals, whilst higher and wetter sites saw longer intervals between fires. Native Americans tended to set fires during fall and winter, and land at higher elevations was generally occupied by Native Americans only during the summer. Finnish boreal forests The decline of habitat area and quality has caused many species populations to be red-listed by the International Union for Conservation of Nature. According to a study on forest management of Finnish boreal forests, improving the habitat quality of areas outside reserves can help in conservation efforts of endangered deadwood-dependent beetles. These beetles and various types of fungi both need dead trees in order to survive. Old growth forests can provide this particular habitat. However, most Fennoscandian boreal forested areas are used for timber and therefore are unprotected. The use of controlled burning and tree retention of a forested area with deadwood was studied and its effect on the endangered beetles. The study found that after the first year of management the number of species increased in abundance and richness compared to pre-fire treatment. The abundance of beetles continued to increase the following year in sites where tree retention was high and deadwood was abundant. The correlation between forest fire management and increased beetle populations shows a key to conserving these red-listed species. Australian eucalypt forests Much of the old growth eucalypt forest in Australia is designated for conservation. Management of these forests is important because species like Eucalyptus grandis rely on fire to survive. There are a few eucalypt species that do not have a lignotuber, a root swelling structure that contains buds where new shoots can then sprout. During a fire a lignotuber is helpful in the reestablishment of the plant. 
Because some eucalypts do not have this particular mechanism, forest fire management can be helpful by creating rich soil, killing competitors, and allowing seeds to be released. See also Crown sprouting Evolutionary history of plants Fire history Fire-stick farming Peat bog fire Pyrophyte Keystone species reintroduction: (sufficient) native keystone grazing species in grasslands will promote tree growth, reducing wildfire likelihood References Bibliography Archibald, S., W.J. Bond, W.D. Stock and D.H.K. Fairbanks. 2005. "Shaping the landscape: fire-grazer interactions in an African Savanna". Ecological Applications 15:96–109. Begon, M., J.L. Harper and C.R. Townsend. (1996). Ecology: individuals, populations, and communities, 3rd ed., Blackwell Science Ltd., Cambridge, MA. DeBano, L.F., D.G. Neary, P.F. Ffolliot. (1998). Fire's Effects on Ecosystems. John Wiley & Sons, Inc., New York. Keddy, P.A. 2007. Plants and Vegetation: Origins, Processes, Consequences. Cambridge University Press, Cambridge. 666 p. Keddy, P.A. 2010. Wetland Ecology: Principles and Conservation (2nd ed.), Cambridge University Press, Cambridge. 497 p. Keeley J.E., Bond W.J., Bradstock R.A., Pausas J.G. & Rundel P.W. (2012). Fire in Mediterranean Ecosystems: Ecology, Evolution and Management. Cambridge University Press. Link Kramp, B.A., D.R. Patton, and W.W. Brady. 1986. Run wild: wildlife/habitat relationships. U.S. Forest Service, Southwestern Region. Pyne, S.J. "How Plants Use Fire (And Are Used By It)." 2002. PBS NOVA Online. 1 January 2006. https://www.pbs.org/wgbh/nova/fire/plants.html. Allan Savory; Jody Butterfield (2016). Holistic Management: A Commonsense Revolution to Restore Our Environment (3rd ed.). Island Press. United States Department of Agriculture (USDA) Forest Service. www.fs.fed.us. Federal Wildland Fire Management Policy and Program Review (FWFMP). http://www.fs.fed.us/land/wdfire.htm. United States National Park Service (USNPS). www.nps.gov. Sequoia and Kings Canyon National Parks. 13 February 2006. "Giant Sequoias and Fire." https://www.nps.gov/seki/learn/nature/fic_segi.htm Vitt, D.H., L.A. Halsey and B.J. Nicholson. 2005. The Mackenzie River basin. pp. 166–202 in L.H. Fraser and P.A. Keddy (eds.). The World's Largest Wetlands: Ecology and Conservation. Cambridge University Press, Cambridge. 488 p. Whitlock, C., Higuera, P. E., McWethy, D. B., & Briles, C. E. 2010. "Paleoecological perspectives on fire ecology: revisiting the fire-regime concept". Open Ecology Journal 3: 6–23. Wisheu, I.C., M.L. Rosenzweig, L. Olsvig-Whittaker, A. Shmida. 2000. "What makes nutrient-poor Mediterranean heathlands so rich in plant diversity?" Evolutionary Ecology Research 2: 935–955. External links US Forest Service: Fire Ecology Yellowstone National Park: Fire Ecology The Nature Conservancy's web site for fire practitioners – Fire Ecology The Nature Conservancy: Why We Work with Fire The International Journal of Wildland Fire Fire Ecology Journal Fire and Environmental Research Applications Word Spy – pyrogeography Forest ecology Ecological succession Ecology terminology Fire Wildfire suppression Wildfire prevention Environmental terminology Habitat
Fire ecology
[ "Chemistry", "Biology" ]
6,211
[ "Ecology terminology", "Combustion", "Fire" ]
27,215,025
https://en.wikipedia.org/wiki/Protein%20fragment%20library
Protein backbone fragment libraries have been used successfully in a variety of structural biology applications, including homology modeling, de novo structure prediction, and structure determination. By reducing the complexity of the search space, these fragment libraries enable more rapid search of conformational space, leading to more efficient and accurate models. Motivation Proteins can adopt an exponential number of states when modeled discretely. Typically, a protein's conformations are represented as sets of dihedral angles, bond lengths, and bond angles between all connected atoms. The most common simplification is to assume ideal bond lengths and bond angles. However, this still leaves the phi-psi angles of the backbone, and up to four dihedral angles for each side chain, leading to a worst-case complexity of k^(6n) possible states of the protein (up to six dihedral angles per residue), where n is the number of residues and k is the number of discrete states modeled for each dihedral angle. In order to reduce the conformational space, one can use protein fragment libraries rather than explicitly model every phi-psi angle. Fragments are short segments of the peptide backbone, typically from 5 to 15 residues long, and do not include the side chains. They may specify the location of just the C-alpha atoms in a reduced-atom representation, or of all the backbone heavy atoms (N, C-alpha, carbonyl C, O). Note that side chains are typically not modeled using the fragment library approach; to model discrete states of a side chain, one could use a rotamer library approach. This approach operates under the assumption that local interactions play a large role in stabilizing the overall protein conformation. In any short sequence, the molecular forces constrain the structure, leading to only a small number of possible conformations, which can be modeled by fragments. Indeed, according to Levinthal's paradox, a protein could not possibly sample all possible conformations within a biologically reasonable amount of time. Locally stabilized structures would reduce the search space and allow proteins to fold on the order of milliseconds. Construction Libraries of these fragments are constructed from an analysis of the Protein Data Bank (PDB). First, a representative subset of the PDB is chosen which should cover a diverse array of structures, preferably at good resolution. Then, for each structure, every set of n consecutive residues is taken as a sample fragment. The samples are then clustered into k groups, based upon how similar they are to each other in spatial configuration, using algorithms such as k-means clustering (here n denotes the fragment length and k the number of clusters; both parameters are chosen according to the application, as discussed under complexity below). The centroid of each cluster is then taken to represent the fragment. Further optimization can be performed to ensure that the centroid possesses ideal bond geometry, as it was derived by averaging other geometries. Because the fragments are derived from structures that exist in nature, the segment of backbone they represent will have realistic bonding geometries. This helps avoid having to explore the full space of conformation angles, much of which would lead to unrealistic geometries. The clustering above can be performed without regard to the identities of the residues, or it can be residue-specific. That is, for any given input sequence of amino acids, a clustering can be derived using only samples found in the PDB with the same sequence in the k-mer fragment.
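The clustering step just described can be sketched in a few lines of code. The snippet below is a minimal, illustrative sketch rather than the procedure of any particular published library: it assumes the sample fragments have already been superposed and flattened into fixed-length coordinate vectors, and the fragment length, cluster count, and the load_calpha_fragments helper are hypothetical placeholders.

```python
# Minimal sketch: build a backbone fragment library by clustering
# fixed-length C-alpha fragments harvested from a subset of the PDB.
# Assumptions (not from the text above): each sample is a superposed
# fragment flattened into a 3*FRAGMENT_LENGTH coordinate vector, and
# load_calpha_fragments() is a hypothetical helper returning them as a
# (num_samples, 3*FRAGMENT_LENGTH) NumPy array.
import numpy as np
from sklearn.cluster import KMeans

FRAGMENT_LENGTH = 7   # residues per fragment (n in the text)
NUM_CLUSTERS = 100    # library size (k in the text)

def build_fragment_library(samples: np.ndarray, k: int = NUM_CLUSTERS) -> np.ndarray:
    """Cluster sample fragments and return the k centroid fragments."""
    model = KMeans(n_clusters=k, n_init=10, random_state=0)
    model.fit(samples)
    # Each centroid is an averaged fragment; in practice it would then be
    # regularized to restore ideal bond lengths and angles.
    return model.cluster_centers_.reshape(k, FRAGMENT_LENGTH, 3)

# samples = load_calpha_fragments(pdb_subset, FRAGMENT_LENGTH)  # hypothetical helper
# library = build_fragment_library(samples)
```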
Such residue-specific clustering requires more computational work than deriving a sequence-independent fragment library but can potentially produce more accurate models; conversely, a larger sample set is required, and one may not achieve full coverage. Example use: loop modeling In homology modeling, a common application of fragment libraries is to model the loops of the structure. Typically, the alpha helices and beta sheets are threaded against a template structure, but the loops in between are not specified and need to be predicted. Finding the loop with the optimal configuration is NP-hard. To reduce the conformational space that needs to be explored, one can model the loop as a series of overlapping fragments. The space can then be sampled, or, if the space is now small enough, exhaustively enumerated. One approach for exhaustive enumeration goes as follows. Loop construction begins by aligning all possible fragments to overlap with the three residues at the N terminus of the loop (the anchor point). Then all possible choices for a second fragment are aligned to (all possible choices of) the first fragment, ensuring that the last three residues of the first fragment overlap with the first three residues of the second fragment. This ensures that the fragment chain forms realistic angles both within each fragment and between fragments. This is then repeated until a loop with the correct number of residues is constructed. The loop must both begin at the anchor on the N side and end at the anchor on the C side, so each candidate loop must be tested to see whether its last few residues overlap with the C-terminal anchor. Very few of this exponentially large set of candidate loops will actually close the loop. After filtering out loops that do not close, one must then determine which loop has the optimal configuration, that is, the lowest energy under some molecular mechanics force field. Complexity The complexity of the state space is still exponential in the number of residues, even after using fragment libraries. However, the degree of the exponent is reduced. For a library of L fragments of length F, modeling a chain of N residues with consecutive fragments overlapping by 3 residues, there will be on the order of L^(N/(F−3)+1) possible chains. This is much less than the K^N possibilities obtained by explicitly modeling the phi-psi angles with K possible combinations per residue, since the exponent grows more slowly than N. The complexity increases with L, the size of the fragment library. However, libraries with more fragments will capture a greater diversity of fragment structures, so there is a trade-off between the accuracy of the model and the speed of exploring the search space. This choice governs the number of clusters k used when performing the clustering. Additionally, for any fixed L, the diversity of structures capable of being modeled decreases as the length of the fragments increases. Shorter fragments are more capable of covering the diverse array of structures found in the PDB than longer ones. Recently, it was shown that libraries of fragments up to length 15 are capable of modeling 91% of the fragments in the PDB to within 2.0 angstroms. See also De novo protein structure prediction Homology modeling Protein design Protein structure prediction Protein structure prediction software Structural alignment References Bioinformatics Protein structure
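As a rough illustration of the exhaustive loop-construction procedure described above, the sketch below enumerates chains of overlapping library fragments and keeps the lowest-energy chain that closes the loop. The helpers align_to_overlap, closes_loop and energy are hypothetical placeholders for the geometric alignment, closure test and force-field evaluation, which depend on the modeling package in use; the numbers in the final comment are likewise only an assumed example of the complexity comparison.

```python
# Sketch of exhaustive loop construction from overlapping fragments.
# align_to_overlap(), closes_loop() and energy() are hypothetical helpers
# standing in for the geometric alignment, closure test, and force-field
# evaluation described in the text.
from itertools import product
from typing import List, Optional

def build_loop(library: List["Fragment"], n_fragments: int,
               n_anchor, c_anchor) -> Optional["Loop"]:
    best_loop, best_energy = None, float("inf")
    # Enumerate every choice of n_fragments library members, chained so that
    # consecutive fragments overlap by three residues.
    for choice in product(library, repeat=n_fragments):
        loop = align_to_overlap(choice, start=n_anchor, overlap=3)
        if loop is None:                       # fragments could not be chained
            continue
        if not closes_loop(loop, c_anchor):    # must end at the C-side anchor
            continue
        e = energy(loop)                       # molecular-mechanics score
        if e < best_energy:
            best_loop, best_energy = loop, e
    return best_loop

# Assumed example of the complexity comparison: with L = 100 fragments of
# length F = 7 and a 12-residue loop, roughly 100**4 = 10**8 chains are
# enumerated, versus K**12 states for explicit dihedral enumeration.
```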
Protein fragment library
[ "Chemistry", "Engineering", "Biology" ]
1,329
[ "Bioinformatics", "Biological engineering", "Protein structure", "Structural biology" ]
27,216,689
https://en.wikipedia.org/wiki/Coding%20theory%20approaches%20to%20nucleic%20acid%20design
DNA code construction refers to the application of coding theory to the design of nucleic acid systems for the field of DNA-based computation. Introduction DNA sequences are known to appear in the form of double helices in living cells, in which one DNA strand is hybridized to its complementary strand through a series of hydrogen bonds. For the purposes of this entry, we shall focus only on oligonucleotides. DNA computing involves allowing synthetic oligonucleotide strands to hybridize in such a way as to perform computation; it therefore requires that the self-assembly of the oligonucleotide strands happen in a manner compatible with the goals of computation. The field of DNA computing was established in Leonard M. Adleman's seminal paper. His work is significant for a number of reasons: It shows how one could use the highly parallel nature of computation performed by DNA to solve problems that are difficult or almost impossible to solve using traditional methods. It is an example of computation at the molecular level, along the lines of nanocomputing, and it points to information storage densities that the semiconductor industry cannot reach, a potentially major advantage. It demonstrates unique aspects of DNA as a data structure. This capability for massively parallel computation in DNA computing can be exploited in solving many computational problems on an enormously large scale, such as cell-based computational systems for cancer diagnostics and treatment, and ultra-high-density storage media. The selection of codewords (sequences of DNA oligonucleotides) is a major hurdle in itself because of secondary structure formation, in which DNA strands tend to fold onto themselves during hybridization, rendering themselves useless in further computations; this is also known as self-hybridization. The Nussinov–Jacobson algorithm is used to predict secondary structures and also to identify certain design criteria that reduce the possibility of secondary structure formation in a codeword. In essence, this algorithm shows how the presence of a cyclic structure in a DNA code reduces the complexity of the problem of testing the codewords for secondary structures. Novel constructions of such codes include the use of cyclic reversible extended generalized Hadamard matrices and a binary approach. Before diving into these constructions, we shall revisit certain fundamental genetic terminology. The motivation for the theorems presented in this article is that they concur with the Nussinov–Jacobson algorithm, in that the existence of cyclic structure helps to reduce complexity and thus prevents secondary structure formation. That is, these constructions satisfy some or all of the design requirements for DNA oligonucleotides at the time of hybridization (which is the core of the DNA computing process) and hence do not suffer from the problem of self-hybridization. Definitions A DNA code is simply a set of sequences over the alphabet {A, C, G, T}. Each purine base is the Watson-Crick complement of a unique pyrimidine base (and vice versa) – adenine and thymine form a complementary pair, as do guanine and cytosine. This pairing can be described as follows: the complement of A is T, the complement of T is A, the complement of C is G, and the complement of G is C. Such pairing is chemically very stable and strong. However, pairing of mismatched bases does occur at times due to biological mutations.
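A small, self-contained example of the complementary pairing just described is given below; it is written in Python purely for illustration, since the text itself does not prescribe any implementation.

```python
# Watson-Crick complementation and GC-content of a DNA oligonucleotide.
WC = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the reverse-complement of a DNA word."""
    return "".join(WC[base] for base in reversed(seq))

def gc_content(seq: str) -> int:
    """Number of positions holding G or C."""
    return sum(base in "GC" for base in seq)

assert reverse_complement("ACGTT") == "AACGT"
assert gc_content("ACGTT") == 2
```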
Most of the focus on DNA coding has been on constructing large sets of DNA codewords with prescribed minimum distance properties. For this purpose let us lay down the required groundwork to proceed further. Let be a word of length over the alphabet . For , we will use the notation to denote the subsequence . Furthermore, the sequence obtained by reversing will be denoted as . The Watson-Crick complement, or the reverse-complement of q, is defined to be , where denotes the Watson-Crick complement base pair of . For any pair of length- words and over , the Hamming distance is the number of positions at which . Further, define reverse-Hamming distance as . Similarly, reverse-complement Hamming distance is . (where stands for reverse complement) Another important code design consideration linked to the process of oligonucleotide hybridization pertains to the GC content of sequences in a DNA code. The GC-content, , of a DNA sequence is defined to be the number of indices such that . A DNA code in which all codewords have the same GC-content, , is called a constant GC-content code. A generalized Hadamard matrix is an square matrix with entries taken from the set of th roots of unity, , that satisfies = . Here denotes the identity matrix of order , while * stands for complex-conjugation. We will only concern ourselves with the case for some prime . A necessary condition for the existence of generalized Hadamard matrices is that . The exponent matrix, , of is the matrix with the entries in , is obtained by replacing each entry in by the exponent . The elements of the Hadamard exponent matrix lie in the Galois field , and its row vectors constitute the codewords of what shall be called a generalized Hadamard code. Here, the elements of lie in the Galois field . By definition, a generalized Hadamard matrix in its standard form has only 1s in its first row and column. The square matrix formed by the remaining entries of is called the core of , and the corresponding submatrix of the exponent matrix is called the core of construction. Thus, by omission of the all-zero first column cyclic generalized Hadamard codes are possible, whose codewords are the row vectors of the punctured matrix. Also, the rows of such an exponent matrix satisfy the following two properties: (i) in each of the nonzero rows of the exponent matrix, each element of appears a constant number, , of times; and (ii) the Hamming distance between any two rows is . Property U Let be the cyclic group generated by , where is a complex primitive th root of unity, and is a fixed prime. Further, let , denote arbitrary vectors over which are of length , where is a positive integer. Define the collection of differences between exponents , where is the multiplicity of element of which appears in . Vector is said to satisfy Property U if and only if each element of appears in exactly times () The following lemma is of fundamental importance in constructing generalized Hadamard codes. Lemma. Orthogonality of vectors over – For fixed primes , arbitrary vectors of length , whose elements are from , are orthogonal if the vector satisfies Property U, where is the collection of differences between the Hadamard exponents associated with . M sequences Let be an arbitrary vector of length whose elements are in the finite field , where is a prime. Let the elements of a vector constitute the first period of an infinite sequence which is periodic of period . 
If is the smallest period for conceiving any subsequence, the sequence is called an M-sequence, or a maximal sequence of least period obtained by cyclically permuting elements. If whenever the elements of are permuted arbitrarily to yield , the sequence is an M-sequence, then the sequence is called M-invariant. The theorems that follow present conditions that ensure M-invariance. In conjunction with a certain uniformity property of polynomial coefficients, these conditions yield a simple method by which complex Hadamard matrices with cyclic core can be constructed. The goal here is to find cyclic matrix whose elements are in Galois field and whose dimension is . The rows of will be the nonzero codewords of a linear cyclic code , if and only if there is polynomial with coefficients in , which is a proper divisor of and which generates . In order to have nonzero codewords, must be of degree . Further, in order to generate a cyclic Hadamard core, the vector (of coefficients of) when operated upon with the cyclic shift operation must be of period , and the vector difference of two arbitrary rows of (augmented with zero) must satisfy the uniformity condition of Butson, previously referred to as Property U. One necessary condition for -periodicity is that , where is monic irreducible over. The approach here is to replace the last requirement with the condition that the coefficients of the vector are uniformly distributed over , i.e. each residue appears the same number of times (Property U). A proof that this heuristic approach always produces a cyclic core is given below. Examples of code construction Code construction using complex Hadamard matrices Construction algorithm Consider a monic irreducible polynomial over of degree having a suitable companion of degree such that , where the vector satisfies Property U. This requires only a simple computer algorithm for long division over . Since , the ideal generated by is a cyclic code . Moreover, Property U guarantees the nonzero codewords form a cyclic matrix, each row of period under cyclic permutation, which serves as a cyclic core for the Hadamard matrix . As an example, a cyclic core for results from the companions and . The coefficients of indicate that is the relative difference set, . Theorem Let be a prime and , with a monic polynomial of degree whose extended vector of coefficients are elements of . Suppose the following conditions hold: vector satisfies the property U, and , where is a monic irreducible polynomial of degree . Then there exists a p-ary linear cyclic code of blocksize , such that the augmented code is the exponent matrix for the Hadamard matrix , with , where the core of is a cyclic matrix. Proof: First note that is monic and divides with degree . Now, we need to show that the matrix whose rows are nonzero codewords constitutes a cyclic core for some complex Hadamard matrix . Given that satisfies property U, all of the nonzero residues of lie in C. By cyclically permuting elements of , we get the desired exponent matrix where we can get every codeword in by permuting the first codeword. (This is because the sequence obtained by cyclically permuting is M-invariant.) We also see that augmentation of each codeword of by adding a leading zero element produces a vector which satisfies Property U. Also, since the code is linear, the vector difference of two arbitrary codewords is also a codeword and thus satisfy Property U. Therefore, the row vectors of the augmented code form a Hadamard exponent. 
Thus, is the standard form of some complex Hadamard matrix . From the above property, we see that the core of is a circulant matrix consisting of all the cyclic shifts of its first row. Such a core is called a cyclic core, wherein each element of appears in each row of exactly times, and the Hamming distance between any two rows is exactly . The rows of the core form a constant-composition code – one consisting of cyclic shifts of some length over the set . The Hamming distance between any two codewords in is . The following can be inferred from the theorem as explained above. (For more detailed reading, the reader is referred to the paper by Heng and Cooke.) Let for prime and . Let be a monic polynomial over , of degree N − k such that over , for some monic irreducible polynomial . Suppose that the vector , with for (N − k) < i < N, has the property that it contains each element of the same number of times. Then, the cyclic shifts of the vector form the core of the exponent matrix of some Hadamard matrix . DNA codes with constant GC-content can obviously be constructed from constant-composition codes (a constant-composition code over a k-ary alphabet has the property that the numbers of occurrences of the k symbols within a codeword are the same for each codeword) over by mapping the symbols of to the symbols of the DNA alphabet, . For example, using a cyclic constant-composition code of length over guaranteed by the theorem proved above and the resulting property, and using the mapping that takes to , to and to , we obtain a DNA code with and a GC-content of . Clearly and in fact since and no codeword in contains no symbol , we also have . This is summarized in the following corollary. Corollary For any , there exist DNA codes with codewords of length , constant GC-content , and in which every codeword is a cyclic shift of a fixed generator codeword . Each of the following vectors generates a cyclic core of a Hadamard matrix (where , and in this example): ; , where . Thus, we see how DNA codes can be obtained from such generators by mapping onto . All such mappings yield codes with essentially the same parameters; however, the actual choice of mapping has a strong influence on the secondary structure of the codewords. For example, the codeword illustrated was obtained from via the mapping , while the codeword was obtained from the same generator via the mapping . Code construction via a Binary Mapping Perhaps a simpler approach to designing DNA codewords is to use a binary mapping, viewing the design problem as that of constructing binary codes. That is, map the DNA codeword alphabet onto the set of 2-bit binary words as shown: , , , . As we can see, the first bit of a binary image determines which complementary pair the base belongs to. Let be a DNA sequence. The sequence obtained by applying the mapping given above to is called the binary image of . Now, let . Let the subsequence be called the even subsequence of , and the odd subsequence of . Thus, for example, for , then, . Then and . Let us define an even component as , and an odd component as . With this choice of binary mapping, the GC-content of a DNA sequence equals the Hamming weight of its even component. Hence, a DNA code has constant GC-content if and only if its even component is a constant-weight code.
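The relation between GC-content and the weight of the even component can be made concrete with a short script. The particular 2-bit mapping used below (A = 00, T = 01, C = 10, G = 11) and the indexing of the even component are assumptions chosen only so that the first bit of each pair identifies the complementary pair, as required above; any assignment with that property behaves the same way.

```python
# Illustrative binary mapping of DNA words (assumed mapping, see note above):
# the first bit of each 2-bit image identifies the A/T versus C/G pair.
BIN = {"A": "00", "T": "01", "C": "10", "G": "11"}

def binary_image(seq: str) -> str:
    return "".join(BIN[base] for base in seq)

def even_component(bits: str) -> str:
    """First bit of each 2-bit pair (indices 0, 2, 4, ... of the image)."""
    return bits[0::2]

def odd_component(bits: str) -> str:
    """Second bit of each 2-bit pair."""
    return bits[1::2]

seq = "AGCTC"
bits = binary_image(seq)   # "0011100110"
# Under this mapping, the Hamming weight of the even component equals the
# GC-content of the DNA word (both are 3 here).
assert even_component(bits).count("1") == sum(base in "GC" for base in seq)
```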
Let be a binary code consisting of codewords of length and minimum distance , such that implies that . For , consider the constant-weight subcode , where denotes Hamming weight. Choose such that , and consider a DNA code, , with the following choice for its even and odd components: , , where denotes lexicographic ordering. The in the definition of ensures that if , then , so that distinct codewords in cannot be reverse-complements of each other. The code has codewords of length and constant weight . Furthermore, and (this is because is a subset of the codewords in ). Also, . Note that and both have weight . This implies that and have weight . Due to the weight constraint on , we must have for all , . Thus, the code has codewords of length . From this, we see that (because the component codewords of are taken from ). Similarly, . Therefore, the DNA code with , has codewords of length , and satisfies and . Given the examples above, one may wonder about the future potential of DNA-based computers. Despite their enormous potential, such methods are highly unlikely to be implemented in home or office computers, because the flexibility, speed, and cost of silicon-chip-based devices continue to favor conventional computing. However, such a method could be used in situations where no alternative exists and where the accuracy associated with the DNA hybridization mechanism is required, that is, in applications that require operations to be performed with a high degree of reliability. Currently, there are several software packages, such as the Vienna package, which can predict secondary structure formation in single-stranded DNA (i.e., oligonucleotides) or RNA sequences. See also Coding theory Bioinformatics Biocomputers Computational gene References External links Atri Rudra's course at The State University of New York, Buffalo DNA nanotechnology
Coding theory approaches to nucleic acid design
[ "Materials_science" ]
3,310
[ "Nanotechnology", "DNA nanotechnology" ]
27,220,034
https://en.wikipedia.org/wiki/Ammonium%20phosphate%20%28compound%29
Ammonium phosphate refers to three different chemical compounds, all of which are formed by the reaction of ammonia with phosphoric acid and have the general formula [NH4]x[H3−xPO4], where 1 ≤ x ≤ 3: Ammonium dihydrogenphosphate, [NH4][H2PO4] Diammonium phosphate, [NH4]2[HPO4] Ammonium phosphate, [NH4]3[PO4] Ammonium compounds Phosphates Set index articles on chemistry
Ammonium phosphate (compound)
[ "Chemistry" ]
112
[ "Phosphates", "Ammonium compounds", "Salts" ]