Dataset columns: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
7,984,958
https://en.wikipedia.org/wiki/Bhatnagar%E2%80%93Gross%E2%80%93Krook%20operator
The Bhatnagar–Gross–Krook operator (abbreviated BGK operator) is a collision operator used in the Boltzmann equation and in the lattice Boltzmann method, a computational fluid dynamics technique. It is given by the formula $\Omega_i = \frac{1}{\tau}\left(n_i^{eq} - n_i\right)$, where $n_i^{eq}$ is a local equilibrium value for the population of particles in the direction of link $i$. The term $\tau$ is a relaxation time and is related to the viscosity. The operator is named after Prabhu L. Bhatnagar, Eugene P. Gross, and Max Krook, the three scientists who introduced it in an article in Physical Review in 1954. References Statistical mechanics Computational fluid dynamics
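As an illustration of how the operator is used in practice, here is a minimal sketch (not taken from the article) of a single BGK collision step on the standard D2Q9 lattice Boltzmann stencil; the array layout, equilibrium expansion and test values are assumptions chosen for demonstration.

```python
# Minimal sketch of a BGK collision step on a D2Q9 lattice (illustrative, not from the article).
import numpy as np

# Standard D2Q9 link velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def bgk_collision(n, tau):
    """Relax populations n[i, x, y] toward the local equilibrium with relaxation time tau."""
    rho = n.sum(axis=0)                               # local density
    u = np.einsum('ia,ixy->axy', c, n) / rho          # local velocity
    cu = np.einsum('ia,axy->ixy', c, u)               # c_i . u for every link
    usq = (u**2).sum(axis=0)
    # Second-order (low Mach number) equilibrium distribution in lattice units.
    n_eq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    omega = (n_eq - n) / tau                          # BGK operator: (1/tau)(n_eq - n)
    return n + omega

# Tiny usage example on a 4x4 lattice initialised at rest and at equilibrium.
n0 = np.ones((9, 4, 4)) * w[:, None, None]
n1 = bgk_collision(n0, tau=0.8)
print(np.allclose(n0, n1))   # populations already at equilibrium are unchanged -> True
```

The relaxation time tau sets how fast the populations decay toward equilibrium, which is how the operator connects to the viscosity mentioned above.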
Bhatnagar–Gross–Krook operator
[ "Physics", "Chemistry" ]
129
[ "Fluid dynamics stubs", "Computational fluid dynamics", "Computational physics", "Statistical mechanics", "Computational physics stubs", "Fluid dynamics" ]
7,989,919
https://en.wikipedia.org/wiki/Thermomechanical%20analysis
Thermomechanical analysis (TMA) is a technique used in thermal analysis, a branch of materials science which studies the properties of materials as they change with temperature. Thermomechanical analysis is a subdiscipline of the thermomechanometry (TM) technique. Related techniques and terminology Thermomechanometry is the measurement of a change of a dimension or a mechanical property of the sample while it is subjected to a temperature regime. An associated thermoanalytical method is thermomechanical analysis. A special related technique is thermodilatometry (TD), the measurement of a change of a dimension of the sample with a negligible force acting on the sample while it is subjected to a temperature regime. The associated thermoanalytical method is thermodilatometric analysis (TDA). TDA is often referred to as zero force TMA. The temperature regime may be heating, cooling at a rate of temperature change that can include stepwise temperature changes, linear rate of change, temperature modulation with a set frequency and amplitude, free (uncontrolled) heating or cooling, or maintaining a constant increase in temperature. The sequence of temperatures with respect to time may be predetermined (temperature programmed) or sample controlled (controlled by a feedback signal from the sample response). Thermomechanometry includes several variations according to the force and the way the force is applied. Static force TM (sf-TM) is when the applied force is constant; previously called TMA with TD as the special case of zero force. Dynamic force TM (df-TM) is when the force is changed as for the case of a typical stress–strain analysis; previously called TMA with the term dynamic meaning any alteration of the variable with time, and not to be confused with dynamic mechanical analysis (DMA). Modulated force TM (mf-TM) is when the force is changed with a frequency and amplitude; previously called DMA. The term modulated is a special variant of dynamic, used to be consistent with modulated temperature differential scanning calorimetry (mt-DSC) and other situations when a variable is imposed in a cyclic manner. Mechanical test Mechanical testing seeks to measure mechanical properties of materials using various test specimen and fixture geometries using a range of probe types. Measurement is desired to take place with minimal disturbance of the material being measured. Some characteristics of a material can be measured without disturbance, such as dimensions, mass, volume, density. However, measurement of mechanical properties normally involves disturbance of the system being measured. The measurement often reflects the combined material and measuring device as the system. Knowledge of a structure can be gained by imposing an external stimulus and measuring the response of the material with a suitable probe. The external stimulus can be a stress or strain, however in thermal analysis the influence is often temperature. Thermomechanometry is where a stress is applied to a material and the resulting strain is measured while the material is subjected to a controlled temperature program. The simplest mode of TM is where the imposed stress is zero. No mechanical stimulus is imposed upon the material, the material response is generated by a thermal stress, either by heating or cooling. Zero force thermomechanometry Zero force TM (a variant of sf-TM or TD) measures the response of the material to changes in temperature and the basic change is due to activation of atomic or molecular phonons. 
Increased thermal vibrations produce thermal expansion characterized by the coefficient of thermal expansion (CTE), which is the gradient of the graph of dimensional change versus temperature. CTE depends upon thermal transitions such as the glass transition. CTE of the glassy state is low, while at the glass transition temperature (Tg) increased degrees of molecular segmental motion are released so CTE of the rubbery state is high. Changes in an amorphous polymer may involve other sub-Tg thermal transitions associated with short molecular segments, side-chains and branches. The linearity of the sf-TM curve will be changed by such transitions. Other relaxations may be due to release of internal stress arising from the non-equilibrium state of the glassy amorphous polymer. Such stress is referred to as thermal aging. Other stresses may be a result of moulding pressures, extrusion orientation, thermal gradients during solidification and externally imparted stresses. Semi-crystalline polymers Semi-crystalline polymers are more complex than amorphous polymers, since the crystalline regions are interspersed with amorphous regions. Amorphous regions in close association with the crystals, or sharing common molecules with them as tie molecules, have fewer degrees of freedom than the bulk amorphous phase. These immobilised amorphous regions are called the rigid amorphous phase. CTE of the rigid amorphous phase is expected to be lower than that of the bulk amorphous phase. The crystallites are typically not at equilibrium and they may contain different polymorphs. The crystals re-organize during heating so that they approach the equilibrium crystalline state. Crystal re-organization is a thermally activated process. Further crystallization of the amorphous phase may take place. Each of these processes will interfere with thermal expansion of the material. The material may be a blend or a two-phase block or graft copolymer. If both phases are amorphous then two Tg will be observed if the material exists as two phases. If one Tg is exhibited then it will be between the Tg of the components and the resultant Tg will likely be described by a relationship such as the Flory–Fox or Kwei equations. If one of the components is semi-crystalline then the complexity of a pure crystalline phase and either one or two amorphous phases will result. If both components are semi-crystalline then the morphology will be complex since both crystal phases will likely form separately, though with influence on each other. Cross-linking Cross-linking will restrict the molecular response to temperature change since the degrees of freedom for segmental motions are reduced as molecules become irreversibly linked. Cross-linking chemically links molecules, while crystallinity and fillers introduce physical constraints to motion. Mechanical properties, such as those derived from stress–strain testing, are used to calculate crosslink density, which is usually expressed as the molar mass between crosslinks (Mc). The sensitivity of zero stress TMA to crosslinking is low since the structure receives minimum disturbance. Sensitivity to crosslinks requires high strain such that the segments between crosslinks become fully extended. Zero force TM will only be sensitive to changes in the bulk that are expressed as a change in a linear dimension of the material. The measured change will be the resultant of all processes occurring as the temperature is changed. Some of the processes will be reversible, others irreversible, and others time-dependent.
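The CTE introduced at the start of this section is simply the slope of the dimensional-change trace. The following is a minimal sketch of that calculation on synthetic data; the specimen length, Tg and CTE magnitudes are assumptions for illustration, not values from the article.

```python
# Minimal sketch: CTE as the gradient of dimensional change versus temperature (synthetic data).
import numpy as np

L0 = 10.0                       # initial specimen length in mm (assumed)
T = np.linspace(25, 180, 200)   # temperature ramp, deg C
Tg = 105.0                      # assumed glass transition temperature
alpha_glass, alpha_rubber = 70e-6, 200e-6    # order-of-magnitude CTEs, 1/K
dL = L0 * np.where(T < Tg,
                   alpha_glass * (T - 25),
                   alpha_glass * (Tg - 25) + alpha_rubber * (T - Tg))

def cte(T, dL, lo, hi, L0):
    """Linear CTE from the slope of dL versus T over the window [lo, hi]."""
    m = (T >= lo) & (T <= hi)
    slope = np.polyfit(T[m], dL[m], 1)[0]    # d(dL)/dT in mm/K
    return slope / L0                        # normalised by initial length -> 1/K

print("glassy CTE  :", cte(T, dL, 30, 90, L0))     # ~7e-5 per K
print("rubbery CTE :", cte(T, dL, 120, 170, L0))   # ~2e-4 per K
```

Fitting separate windows below and above Tg reproduces the low glassy and high rubbery expansion coefficients described above, and any sub-Tg transition would show up as a change in the local slope.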
The methodology must be chosen to best detect, distinguish and resolve the thermal expansion or contractions observable. The TM instrument need only apply sufficient stress to keep the probe in contact with the specimen surface, but it must have high sensitivity to dimensional change. The experiment must be conducted at a temperature change rate slow enough for the material to approach thermal equilibrium throughout. While the temperature should be the same throughout the material it will not necessarily be at thermal equilibrium in the context of molecular relaxations. The temperature of the molecules relative to equilibrium is expressed as the fictive temperature. The fictive temperature is the temperature at which the unrelaxed molecules would be at equilibrium. Zero-stress thermomechanometry experimental TM is sufficient for zero stress experiments since superimposition of a frequency to create a dynamic mechanical experiment will have no effect since there is no stress other than a nominal contact stress. The material can be best characterized by an experiment in which the original material is first heated to the upper temperature required, then the material should be cooled at the same rate, followed by a second heating scan. The first heating scan provides a measure of the material with all of its structural complexities. The cooling scan allows and measures the material as the molecules lose mobility, so it is going from an equilibrium state and gradually moving away from equilibrium as the cooling rate exceeds the relaxation rate. The second heating scan will differ from the first heating scan because of thermal relaxation during the first scan and the equilibration achieved during the cooling scan. A second cooling scan followed by a third heating scan can be performed to check on the reliability of the prior scans. Different heating and cooling rates can be used to produce different equilibrations. Annealing at specific temperatures can be used to provide different isothermal relaxations that can be measured by a subsequent heating scan. Static-force TM The sf-TM experiments duplicate experiments that can be performed using differential scanning calorimetry (DSC). A limitation of DSC is that the heat exchange during a process or due to the heat capacity of the material cannot be measured over long times or at slow heating or cooling rates since the finite quantity of heat exchanges will be dispersed over too long a time to be detected. The limitation does not apply to sf-TM since the dimensional change of the material can be measured over any time. The constraint is the practical time for the experiment. The application of multiple scans is shown above to distinguish reversible from irreversible changes. Thermal cycling and annealing steps can be added to provide complex thermal programs to test various attributes of a material as more becomes known about the material. Modulated-temperature TM Modulated temperature TM (mt-TM) has been used as an analogous experiment to modulated-temperature DSC (mtDSC). The principle of mt-TM is similar to the DSC analogy. The temperature is modulated as the TM experiment proceeds. Some thermal processes are reversible, such as the true CTE, while others such as stress relief, orientation randomization and crystallization are irreversible within the conditions of the experiment. The modulation conditions should be different from mt-DSC since the sample and test fixture and enclosure is larger thus requiring longer equilibration time. 
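A minimal sketch of such a modulated temperature programme follows; the ramp rate, amplitude and period are assumptions of the magnitude quoted in the next paragraph, and the function name is mine.

```python
# Minimal sketch of an mt-TM style set-point: linear underlying ramp plus sinusoidal modulation.
import numpy as np

def modulated_program(t_s, T0=30.0, rate_K_per_min=2.0, amplitude_K=1.0, period_s=1000.0):
    """Set-point temperature at time t (seconds) for a modulated-temperature experiment."""
    underlying = T0 + (rate_K_per_min / 60.0) * t_s            # linear underlying ramp
    modulation = amplitude_K * np.sin(2 * np.pi * t_s / period_s)
    return underlying + modulation

t = np.arange(0, 3600, 10.0)          # a one-hour scan sampled every 10 s
T = modulated_program(t)
print(round(T.min(), 1), round(T.max(), 1))   # ramp from about 29 to about 151 degC, +/-1 K modulation
```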
mt-DSC typically uses a period of 60 s, amplitude 0.5-1.0 °C and average heating or cooling rate of 2 °C·min-1. MT-TMA may have a period of 1000 s with the other parameters similar to mt-DSC. These conditions will require long scan times. Another experiment is an isothermal equilibration where the material is heated rapidly to a temperature where relaxations can proceed more rapidly. Thermal aging can take several hours or more under ideal conditions. Internal stresses may relax rapidly. TM can be used to measure the relaxation rates and hence characteristic times for these events, provides they are within practical measurements times available for the instrument. Temperature is the variable that can be changed to bring relaxations into measurable time ranges. Table 1. Typical zero-stress thermomechanometry parameters Static force thermomechanometry experimental Creep and stress relaxation measures the elasticity, viscoelasticity and viscous behaviour of materials under a selected stress and temperature. Tensile geometry is the most common for creep measurements. A small force is initially imparted to keep the specimen aligned and straight. The selected stress is applied rapidly and held constant for the required time; this may be 1 h or more. During application of force the elastic property is observed as an immediate elongation or strain. During the constant force period the time dependent elastic response or viscoelasticity, together with the viscous response, result in further increase in strain. The force is removed rapidly, though the small alignment force is maintained. The recovery measurement time should be four times the creep time, so in this example the recovery time should be 4 h. Upon removal of the force the elastic component results in an immediate contraction. The viscoelastic recovery is exponential as the material slowly recovers some of the previously imparted creep strain. After recovery there is a permanent unrecovered strain due to the viscous component of the properties. Analysis of the data is performed using the four component viscoelastic model where the elements are represented by combinations of springs and dashpots. The experiment can be repeated using different creep forces. The results for varying forces after the same creep time can be used to construct isochronal stress–strain curves. The creep and recovery experiment can be repeated under different temperatures. The creep–time curves measured at various temperatures can be extended using the time-temperature-superposition principle to construct a creep and recovery mastercurve that extends the data to very long and very short times. These times would be impractical to measure directly. Creep at very long timeframes is important for prediction of long term properties and product lifetimes. A complementary property is stress relaxation, where a strain is applied and the corresponding stress change is measured. The mode of measurement is not directly available with most thermomechanical instruments. Stress relaxation is available using any standard universal test instruments, since their mode of operation is application of strain, while the stress is measured. Dynamic force thermomechanometry experimental Experiments where the force is changed with time are called dynamic force thermomechanometry (df-TM). This use of the term dynamic is distinct from the situation where the force is periodically changed with time, typically following a sine relationship, where the term modulated is recommended. 
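Returning to the creep and recovery analysis described above: the four-component spring-and-dashpot model is commonly identified with the Burgers model, and a minimal sketch of its creep response is given below. The parameter values are assumptions chosen only to give plausible magnitudes.

```python
# Minimal sketch of the four-element (Burgers) spring/dashpot creep model: an instantaneous
# elastic term, a retarded (Voigt) viscoelastic term and a viscous flow term.
import numpy as np

def burgers_creep_strain(t, sigma, E1=1e9, E2=5e8, eta2=5e11, eta3=1e13):
    """Strain under constant stress sigma (Pa) after time t (s).
    E1: instantaneous modulus; E2, eta2: retarded element; eta3: viscous flow."""
    elastic = sigma / E1
    retarded = (sigma / E2) * (1 - np.exp(-E2 * t / eta2))
    viscous = sigma * t / eta3
    return elastic + retarded + viscous

t = np.linspace(0, 3600, 7)        # a one-hour creep segment
print(burgers_creep_strain(t, sigma=5e6))
# Immediate elastic jump, exponential (retarded) approach, then slow linear viscous creep.
# On unloading, the elastic part recovers at once, the retarded part recovers exponentially,
# and the viscous part remains as the permanent unrecovered strain described above.
```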
Most thermomechanical instruments are force controlled; that is, they apply a force, then measure a resulting change in a dimension of the test specimen. Usually a constant strain rate is used for stress–strain measurements, but in the case of df-TM the stress will be applied at a chosen rate. The result of a stress–strain analysis is a curve that will reveal the modulus (stiffness) or compliance (softness, the reciprocal of modulus). The modulus is the slope of the initial linear region of the stress–strain curve. Various ways of selecting the region used to calculate the gradient exist: one is the initial part of the curve; another is to select a region defined by a secant to the curve. If the test material is a thermoplastic, a yield zone may be observed and a yield stress (strength) calculated. A brittle material will break before it yields. A ductile material will deform further after yielding. When the material breaks, a break stress (ultimate stress) and break strain are calculated. The area under the stress–strain curve is the energy required to break (toughness). Thermomechanical instruments are distinct in that they can measure only small changes in linear dimension (typically 1 to 10 mm), so yield and break properties can be measured only for small specimens and for those that do not change dimensions very much before exhibiting these properties. A purpose of measuring a stress–strain curve is to establish the linear viscoelastic region (LVR). The LVR is the initial linear part of a stress–strain curve where an increase in stress is accompanied by a proportional increase in strain; that is, the modulus is constant and the change in dimension is reversible. A knowledge of the LVR is a prerequisite for any modulated force thermomechanometry experiments. Conduct of complex experiments should be preceded by preliminary experiments with a limited range of variables to establish the behaviour of the test material for selection of further instrument configuration and operating parameters. Modulated temperature thermomechanometry experimental Modulated temperature conditions are those where the temperature is changed in a cyclic manner, such as a sine wave, isothermal–heating, isothermal–cooling or heat–cool profile. The underlying temperature can increase, decrease or be constant. Modulated temperature conditions enable separation of the data into reversing data that are in phase with the temperature changes, and non-reversing data that are out of phase with the temperature changes. Sf-TM is required since the force should be constant while the temperature is modulated, or at least constant for each modulation period. A reversing property is the coefficient of thermal expansion. Non-reversing properties are thermal relaxations, stress relief and morphological changes that occur during heating, causing the material to approach thermal equilibrium. References Prof. Robert A. Shanks, Thermechanometry of Polymers (2009) Scientific techniques Materials science
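To make the curve-analysis steps above concrete, here is a minimal sketch on synthetic data of extracting an initial-slope modulus, a secant modulus and the area under the curve as a toughness estimate; the stress–strain values and window limits are assumptions, not measurements from the article.

```python
# Minimal sketch: modulus and toughness from a synthetic stress-strain record.
import numpy as np

strain = np.linspace(0, 0.05, 200)                 # dimensionless strain
stress = 2e9 * strain - 1.5e10 * strain**2         # Pa; softens toward a yield zone

def initial_modulus(strain, stress, limit=0.005):
    """Slope of the initial (assumed linear) region, strain <= limit."""
    m = strain <= limit
    return np.polyfit(strain[m], stress[m], 1)[0]

def secant_modulus(strain, stress, at=0.02):
    """Secant modulus: stress divided by strain at a chosen strain."""
    return np.interp(at, strain, stress) / at

toughness = np.trapz(stress, strain)               # J/m^3, energy to the end of the record

print(f"initial modulus  ~ {initial_modulus(strain, stress):.2e} Pa")
print(f"secant modulus   ~ {secant_modulus(strain, stress):.2e} Pa")
print(f"area under curve ~ {toughness:.2e} J/m^3")
```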
Thermomechanical analysis
[ "Physics", "Materials_science", "Engineering" ]
3,378
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
7,990,093
https://en.wikipedia.org/wiki/Annulation
In organic chemistry, annulation (; occasionally annelation) is a chemical reaction in which a new ring is constructed on a molecule. Examples are the Robinson annulation, Danheiser annulation and certain cycloadditions. Annular molecules are constructed from side-on condensed cyclic segments, for example helicenes and acenes. In transannulation a bicyclic molecule is created by intramolecular carbon-carbon bond formation in a large monocyclic ring. An example is the samarium(II) iodide induced ketone - alkene cyclization of 5-methylenecyclooctanone which proceeds through a ketyl intermediate: Benzannulation The term benzannulated compounds refers to derivatives of cyclic compounds (usually aromatic) which are fused to a benzene ring. Examples are listed in the table below: In contemporary chemical literature, the term benzannulation also means "construction of benzene rings from acyclic precursors". Transannular interaction A transannular interaction in chemistry is any chemical interaction (favorable or nonfavorable) between different non-bonding molecular groups in a large ring or macrocycle. See for example atranes. References Ring forming reactions
Annulation
[ "Chemistry" ]
257
[ "Ring forming reactions", "Organic reactions" ]
4,628,609
https://en.wikipedia.org/wiki/Aprotinin
The drug aprotinin (Trasylol, previously Bayer and now Nordic Group pharmaceuticals), is a small protein bovine pancreatic trypsin inhibitor (BPTI), or basic trypsin inhibitor of bovine pancreas, which is an antifibrinolytic molecule that inhibits trypsin and related proteolytic enzymes. Under the trade name Trasylol, aprotinin was used as a medication administered by injection to reduce bleeding during complex surgery, such as heart and liver surgery. Its main effect is the slowing down of fibrinolysis, the process that leads to the breakdown of blood clots. The aim in its use was to decrease the need for blood transfusions during surgery, as well as end-organ damage due to hypotension (low blood pressure) as a result of marked blood loss. The drug was temporarily withdrawn worldwide in 2007 after studies suggested that its use increased the risk of complications or death; this was confirmed by follow-up studies. Trasylol sales were suspended in May 2008, except for very restricted research use. In February 2012 the European Medicines Agency (EMA) scientific committee reverted its previous standpoint regarding aprotinin, and has recommended that the suspension be lifted. Nordic became distributor of aprotinin in 2012. Chemistry Aprotinin is a monomeric (single-chain) globular polypeptide derived from bovine lung tissue. It has a molecular weight of 6512 Da and consists of 16 different amino acid types arranged in a chain 58 residues long that folds into a stable, compact tertiary structure of the 'small SS-rich" type, containing 3 disulfides, a twisted β-hairpin and a C-terminal α-helix. The amino acid sequence for bovine BPTI is RPDFC LEPPY TGPCK ARIIR YFYNA KAGLC QTFVY GGCRA KRNNF KSAED CMRTC GGA. There are 10 positively charged lysine (K) and arginine (R) side chains and only 4 negative aspartate (D) and glutamates (E), making the protein strongly basic, which accounts for the basic in its name. (Because of the usual source organism, BPTI is sometimes referred to as bovine pancreatic trypsin inhibitor.) The high stability of the molecule is due to the 3 disulfide bonds linking the 6 cysteine members of the chain (Cys5-Cys55, Cys14-Cys38 and Cys30-Cys51). The long, basic lysine 15 side chain on the exposed loop (at top left in the image) binds very tightly in the specificity pocket at the active site of trypsin and inhibits its enzymatic action. BPTI is synthesized as a longer, precursor sequence, which folds up and then is cleaved into the mature sequence given above. BPTI is the classic member of the protein family of Kunitz-type serine protease inhibitors. Its physiological functions include the protective inhibition of the major digestive enzyme trypsin when small amounts are produced, by cleavage of the trypsinogen precursor during storage in the pancreas. Mechanism of drug action Aprotinin is a competitive inhibitor of several serine proteases, specifically trypsin, chymotrypsin and plasmin at a concentration of about 125,000 IU/ml, and kallikrein at 300,000 IU/ml. Its action on kallikrein leads to the inhibition of the formation of factor XIIa. As a result, both the intrinsic pathway of coagulation and fibrinolysis are inhibited. Its action on plasmin independently slows fibrinolysis. Drug efficacy In cardiac surgery with a high risk of significant blood loss, aprotinin significantly reduced bleeding, mortality and hospital stay. Beneficial effects were also reported in high-risk orthopedic surgery. 
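The residue counts quoted in the Chemistry section above can be checked directly from the sequence given there; the short sketch below does nothing but count characters in that sequence.

```python
# Minimal sketch: verify the residue counts quoted above from the BPTI sequence in the text.
seq = ("RPDFC LEPPY TGPCK ARIIR YFYNA KAGLC QTFVY "
       "GGCRA KRNNF KSAED CMRTC GGA").replace(" ", "")

basic  = sum(seq.count(aa) for aa in "KR")   # lysine + arginine
acidic = sum(seq.count(aa) for aa in "DE")   # aspartate + glutamate
cys    = seq.count("C")                      # cysteines forming the disulfides

print(len(seq), "residues")                                           # 58
print(basic, "basic (K+R) vs", acidic, "acidic (D+E) side chains")    # 10 vs 4
print(cys, "cysteines ->", cys // 2, "disulfide bonds")               # 6 -> 3
```

The 10 basic versus 4 acidic side chains account for the strongly basic character noted above, and the 6 cysteines pair into the 3 disulfide bonds listed in the text.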
In liver transplantation, initial reports of benefit were overshadowed by concerns about toxicity. In a meta-analysis performed in 2004, transfusion requirements decreased by 39% in coronary artery bypass graft (CABG) surgery. In orthopedic surgery, a decrease of blood transfusions was likewise confirmed. Drug safety There have been concerns about the safety of aprotinin. Anaphylaxis (a severe allergic reaction) occurs at a rate of 1:200 in first-time use, but serology (measuring antibodies against aprotinin in the blood) is not carried out in practice to predict anaphylaxis risk because the correct interpretation of these tests is difficult. Thrombosis, presumably from overactive inhibition of the fibrinolytic system, may occur at a higher rate, but until 2006 there was limited evidence for this association. Similarly, while biochemical measures of renal function were known to occasionally deteriorate, there was no evidence that this greatly influenced outcomes. A study performed in cardiac surgery patients reported in 2006 showed that there was indeed a risk of acute renal failure, myocardial infarction and heart failure, as well as stroke and encephalopathy. The study authors recommend older antifibrinolytics (such as tranexamic acid) in which these risks were not documented. The same group updated their data in 2007 and demonstrated similar findings. In September 2006, Bayer A.G. was faulted by the FDA for not revealing during testimony the existence of a commissioned retrospective study of 67,000 patients, 30,000 of whom received aprotinin and the rest other anti-fibrinolytics. The study concluded aprotinin carried greater risks. The FDA was alerted to the study by one of the researchers involved. Although the FDA issued a statement of concern they did not change their recommendation that the drug may benefit certain subpopulations of patients. In a Public Health Advisory Update dated October 3, 2006, the FDA recommended that "physicians consider limiting Trasylol use to those situations in which the clinical benefit of reduced blood loss is necessary to medical management and outweighs the potential risks" and carefully monitor patients. On October 25, 2007, the FDA issued a statement regarding the "Blood conservation using antifibrinolytics" (BART) randomized trial in a cardiac surgery population. The preliminary findings suggest that, compared to other antifibrinolytic drugs (epsilon-aminocaproic acid and tranexamic acid) aprotinin may increase the risk of death. On October 29, 2006 the Food and Drug Administration issued a warning that aprotinin may have serious kidney and cardiovascular toxicity. The producer, Bayer, reported to the FDA that additional observation studies showed that it may increase the chance for death, serious kidney damage, congestive heart failure and strokes. FDA warned clinicians to consider limiting use to those situations where the clinical benefit of reduced blood loss is essential to medical management and outweighs the potential risks. On November 5, 2007, Bayer announced that it was withdrawing Aprotinin because of a Canadian study that showed it increased the risk of death when used to prevent bleeding during heart surgery. Two studies published in early 2008, both comparing aprotinin with aminocaproic acid, found that mortality was increased by 32 and 64%, respectively. One study found an increased risk in need for dialysis and revascularisation. 
No cases of bovine spongiform encephalopathy transmission by aprotinin have been reported, although the drug was withdrawn in Italy due to fears of this. In vitro use Small amounts of aprotinin can be added to tubes of drawn blood to enable laboratory measurement of certain rapidly degraded proteins such as glucagon. In cell biology aprotinin is used as an enzyme inhibitor to prevent protein degradation during lysis or homogenization of cells and tissues. Aprotinin can be labelled with fluorescein isothiocyanate. The conjugate retains its antiproteolytic and carbohydrate-binding properties and has been used as a fluorescent histochemical reagent for staining glycoconjugates (mucosubstances) that are rich in uronic or sialic acids. History Initially named "kallikrein inactivator", aprotinin was first isolated from cow parotid glands in 1930, and independently as a trypsin inhibitor from bovine pancreas in 1936. It was purified from bovine lung in 1964. As it inhibits pancreatic enzymes, it was initially used in the treatment of acute pancreatitis, in which destruction of the gland by its own enzymes is thought to be part of the pathogenesis. Its use in major surgery commenced in the 1960s. BPTI is one of the most thoroughly studied proteins in terms of structural biology, experimental and computational dynamics, mutagenesis, and folding pathway. It was one of the earliest protein crystal structures solved, in 1970 in the laboratory of Robert Huber, and its substrate-like interaction mode was deciphered in the context of the bovine trypsin complex in 1974. It later also became famous as the first protein to have its structure determined by NMR spectroscopy, in the laboratory of Kurt Wüthrich at the ETH in Zurich in the early 1980s. Because it is a small, stable protein whose structure had been determined at high resolution by 1975, it was the first macromolecule of scientific interest to be simulated using molecular dynamics computation, in 1977 by J. Andrew McCammon and Bruce Gelin, in the Karplus group at Harvard. That study confirmed the then-surprising fact found in the NMR work that even well-packed aromatic sidechains in the interior of a stable protein can flip over rather rapidly (microsecond to millisecond time scale). Rate constants were determined by NMR for the hydrogen exchange of individual peptide NH groups along the chain, ranging from too fast to measure on the most exposed surface to many months for the most buried hydrogen-bonded groups in the center of the β sheet, and those values also correlate fairly well with the degree of motion seen in the dynamics simulations. BPTI was important in the development of knowledge about the process of protein folding, the self-assembly of a polypeptide chain into a specific arrangement in 3D. The problem of achieving the correct pairings among the 6 Cys sidechains was shown to be especially difficult for the two buried, close-together SS bonds near the BPTI chain termini, requiring a non-native intermediate for folding the mature sequence in vitro (it was later discovered that the precursor sequence folds more easily in vivo). BPTI was the cover image on a protein folding compendium volume by Thomas Creighton in 1992. Current findings One scientific study in rats reported that treatment with aprotinin prevents disruption of the blood–brain barrier during C. neoformans infection. Another study in cell cultures suggests that the drug inhibits SARS-CoV-2 replication.
References External links The MEROPS online database for peptidases and their inhibitors: I02.001 Antifibrinolytics Proteins
Aprotinin
[ "Chemistry" ]
2,338
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
4,630,125
https://en.wikipedia.org/wiki/Artificial%20cell
An artificial cell, synthetic cell or minimal cell is an engineered particle that mimics one or many functions of a biological cell. Often, artificial cells are biological or polymeric membranes which enclose biologically active materials. As such, liposomes, polymersomes, nanoparticles, microcapsules and a number of other particles can qualify as artificial cells. The terms "artificial cell" and "synthetic cell" are used in a variety of different fields and can have different meanings, as it is also reflected in the different sections of this article. Some stricter definitions are based on the assumption that the term "cell" directly relates to biological cells and that these structures therefore have to be alive (or part of a living organism) and, further, that the term "artificial" implies that these structures are artificially built from the bottom-up, i.e. from basic components. As such, in the area of synthetic biology, an artificial cell can be understood as a completely synthetically made cell that can capture energy, maintain ion gradients, contain macromolecules as well as store information and have the ability to replicate. This kind of artificial cell has not yet been made. However, in other cases, the term "artificial" does not imply that the entire structure is man-made, but instead, it can refer to the idea that certain functions or structures of biological cells can be modified, simplified, replaced or supplemented with a synthetic entity. In other fields, the term "artificial cell" can refer to any compartment that somewhat resembles a biological cell in size or structure, but is synthetically made, or even fully made from non-biological components. The term "artificial cell" is also used for structures with direct applications such as compartments for drug delivery. Micro-encapsulation allows for metabolism within the membrane, exchange of small molecules and prevention of passage of large substances across it. The main advantages of encapsulation include improved mimicry in the body, increased solubility of the cargo and decreased immune responses. Notably, artificial cells have been clinically successful in hemoperfusion. Bottom-up engineering of living artificial cells The German pathologist Rudolf Virchow brought forward the idea that not only does life arise from cells, but every cell comes from another cell; "Omnis cellula e cellula". Until now, most attempts to create an artificial cell have engineered modules that can mimic certain functions of living cells. Advances in cell-free transcription and translation reactions allow the expression of many genes as well as interdependent genetic and metabolic networks, but these efforts are still far from producing a fully operational cell. A bottom-up approach to build an artificial cell would involve creating a protocell de novo, entirely from non-living materials. As the term "cell" implies, one prerequisite is the generation of some sort of compartment that defines an individual, cellular unit. Phospholipid membranes are an obvious choice as compartmentalizing boundaries, as they act as selective barriers in all living biological cells. Scientists can encapsulate biomolecules in cell-sized phospholipid vesicles and by doing so, observe these molecules to act similarly as in biological cells and thereby recreate certain cell functions. In a similar way, functional biological building blocks can be encapsulated in these lipid compartments to achieve the synthesis of (however rudimentary) artificial cells. 
It is proposed to create a phospholipid bilayer vesicle with DNA capable of self-reproducing using synthetic genetic information. The three primary elements of such artificial cells are the formation of a lipid membrane, DNA and RNA replication through a template process and the harvesting of chemical energy for active transport across the membrane. The main hurdles foreseen and encountered with this proposed protocell are the creation of a minimal synthetic DNA that holds all sufficient information for life, and the reproduction of non-genetic components that are integral in cell development such as molecular self-organization. However, it is hoped that this kind of bottom-up approach would provide insight into the fundamental questions of organizations at the cellular level and the origins of biological life. So far, no completely artificial cell capable of self-reproduction has been synthesized using the molecules of life, and this objective is still in a distant future although various groups are currently working towards this goal. Another method proposed to create a protocell more closely resembles the conditions believed to have been present during evolution known as the primordial soup. Various RNA polymers could be encapsulated in vesicles and in such small boundary conditions, chemical reactions would be tested for. Ethics and controversy Protocell research has created controversy and opposing opinions, including critics of the vague definition of "artificial life". The creation of a basic unit of life is the most pressing ethical concern. Synthetic organisms could escape and cause damage to human health and ecosystems, or the technology could be used to make a biological weapon. Cells with certain non-standard biochemistries, such as mirror life, could also have a competitive advantage over natural organisms. International Research Community In the mid-2010s the research community started recognising the need to unify the field of synthetic cell research, acknowledging that the task of constructing an entire living organism from non-living components was beyond the resources of a single country. In 2017 the NSF-funded international Build-a-Cell large-scale research collaboration for the construction of synthetic living cell was started,. Build-a-Cell has conducted nine interdisciplinary workshopping events, open to all interested, to discuss and guide the future of the synthetic cell community. Build-a-Cell was followed by national synthetic cell organizations in several other countries. Those national organizations include FabriCell, MaxSynBio and BaSyC. The European synthetic cell efforts were unified in 2019 as SynCellEU initiative. Top-down approach to create a minimal living cell Members from the J. Craig Venter Institute have used a top-down computational approach to knock out genes in a living organism to a minimum set of genes. In 2010, the team succeeded in creating a replicating strain (named Mycoplasma laboratorium) of Mycoplasma mycoides using synthetically created DNA deemed to be the minimum requirement for life which was inserted into a genomically empty bacterium. It is hoped that the process of top-down biosynthesis will enable the insertion of new genes that would perform profitable functions such as generation of hydrogen for fuel or capturing excess carbon dioxide in the atmosphere. The myriad regulatory, metabolic, and signaling networks are not completely characterized. 
These top-down approaches have limitations for the understanding of fundamental molecular regulation, since the host organisms have a complex and incompletely defined molecular composition. In 2019 a complete computational model of all pathways in Mycoplasma Syn3.0 cell was published, representing the first complete in silico model for a living minimal organism. Heavy investing in biology has been done by large companies such as ExxonMobil, who has partnered with Synthetic Genomics Inc; Craig Venter's own biosynthetics company in the development of fuel from algae. As of 2016, Mycoplasma genitalium is the only organism used as a starting point for engineering a minimal cell, since it has the smallest known genome that can be cultivated under laboratory conditions; the wild-type variety has 482, and removing exactly 100 genes deemed non-essential resulted in a viable strain with improved growth rates. Reduced-genome Escherichia coli is considered more useful, and viable strains have been developed with 15% of the genome removed. A variation of an artificial cell has been created in which a completely synthetic genome was introduced to genomically emptied host cells. Although not completely artificial because the cytoplasmic components as well as the membrane from the host cell are kept, the engineered cell is under control of a synthetic genome and is able to replicate. Artificial cells for medical applications History In the 1960s Thomas Chang developed microcapsules which he would later call "artificial cells", as they were cell-sized compartments made from artificial materials. These cells consisted of ultrathin membranes of nylon, collodion or crosslinked protein whose semipermeable properties allowed diffusion of small molecules in and out of the cell. These cells were micron-sized and contained cells, enzymes, hemoglobin, magnetic materials, adsorbents and proteins. Later artificial cells have ranged from hundred-micrometer to nanometer dimensions and can carry microorganisms, vaccines, genes, drugs, hormones and peptides. The first clinical use of artificial cells was in hemoperfusion by the encapsulation of activated charcoal. In the 1970s, researchers were able to introduce enzymes, proteins and hormones to biodegradable microcapsules, later leading to clinical use in diseases such as Lesch–Nyhan syndrome. Although Chang's initial research focused on artificial red blood cells, only in the mid-1990s were biodegradable artificial red blood cells developed. Artificial cells in biological cell encapsulation were first used in the clinic in 1994 for treatment in a diabetic patient and since then other types of cells such as hepatocytes, adult stem cells and genetically engineered cells have been encapsulated and are under study for use in tissue regeneration. Materials Membranes for artificial cells can be made of simple polymers, crosslinked proteins, lipid membranes or polymer-lipid complexes. Further, membranes can be engineered to present surface proteins such as albumin, antigens, Na/K-ATPase carriers, or pores such as ion channels. Commonly used materials for the production of membranes include hydrogel polymers such as alginate, cellulose and thermoplastic polymers such as hydroxyethyl methacrylate-methyl methacrylate (HEMA- MMA), polyacrylonitrile-polyvinyl chloride (PAN-PVC), as well as variations of the above-mentioned. 
The material used determines the permeability of the cell membrane, which is important in determining adequate diffusion of nutrients, waste and other critical molecules. Hydrophilic polymers have the potential to be biocompatible and can be fabricated into a variety of forms which include polymer micelles, sol-gel mixtures, physical blends and crosslinked particles and nanoparticles. Of special interest are stimuli-responsive polymers that respond to pH or temperature changes for use in targeted delivery. These polymers may be administered in liquid form through a macroscopic injection and solidify or gel in situ because of the difference in pH or temperature. Nanoparticle and liposome preparations are also routinely used for material encapsulation and delivery. A major advantage of liposomes is their ability to fuse to cell and organelle membranes. Preparation Many variations for artificial cell preparation and encapsulation have been developed. Typically, vesicles such as a nanoparticle, polymersome or liposome are synthesized. An emulsion is typically made through the use of high pressure equipment such as a high pressure homogenizer or a Microfluidizer. Two micro-encapsulation methods for nitrocellulose are also described below. High-pressure homogenization In a high-pressure homogenizer, two liquids in oil/liquid suspension are forced through a small orifice under very high pressure. This process divides the products and allows the creation of extremely fine particles, as small as 1 nm. Microfluidization This technique uses a patented Microfluidizer to obtain more homogeneous suspensions that contain smaller particles than homogenizers produce. A homogenizer is first used to create a coarse suspension which is then pumped into the microfluidizer under high pressure. The flow is then split into two streams which react at very high velocities in an interaction chamber until the desired particle size is obtained. This technique allows for large scale production of phospholipid liposomes and subsequent material nanoencapsulations. Drop method In this method, a cell solution is incorporated dropwise into a collodion solution of cellulose nitrate. As the drop travels through the collodion, it is coated with a membrane thanks to the interfacial polymerization properties of the collodion. The cell later settles into paraffin, where the membrane sets, and is then suspended in a saline solution. The drop method is used for the creation of large artificial cells which encapsulate biological cells, stem cells and genetically engineered stem cells. Emulsion method The emulsion method differs in that the material to be encapsulated is usually smaller and is placed in the bottom of a reaction chamber where the collodion is added on top and centrifuged, or otherwise disturbed in order to create an emulsion. The encapsulated material is then dispersed and suspended in saline solution. Clinical relevance Drug release and delivery Artificial cells used for drug delivery differ from other artificial cells since their contents are intended to diffuse out of the membrane, or be engulfed and digested by a host target cell. Often used are submicron, lipid membrane artificial cells that may be referred to as nanocapsules, nanoparticles, polymersomes, or other variations of the term. A temperature-responsive system has been developed to use RNA thermometers to control the timing and location of cargo release from artificial cells.
This is done by having artificial cells express a pore-forming protein, alpha hemolysin, under the control of an RNA thermometer, allowing cargo release to be coupled to temperature changes. Enzyme therapy Enzyme therapy is being actively studied for genetic metabolic diseases where an enzyme is over-expressed, under-expressed, defective, or absent altogether. In the case of under-expression or expression of a defective enzyme, an active form of the enzyme is introduced in the body to compensate for the deficit. On the other hand, an enzymatic over-expression may be counteracted by introduction of a competing non-functional enzyme; that is, an enzyme which metabolizes the substrate into non-active products. When placed within an artificial cell, enzymes can carry out their function for a much longer period compared to free enzymes and can be further optimized by polymer conjugation. The first enzyme studied under artificial cell encapsulation was asparaginase for the treatment of lymphosarcoma in mice. This treatment delayed the onset and growth of the tumor. These initial findings led to further research in the use of artificial cells for enzyme delivery in tyrosine-dependent melanomas. These tumors have a higher dependency on tyrosine than normal cells for growth, and research has shown that lowering systemic levels of tyrosine in mice can inhibit growth of melanomas. The use of artificial cells in the delivery of tyrosinase, an enzyme that digests tyrosine, allows for better enzyme stability and has been shown to be effective in the removal of tyrosine without the severe side-effects associated with tyrosine deprivation in the diet. Artificial cell enzyme therapy is also of interest for the activation of prodrugs such as ifosfamide in certain cancers. Artificial cells encapsulating the cytochrome p450 enzyme, which converts this prodrug into the active drug, can be tailored to accumulate in the pancreatic carcinoma, or the artificial cells can be implanted close to the tumor site. Here, the local concentration of the activated ifosfamide will be much higher than in the rest of the body, thus preventing systemic toxicity. The treatment was successful in animals and showed a doubling in median survival amongst patients with advanced-stage pancreatic cancer in phase I/II clinical trials, and a tripling in one-year survival rate. Gene therapy In treatment of genetic diseases, gene therapy aims to insert, alter or remove genes within an afflicted individual's cells. The technology relies heavily on viral vectors, which raises concerns about insertional mutagenesis and systemic immune response that have led to human deaths and development of leukemia in clinical trials. Circumventing the need for vectors by using naked or plasmid DNA as its own delivery system also encounters problems such as low transduction efficiency and poor tissue targeting when given systemically. Artificial cells have been proposed as a non-viral vector by which genetically modified non-autologous cells are encapsulated and implanted to deliver recombinant proteins in vivo. This type of immuno-isolation has been proven efficient in mice through delivery of artificial cells containing mouse growth hormone, which rescued growth retardation in mutant mice. A few strategies have advanced to human clinical trials for the treatment of pancreatic cancer, lateral sclerosis and pain control.
Activated charcoal has the capability of adsorbing many large molecules and has for a long time been known for its ability to remove toxic substances from the blood in accidental poisoning or overdose. However, perfusion through direct charcoal administration is toxic as it leads to embolisms and damage of blood cells followed by removal by platelets. Artificial cells allow toxins to diffuse into the cell while keeping the dangerous cargo within their ultrathin membrane. Artificial cell hemoperfusion has been proposed as a less costly and more efficient detoxifying option than hemodialysis, in which blood filtering takes place only through size separation by a physical membrane. In hemoperfusion, thousands of adsorbent artificial cells are retained inside a small container through the use of two screens on either end through which patient blood perfuses. As the blood circulates, toxins or drugs diffuse into the cells and are retained by the absorbing material. The membranes of artificial cells are much thinner those used in dialysis and their small size means that they have a high membrane surface area. This means that a portion of cell can have a theoretical mass transfer that is a hundredfold higher than that of a whole artificial kidney machine. The device has been established as a routine clinical method for patients treated for accidental or suicidal poisoning but has also been introduced as therapy in liver failure and kidney failure by carrying out part of the function of these organs. Artificial cell hemoperfusion has also been proposed for use in immunoadsorption through which antibodies can be removed from the body by attaching an immunoadsorbing material such as albumin on the surface of the artificial cells. This principle has been used to remove blood group antibodies from plasma for bone marrow transplantation and for the treatment of hypercholesterolemia through monoclonal antibodies to remove low-density lipoproteins. Hemoperfusion is especially useful in countries with a weak hemodialysis manufacturing industry as the devices tend to be cheaper there and used in kidney failure patients. Encapsulated cells The most common method of preparation of artificial cells is through cell encapsulation. Encapsulated cells are typically achieved through the generation of controlled-size droplets from a liquid cell suspension which are then rapidly solidified or gelated to provide added stability. The stabilization may be achieved through a change in temperature or via material crosslinking. The microenvironment that a cell sees changes upon encapsulation. It typically goes from being on a monolayer to a suspension in a polymer scaffold within a polymeric membrane. A drawback of the technique is that encapsulating a cell decreases its viability and ability to proliferate and differentiate. Further, after some time within the microcapsule, cells form clusters that inhibit the exchange of oxygen and metabolic waste, leading to apoptosis and necrosis thus limiting the efficacy of the cells and activating the host's immune system. Artificial cells have been successful for transplanting a number of cells including islets of Langerhans for diabetes treatment, parathyroid cells and adrenal cortex cells. Encapsulated hepatocytes Shortage of organ donors make artificial cells key players in alternative therapies for liver failure. 
The use of artificial cells for hepatocyte transplantation has demonstrated feasibility and efficacy in providing liver function in models of animal liver disease and bioartificial liver devices. Research stemmed from experiments in which the hepatocytes were attached to the surface of micro-carriers and has evolved into hepatocytes which are encapsulated in a three-dimensional matrix in alginate microdroplets covered by an outer skin of polylysine. A key advantage to this delivery method is the circumvention of immunosuppression therapy for the duration of the treatment. Hepatocyte encapsulations have been proposed for use in a bioartificial liver. The device consists of a cylindrical chamber embedded with isolated hepatocytes through which patient plasma is circulated extra-corporeally in a type of hemoperfusion. Because microcapsules have a high surface area to volume ratio, they provide a large surface for substrate diffusion and can accommodate a large number of hepatocytes. Treatment of mice with induced liver failure showed a significant increase in the rate of survival. Artificial liver systems are still in early development but show potential for patients waiting for organ transplant or while a patient's own liver regenerates sufficiently to resume normal function. So far, clinical trials using artificial liver systems and hepatocyte transplantation in end-stage liver diseases have shown improvement of health markers but have not yet improved survival. The short longevity and aggregation of artificial hepatocytes after transplantation are the main obstacles encountered. Hepatocytes co-encapsulated with stem cells show greater viability in culture and after implantation, and implantation of encapsulated stem cells alone has also shown liver regeneration. As such, interest has arisen in the use of stem cells for encapsulation in regenerative medicine. Encapsulated bacterial cells The oral ingestion of live bacterial cell colonies has been proposed, and is currently used therapeutically, for the modulation of intestinal microflora, prevention of diarrheal diseases, treatment of H. pylori infections, atopic inflammations, lactose intolerance and immune modulation, amongst others. The proposed mechanism of action is not fully understood but is believed to have two main effects. The first is the nutritional effect, in which the bacteria compete with toxin-producing bacteria. The second is the sanitary effect, which stimulates resistance to colonization and stimulates immune response. The oral delivery of bacterial cultures is often a problem because they are targeted by the immune system and often destroyed when taken orally. Artificial cells help address these issues by providing mimicry into the body and selective or long term release, thus increasing the viability of bacteria reaching the gastrointestinal system. In addition, live bacterial cell encapsulation can be engineered to allow diffusion of small molecules, including peptides, into the body for therapeutic purposes. Membranes that have proven successful for bacterial delivery include cellulose acetate and variants of alginate. Additional uses that have arisen from encapsulation of bacterial cells include protection against challenge from M. tuberculosis and upregulation of Ig-secreting cells from the immune system. The technology is limited by the risk of systemic infections, adverse metabolic activities and the risk of gene transfer. However, the greater challenge remains the delivery of sufficient viable bacteria to the site of interest.
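Both the hemoperfusion and hepatocyte passages above rest on the same geometric point: many small capsules expose far more membrane area than a single large device of equal volume. A back-of-the-envelope sketch follows; the sorbent volume and capsule diameter are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope sketch: total membrane area of a packed bed of spherical microcapsules.
import math

def total_surface_area(total_volume_ml, capsule_diameter_um):
    """Total membrane area (m^2) if total_volume_ml of material is packed as equal spheres."""
    r = capsule_diameter_um * 1e-6 / 2                 # radius in metres
    v_one = (4/3) * math.pi * r**3                     # volume of one capsule, m^3
    n = (total_volume_ml * 1e-6) / v_one               # number of capsules
    return n * 4 * math.pi * r**2                      # total area, m^2

# 70 ml of sorbent as 100-micrometre capsules.
print(f"{total_surface_area(70, 100):.1f} m^2 of capsule membrane")   # ~4.2 m^2
```

Since the total area scales as 1/radius for a fixed packed volume, halving the capsule diameter doubles the available membrane area, which is why microcapsules can rival the membrane area of a whole dialysis device.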
Artificial blood cells as oxygen carriers Nano-sized oxygen carriers are used as a type of red blood cell substitute, although they lack other components of red blood cells. They are composed of a synthetic polymersome or an artificial membrane surrounding purified animal, human or recombinant hemoglobin. Overall, hemoglobin delivery continues to be a challenge because it is highly toxic when delivered without any modifications. In some clinical trials, vasopressor effects have been observed. Artificial red blood cells Research interest in the use of artificial cells for blood arose after the AIDS scare of the 1980s. Besides bypassing the potential for disease transmission, artificial red blood cells are desired because they eliminate drawbacks associated with allogeneic blood transfusions such as blood typing, immune reactions and the short storage life of 42 days. A hemoglobin substitute may be stored at room temperature, and not under refrigeration, for more than a year. Attempts have been made to develop a complete working red blood cell which comprises not only an oxygen carrier but also the enzymes associated with the cell, such as carbonic anhydrase. The first attempt was made in 1957 by replacing the red blood cell membrane with an ultrathin polymeric membrane, which was followed by encapsulation through a lipid membrane and, more recently, a biodegradable polymeric membrane. A biological red blood cell membrane including lipids and associated proteins can also be used to encapsulate nanoparticles and increase residence time in vivo by bypassing macrophage uptake and systemic clearance. Artificial leuko-polymersomes A leuko-polymersome is a polymersome engineered to have the adhesive properties of a leukocyte. Polymersomes are vesicles composed of a bilayer sheet that can encapsulate many active molecules such as drugs or enzymes. By adding the adhesive properties of a leukocyte to their membranes, they can be made to slow down, or roll along, epithelial walls within the quickly flowing circulatory system. Unconventional types of artificial cells Electronic artificial cell The concept of an Electronic Artificial Cell has been expanded in a series of three EU projects coordinated by John McCaskill from 2004 to 2015. The European Commission sponsored the development of the Programmable Artificial Cell Evolution (PACE) program from 2004 to 2008 whose goal was to lay the foundation for the creation of "microscopic self-organizing, self-replicating, and evolvable autonomous entities built from simple organic and inorganic substances that can be genetically programmed to perform specific functions" for eventual integration into information systems. The PACE project developed the first Omega Machine, a microfluidic life support system for artificial cells that could complement chemically missing functionalities (as originally proposed by Norman Packard, Steen Rasmussen, Mark Bedau and John McCaskill). The ultimate aim was to attain an evolvable hybrid cell in a complex microscale programmable environment. The functions of the Omega Machine could then be removed stepwise, posing a series of solvable evolution challenges to the artificial cell chemistry. The project achieved chemical integration up to the level of pairs of the three core functions of artificial cells (a genetic subsystem, a containment system and a metabolic system), and generated novel spatially resolved programmable microfluidic environments for the integration of containment and genetic amplification.
The project led to the creation of the European Centre for Living Technology. Following this research, in 2007, John McCaskill proposed to concentrate on an electronically complemented artificial cell, called the Electronic Chemical Cell. The key idea was to use a massively parallel array of electrodes coupled to locally dedicated electronic circuitry, in a two-dimensional thin film, to complement emerging chemical cellular functionality. Local electronic information defining the electrode switching and sensing circuits could serve as an electronic genome, complementing the molecular sequential information in the emerging protocells. A research proposal was successful with the European Commission, and an international team of scientists partially overlapping with the PACE consortium commenced work on the project Electronic Chemical Cells, which ran from 2008 to 2012. The project demonstrated among other things that electronically controlled local transport of specific sequences could be used as an artificial spatial control system for the genetic proliferation of future artificial cells, and that core processes of metabolism could be delivered by suitably coated electrode arrays. The major limitation of this approach, apart from the initial difficulties in mastering microscale electrochemistry and electrokinetics, is that the electronic system is interconnected as a rigid non-autonomous piece of macroscopic hardware. In 2011, McCaskill proposed to invert the geometry of electronics and chemistry: instead of placing chemicals in an active electronic medium, to place microscopic autonomous electronics in a chemical medium. He organized a project to tackle a third generation of Electronic Artificial Cells at the 100 μm scale that could self-assemble from two half-cell "lablets" to enclose an internal chemical space, and function with the aid of active electronics powered by the medium they are immersed in. Such cells can copy both their electronic and chemical contents and will be capable of evolution within the constraints provided by their special pre-synthesized microscopic building blocks. In September 2012 work commenced on this project. Artificial neurons Jeewanu Jeewanu protocells are synthetic chemical particles that possess cell-like structure and seem to have some functional living properties. First synthesized in 1963 from simple minerals and basic organics while exposed to sunlight, they are reported to have some metabolic capabilities, a semipermeable membrane, amino acids, phospholipids, carbohydrates and RNA-like molecules. However, the nature and properties of the Jeewanu remain to be clarified. Semi-artificial cyborg cells See also Protocell Synthetic biology Artificial life Targeted drug delivery Respirocyte Chemoton Jeewanu References Artificial cell Synthetic biology
Artificial cell
[ "Engineering", "Biology" ]
6,035
[ "Synthetic biology", "Biological engineering", "Cell biology", "Bioinformatics", "Molecular genetics" ]
4,632,596
https://en.wikipedia.org/wiki/Bombieri%E2%80%93Vinogradov%20theorem
In mathematics, the Bombieri–Vinogradov theorem (sometimes simply called Bombieri's theorem) is a major result of analytic number theory, obtained in the mid-1960s, concerning the distribution of primes in arithmetic progressions, averaged over a range of moduli. The first result of this kind was obtained by Mark Barban in 1961 and the Bombieri–Vinogradov theorem is a refinement of Barban's result. The Bombieri–Vinogradov theorem is named after Enrico Bombieri and A. I. Vinogradov, who published on a related topic, the density hypothesis, in 1965. This result is a major application of the large sieve method, which developed rapidly in the early 1960s, from its beginnings in work of Yuri Linnik two decades earlier. Besides Bombieri, Klaus Roth was working in this area. In the late 1960s and early 1970s, many of the key ingredients and estimates were simplified by Patrick X. Gallagher. Statement of the Bombieri–Vinogradov theorem Let $x$ and $Q$ be any two positive real numbers with $x^{1/2}\log^{-A} x \le Q \le x^{1/2}$, where $A$ is any fixed positive number. Then $\sum_{q\le Q} \max_{y\le x} \max_{\substack{1\le a\le q \\ (a,q)=1}} \left| \psi(y;q,a) - \frac{y}{\varphi(q)} \right| = O\!\left(x^{1/2} Q (\log x)^5\right)$. Here $\varphi(q)$ is the Euler totient function, which is the number of summands for the modulus q, and $\psi(x;q,a) = \sum_{\substack{n\le x \\ n\equiv a \bmod q}} \Lambda(n)$, where $\Lambda$ denotes the von Mangoldt function. A verbal description of this result is that it addresses the error term in the prime number theorem for arithmetic progressions, averaged over the moduli q up to Q. For a certain range of Q, which is around $\sqrt{x}$ if we neglect logarithmic factors, the averaged error is nearly as small as $\sqrt{x}$. This is not obvious, and without the averaging this is about as strong as the Generalized Riemann Hypothesis (GRH). See also Elliott–Halberstam conjecture (a generalization of Bombieri–Vinogradov) Vinogradov's theorem (named after Ivan Matveyevich Vinogradov) Siegel–Walfisz theorem Notes External links The Bombieri-Vinogradov Theorem, R.C. Vaughan's Lecture note. Sieve theory Theorems in analytic number theory
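Although the theorem is an asymptotic statement about large x, the quantities it bounds are straightforward to assemble numerically. The sketch below is only a small illustration: it drops the inner maximum over y ≤ x for brevity, and x and Q are arbitrary small example values, far too small for the asymptotics to be visible.

```python
# Small numerical illustration of the averaged error term bounded above.
# The inner maximum over y <= x is omitted, and x, Q are arbitrary small values.
from math import gcd, log
from sympy import primerange, totient

def psi(x: int, q: int, a: int) -> float:
    """Chebyshev-type sum psi(x; q, a) = sum of Lambda(n) for n <= x, n = a (mod q)."""
    total = 0.0
    for p in primerange(2, x + 1):
        pk = p
        while pk <= x:                      # prime powers p, p^2, p^3, ...
            if pk % q == a % q:
                total += log(p)             # Lambda(p^k) = log p
            pk *= p
    return total

def averaged_error(x: int, Q: int) -> float:
    """Sum over moduli q <= Q of the worst-case error over residues a coprime to q."""
    total = 0.0
    for q in range(1, Q + 1):
        phi_q = float(totient(q))
        worst = max(abs(psi(x, q, a) - x / phi_q)
                    for a in range(1, q + 1) if gcd(a, q) == 1)
        total += worst
    return total

x, Q = 5000, 20                             # arbitrary small example values
print(f"summed worst-case error for x={x}, Q={Q}: {averaged_error(x, Q):.1f}")
```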
Bombieri–Vinogradov theorem
[ "Mathematics" ]
422
[ "Theorems in mathematical analysis", "Sieve theory", "Theorems in analytic number theory", "Combinatorics", "Theorems in number theory" ]
4,633,401
https://en.wikipedia.org/wiki/Ribonuclease%20P
Ribonuclease P (, RNase P) is a type of ribonuclease which cleaves RNA. RNase P is unique from other RNases in that it is a ribozyme – a ribonucleic acid that acts as a catalyst in the same way that a protein-based enzyme would. Its function is to cleave off an extra, or precursor, sequence of RNA on tRNA molecules. Further, RNase P is one of two known multiple turnover ribozymes in nature (the other being the ribosome), the discovery of which earned Sidney Altman and Thomas Cech the Nobel Prize in Chemistry in 1989: in the 1970s, Altman discovered the existence of precursor tRNA with flanking sequences and was the first to characterize RNase P and its activity in processing of the 5' leader sequence of precursor tRNA. Recent findings also reveal that RNase P has a new function. It has been shown that human nuclear RNase P is required for the normal and efficient transcription of various small noncoding RNAs, such as tRNA, 5S rRNA, SRP RNA and U6 snRNA genes, which are transcribed by RNA polymerase III, one of three major nuclear RNA polymerases in human cells. In Bacteria Bacterial RNase P has two components: an RNA chain, called M1 RNA, and a polypeptide chain, or protein, called C5 protein. In vivo, both components are necessary for the ribozyme to function properly, but in vitro, the M1 RNA can act alone as a catalyst. The primary role of the C5 protein is to enhance the substrate binding affinity and the catalytic rate of the M1 RNA enzyme probably by increasing the metal ion affinity in the active site. The crystal structure of a bacterial RNase P holoenzyme with tRNA has been recently resolved, showing how the large, coaxially stacked helical domains of the RNase P RNA engage in shape selective recognition of the pre-tRNA target. This crystal structure confirms earlier models of substrate recognition and catalysis, identifies the location of the active site, and shows how the protein component increases RNase P functionality. Bacterial RNase P class A and B Ribonuclease P (RNase P) is a ubiquitous endoribonuclease, found in archaea, bacteria and eukarya as well as chloroplasts and mitochondria. Its best characterised activity is the generation of mature 5'-ends of tRNAs by cleaving the 5'-leader elements of precursor-tRNAs. Cellular RNase Ps are ribonucleoproteins (RNP). RNA from bacterial RNase Ps retains its catalytic activity in the absence of the protein subunit, i.e. it is a ribozyme. Isolated eukaryotic and archaeal RNase P RNA has not been shown to retain its catalytic function, but is still essential for the catalytic activity of the holoenzyme. Although the archaeal and eukaryotic holoenzymes have a much greater protein content than the eubacterial ones, the RNA cores from all the three lineages are homologous—helices corresponding to P1, P2, P3, P4, and P10/11 are common to all cellular RNase P RNAs. Yet, there is considerable sequence variation, particularly among the eukaryotic RNAs. In Archaea In archaea, RNase P ribonucleoproteins consist of 4–5 protein subunits that are associated with RNA. As revealed by in vitro reconstitution experiments these protein subunits are individually dispensable for tRNA processing that is essentially mediated by the RNA component. The structures of protein subunits of archaeal RNase P have been resolved by x-ray crystallography and NMR, thus revealing new protein domains and folding fundamental for function. 
Using comparative genomics and improved computational methods, a radically minimized form of the RNase P RNA, dubbed "Type T", has been found in all complete genomes in the crenarchaeal phylogenetic family Thermoproteaceae, including species in the genera Pyrobaculum, Caldivirga and Vulcanisaeta. All retain a conventional catalytic domain, but lack a recognizable specificity domain. 5′ tRNA processing activity of the RNA alone was experimentally confirmed. The Pyrobaculum and Caldivirga RNase P RNAs are the smallest naturally occurring forms yet discovered to function as trans-acting ribozymes. Loss of the specificity domain in these RNAs suggests potential altered substrate specificity. It has recently been argued that the archaebacterium Nanoarchaeum equitans does not possess RNase P. Computational and experimental studies failed to find evidence for its existence. In this organism the tRNA promoter is close to the tRNA gene, and it is thought that transcription starts at the first base of the tRNA, thus removing the requirement for RNase P. In eukaryotes In eukaryotes, such as humans and yeast, most RNase P consists of an RNA chain that is structurally similar to that found in bacteria as well as nine to ten associated proteins (as opposed to the single bacterial RNase P protein, C5). Five of these protein subunits exhibit homology to archaeal counterparts. These protein subunits of RNase P are shared with RNase MRP, a catalytic ribonucleoprotein involved in processing of ribosomal RNA in the nucleolus. RNase P from eukaryotes was only recently demonstrated to be a ribozyme. Accordingly, the numerous protein subunits of eukaryotic RNase P have a minor contribution to tRNA processing per se, while they seem to be essential for the function of RNase P and RNase MRP in other biological settings, such as gene transcription and the cell cycle. Despite the bacterial origins of mitochondria and chloroplasts, these organelles in higher animals and plants do not appear to contain an RNA-based RNase P. It has been shown that human mitochondrial RNase P is a protein and does not contain RNA. Spinach chloroplast RNase P has also been shown to function without an RNA subunit. Therapies using RNase P RNase P is now being studied as a potential therapy for diseases such as herpes simplex virus, cytomegalovirus, influenza and other respiratory infections, HIV-1 and cancer caused by the fusion gene BCR-ABL. External guide sequences (EGSs) are formed with complementarity to viral or oncogenic mRNA and structures that mimic the T loop and acceptor stem of tRNA. These structures allow RNase P to recognize the EGS and cleave the target mRNA. EGS therapies have been shown to be effective in culture and in live mice. References Further reading External links Nobel Lecture of Sidney Altman, Nobel prize in Chemistry 1989 RNase P Database at ncsu.edu Ribonucleases Ribozymes RNA splicing EC 3.1.26 Ribonucleoproteins
Ribonuclease P
[ "Chemistry" ]
1,498
[ "Catalysis", "Ribozymes" ]
4,636,561
https://en.wikipedia.org/wiki/Charge-transfer%20insulators
Charge-transfer insulators are a class of materials predicted to be conductors following conventional band theory, but which are in fact insulators due to a charge-transfer process. Unlike in Mott insulators, where the insulating properties arise from electrons hopping between unit cells, the electrons in charge-transfer insulators move between atoms within the unit cell. In the Mott–Hubbard case, it is easier for electrons to transfer between two adjacent metal sites (on-site Coulomb interaction U); here the lowest-lying excitation corresponds to the Coulomb energy U, with $d_i^n d_j^n \rightarrow d_i^{n+1} d_j^{n-1}$. In the charge-transfer case, the excitation happens from the anion (e.g., oxygen) p level to the metal d level, with the charge-transfer energy Δ: $d^n \rightarrow d^{n+1}\underline{L}$. U is determined by repulsive/exchange effects between the cation valence electrons. Δ is tuned by the chemistry between the cation and anion. One important difference is the creation of an oxygen p hole, corresponding to the change from a 'normal' O²⁻ ion to the ionic O⁻ state. In this case the ligand hole is often denoted as $\underline{L}$. Distinguishing between Mott-Hubbard and charge-transfer insulators can be done using the Zaanen-Sawatzky-Allen (ZSA) scheme. Exchange interaction Analogous to Mott insulators we also have to consider superexchange in charge-transfer insulators. One contribution is similar to the Mott case: the hopping of a d electron from one transition metal site to another and then back the same way. This process can be written as $d_i^n d_j^n \rightarrow d_i^{n+1}\underline{L}\,d_j^n \rightarrow d_i^{n+1} d_j^{n-1} \rightarrow \ldots \rightarrow d_i^n d_j^n$, the electron moving via the bridging anion. This will result in an antiferromagnetic exchange (for nondegenerate d levels) with an exchange constant of order $t^2/U$, where $t$ is the effective d–d hopping matrix element. In the charge-transfer insulator case the analogous process runs entirely through the ligand, with both metal sites briefly taking an electron from the bridging anion: $d_i^n d_j^n \rightarrow d_i^{n+1}\underline{L}\,d_j^n \rightarrow d_i^{n+1}\underline{L}^2 d_j^{n+1} \rightarrow \ldots \rightarrow d_i^n d_j^n$. This process also yields an antiferromagnetic exchange. The difference between these two possibilities is the intermediate state, which involves one ligand hole for the first exchange path and two for the second. The total exchange energy is the sum of both contributions. Depending on the ratio of U to Δ, the process is dominated by one of the terms and thus the resulting state is either Mott-Hubbard or charge-transfer insulating. References Quantum phases Electronic band structures
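The distinction drawn above by the Zaanen-Sawatzky-Allen scheme, comparing the on-site Coulomb energy U with the charge-transfer energy Δ, can be summarized in a small helper. This is only a schematic sketch of the criterion described in the text: the numerical values in the example are invented for illustration, and a bare U-versus-Δ comparison ignores the bandwidth effects that the full ZSA diagram takes into account.

```python
# Schematic sketch of the Zaanen-Sawatzky-Allen classification described above.
# The lowest charge excitation costs roughly min(U, Delta); which of the two is
# smaller decides the character of the gap. Bandwidths are ignored here.

def classify(U_eV: float, delta_eV: float) -> str:
    """Return the insulator type suggested by comparing U and Delta (both in eV)."""
    if U_eV < delta_eV:
        return "Mott-Hubbard insulator (gap of d-d character, size ~ U)"
    return "charge-transfer insulator (gap of p-d character, size ~ Delta)"

# Invented example values, purely for illustration:
for name, U, delta in [("early-3d-like oxide", 4.0, 7.0),
                       ("late-3d-like oxide", 8.0, 3.5)]:
    print(f"{name}: U={U} eV, Delta={delta} eV -> {classify(U, delta)}")
```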
Charge-transfer insulators
[ "Physics", "Chemistry", "Materials_science" ]
462
[ "Quantum phases", "Electron", "Phases of matter", "Quantum mechanics", "Electronic band structures", "Condensed matter physics", "Matter" ]
33,516
https://en.wikipedia.org/wiki/Wave
In physics, mathematics, engineering, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities. Periodic waves oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction, it is said to be a travelling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero. There are two types of waves that are most commonly studied in classical physics: mechanical waves and electromagnetic waves. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves and string vibrations. In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains propagation of waves involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, as determined by their frequencies (or wavelengths), have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays. Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction–diffusion waves, such as in the Belousov–Zhabotinsky reaction; and many more. Mechanical and electromagnetic waves transfer energy, momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics waves are studied as signals. On the other hand, some waves have envelopes which do not move at all such as standing waves (which are fundamental to music) and hydraulic jumps. A physical wave field is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with infinite domain, that extend over the whole space, are commonly studied in mathematics, and are very valuable tools for understanding physical waves in finite domains. A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies. 
A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer); or longitudinal wave if those vectors are aligned with the propagation direction. Mechanical waves include both transverse and longitudinal waves; on the other hand electromagnetic plane waves are strictly transverse while sound waves in fluids (such as air) can only be longitudinal. That physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization, which can be an important attribute. Mathematical description Single waves A wave can be described just like a field, namely as a function where is a position and is a time. The value of is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space . However, in many cases one can ignore one dimension, and let be a point of the Cartesian plane . This is the case, for example, when studying vibrations of a drum skin. One may even restrict to a point of the Cartesian line – that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time , on the other hand, is always assumed to be a scalar; that is, a real number. The value of can be any physical quantity of interest assigned to the point that may vary with time. For example, if represents the vibrations inside an elastic solid, the value of is usually a vector that gives the current displacement from of the material particles that would be at the point in the absence of vibration. For an electromagnetic wave, the value of can be the electric field vector , or the magnetic field vector , or any related quantity, such as the Poynting vector . In fluid dynamics, the value of could be the velocity vector of the fluid at the point , or any scalar property like pressure, temperature, or density. In a chemical reaction, could be the concentration of some substance in the neighborhood of point of the reaction medium. For any dimension (1, 2, or 3), the wave's domain is then a subset of , such that the function value is defined for any point in . For example, when describing the motion of a drum skin, one can consider to be a disk (circle) on the plane with center at the origin , and let be the vertical displacement of the skin at the point of and at time . Superposition Waves of the same type are often superposed and encountered simultaneously at a given point in space and time. The properties at that point are the sum of the properties of each component wave at that point. In general, the velocities are not the same, so the wave form will change over time and space. Wave spectrum Wave families Sometimes one is interested in a single specific wave. More often, however, one needs to understand large set of possible waves; like all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echoes one could get from an airplane that may be approaching an airport. In some of those situations, one may describe such a family of waves by a function that depends on certain parameters , besides and . Then one can obtain different waves – that is, different functions of and – by choosing different values for those parameters. 
For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, that can be written as The parameter defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); is the speed of sound; is the length of the bore; and is a positive integer (1,2,3,...) that specifies the number of nodes in the standing wave. (The position should be measured from the mouthpiece, and the time from any moment at which the pressure at the mouthpiece is maximum. The quantity is the wavelength of the emitted note, and is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters. As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance from the center of the skin to the strike point, and on the strength of the strike. Then the vibration for all possible strikes can be described by a function . Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function such that is the initial temperature at each point of the bar. Then the temperatures at later times can be expressed by a function that depends on the function (that is, a functional operator), so that the temperature at a later time is Differential wave equations Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of , only constrains how those values can change with time. Then the family of waves in question consists of all functions that satisfy those constraints – that is, all solutions of the equation. This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation where is the heat that is being generated per unit of volume and time in the neighborhood of at time (for example, by chemical reactions happening there); are the Cartesian coordinates of the point ; is the (first) derivative of with respect to ; and is the second derivative of relative to . (The symbol "" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.) This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures. For another example, we can describe all possible sounds echoing within a container of gas by a function that gives the pressure at a point and time within that container. If the gas was initially at uniform temperature and composition, the evolution of is constrained by the formula Here is some extra compression force that is being applied to the gas near by some external process, such as a loudspeaker or piston right next to . 
This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is , the second derivative of with respect to time, rather than the first derivative . Yet this small change makes a huge difference on the set of solutions . This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of waves. Wave in elastic medium Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling in the direction in space. For example, let the positive direction be to the right, and the negative direction be to the left. with constant amplitude with constant velocity , where is independent of wavelength (no dispersion) independent of amplitude (linear media, not nonlinear). with constant waveform, or shape This wave can then be described by the two-dimensional functions or, more generally, by d'Alembert's formula: representing two component waveforms and traveling through the medium in opposite directions. A generalized representation of this wave can be obtained as the partial differential equation General solutions are based upon Duhamel's principle. Wave forms The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction). In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v. Amplitude and modulation The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form: where is the amplitude envelope of the wave, is the wavenumber and is the phase. If the group velocity (see below) is wavelength-independent, this equation can be simplified as: showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation. Phase velocity and group velocity There are two velocities that are associated with waves, the phase velocity and the group velocity. Phase velocity is the rate at which the phase of the wave propagates in space: any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. 
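A short numerical check of the statement just made about d'Alembert's form: sampling an example profile F and evaluating F(x − vt) at a few times shows the shape translating rigidly at speed v. The Gaussian profile, the speed and the grid below are arbitrary illustrative choices, not values from the article.

```python
# Minimal sketch of d'Alembert's travelling-wave form u(x, t) = F(x - v t):
# the same profile F, shifted rigidly by v*t, so the shape is preserved.
import numpy as np

def F(s):
    """Arbitrary example pulse profile (a Gaussian bump)."""
    return np.exp(-s ** 2)

v = 2.0                                   # example propagation speed
x = np.linspace(-5.0, 15.0, 2001)         # spatial grid

for t in (0.0, 2.0, 5.0):
    u = F(x - v * t)                      # right-moving component of d'Alembert's formula
    peak_x = x[np.argmax(u)]
    print(f"t={t:4.1f}: peak at x = {peak_x:5.2f} (expected {v * t:5.2f}), "
          f"max amplitude = {u.max():.3f}")
```

The peak position advances by exactly v per unit time while the maximum amplitude stays fixed, which is the "constant waveform" behaviour described above for non-dispersive, linear media.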
The phase velocity is given in terms of the wavelength (lambda) and period as Group velocity is a property of waves that have a defined envelope, measuring propagation through space (that is, phase velocity) of the overall shape of the waves' amplitudes—modulation or envelope of the wave. Special waves Sine waves Plane waves A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on a plane that is perpendicular to that direction. Plane waves can be specified by a vector of unit length indicating the direction that the wave varies in, and a wave profile describing how the wave varies as a function of the displacement along that direction () and time (). Since the wave profile only depends on the position in the combination , any displacement in directions perpendicular to cannot affect the value of the field. Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other. Standing waves A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions. The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time. Solitary waves A soliton or solitary wave is a self-reinforcing wave packet that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons are the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. Physical properties Propagation Wave propagation is any of the ways in which waves travel. With respect to the direction of the oscillation relative to the propagation direction, we can distinguish between longitudinal wave and transverse waves. Electromagnetic waves propagate in vacuum as well as in material media. Propagation of other wave types such as sound may occur only in a transmission medium. Reflection of plane waves in a half-space The propagation and reflection of plane waves—e.g. Pressure waves (P wave) or Shear waves (SH or SV-waves) are phenomena that were first characterized within the field of classical seismology, and are now considered fundamental concepts in modern seismic tomography. The analytical solution to this problem exists and is well known. The frequency domain solution can be obtained by first finding the Helmholtz decomposition of the displacement field, which is then substituted into the wave equation. From here, the plane wave eigenmodes can be calculated. 
SV wave propagation The analytical solution of SV-wave in a half-space indicates that the plane SV wave reflects back to the domain as a P and SV waves, leaving out special cases. The angle of the reflected SV wave is identical to the incidence wave, while the angle of the reflected P wave is greater than the SV wave. For the same wave frequency, the SV wavelength is smaller than the P wavelength. This fact has been depicted in this animated picture. P wave propagation Similar to the SV wave, the P incidence, in general, reflects as the P and SV wave. There are some special cases where the regime is different. Wave velocity Wave velocity is a general concept, of various kinds of wave velocities, for a wave's phase and speed concerning energy (and information) propagation. The phase velocity is given as: where: vp is the phase velocity (with SI unit m/s), ω is the angular frequency (with SI unit rad/s), k is the wavenumber (with SI unit rad/m). The phase speed gives you the speed at which a point of constant phase of the wave will travel for a discrete frequency. The angular frequency ω cannot be chosen independently from the wavenumber k, but both are related through the dispersion relationship: In the special case , with c a constant, the waves are called non-dispersive, since all frequencies travel at the same phase speed c. For instance electromagnetic waves in vacuum are non-dispersive. In case of other forms of the dispersion relation, we have dispersive waves. The dispersion relationship depends on the medium through which the waves propagate and on the type of waves (for instance electromagnetic, sound or water waves). The speed at which a resultant wave packet from a narrow range of frequencies will travel is called the group velocity and is determined from the gradient of the dispersion relation: In almost all cases, a wave is mainly a movement of energy through a medium. Most often, the group velocity is the velocity at which the energy moves through this medium. Waves exhibit common behaviors under a number of standard situations, for example: Transmission and media Waves normally move in a straight line (that is, rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories: A bounded medium if it is finite in extent, otherwise an unbounded medium A linear medium if the amplitudes of different waves at any particular point in the medium can be added A uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space An anisotropic medium if one or more of its physical properties differ in one or more directions An isotropic medium if its physical properties are the same in all directions Absorption Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption." A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored. Reflection When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and line normal to the surface equals the angle made by the reflected wave and the same normal line. 
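The phase- and group-velocity definitions above can be illustrated with a concrete dispersion relation. The deep-water gravity-wave relation ω = √(gk) used below is a standard textbook example rather than something given in this article, and the wavelengths are arbitrary.

```python
# Phase velocity v_p = omega / k and group velocity v_g = d(omega)/dk,
# illustrated with the textbook deep-water dispersion relation omega = sqrt(g k).
import numpy as np

g = 9.81                                   # gravitational acceleration, m/s^2

def omega(k):
    return np.sqrt(g * k)

for wavelength in (1.0, 10.0, 100.0):      # arbitrary example wavelengths in metres
    k = 2.0 * np.pi / wavelength
    v_phase = omega(k) / k
    dk = 1e-6 * k
    v_group = (omega(k + dk) - omega(k - dk)) / (2.0 * dk)   # numerical d(omega)/dk
    print(f"lambda = {wavelength:6.1f} m: v_p = {v_phase:5.2f} m/s, "
          f"v_g = {v_group:5.2f} m/s (ratio v_g/v_p = {v_group / v_phase:.2f})")
```

Because ω ∝ √k here, the group velocity comes out to half the phase velocity, a familiar signature of dispersive deep-water waves; for a non-dispersive relation ω = ck the two would coincide.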
Refraction Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law. Diffraction A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave. Interference When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one were not present. However at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern. Polarization The phenomenon of polarization arises when wave motion can occur simultaneously in two orthogonal directions. Transverse waves can be polarized, for instance. When polarization is used as a descriptor without qualification, it usually refers to the special, simple case of linear polarization. A transverse wave is linearly polarized if it oscillates in only one direction or plane. In the case of linear polarization, it is often useful to add the relative orientation of that plane, perpendicular to the direction of travel, in which the oscillation occurs, such as "horizontal" for instance, if the plane of polarization is parallel to the ground. Electromagnetic waves propagating in free space, for instance, are transverse; they can be polarized by the use of a polarizing filter. Longitudinal waves, such as sound waves, do not exhibit polarization. For these waves there is only one direction of oscillation, that is, along the direction of travel. Dispersion Dispersion is the frequency dependence of the refractive index, a consequence of the atomic nature of materials. A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency. Dispersion is seen by letting white light pass through a prism, the result of which is to produce the spectrum of colors of the rainbow. Isaac Newton was the first to recognize that this meant that white light was a mixture of light of different colors. Doppler effect The Doppler effect or Doppler shift is the change in frequency of a wave in relation to an observer who is moving relative to the wave source. It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842. Mechanical waves A mechanical wave is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. 
Waves on strings The transverse vibration of a string is a function of tension and inertia, and is constrained by the length of the string as the ends are fixed. This constraint limits the steady state modes that are possible, and thereby the frequencies. The speed of a transverse wave traveling along a vibrating string (v) is directly proportional to the square root of the tension of the string (T) over the linear mass density (μ): where the linear density μ is the mass per unit length of the string. Acoustic waves Acoustic or sound waves are compression waves which travel as body waves at the speed given by: or the square root of the adiabatic bulk modulus divided by the ambient density of the medium (see speed of sound). Water waves Ripples on the surface of a pond are actually a combination of transverse and longitudinal waves; therefore, the points on the surface follow orbital paths. Sound, a mechanical wave that propagates through gases, liquids, solids and plasmas. Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect. Ocean surface waves, which are perturbations that propagate through water. Body waves Body waves travel through the interior of the medium along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: Primary and Secondary waves. Seismic waves Seismic waves are waves of energy that travel through the Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy. They include body waves—the primary (P waves) and secondary waves (S waves)—and surface waves, such as Rayleigh waves, Love waves, and Stoneley waves. Shock waves A shock wave is a type of propagating disturbance. When a wave moves faster than the local speed of sound in a fluid, it is a shock wave. Like an ordinary wave, a shock wave carries energy and can propagate through a medium; however, it is characterized by an abrupt, nearly discontinuous change in pressure, temperature and density of the medium. Shear waves Shear waves are body waves due to shear rigidity and inertia. They can only be transmitted through solids and to a lesser extent through liquids with a sufficiently high viscosity. Other Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, which can be modeled as kinematic waves Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions. Electromagnetic waves An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields satisfy the wave equation both with speed equal to that of the speed of light. From this emerged the idea that light is an electromagnetic wave. The unification of light and electromagnetic waves was experimentally confirmed by Hertz in the end of the 1880s. 
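The two mechanical wave speeds described in words above, √(tension / linear density) for a transverse wave on a string and √(adiabatic bulk modulus / density) for sound, are easy to evaluate numerically. The figures below are rough illustrative values (a guitar-like string and air near room temperature), not data from this article.

```python
# Wave speeds described above: v = sqrt(T / mu) on a string, c = sqrt(K / rho) for sound.
import math

def string_wave_speed(tension_N: float, linear_density_kg_per_m: float) -> float:
    return math.sqrt(tension_N / linear_density_kg_per_m)

def sound_speed(bulk_modulus_Pa: float, density_kg_per_m3: float) -> float:
    return math.sqrt(bulk_modulus_Pa / density_kg_per_m3)

# Rough illustrative values only:
print(f"string: {string_wave_speed(70.0, 5.0e-3):.0f} m/s "
      f"(70 N tension, 5 g/m linear density)")
print(f"air:    {sound_speed(1.42e5, 1.2):.0f} m/s "
      f"(adiabatic bulk modulus ~142 kPa, density ~1.2 kg/m^3)")
```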
Electromagnetic waves can have different frequencies (and thus wavelengths), and are classified accordingly in wavebands, such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The range of frequencies in each of these bands is continuous, and the limits of each band are mostly arbitrary, with the exception of visible light, which must be visible to the normal human eye. Quantum mechanical waves Schrödinger equation The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle. Dirac equation The Dirac equation is a relativistic wave equation detailing electromagnetic interactions. Dirac waves accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The wave equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin- particles. de Broglie waves Louis de Broglie postulated that all particles with momentum have a wavelength where h is the Planck constant, and p is the magnitude of the momentum of the particle. This hypothesis was at the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about 10−13 m. A wave representing such a particle traveling in the k-direction is expressed by the wave function as follows: where the wavelength is determined by the wave vector k as: and the momentum by: However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet, a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value. In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet. Gaussian wave packets also are used to analyze water waves. For example, a Gaussian wavefunction ψ might take the form: at some initial time t = 0, where the central wavelength is related to the central wave vector k0 as λ0 = 2π / k0. It is well known from the theory of Fourier analysis, or from the Heisenberg uncertainty principle (in the case of quantum mechanics) that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian. Given the Gaussian: the Fourier transform is: The Gaussian in space therefore is made up of waves: that is, a number of waves of wavelengths λ such that kλ = 2 π. The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k. 
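The de Broglie relation λ = h/p quoted above can be evaluated directly. The sketch below uses the non-relativistic momentum p = √(2mE), a simplifying assumption that is adequate only for kinetic energies well below the electron's rest energy; the chosen energies are arbitrary examples.

```python
# de Broglie wavelength lambda = h / p, with non-relativistic momentum p = sqrt(2 m E).
import math

H = 6.626e-34            # Planck constant, J*s
M_ELECTRON = 9.109e-31   # electron mass, kg
EV = 1.602e-19           # one electronvolt in joules

def de_broglie_wavelength(mass_kg: float, kinetic_energy_eV: float) -> float:
    energy_J = kinetic_energy_eV * EV
    momentum = math.sqrt(2.0 * mass_kg * energy_J)   # non-relativistic approximation
    return H / momentum

for energy_eV in (1.0, 100.0, 1.0e4):                # arbitrary example energies
    lam = de_broglie_wavelength(M_ELECTRON, energy_eV)
    print(f"electron at {energy_eV:8.0f} eV -> lambda = {lam:.2e} m")
```

A 1 eV electron comes out near 1.2 nm, and the wavelength shrinks as the inverse square root of the kinetic energy, which is why electron wavelengths quickly fall far below atomic dimensions.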
Gravity waves Gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy works to restore equilibrium. Surface waves on water are the most familiar example. Gravitational waves Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016. Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity. See also Index of wave articles Waves in general Parameters Waveforms Electromagnetic waves In fluids Airy wave theory, in fluid dynamics Capillary wave, in fluid dynamics Cnoidal wave, in fluid dynamics Edge wave, a surface gravity wave fixed by refraction against a rigid boundary Faraday wave, a type of wave in liquids Gravity wave, in fluid dynamics Internal wave, a wave within a fluid medium Shock wave, in aerodynamics Sound wave, a wave of sound through a medium such as air or water Tidal wave, a scientifically incorrect name for a tsunami Tollmien–Schlichting wave, in fluid dynamics Wind wave In quantum mechanics In relativity Other specific types of waves Alfvén wave, in plasma physics Atmospheric wave, a periodic disturbance in the fields of atmospheric variables Fir wave, a forest configuration Lamb waves, in solid materials Rayleigh wave, surface acoustic waves that travel on solids Spin wave, in magnetism Spin density wave, in solid materials Trojan wave packet, in particle science Waves in plasmas, in plasma physics Related topics Absorption (electromagnetic radiation) Antenna (radio) Beat (acoustics) Branched flow Cymatics Diffraction Dispersion (water waves) Doppler effect Envelope detector Fourier transform for computing periodicity in evenly spaced data Group velocity Harmonic Huygens–Fresnel principle Index of wave articles Inertial wave Least-squares spectral analysis for computing periodicity in unevenly spaced data List of waves named after people Phase velocity Photon Polarization (physics) Propagation constant Radio propagation Ray (optics) Reaction–diffusion system Reflection (physics) Refraction Resonance Ripple tank Rogue wave Scattering Shallow water equations Shive wave machine Sound Standing wave Transmission medium Velocity factor Wave equation Wave power Wave turbulence Wind wave Wind wave#Formation References Sources . . Crawford jr., Frank S. (1968). Waves (Berkeley Physics Course, Vol. 3), McGraw-Hill, Free online version External links The Feynman Lectures on Physics: Waves Linear and nonlinear waves Science Aid: Wave properties – Concise guide aimed at teens "AT&T Archives: Similiarities of Wave Behavior" demonstrated by J.N. Shive of Bell Labs (video on YouTube) Differential equations Articles containing video clips
Wave
[ "Physics", "Mathematics" ]
6,778
[ "Physical phenomena", "Mathematical objects", "Differential equations", "Equations", "Waves", "Motion (physics)" ]
33,537
https://en.wikipedia.org/wiki/Wing
A wing is a type of fin that produces both lift and drag while moving through air. Wings are defined by two shape characteristics, an airfoil section and a planform. Wing efficiency is expressed as lift-to-drag ratio, which compares the benefit of lift with the air resistance of a given wing shape, as it flies. Aerodynamics is the study of wing performance in air. Equivalent foils that move through water are found on hydrofoil power vessels and foiling sailboats that lift out of the water at speed and on submarines that use diving planes to point the boat upwards or downwards while running submerged. Hydrodynamics is the study of foil performance in water. Etymology and usage The word "wing", from the Old Norse vængr, for many centuries referred mainly to the foremost limbs of birds (in addition to the architectural aisle). But in recent centuries the word's meaning has extended to include lift-producing appendages of insects, bats, pterosaurs, boomerangs, some sail boats and aircraft, or the airfoil on a race car. Aerodynamics The design and analysis of the wings of aircraft is one of the principal applications of the science of aerodynamics, which is a branch of fluid mechanics. The properties of the airflow around any moving object can be found by solving the Navier-Stokes equations of fluid dynamics. Except for simple geometries, these equations are difficult to solve. Simpler explanations can be given. For a wing to produce "lift", it must be oriented at a suitable angle of attack relative to the flow of air past the wing. When this occurs, the wing deflects the airflow downwards, "turning" the air as it passes the wing. Since the wing exerts a force on the air to change its direction, the air must exert a force on the wing, equal in size but opposite in direction. This force arises from different air pressures that exist on the upper and lower surfaces of the wing. Lower-than-ambient air pressure is generated on the top surface of the wing, with a higher-than-ambient pressure on the bottom of the wing. (See: airfoil) These air pressure differences can be either measured using a pressure-measuring device, or can be calculated from the airspeed using physical principles including Bernoulli's principle, which relates changes in air speed to changes in air pressure. The lower air pressure on the top of the wing generates a smaller downward force on the top of the wing than the upward force generated by the higher air pressure on the bottom of the wing. This gives an upward force on the wing. This force is called the lift generated by the wing. The different velocities of the air passing by the wing, the air pressure differences, the change in direction of the airflow, and the lift on the wing are different ways of describing how lift is produced, so it is possible to calculate lift from any one of the other three. For example, the lift can be calculated from the pressure differences, or from different velocities of the air above and below the wing, or from the total momentum change of the deflected air. Fluid dynamics offers other approaches to solving these problems, all of which produce the same answer if correctly calculated. Given a particular wing and its velocity through the air, debates over which mathematical approach is the most convenient to use can be mistaken by those not familiar with the study of aerodynamics as differences of opinion about the basic principles of flight. Cross-sectional shape Wings with an asymmetrical cross-section are the norm in subsonic flight.
Wings with a symmetrical cross-section can also generate lift by using a positive angle of attack to deflect air downward. Symmetrical airfoils have higher stalling speeds than cambered airfoils of the same wing area but are used in aerobatic aircraft as they provide the same flight characteristics whether the aircraft is upright or inverted. Another example comes from sailboats, where the sail is a thin sheet. For flight speeds near the speed of sound (transonic flight), specific asymmetrical airfoil sections are used to minimize the very pronounced increase in drag associated with airflow near the speed of sound. These airfoils, called supercritical airfoils, are flat on top and curved on the bottom. Design features Aircraft wings may feature some of the following: A rounded leading edge cross-section A sharp trailing edge cross-section Leading-edge devices such as slats, slots, or extensions Trailing-edge devices such as flaps or flaperons (combination of flaps and ailerons) Winglets to keep wingtip vortices from increasing drag and decreasing lift Dihedral, or a positive wing angle to the horizontal, increases spiral stability around the roll axis, whereas anhedral, or a negative wing angle to the horizontal, decreases spiral stability. Aircraft wings may have various devices, such as flaps or slats, that the pilot uses to modify the shape and surface area of the wing to change its operating characteristics in flight. Ailerons (usually near the wingtips) to roll the aircraft Spoilers on the upper surface to increase drag for descent and to reduce lift for more weight on wheels during braking Vortex generators to help prevent flow separation in transonic flow Wing fences to keep flow attached to the wing by stopping boundary layer separation from spreading roll direction. Folding wings allow more aircraft storage in the confined space of the hangar deck of an aircraft carrier Variable-sweep wing or "swing wings" that allow outstretched wings during low-speed flight (e.g., take-off, landing and loitering) and swept back wings for high-speed flight (including supersonic flight), such as in the F-111 Aardvark, the F-14 Tomcat, the Panavia Tornado, the MiG-23, the MiG-27, the Tu-160 and the B-1B Lancer. Applications Besides fixed-wing aircraft, applications for wing shapes include: Hang gliders, which use wings ranging from fully flexible (paragliders, gliding parachutes), flexible (framed sail wings), to rigid Kites, which use a variety of lifting surfaces Flying model airplanes Helicopters, which use a rotating wing with a variable pitch angle to provide directional forces Propellers, whose blades generate lift for propulsion. The NASA Space Shuttle, which uses its wings only to glide during its descent to a runway. These types of aircraft are called spaceplanes. Some racing cars, especially Formula One cars, which use upside-down wings (or airfoils) to provide greater traction at high speeds Sailboats, which use sails as vertical wings with variable fullness and direction to move across water Flexible wings In 1948, Francis Rogallo invented the fully limp flexible wing. Domina Jalbert invented flexible un-sparred ram-air airfoiled thick wings. In nature Wings have evolved multiple times in history: in dinosaurs (see Pterosaurs), insects, birds (see Bird wing), mammals (see Bats), fish, reptiles and plants. Wings of pterosaurs, birds, bats, and reptiles all evolved from existing limbs, however insect wings evolved as a completely separate structure. 
Wings facilitated increased locomotion, dispersal, and diversification. Various species of penguins and other flighted or flightless water birds such as auks, cormorants, guillemots, shearwaters, eider and scoter ducks and diving petrels are efficient underwater swimmers, and use their wings to propel themselves through water. See also Flight Natural world: Bird flight Flight feather Flying and gliding animals Insect flight List of soaring birds Samara (winged seeds of trees) Aviation: Aircraft Blade solidity FanWing and Flettner airplane (experimental wing types) Flight dynamics (fixed-wing aircraft) Kite types Ornithopter – Flapping-wing aircraft (research prototypes, simple toys and models) Otto Lilienthal Wing configuration Wingsuit Sailing: Sails Forces on sails Wingsail References External links How Wings Work - Holger Babinsky Physics Education 2003 How Airplanes Fly: A Physical Description of Lift Demystifying the Science of Flight – Audio segment on NPR's Talk of the Nation Science Friday NASA's explanations and simulations Flight of the StyroHawk wing See How It Flies Aerodynamics Aerospace engineering Aircraft wing components Bird anatomy Bird flight Insect anatomy Mammal anatomy
Wing
[ "Chemistry", "Engineering" ]
1,725
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
33,550
https://en.wikipedia.org/wiki/Wood
Wood is a structural tissue/material found as xylem in the stems and roots of trees and other woody plants. It is an organic material, a natural composite of cellulosic fibers that are strong in tension and embedded in a matrix of lignin that resists compression. Wood is sometimes defined as only the secondary xylem in the stems of trees, or more broadly to include the same type of tissue elsewhere, such as in the roots of trees or shrubs. In a living tree, it performs a mechanical-support function, enabling woody plants to grow large or to stand up by themselves. It also conveys water and nutrients among the leaves, other growing tissues, and the roots. Wood may also refer to other plant materials with comparable properties, and to material engineered from wood, woodchips, or fibers. Wood has been used for thousands of years for fuel, as a construction material, for making tools and weapons, furniture and paper. More recently it emerged as a feedstock for the production of purified cellulose and its derivatives, such as cellophane and cellulose acetate. As of 2020, the growing stock of forests worldwide was about 557 billion cubic meters. As an abundant, carbon-neutral renewable resource, woody materials have been of intense interest as a source of renewable energy. In 2008, approximately 3.97 billion cubic meters of wood were harvested. Dominant uses were for furniture and building construction. Wood is scientifically studied and researched through the discipline of wood science, which was initiated at the beginning of the 20th century. History A 2011 discovery in the Canadian province of New Brunswick yielded the earliest known plants to have grown wood, approximately 395 to 400 million years ago. Wood can be dated by carbon dating and in some species by dendrochronology to determine when a wooden object was created. People have used wood for thousands of years for many purposes, including as a fuel or as a construction material for making houses, tools, weapons, furniture, packaging, artworks, and paper. Known constructions using wood date back ten thousand years. Buildings like the longhouses in Neolithic Europe were made primarily of wood. Recent use of wood has been enhanced by the addition of steel and bronze into construction. The year-to-year variation in tree-ring widths and isotopic abundances gives clues to the prevailing climate at the time a tree was cut. Physical properties Growth rings Wood, in the strict sense, is yielded by trees, which increase in diameter by the formation, between the existing wood and the inner bark, of new woody layers which envelop the entire stem, living branches, and roots. This process is known as secondary growth; it is the result of cell division in the vascular cambium, a lateral meristem, and subsequent expansion of the new cells. These cells then go on to form thickened secondary cell walls, composed mainly of cellulose, hemicellulose and lignin. Where the differences between the seasons are distinct, e.g. in New Zealand, growth can occur in a discrete annual or seasonal pattern, leading to growth rings; these can usually be most clearly seen on the end of a log, but are also visible on the other surfaces. If the distinctiveness between seasons is annual (as is the case in equatorial regions, e.g. Singapore), these growth rings are referred to as annual rings. Where there is little seasonal difference, growth rings are likely to be indistinct or absent.
If the bark of the tree has been removed in a particular area, the rings will likely be deformed as the plant overgrows the scar. If there are differences within a growth ring, then the part of a growth ring nearest the center of the tree, and formed early in the growing season when growth is rapid, is usually composed of wider elements. It is usually lighter in color than that near the outer portion of the ring, and is known as earlywood or springwood. The outer portion formed later in the season is then known as the latewood or summerwood. There are major differences, depending on the kind of wood. If a tree grows all its life in the open and the conditions of soil and site remain unchanged, it will make its most rapid growth in youth, and gradually decline. The annual rings of growth are for many years quite wide, but later they become narrower and narrower. Since each succeeding ring is laid down on the outside of the wood previously formed, it follows that unless a tree materially increases its production of wood from year to year, the rings must necessarily become thinner as the trunk gets wider. As a tree reaches maturity its crown becomes more open and the annual wood production is lessened, thereby reducing still more the width of the growth rings. In the case of forest-grown trees so much depends upon the competition of the trees in their struggle for light and nourishment that periods of rapid and slow growth may alternate. Some trees, such as southern oaks, maintain the same width of ring for hundreds of years. On the whole, as a tree gets larger in diameter the width of the growth rings decreases. Knots As a tree grows, lower branches often die, and their bases may become overgrown and enclosed by subsequent layers of trunk wood, forming a type of imperfection known as a knot. The dead branch may not be attached to the trunk wood except at its base and can drop out after the tree has been sawn into boards. Knots affect the technical properties of the wood, usually reducing tension strength, but may be exploited for visual effect. In a longitudinally sawn plank, a knot will appear as a roughly circular "solid" (usually darker) piece of wood around which the grain of the rest of the wood "flows" (parts and rejoins). Within a knot, the direction of the wood (grain direction) is up to 90 degrees different from the grain direction of the regular wood. In the tree a knot is either the base of a side branch or a dormant bud. A knot (when the base of a side branch) is conical in shape (hence the roughly circular cross-section) with the inner tip at the point in stem diameter at which the plant's vascular cambium was located when the branch formed as a bud. In grading lumber and structural timber, knots are classified according to their form, size, soundness, and the firmness with which they are held in place. This firmness is affected by, among other factors, the length of time for which the branch was dead while the attaching stem continued to grow. Knots do not necessarily influence the stiffness of structural timber; this will depend on the size and location. Stiffness and elastic strength are more dependent upon the sound wood than upon localized defects. The breaking strength is very susceptible to defects. Sound knots do not weaken wood when subject to compression parallel to the grain. In some decorative applications, wood with knots may be desirable to add visual interest. 
In applications where wood is painted, such as skirting boards, fascia boards, door frames and furniture, resins present in the timber may continue to 'bleed' through to the surface of a knot for months or even years after manufacture and show as a yellow or brownish stain. A knot primer paint or solution (knotting), correctly applied during preparation, may do much to reduce this problem but it is difficult to control completely, especially when using mass-produced kiln-dried timber stocks. Heartwood and sapwood Heartwood (or duramen) is wood that as a result of a naturally occurring chemical transformation has become more resistant to decay. Heartwood formation is a genetically programmed process that occurs spontaneously. Some uncertainty exists as to whether the wood dies during heartwood formation, as it can still chemically react to decay organisms, but only once. The term heartwood derives solely from its position and not from any vital importance to the tree. This is evidenced by the fact that a tree can thrive with its heart completely decayed. Some species begin to form heartwood very early in life, so having only a thin layer of live sapwood, while in others the change comes slowly. Thin sapwood is characteristic of such species as chestnut, black locust, mulberry, osage-orange, and sassafras, while in maple, ash, hickory, hackberry, beech, and pine, thick sapwood is the rule. Some others never form heartwood. Heartwood is often visually distinct from the living sapwood and can be distinguished in a cross-section where the boundary will tend to follow the growth rings. For example, it is sometimes much darker. Other processes such as decay or insect invasion can also discolor wood, even in woody plants that do not form heartwood, which may lead to confusion. Sapwood (or alburnum) is the younger, outermost wood; in the growing tree it is living wood, and its principal functions are to conduct water from the roots to the leaves and to store up and give back according to the season the reserves prepared in the leaves. By the time they become competent to conduct water, all xylem tracheids and vessels have lost their cytoplasm and the cells are therefore functionally dead. All wood in a tree is first formed as sapwood. The more leaves a tree bears and the more vigorous its growth, the larger the volume of sapwood required. Hence trees making rapid growth in the open have thicker sapwood for their size than trees of the same species growing in dense forests. Sometimes trees (of species that do form heartwood) grown in the open may become of considerable size, or more in diameter, before any heartwood begins to form, for example, in second growth hickory, or open-grown pines. No definite relation exists between the annual rings of growth and the amount of sapwood. Within the same species the cross-sectional area of the sapwood is very roughly proportional to the size of the crown of the tree. If the rings are narrow, more of them are required than where they are wide. As the tree gets larger, the sapwood must necessarily become thinner or increase materially in volume. Sapwood is relatively thicker in the upper portion of the trunk of a tree than near the base, because the age and the diameter of the upper sections are less. When a tree is very young it is covered with limbs almost, if not entirely, to the ground, but as it grows older some or all of them will eventually die and are either broken off or fall off. 
Subsequent growth of wood may completely conceal the stubs which will remain as knots. No matter how smooth and clear a log is on the outside, it is more or less knotty near the middle. Consequently, the sapwood of an old tree, and particularly of a forest-grown tree, will be freer from knots than the inner heartwood. Since in most uses of wood, knots are defects that weaken the timber and interfere with its ease of working and other properties, it follows that a given piece of sapwood, because of its position in the tree, may well be stronger than a piece of heartwood from the same tree. Different pieces of wood cut from a large tree may differ decidedly, particularly if the tree is big and mature. In some trees, the wood laid on late in the life of a tree is softer, lighter, weaker, and more even textured than that produced earlier, but in other trees, the reverse applies. This may or may not correspond to heartwood and sapwood. In a large log the sapwood, because of the time in the life of the tree when it was grown, may be inferior in hardness, strength, and toughness to equally sound heartwood from the same log. In a smaller tree, the reverse may be true. Color In species which show a distinct difference between heartwood and sapwood the natural color of heartwood is usually darker than that of the sapwood, and very frequently the contrast is conspicuous (see section of yew log above). This is produced by deposits in the heartwood of chemical substances, so that a dramatic color variation does not imply a significant difference in the mechanical properties of heartwood and sapwood, although there may be a marked biochemical difference between the two. Some experiments on very resinous longleaf pine specimens indicate an increase in strength, due to the resin which increases the strength when dry. Such resin-saturated heartwood is called "fat lighter". Structures built of fat lighter are almost impervious to rot and termites, and very flammable. Tree stumps of old longleaf pines are often dug, split into small pieces and sold as kindling for fires. Stumps thus dug may actually remain a century or more since being cut. Spruce impregnated with crude resin and dried is also greatly increased in strength thereby. Since the latewood of a growth ring is usually darker in color than the earlywood, this fact may be used in visually judging the density, and therefore the hardness and strength of the material. This is particularly the case with coniferous woods. In ring-porous woods the vessels of the early wood often appear on a finished surface as darker than the denser latewood, though on cross sections of heartwood the reverse is commonly true. Otherwise the color of wood is no indication of strength. Abnormal discoloration of wood often denotes a diseased condition, indicating unsoundness. The black check in western hemlock is the result of insect attacks. The reddish-brown streaks so common in hickory and certain other woods are mostly the result of injury by birds. The discoloration is merely an indication of an injury, and in all probability does not of itself affect the properties of the wood. Certain rot-producing fungi impart to wood characteristic colors which thus become symptomatic of weakness. Ordinary sap-staining is due to fungal growth, but does not necessarily produce a weakening effect. 
Water content Water occurs in living wood in three locations, namely: in the cell walls in the protoplasmic contents of the cells as free water in the cell cavities and spaces, especially of the xylem In heartwood it occurs only in the first and last forms. Wood that is thoroughly air-dried (in equilibrium with the moisture content of the air) retains 8–16% of the water in the cell walls, and none, or practically none, in the other forms. Even oven-dried wood retains a small percentage of moisture, but for all except chemical purposes, may be considered absolutely dry. The general effect of the water content upon the wood substance is to render it softer and more pliable. A similar effect occurs in the softening action of water on rawhide, paper, or cloth. Within certain limits, the greater the water content, the greater its softening effect. The moisture in wood can be measured by several different moisture meters. Drying produces a decided increase in the strength of wood, particularly in small specimens. An extreme example is the case of a completely dry spruce block 5 cm in section, which will sustain a permanent load four times as great as a green (undried) block of the same size will. The greatest strength increase due to drying is in the ultimate crushing strength, and strength at elastic limit in endwise compression; these are followed by the modulus of rupture, and stress at elastic limit in cross-bending, while the modulus of elasticity is least affected. Structure Wood is a heterogeneous, hygroscopic, cellular and anisotropic (or more specifically, orthotropic) material. It consists of cells, and the cell walls are composed of micro-fibrils of cellulose (40–50%) and hemicellulose (15–25%) impregnated with lignin (15–30%). In coniferous or softwood species the wood cells are mostly of one kind, tracheids, and as a result the material is much more uniform in structure than that of most hardwoods. There are no vessels ("pores") in coniferous wood such as one sees so prominently in oak and ash, for example. The structure of hardwoods is more complex. The water conducting capability is mostly taken care of by vessels: in some cases (oak, chestnut, ash) these are quite large and distinct, in others (buckeye, poplar, willow) too small to be seen without a hand lens. In discussing such woods it is customary to divide them into two large classes, ring-porous and diffuse-porous. In ring-porous species, such as ash, black locust, catalpa, chestnut, elm, hickory, mulberry, and oak, the larger vessels or pores (as cross sections of vessels are called) are localized in the part of the growth ring formed in spring, thus forming a region of more or less open and porous tissue. The rest of the ring, produced in summer, is made up of smaller vessels and a much greater proportion of wood fibers. These fibers are the elements which give strength and toughness to wood, while the vessels are a source of weakness. In diffuse-porous woods the pores are evenly sized so that the water conducting capability is scattered throughout the growth ring instead of being collected in a band or row. Examples of this kind of wood are alder, basswood, birch, buckeye, maple, willow, and the Populus species such as aspen, cottonwood and poplar. Some species, such as walnut and cherry, are on the border between the two classes, forming an intermediate group. Earlywood and latewood In softwood In temperate softwoods, there often is a marked difference between latewood and earlywood. 
The latewood will be denser than that formed early in the season. When examined under a microscope, the cells of dense latewood are seen to be very thick-walled and with very small cell cavities, while those formed first in the season have thin walls and large cell cavities. The strength is in the walls, not the cavities. Hence the greater the proportion of latewood, the greater the density and strength. In choosing a piece of pine where strength or stiffness is the important consideration, the principal thing to observe is the comparative amounts of earlywood and latewood. The width of ring is not nearly so important as the proportion and nature of the latewood in the ring. If a heavy piece of pine is compared with a lightweight piece it will be seen at once that the heavier one contains a larger proportion of latewood than the other, and is therefore showing more clearly demarcated growth rings. In white pines there is not much contrast between the different parts of the ring, and as a result the wood is very uniform in texture and is easy to work. In hard pines, on the other hand, the latewood is very dense and is deep-colored, presenting a very decided contrast to the soft, straw-colored earlywood. It is not only the proportion of latewood, but also its quality, that counts. In specimens that show a very large proportion of latewood it may be noticeably more porous and weigh considerably less than the latewood in pieces that contain less latewood. One can judge comparative density, and therefore to some extent strength, by visual inspection. No satisfactory explanation can as yet be given for the exact mechanisms determining the formation of earlywood and latewood. Several factors may be involved. In conifers, at least, rate of growth alone does not determine the proportion of the two portions of the ring, for in some cases the wood of slow growth is very hard and heavy, while in others the opposite is true. The quality of the site where the tree grows undoubtedly affects the character of the wood formed, though it is not possible to formulate a rule governing it. In general, where strength or ease of working is essential, woods of moderate to slow growth should be chosen. In ring-porous woods In ring-porous woods, each season's growth is always well defined, because the large pores formed early in the season abut on the denser tissue of the year before. In the case of the ring-porous hardwoods, there seems to exist a pretty definite relation between the rate of growth of timber and its properties. This may be briefly summed up in the general statement that the more rapid the growth or the wider the rings of growth, the heavier, harder, stronger, and stiffer the wood. This, it must be remembered, applies only to ring-porous woods such as oak, ash, hickory, and others of the same group, and is, of course, subject to some exceptions and limitations. In ring-porous woods of good growth, it is usually the latewood in which the thick-walled, strength-giving fibers are most abundant. As the breadth of ring diminishes, this latewood is reduced so that very slow growth produces comparatively light, porous wood composed of thin-walled vessels and wood parenchyma. In good oak, these large vessels of the earlywood occupy from six to ten percent of the volume of the log, while in inferior material they may make up 25% or more. The latewood of good oak is dark colored and firm, and consists mostly of thick-walled fibers which form one-half or more of the wood. 
In inferior oak, this latewood is much reduced both in quantity and quality. Such variation is very largely the result of rate of growth. Wide-ringed wood is often called "second-growth", because the growth of the young timber in open stands after the old trees have been removed is more rapid than in trees in a closed forest, and in the manufacture of articles where strength is an important consideration such "second-growth" hardwood material is preferred. This is particularly the case in the choice of hickory for handles and spokes. Here not only strength, but toughness and resilience are important. The results of a series of tests on hickory by the U.S. Forest Service show that: "The work or shock-resisting ability is greatest in wide-ringed wood that has from 5 to 14 rings per inch (rings 1.8-5 mm thick), is fairly constant from 14 to 38 rings per inch (rings 0.7–1.8 mm thick), and decreases rapidly from 38 to 47 rings per inch (rings 0.5–0.7 mm thick). The strength at maximum load is not so great with the most rapid-growing wood; it is maximum with from 14 to 20 rings per inch (rings 1.3–1.8 mm thick), and again becomes less as the wood becomes more closely ringed. The natural deduction is that wood of first-class mechanical value shows from 5 to 20 rings per inch (rings 1.3–5 mm thick) and that slower growth yields poorer stock. Thus the inspector or buyer of hickory should discriminate against timber that has more than 20 rings per inch (rings less than 1.3 mm thick). Exceptions exist, however, in the case of normal growth upon dry situations, in which the slow-growing material may be strong and tough." The effect of rate of growth on the qualities of chestnut wood is summarized by the same authority as follows: "When the rings are wide, the transition from spring wood to summer wood is gradual, while in the narrow rings the spring wood passes into summer wood abruptly. The width of the spring wood changes but little with the width of the annual ring, so that the narrowing or broadening of the annual ring is always at the expense of the summer wood. The narrow vessels of the summer wood make it richer in wood substance than the spring wood composed of wide vessels. Therefore, rapid-growing specimens with wide rings have more wood substance than slow-growing trees with narrow rings. Since the more the wood substance the greater the weight, and the greater the weight the stronger the wood, chestnuts with wide rings must have stronger wood than chestnuts with narrow rings. This agrees with the accepted view that sprouts (which always have wide rings) yield better and stronger wood than seedling chestnuts, which grow more slowly in diameter." In diffuse-porous woods In the diffuse-porous woods, the demarcation between rings is not always so clear and in some cases is almost (if not entirely) invisible to the unaided eye. Conversely, when there is a clear demarcation there may not be a noticeable difference in structure within the growth ring. In diffuse-porous woods, as has been stated, the vessels or pores are even-sized, so that the water conducting capability is scattered throughout the ring instead of collected in the earlywood. The effect of rate of growth is, therefore, not the same as in the ring-porous woods, approaching more nearly the conditions in the conifers. In general, it may be stated that such woods of medium growth afford stronger material than when very rapidly or very slowly grown. In many uses of wood, total strength is not the main consideration. 
If ease of working is prized, wood should be chosen with regard to its uniformity of texture and straightness of grain, which will in most cases occur when there is little contrast between the latewood of one season's growth and the earlywood of the next. Monocots Structural material that resembles ordinary, "dicot" or conifer timber in its gross handling characteristics is produced by a number of monocot plants, and these also are colloquially called wood. Of these, bamboo, botanically a member of the grass family, has considerable economic importance, larger culms being widely used as a building and construction material and in the manufacture of engineered flooring, panels and veneer. Another major plant group that produces material that often is called wood are the palms. Of much less importance are plants such as Pandanus, Dracaena and Cordyline. With all this material, the structure and composition of the processed raw material is quite different from ordinary wood. Specific gravity The single most revealing property of wood as an indicator of wood quality is specific gravity (Timell 1986), as both pulp yield and lumber strength are determined by it. Specific gravity is the ratio of the mass of a substance to the mass of an equal volume of water; density is the ratio of a mass of a quantity of a substance to the volume of that quantity and is expressed in mass per unit substance, e.g., grams per milliliter (g/cm3 or g/ml). The terms are essentially equivalent as long as the metric system is used. Upon drying, wood shrinks and its density increases. Minimum values are associated with green (water-saturated) wood and are referred to as basic specific gravity (Timell 1986). The U.S. Forest Products Laboratory lists a variety of ways to define specific gravity (G) and density (ρ) for wood: The FPL has adopted Gb and G12 for specific gravity, in accordance with the ASTM D2555 standard. These are scientifically useful, but don't represent any condition that could physically occur. The FPL Wood Handbook also provides formulas for approximately converting any of these measurements to any other. Density Wood density is determined by multiple growth and physiological factors compounded into "one fairly easily measured wood characteristic" (Elliott 1970). Age, diameter, height, radial (trunk) growth, geographical location, site and growing conditions, silvicultural treatment, and seed source all to some degree influence wood density. Variation is to be expected. Within an individual tree, the variation in wood density is often as great as or even greater than that between different trees (Timell 1986). Variation of specific gravity within the bole of a tree can occur in either the horizontal or vertical direction. Because the specific gravity as defined above uses an unrealistic condition, woodworkers tend to use the "average dried weight", which is a density based on mass at 12% moisture content and volume at the same (ρ12). This condition occurs when the wood is at equilibrium moisture content with air at about 65% relative humidity and temperature at 30 °C (86 °F). This density is expressed in units of kg/m3 or lbs/ft3. Tables The following tables list the mechanical properties of wood and lumber plant species, including bamboo. See also Mechanical properties of tonewoods for additional properties. Wood properties: Bamboo properties: Hard versus soft It is common to classify wood as either softwood or hardwood. The wood from conifers (e.g. 
pine) is called softwood, and the wood from dicotyledons (usually broad-leaved trees, e.g. oak) is called hardwood. These names are a bit misleading, as hardwoods are not necessarily hard, and softwoods are not necessarily soft. The well-known balsa (a hardwood) is actually softer than any commercial softwood. Conversely, some softwoods (e.g. yew) are harder than many hardwoods. There is a strong relationship between the properties of wood and the properties of the particular tree that yielded it, at least for certain species. For example, in loblolly pine, wind exposure and stem position greatly affect the hardness of wood, as well as compression wood content. The density of wood varies with species. The density of a wood correlates with its strength (mechanical properties). For example, mahogany is a medium-dense hardwood that is excellent for fine furniture crafting, whereas balsa is light, making it useful for model building. One of the densest woods is black ironwood. Chemistry The chemical composition of wood varies from species to species, but is approximately 50% carbon, 42% oxygen, 6% hydrogen, 1% nitrogen, and 1% other elements (mainly calcium, potassium, sodium, magnesium, iron, and manganese) by weight. Wood also contains sulfur, chlorine, silicon, phosphorus, and other elements in small quantity. Aside from water, wood has three main components. Cellulose, a crystalline polymer derived from glucose, constitutes about 41–43%. Next in abundance is hemicellulose, which is around 20% in deciduous trees but near 30% in conifers. It is mainly five-carbon sugars that are linked in an irregular manner, in contrast to the cellulose. Lignin is the third component at around 27% in coniferous wood vs. 23% in deciduous trees. Lignin confers the hydrophobic properties reflecting the fact that it is based on aromatic rings. These three components are interwoven, and direct covalent linkages exist between the lignin and the hemicellulose. A major focus of the paper industry is the separation of the lignin from the cellulose, from which paper is made. In chemical terms, the difference between hardwood and softwood is reflected in the composition of the constituent lignin. Hardwood lignin is primarily derived from sinapyl alcohol and coniferyl alcohol. Softwood lignin is mainly derived from coniferyl alcohol. Extractives Aside from the structural polymers, i.e. cellulose, hemicellulose and lignin (lignocellulose), wood contains a large variety of non-structural constituents, composed of low molecular weight organic compounds, called extractives. These compounds are present in the extracellular space and can be extracted from the wood using different neutral solvents, such as acetone. Analogous content is present in the so-called exudate produced by trees in response to mechanical damage or after being attacked by insects or fungi. Unlike the structural constituents, the composition of extractives varies over wide ranges and depends on many factors. The amount and composition of extractives differs between tree species, various parts of the same tree, and depends on genetic factors and growth conditions, such as climate and geography. For example, slower growing trees and higher parts of trees have higher content of extractives. Generally, the softwood is richer in extractives than the hardwood. Their concentration increases from the cambium to the pith. Barks and branches also contain extractives. 
Although extractives represent a small fraction of the wood content, usually less than 10%, they are extraordinarily diverse and thus characterize the chemistry of the wood species. Most extractives are secondary metabolites and some of them serve as precursors to other chemicals. Wood extractives display different activities; some of them are produced in response to wounds, and some of them participate in natural defense against insects and fungi. These compounds contribute to various physical and chemical properties of the wood, such as wood color, fragrance, durability, acoustic properties, hygroscopicity, adhesion, and drying. Considering these impacts, wood extractives also affect the properties of pulp and paper, and importantly cause many problems in the paper industry. Some extractives are surface-active substances and unavoidably affect the surface properties of paper, such as water adsorption, friction and strength. Lipophilic extractives often give rise to sticky deposits during kraft pulping and may leave spots on paper. Extractives also account for paper smell, which is important when making food contact materials. Most wood extractives are lipophilic and only a small part is water-soluble. The lipophilic portion of extractives, which is collectively referred to as wood resin, contains fats and fatty acids, sterols and steryl esters, terpenes, terpenoids, resin acids, and waxes. The heating of resin, i.e. distillation, vaporizes the volatile terpenes and leaves the solid component – rosin. The concentrated liquid of volatile compounds extracted during steam distillation is called essential oil. Distillation of oleoresin obtained from many pines provides rosin and turpentine. Most extractives can be categorized into three groups: aliphatic compounds, terpenes and phenolic compounds. The latter are more water-soluble and usually are absent in the resin. Aliphatic compounds include fatty acids, fatty alcohols and their esters with glycerol, fatty alcohols (waxes) and sterols (steryl esters). Hydrocarbons, such as alkanes, are also present in the wood. Suberin is a polyester, made of suberin acids and glycerol, mainly found in barks. Fats serve as a source of energy for the wood cells. The most common wood sterol is sitosterol, and less commonly sitostanol, citrostadienol, campesterol or cholesterol. The main terpenes occurring in the softwood include mono-, sesqui- and diterpenes. Meanwhile, the terpene composition of the hardwood is considerably different, consisting of triterpenoids, polyprenols and other higher terpenes. Examples of mono-, di- and sesquiterpenes are α- and β-pinenes, 3-carene, β-myrcene, limonene, thujaplicins, α- and β-phellandrenes, α-muurolene, δ-cadinene, α- and δ-cadinols, α- and β-cedrenes, juniperol, longifolene, cis-abienol, borneol, pinifolic acid, nootkatin, chanootin, phytol, geranyl-linalool, β-epimanool, manoyloxide, pimaral and pimarol. Resin acids are usually tricyclic terpenoids, examples of which are pimaric acid, sandaracopimaric acid, isopimaric acid, abietic acid, levopimaric acid, palustric acid, neoabietic acid and dehydroabietic acid. Bicyclic resin acids are also found, such as lambertianic acid, communic acid, mercusic acid and secodehydroabietic acid. Cycloartenol, betulin and squalene are triterpenoids purified from hardwood. Examples of wood polyterpenes are rubber (cis-polypren), gutta percha (trans-polypren), gutta-balatá (trans-polypren) and betulaprenols (acyclic polyterpenoids). 
The mono- and sesquiterpenes of the softwood are responsible for the typical smell of pine forests. Many monoterpenoids, such as β-myrcene, are used in the preparation of flavors and fragrances. Tropolones, such as hinokitiol and other thujaplicins, are present in decay-resistant trees and display fungicidal and insecticidal properties. Tropolones strongly bind metal ions and can cause digester corrosion in the kraft pulping process. Owing to their metal-binding and ionophoric properties, thujaplicins in particular are used in physiology experiments. Various other in-vitro biological activities of thujaplicins have been studied, such as insecticidal, anti-browning, anti-viral, anti-bacterial, anti-fungal, anti-proliferative and anti-oxidant. Phenolic compounds are especially found in the hardwood and the bark. The most well-known wood phenolic constituents are stilbenes (e.g. pinosylvin), lignans (e.g. pinoresinol, conidendrin, plicatic acid, hydroxymatairesinol), norlignans (e.g. nyasol, puerosides A and B, hydroxysugiresinol, sequirin-C), tannins (e.g. gallic acid, ellagic acid), flavonoids (e.g. chrysin, taxifolin, catechin, genistein). Most of the phenolic compounds have fungicidal properties and protect the wood from fungal decay. Together with the neolignans, the phenolic compounds influence the color of the wood. Resin acids and phenolic compounds are the main toxic contaminants present in the untreated effluents from pulping. Polyphenolic compounds, such as flavonoids and tannins, are among the most abundant biomolecules produced by plants. Tannins are used in the leather industry and have been shown to exhibit different biological activities. Flavonoids are very diverse, widely distributed in the plant kingdom and have numerous biological activities and roles. Uses Production Global production of roundwood rose from 3.5 billion m³ in 2000 to 4 billion m³ in 2021. In 2021, wood fuel was the main product with a 49 percent share of the total (2 billion m³), followed by coniferous industrial roundwood with 30 percent (1.2 billion m³) and non-coniferous industrial roundwood with 21 percent (0.9 billion m³). Asia and the Americas are the two main producing regions, accounting for 29 and 28 percent of the total roundwood production, respectively; Africa and Europe have similar shares of 20–21 percent, while Oceania produces the remaining 2 percent. Fuel Wood has a long history of being used as fuel, which continues to this day, mostly in rural areas of the world. Hardwood is preferred over softwood because it creates less smoke and burns longer. Adding a woodstove or fireplace to a home is often felt to add ambiance and warmth. Pulpwood Pulpwood is wood that is raised specifically for use in making paper. Construction Wood has been an important construction material since humans began building shelters, houses and boats. Nearly all boats were made out of wood until the late 19th century, and wood remains in common use today in boat construction. Elm in particular was used for this purpose as it resisted decay as long as it was kept wet (it also served for water pipes before the advent of more modern plumbing). Wood to be used for construction work is commonly known as lumber in North America. Elsewhere, lumber usually refers to felled trees, and the word for sawn planks ready for use is timber. In Medieval Europe oak was the wood of choice for all wood construction, including beams, walls, doors, and floors. 
Today a wider variety of woods is used: solid wood doors are often made from poplar, small-knotted pine, and Douglas fir. New domestic housing in many parts of the world today is commonly built using timber-framed construction. Engineered wood products are becoming a bigger part of the construction industry. They may be used in both residential and commercial buildings as structural and aesthetic materials. In buildings made of other materials, wood will still be found as a supporting material, especially in roof construction, in interior doors and their frames, and as exterior cladding. Wood is also commonly used as shuttering material to form the mold into which concrete is poured during reinforced concrete construction. Flooring A solid wood floor is a floor laid with planks or battens created from a single piece of timber, usually a hardwood. Since wood is hygroscopic (it acquires and loses moisture from the ambient conditions around it), this potential instability effectively limits the length and width of the boards. Solid hardwood flooring is usually cheaper than engineered timbers and damaged areas can be sanded down and refinished repeatedly, the number of times being limited only by the thickness of wood above the tongue. Solid hardwood floors were originally used for structural purposes, being installed perpendicular to the wooden support beams of a building (the joists or bearers) and solid construction timber is still often used for sports floors as well as most traditional wood blocks, mosaics and parquetry. Engineered products Engineered wood products, glued building products "engineered" for application-specific performance requirements, are often used in construction and industrial applications. Glued engineered wood products are manufactured by bonding together wood strands, veneers, lumber or other forms of wood fiber with glue to form a larger, more efficient composite structural unit. These products include glued laminated timber (glulam), wood structural panels (including plywood, oriented strand board and composite panels), laminated veneer lumber (LVL) and other structural composite lumber (SCL) products, parallel strand lumber, and I-joists. Approximately 100 million cubic meters of wood was consumed for this purpose in 1991. The trends suggest that particle board and fiber board will overtake plywood. Wood unsuitable for construction in its native form may be broken down mechanically (into fibers or chips) or chemically (into cellulose) and used as a raw material for other building materials, such as engineered wood, as well as chipboard, hardboard, and medium-density fiberboard (MDF). Such wood derivatives are widely used: wood fibers are an important component of most paper, and cellulose is used as a component of some synthetic materials. Wood derivatives can be used for some kinds of flooring, for example laminate flooring. Furniture and utensils Wood has always been used extensively for furniture, such as chairs and beds. It is also used for tool handles and cutlery, such as chopsticks, toothpicks, and other utensils, like the wooden spoon and pencil. Other Further developments include new lignin glue applications, recyclable food packaging, rubber tire replacement applications, anti-bacterial medical agents, and high strength fabrics or composites. 
As scientists and engineers further learn and develop new techniques to extract various components from wood, or alternatively to modify wood, for example by adding components to wood, new more advanced products will appear on the marketplace. Moisture content electronic monitoring can also enhance next generation wood protection. Art Wood has long been used as an artistic medium. It has been used to make sculptures and carvings for millennia. Examples include the totem poles carved by North American indigenous people from conifer trunks, often Western Red Cedar (Thuja plicata). Other uses of wood in the arts include: Woodcut printmaking and engraving Wood can be a surface to paint on, such as in panel painting Many musical instruments are made mostly or entirely of wood Sports and recreational equipment Many types of sports equipment are made of wood, or were constructed of wood in the past. For example, cricket bats are typically made of white willow. The baseball bats which are legal for use in Major League Baseball are frequently made of ash wood or hickory, and in recent years have been constructed from maple even though that wood is somewhat more fragile. National Basketball Association courts have been traditionally made out of parquetry. Many other types of sports and recreation equipment, such as skis, ice hockey sticks, lacrosse sticks and archery bows, were commonly made of wood in the past, but have since been replaced with more modern materials such as aluminium, titanium or composite materials such as fiberglass and carbon fiber. One noteworthy example of this trend is the family of golf clubs commonly known as the woods, the heads of which were traditionally made of persimmon wood in the early days of the game of golf, but are now generally made of metal or (especially in the case of drivers) carbon-fiber composites. Bacterial degradation Little is known about the bacteria that degrade cellulose. Symbiotic bacteria in Xylophaga may play a role in the degradation of sunken wood. Alphaproteobacteria, Flavobacteria, Actinomycetota, Clostridia, and Bacteroidota have been detected in wood submerged for over a year. See also Acetylated wood Ancient Chinese wooden architecture Ash burner Burl Carpentry Certified wood Conservation and restoration of waterlogged wood Conservation and restoration of wooden artifacts Driftwood Dunnage Forestry Fossil wood Furfurylated wood Green building and wood Helsinki Central Library Oodi International Wood Products Journal List of tallest wooden buildings List of woods Log building Log cabin Log house Mineral bonded wood wool board Mjøstårnet Natural building Parquetry Pallet crafts Pellet fuel Petrified wood Pine tar Plyscraper Pulpwood Reclaimed lumber Sawdust brandy Sawdust Thermally modified wood Timber framing Timber pilings Timber recycling Tinder Wood ash Wood degradation Wood drying Wood economy Wood lagging Wood preservation Wood stabilization Wood warping Wood wool Wood-decay fungus Wooden box Wood-plastic composite Woodturning Woodworm Xylology Xylophagy Xylotheque Xylotomy Yakisugi Sources References External links The Wood in Culture Association (archived 27 May 2016) The Wood Explorer: A comprehensive database of commercial wood species () APA – The Engineered Wood Association (archived 14 April 2011) Visual arts materials Biodegradable materials Building materials Energy crops Forestry Natural materials Trees Woodworking materials Materials Natural resources Botany Wood products Plant anatomy Forest products Wood sciences
Wood
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
9,683
[ "Plants", "Natural materials", "Building engineering", "Biodegradable materials", "Materials science", "Architecture", "Biodegradation", "Construction", "Materials", "Botany", "Wood sciences", "Matter", "Building materials" ]
33,629
https://en.wikipedia.org/wiki/Weak%20interaction
In nuclear physics and particle physics, the weak interaction, also called the weak force, is one of the four known fundamental interactions, with the others being electromagnetism, the strong interaction, and gravitation. It is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms: The weak interaction participates in nuclear fission and nuclear fusion. The theory describing its behaviour and effects is sometimes called quantum flavordynamics (QFD); however, the term QFD is rarely used, because the weak force is better understood by electroweak theory (EWT). The effective range of the weak force is limited to subatomic distances and is less than the diameter of a proton. Background The Standard Model of particle physics provides a uniform framework for understanding electromagnetic, weak, and strong interactions. An interaction occurs when two particles (typically, but not necessarily, half-integer spin fermions) exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary (e.g. electrons or quarks) or composite (e.g. protons or neutrons), although at the deepest levels, all weak interactions ultimately are between elementary particles. In the weak interaction, fermions can exchange three types of force carriers, namely , , and  bosons. The masses of these bosons are far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. In fact, the force is termed weak because its field strength over any set distance is typically several orders of magnitude less than that of the electromagnetic force, which itself is further orders of magnitude less than the strong nuclear force. The weak interaction is the only fundamental interaction that breaks parity symmetry, and similarly, but far more rarely, the only interaction to break charge–parity symmetry. Quarks, which make up composite particles like neutrons and protons, come in six "flavours" up, down, charm, strange, top and bottom which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another. The swapping of those properties is mediated by the force carrier bosons. For example, during beta-minus decay, a down quark within a neutron is changed into an up quark, thus converting the neutron to a proton and resulting in the emission of an electron and an electron antineutrino. Weak interaction is important in the fusion of hydrogen into helium in a star. This is because it can convert a proton (hydrogen) into a neutron to form deuterium which is important for the continuation of nuclear fusion to form helium. The accumulation of neutrons facilitates the buildup of heavy nuclei in a star. Most fermions decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence, commonly used in tritium luminescence, and in the related field of betavoltaics (but not similar to radium luminescence). The electroweak force is believed to have separated into the electromagnetic and weak forces during the quark epoch of the early universe. History In 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi's interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range. 
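Fermi's contact interaction can be read as the low-energy limit of W-boson exchange, and that connection makes the "weakness" of the force quantitative. The following minimal Python sketch is illustrative only: it assumes the standard tree-level matching G_F/sqrt(2) = g^2/(8 M_W^2) together with rounded textbook values for the Fermi constant and the W mass, none of which are quoted from the text above.

import math

G_F = 1.166e-5   # Fermi coupling constant, GeV^-2 (rounded value assumed for illustration)
M_W = 80.4       # W boson mass, GeV (rounded value assumed for illustration)

# Tree-level matching of the four-fermion contact interaction to W exchange:
#   G_F / sqrt(2) = g**2 / (8 * M_W**2)
g = math.sqrt(8 * M_W**2 * G_F / math.sqrt(2))
alpha_weak = g**2 / (4 * math.pi)   # weak analogue of the fine-structure constant
alpha_em = 1 / 137.036

print(f"weak gauge coupling g ~ {g:.2f}")                       # ~0.65, not intrinsically small
print(f"alpha_weak ~ {alpha_weak:.3f} vs alpha_em ~ {alpha_em:.4f}")
# The interaction appears weak at low energies mainly because the exchanged bosons are
# heavy, which strongly suppresses the effective strength at ranges beyond ~1/M_W.

The point of the sketch is that the underlying gauge coupling is of order one; the small effective strength usually quoted for the weak force comes from the large boson masses, consistent with the very short range described above.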
In the mid-1950s, Chen-Ning Yang and Tsung-Dao Lee first suggested that the handedness of the spins of particles in weak interaction might violate the conservation law or symmetry. In 1957, Chien Shiung Wu and collaborators confirmed the symmetry violation. In the 1960s, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force. The existence of the and  bosons was not directly confirmed until 1983. Properties The electrically charged weak interaction is unique in a number of respects: It is the only interaction that can change the flavour of quarks and leptons (i.e., of changing one type of quark into another). It is the only interaction that violates P, or parity symmetry. It is also the only one that violates charge–parity (CP) symmetry. Both the electrically charged and the electrically neutral interactions are mediated (propagated) by force carrier particles that have significant masses, an unusual feature which is explained in the Standard Model by the Higgs mechanism. Due to their large mass (approximately 90 GeV/c2) these carrier particles, called the and  bosons, are short-lived with a lifetime of under  seconds. The weak interaction has a coupling constant (an indicator of how frequently interactions occur) between and , compared to the electromagnetic coupling constant of about and the strong interaction coupling constant of about 1; consequently the weak interaction is "weak" in terms of intensity. The weak interaction has a very short effective range (around to  m (0.01 to 0.1 fm)). At distances around  meters (0.001 fm), the weak interaction has an intensity of a similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. Scaled up by just one and a half orders of magnitude, at distances of around 3 m, the weak interaction becomes 10,000 times weaker. The weak interaction affects all the fermions of the Standard Model, as well as the Higgs boson; neutrinos interact only through gravity and the weak interaction. The weak interaction does not produce bound states, nor does it involve binding energy something that gravity does on an astronomical scale, the electromagnetic force does at the molecular and atomic levels, and the strong nuclear force does only at the subatomic level, inside of nuclei. Its most noticeable effect is due to its first unique feature: The charged weak interaction causes flavour change. For example, a neutron is heavier than a proton (its partner nucleon) and can decay into a proton by changing the flavour (type) of one of its two down quarks to an up quark. Neither the strong interaction nor electromagnetism permit flavour changing, so this can only proceed by weak decay; without weak decay, quark properties such as strangeness and charm (associated with the strange quark and charm quark, respectively) would also be conserved across all interactions. All mesons are unstable because of weak decay. In the process known as beta decay, a down quark in the neutron can change into an up quark by emitting a virtual  boson, which then decays into an electron and an electron antineutrino. Another example is electron capture a common variant of radioactive decay wherein a proton and an electron within an atom interact and are changed to a neutron (an up quark is changed to a down quark), and an electron neutrino is emitted. 
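As a concrete check on the flavour-changing examples just given, the short Python sketch below tallies electric charge, baryon number, and lepton number on both sides of beta-minus decay and electron capture. The particle assignments are the standard ones; the code is a minimal illustration, not drawn from any cited source.

# (electric charge, baryon number, lepton number) for the particles involved
props = {
    "n":         (0, 1, 0),    # neutron (udd)
    "p":         (+1, 1, 0),   # proton (uud)
    "e-":        (-1, 0, 1),   # electron
    "nu_e":      (0, 0, 1),    # electron neutrino
    "anti_nu_e": (0, 0, -1),   # electron antineutrino
}

def totals(names):
    # sum each quantum number over a list of particles
    return tuple(sum(props[n][i] for n in names) for i in range(3))

# Beta-minus decay: n -> p + e- + anti_nu_e
assert totals(["n"]) == totals(["p", "e-", "anti_nu_e"])
# Electron capture: p + e- -> n + nu_e
assert totals(["p", "e-"]) == totals(["n", "nu_e"])
print("charge, baryon number and lepton number all balance")

Both assertions pass, which is exactly what allows the weak interaction to change quark flavour (down to up, or up to down) while still conserving these additive quantum numbers.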
Due to the large masses of the W bosons, particle transformations or decays (e.g., flavour change) that depend on the weak interaction typically occur much more slowly than transformations or decays that depend only on the strong or electromagnetic forces. For example, a neutral pion decays electromagnetically, and so has a life of only about  seconds. In contrast, a charged pion can only decay through the weak interaction, and so lives about  seconds, or a hundred million times longer than a neutral pion. A particularly extreme example is the weak-force decay of a free neutron, which takes about 15 minutes. Weak isospin and weak hypercharge All particles have a property called weak isospin (symbol ), which serves as an additive quantum number that restricts how the particle can interact with the of the weak force. Weak isospin plays the same role in the weak interaction with as electric charge does in electromagnetism, and color charge in the strong interaction; a different number with a similar name, weak charge, discussed below, is used for interactions with the . All left-handed fermions have a weak isospin value of either or ; all right-handed fermions have 0 isospin. For example, the up quark has and the down quark has . A quark never decays through the weak interaction into a quark of the same : Quarks with a of only decay into quarks with a of and conversely. In any given strong, electromagnetic, or weak interaction, weak isospin is conserved: The sum of the weak isospin numbers of the particles entering the interaction equals the sum of the weak isospin numbers of the particles exiting that interaction. For example, a (left-handed) with a weak isospin of +1 normally decays into a (with ) and a (as a right-handed antiparticle, ). For the development of the electroweak theory, another property, weak hypercharge, was invented, defined as where is the weak hypercharge of a particle with electrical charge (in elementary charge units) and weak isospin . Weak hypercharge is the generator of the U(1) component of the electroweak gauge group; whereas some particles have a weak isospin of zero, all known spin- particles have a non-zero weak hypercharge. Interaction types There are two types of weak interaction (called vertices). The first type is called the "charged-current interaction" because the weakly interacting fermions form a current with total electric charge that is nonzero. The second type is called the "neutral-current interaction" because the weakly interacting fermions form a current with total electric charge of zero. It is responsible for the (rare) deflection of neutrinos. The two types of interaction follow different selection rules. This naming convention is often misunderstood to label the electric charge of the and bosons, however the naming convention predates the concept of the mediator bosons, and clearly (at least in name) labels the charge of the current (formed from the fermions), not necessarily the bosons. 
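Referring back to the weak isospin and weak hypercharge defined above: in the convention commonly used for the electroweak theory, the weak hypercharge of a particle is Y_W = 2(Q − T_3), with Q the electric charge in elementary-charge units and T_3 the weak isospin. The short Python sketch below evaluates this relation for a few left-handed fermions; the assignments are standard values assumed here for illustration.

# (electric charge Q, weak isospin T3) for some left-handed fermions
left_handed = {
    "up quark":          (+2/3, +1/2),
    "down quark":        (-1/3, -1/2),
    "electron neutrino": (0.0,  +1/2),
    "electron":          (-1.0, -1/2),
}

def weak_hypercharge(Q, T3):
    return 2 * (Q - T3)   # conventional relation Y_W = 2(Q - T3)

for name, (Q, T3) in left_handed.items():
    print(f"{name:18s} Q = {Q:+.3f}  T3 = {T3:+.1f}  Y_W = {weak_hypercharge(Q, T3):+.3f}")

Members of the same left-handed doublet come out with equal hypercharge (+1/3 for the quark doublet, −1 for the lepton doublet), which is why the quantity is useful: it captures the part of the electric charge that is shared across a doublet and is unchanged when the charged-current interaction swaps the two doublet partners.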
Charged-current interaction In one type of charged current interaction, a charged lepton (such as an electron or a muon, having a charge of −1) can absorb a  boson (a particle with a charge of +1) and be thereby converted into a corresponding neutrino (with a charge of 0), where the type ("flavour") of neutrino (electron , muon , or tau ) is the same as the type of lepton in the interaction, for example: Similarly, a down-type quark (, , or , with a charge of ) can be converted into an up-type quark (, , or , with a charge of ), by emitting a  boson or by absorbing a  boson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a  boson, or absorb a  boson, and thereby be converted into a down-type quark, for example: The W boson is unstable so will rapidly decay, with a very short lifetime. For example: Decay of a W boson to other products can happen, with varying probabilities. In the so-called beta decay of a neutron (see picture, above), a down quark within the neutron emits a virtual boson and is thereby converted into an up quark, converting the neutron into a proton. Because of the limited energy involved in the process (i.e., the mass difference between the down quark and the up quark), the virtual boson can only carry sufficient energy to produce an electron and an electron-antineutrino – the two lowest-possible masses among its prospective decay products. At the quark level, the process can be represented as: Neutral-current interaction In neutral current interactions, a quark or a lepton (e.g., an electron or a muon) emits or absorbs a neutral boson. For example: Like the  bosons, the  boson also decays rapidly, for example: Unlike the charged-current interaction, whose selection rules are strictly limited by chirality, electric charge, weak isospin, the neutral-current interaction can cause any two fermions in the standard model to deflect: Either particles or anti-particles, with any electric charge, and both left- and right-chirality, although the strength of the interaction differs. The quantum number weak charge () serves the same role in the neutral current interaction with the that electric charge (, with no subscript) does in the electromagnetic interaction: It quantifies the vector part of the interaction. Its value is given by: Since the weak mixing angle , the parenthetic expression , with its value varying slightly with the momentum difference (called "running") between the particles involved. Hence since by convention , and for all fermions involved in the weak interaction . The weak charge of charged leptons is then close to zero, so these mostly interact with the  boson through the axial coupling. Electroweak theory The Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction. This theory was developed around 1968 by Sheldon Glashow, Abdus Salam, and Steven Weinberg, and they were awarded the 1979 Nobel Prize in Physics for their work. The Higgs mechanism provides an explanation for the presence of three massive gauge bosons (, , , the three carriers of the weak interaction), and the photon (, the massless gauge boson that carries the electromagnetic interaction). 
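Referring back to the weak charge that quantifies the vector part of the neutral-current interaction: in the standard parameterization it can be written Q_W = 2T_3 − 4Q sin²θ_W, which is what makes the weak charge of charged leptons nearly vanish. The Python snippet below evaluates it for the electron and, by summing quark contributions, for the proton and neutron; both the formula and the rounded weak mixing angle are standard-model values assumed here for illustration rather than taken from the text above.

sin2_theta_W = 0.231   # rounded low-energy value of sin^2(theta_W), assumed for illustration

def weak_charge(T3, Q):
    # vector coupling of a fermion to the Z boson in the usual normalization
    return 2 * T3 - 4 * Q * sin2_theta_W

print("electron Q_W ~", round(weak_charge(-0.5, -1.0), 3))        # ~ -0.08, close to zero
proton_QW  = 2 * weak_charge(+0.5, +2/3) + weak_charge(-0.5, -1/3)   # uud
neutron_QW = weak_charge(+0.5, +2/3) + 2 * weak_charge(-0.5, -1/3)   # udd
print("proton   Q_W ~", round(proton_QW, 3))                      # ~ 1 - 4 sin^2(theta_W)
print("neutron  Q_W ~", round(neutron_QW, 3))                     # ~ -1

Because sin²θ_W is close to 1/4, the electron's weak charge nearly cancels, so charged leptons couple to the Z boson mostly through the axial part, in line with the statement above.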
According to the electroweak theory, at very high energies, the universe has four components of the Higgs field whose interactions are carried by four massless scalar bosons forming a complex scalar Higgs field doublet. Likewise, there are four massless electroweak vector bosons, each similar to the photon. However, at low energies, this gauge symmetry is spontaneously broken down to the symmetry of electromagnetism, since one of the Higgs fields acquires a vacuum expectation value. Naïvely, the symmetry-breaking would be expected to produce three massless bosons, but instead those "extra" three Higgs bosons become incorporated into the three weak bosons, which then acquire mass through the Higgs mechanism. These three composite bosons are the , , and  bosons actually observed in the weak interaction. The fourth electroweak gauge boson is the photon () of electromagnetism, which does not couple to any of the Higgs fields and so remains massless. This theory has made a number of predictions, including a prediction of the masses of the and  bosons before their discovery and detection in 1983. On 4 July 2012, the CMS and the ATLAS experimental teams at the Large Hadron Collider independently announced that they had confirmed the formal discovery of a previously unknown boson of mass between 125 and , whose behaviour so far was "consistent with" a Higgs boson, while adding a cautious note that further data and analysis were needed before positively identifying the new boson as being a Higgs boson of some type. By 14 March 2013, a Higgs boson was tentatively confirmed to exist. In a speculative case where the electroweak symmetry breaking scale were lowered, the unbroken interaction would eventually become confining. Alternative models where becomes confining above that scale appear quantitatively similar to the Standard Model at lower energies, but dramatically different above symmetry breaking. Violation of symmetry The laws of nature were long thought to remain the same under mirror reflection. The results of an experiment viewed via a mirror were expected to be identical to the results of a separately constructed, mirror-reflected copy of the experimental apparatus watched through the mirror. This so-called law of parity conservation was known to be respected by classical gravitation, electromagnetism and the strong interaction; it was assumed to be a universal law. However, in the mid-1950s Chen-Ning Yang and Tsung-Dao Lee suggested that the weak interaction might violate this law. Chien Shiung Wu and collaborators in 1957 discovered that the weak interaction violates parity, earning Yang and Lee the 1957 Nobel Prize in Physics. Although the weak interaction was once described by Fermi's theory, the discovery of parity violation and renormalization theory suggested that a new approach was needed. In 1957, Robert Marshak and George Sudarshan and, somewhat later, Richard Feynman and Murray Gell-Mann proposed a V − A (vector minus axial vector or left-handed) Lagrangian for weak interactions. In this theory, the weak interaction acts only on left-handed particles (and right-handed antiparticles). Since the mirror reflection of a left-handed particle is right-handed, this explains the maximal violation of parity. The V − A theory was developed before the discovery of the Z boson, so it did not include the right-handed fields that enter in the neutral current interaction. However, this theory allowed a compound symmetry CP to be conserved. 
CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics. Unlike parity violation, CP violation occurs only in rare circumstances. Despite its limited occurrence under present conditions, it is widely believed to be the reason that there is much more matter than antimatter in the universe, and thus forms one of Andrei Sakharov's three conditions for baryogenesis. See also Weakless universe – the postulate that weak interactions are not anthropically necessary Gravity Strong interaction Electromagnetism Footnotes References Sources Technical For general readers External links Harry Cheung, The Weak Force @Fermilab Fundamental Forces @Hyperphysics, Georgia State University. Brian Koberlein, What is the weak force? Weak interaction Fundamental interactions
Weak interaction
[ "Physics" ]
3,865
[ "Physical phenomena", "Force", "Weak interaction", "Physical quantities", "Fundamental interactions", "Particle physics", "Nuclear physics" ]
33,662
https://en.wikipedia.org/wiki/Weak%20topology
In mathematics, weak topology is an alternative term for certain initial topologies, often on topological vector spaces or spaces of linear operators, for instance on a Hilbert space. The term is most commonly used for the initial topology of a topological vector space (such as a normed vector space) with respect to its continuous dual. The remainder of this article will deal with this case, which is one of the concepts of functional analysis. One may call subsets of a topological vector space weakly closed (respectively, weakly compact, etc.) if they are closed (respectively, compact, etc.) with respect to the weak topology. Likewise, functions are sometimes called weakly continuous (respectively, weakly differentiable, weakly analytic, etc.) if they are continuous (respectively, differentiable, analytic, etc.) with respect to the weak topology. History Starting in the early 1900s, David Hilbert and Marcel Riesz made extensive use of weak convergence. The early pioneers of functional analysis did not elevate norm convergence above weak convergence and oftentimes viewed weak convergence as preferable. In 1929, Banach introduced weak convergence for normed spaces and also introduced the analogous weak-* convergence. The weak topology is called in French and in German. The weak and strong topologies Let be a topological field, namely a field with a topology such that addition, multiplication, and division are continuous. In most applications will be either the field of complex numbers or the field of real numbers with the familiar topologies. Weak topology with respect to a pairing Both the weak topology and the weak* topology are special cases of a more general construction for pairings, which we now describe. The benefit of this more general construction is that any definition or result proved for it applies to both the weak topology and the weak* topology, thereby making redundant the need for many definitions, theorem statements, and proofs. This is also the reason why the weak* topology is also frequently referred to as the "weak topology"; because it is just an instance of the weak topology in the setting of this more general construction. Suppose is a pairing of vector spaces over a topological field (i.e. and are vector spaces over and is a bilinear map). Notation. For all , let denote the linear functional on defined by . Similarly, for all , let be defined by . Definition. The weak topology on induced by (and ) is the weakest topology on , denoted by or simply , making all maps continuous, as ranges over . The weak topology on is now automatically defined as described in the article Dual system. However, for clarity, we now repeat it. Definition. The weak topology on induced by (and ) is the weakest topology on , denoted by or simply , making all maps continuous, as ranges over . If the field has an absolute value , then the weak topology on is induced by the family of seminorms, , defined by for all and . This shows that weak topologies are locally convex. Assumption. We will henceforth assume that is either the real numbers or the complex numbers . Canonical duality We now consider the special case where is a vector subspace of the algebraic dual space of (i.e. a vector space of linear functionals on ). There is a pairing, denoted by or , called the canonical pairing whose bilinear map is the canonical evaluation map, defined by for all and . Note in particular that is just another way of denoting i.e. . Assumption. 
If is a vector subspace of the algebraic dual space of then we will assume that they are associated with the canonical pairing . In this case, the weak topology on (resp. the weak topology on ), denoted by (resp. by ) is the weak topology on (resp. on ) with respect to the canonical pairing . The topology is the initial topology of with respect to . If is a vector space of linear functionals on , then the continuous dual of with respect to the topology is precisely equal to . The weak and weak* topologies Let be a topological vector space (TVS) over , that is, is a vector space equipped with a topology so that vector addition and scalar multiplication are continuous. We call the topology that starts with the original, starting, or given topology (the reader is cautioned against using the terms "initial topology" and "strong topology" to refer to the original topology since these already have well-known meanings, so using them may cause confusion). We may define a possibly different topology on using the topological or continuous dual space , which consists of all linear functionals from into the base field that are continuous with respect to the given topology. Recall that is the canonical evaluation map defined by for all and , where in particular, . Definition. The weak topology on is the weak topology on with respect to the canonical pairing . That is, it is the weakest topology on making all maps continuous, as ranges over . Definition: The weak topology on is the weak topology on with respect to the canonical pairing . That is, it is the weakest topology on making all maps continuous, as ranges over . This topology is also called the weak* topology. We give alternative definitions below. Weak topology induced by the continuous dual space Alternatively, the weak topology on a TVS is the initial topology with respect to the family . In other words, it is the coarsest topology on X such that each element of remains a continuous function. A subbase for the weak topology is the collection of sets of the form where and is an open subset of the base field . In other words, a subset of is open in the weak topology if and only if it can be written as a union of (possibly infinitely many) sets, each of which is an intersection of finitely many sets of the form . From this point of view, the weak topology is the coarsest polar topology. Weak convergence The weak topology is characterized by the following condition: a net in converges in the weak topology to the element of if and only if converges to in or for all . In particular, if is a sequence in , then converges weakly to if as for all . In this case, it is customary to write or, sometimes, Other properties If is equipped with the weak topology, then addition and scalar multiplication remain continuous operations, and is a locally convex topological vector space. If is a normed space, then the dual space is itself a normed vector space by using the norm This norm gives rise to a topology, called the strong topology, on . This is the topology of uniform convergence. The uniform and strong topologies are generally different for other spaces of linear maps; see below. Weak-* topology The weak* topology is an important example of a polar topology. A space can be embedded into its double dual X** by Thus is an injective linear mapping, though not necessarily surjective (spaces for which this canonical embedding is surjective are called reflexive). The weak-* topology on is the weak topology induced by the image of . 
In other words, it is the coarsest topology such that the maps Tx, defined by from to the base field or remain continuous. Weak-* convergence A net in is convergent to in the weak-* topology if it converges pointwise: for all . In particular, a sequence of converges to provided that for all . In this case, one writes as . Weak-* convergence is sometimes called the simple convergence or the pointwise convergence. Indeed, it coincides with the pointwise convergence of linear functionals. Properties If is a separable (i.e. has a countable dense subset) locally convex space and H is a norm-bounded subset of its continuous dual space, then H endowed with the weak* (subspace) topology is a metrizable topological space. However, for infinite-dimensional spaces, the metric cannot be translation-invariant. If is a separable metrizable locally convex space then the weak* topology on the continuous dual space of is separable. Properties on normed spaces By definition, the weak* topology is weaker than the weak topology on . An important fact about the weak* topology is the Banach–Alaoglu theorem: if is normed, then the closed unit ball in is weak*-compact (more generally, the polar in of a neighborhood of 0 in is weak*-compact). Moreover, the closed unit ball in a normed space is compact in the weak topology if and only if is reflexive. In more generality, let be locally compact valued field (e.g., the reals, the complex numbers, or any of the p-adic number systems). Let be a normed topological vector space over , compatible with the absolute value in . Then in , the topological dual space of continuous -valued linear functionals on , all norm-closed balls are compact in the weak* topology. If is a normed space, a version of the Heine-Borel theorem holds. In particular, a subset of the continuous dual is weak* compact if and only if it is weak* closed and norm-bounded. This implies, in particular, that when is an infinite-dimensional normed space then the closed unit ball at the origin in the dual space of does not contain any weak* neighborhood of 0 (since any such neighborhood is norm-unbounded). Thus, even though norm-closed balls are compact, X* is not weak* locally compact. If is a normed space, then is separable if and only if the weak* topology on the closed unit ball of is metrizable, in which case the weak* topology is metrizable on norm-bounded subsets of . If a normed space has a dual space that is separable (with respect to the dual-norm topology) then is necessarily separable. If is a Banach space, the weak* topology is not metrizable on all of unless is finite-dimensional. Examples Hilbert spaces Consider, for example, the difference between strong and weak convergence of functions in the Hilbert space . Strong convergence of a sequence to an element means that as . Here the notion of convergence corresponds to the norm on . In contrast weak convergence only demands that for all functions (or, more typically, all f in a dense subset of such as a space of test functions, if the sequence {ψk} is bounded). For given test functions, the relevant notion of convergence only corresponds to the topology used in . For example, in the Hilbert space , the sequence of functions form an orthonormal basis. In particular, the (strong) limit of as does not exist. On the other hand, by the Riemann–Lebesgue lemma, the weak limit exists and is zero. 
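A quick numerical illustration of the Hilbert-space example just given may help. The sketch below assumes the orthonormal basis in question is the sine basis e_n(x) = √(2/π) sin(nx) of L²(0, π), one standard choice (the displayed formula did not survive into this text), and uses an arbitrary test function:

```python
# Weak but not strong convergence in L^2(0, pi): the inner products <e_n, f> tend to 0
# for any fixed f (Riemann-Lebesgue lemma), while the norms ||e_n|| stay equal to 1,
# so the sequence has weak limit 0 but no strong limit.
import numpy as np

x = np.linspace(0.0, np.pi, 20001)
dx = x[1] - x[0]
f = np.exp(-x) * (1.0 + x)          # an arbitrary fixed test function in L^2(0, pi)

def e(n):
    """n-th element of the orthonormal sine basis of L^2(0, pi)."""
    return np.sqrt(2.0 / np.pi) * np.sin(n * x)

for n in (1, 10, 100, 1000):
    inner = np.sum(e(n) * f) * dx                 # <e_n, f>  -> 0   (weak convergence)
    norm = np.sqrt(np.sum(e(n) ** 2) * dx)        # ||e_n||   -> 1   (no strong convergence)
    print(f"n = {n:5d}   <e_n, f> = {inner:+.5f}   ||e_n|| = {norm:.5f}")
```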
Distributions One normally obtains spaces of distributions by forming the strong dual of a space of test functions (such as the compactly supported smooth functions on ). In an alternative construction of such spaces, one can take the weak dual of a space of test functions inside a Hilbert space such as . Thus one is led to consider the idea of a rigged Hilbert space. Weak topology induced by the algebraic dual Suppose that is a vector space and X# is the algebraic dual space of (i.e. the vector space of all linear functionals on ). If is endowed with the weak topology induced by X# then the continuous dual space of is , every bounded subset of is contained in a finite-dimensional vector subspace of , every vector subspace of is closed and has a topological complement. Operator topologies If and are topological vector spaces, the space of continuous linear operators may carry a variety of different possible topologies. The naming of such topologies depends on the kind of topology one is using on the target space to define operator convergence . There are, in general, a vast array of possible operator topologies on , whose naming is not entirely intuitive. For example, the strong operator topology on is the topology of pointwise convergence. For instance, if is a normed space, then this topology is defined by the seminorms indexed by : More generally, if a family of seminorms Q defines the topology on , then the seminorms on defining the strong topology are given by indexed by and . In particular, see the weak operator topology and weak* operator topology. See also Eberlein compactum, a compact set in the weak topology Weak convergence (Hilbert space) Weak-star operator topology Weak convergence of measures Topologies on spaces of linear maps Topologies on the set of operators on a Hilbert space Vague topology References Bibliography General topology Topology Topology of function spaces
Weak topology
[ "Physics", "Mathematics" ]
2,583
[ "General topology", "Topology", "Space", "Geometry", "Spacetime" ]
33,691
https://en.wikipedia.org/wiki/Wave%20equation
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics. This article focuses on waves in classical physics. Quantum physics uses an operator-based wave equation often as a relativistic wave equation. Introduction The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation describing waves in scalars by scalar functions of a time variable (a variable representing time) and one or more spatial variables (variables representing a position in a space under discussion). At the same time, there are vector wave equations describing waves in vectors such as waves for an electrical field, magnetic field, and magnetic vector potential and elastic waves. By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for as the representation of an electric vector field wave in the absence of wave sources, each coordinate axis component (i = x, y, z) must satisfy the scalar wave equation. Other scalar wave equation solutions are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions. The scalar wave equation is where is a fixed non-negative real coefficient representing the propagation speed of the wave is a scalar field representing the displacement or, more generally, the conserved quantity (e.g. pressure or density) , and are the three spatial coordinates and being the time coordinate. The equation states that, at any given point, the second derivative of with respect to time is proportional to the sum of the second derivatives of with respect to space, with the constant of proportionality being the square of the speed of the wave. Using notations from vector calculus, the wave equation can be written compactly as or where the double subscript denotes the second-order partial derivative with respect to time, is the Laplace operator and the d'Alembert operator, defined as: A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed . This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics. 
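The displayed equation described in words above did not survive into this text. In the standard notation, with u the scalar field and c ≥ 0 the propagation speed, it reads as follows, together with one common sign convention for the d'Alembert operator:

```latex
\frac{\partial^2 u}{\partial t^2}
 = c^2 \left( \frac{\partial^2 u}{\partial x^2}
            + \frac{\partial^2 u}{\partial y^2}
            + \frac{\partial^2 u}{\partial z^2} \right),
\qquad\text{i.e.}\qquad
u_{tt} = c^2 \nabla^2 u
\quad\Longleftrightarrow\quad
\Box u = 0,
\qquad
\Box := \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 .
```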
The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments. Wave equation in one space dimension The wave equation in one spatial dimension can be written as follows: This equation is typically described as having only one spatial dimension , because the only other independent variable is the time . Derivation The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension. Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress). Hooke's law The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass interconnected with massless springs of length . The springs have a spring constant of : Here the dependent variable measures the distance from the equilibrium of the mass situated at , so that essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The resulting force exerted on the mass at the location is: By equating the latter equation with the equation of motion for the weight at the location is obtained: If the array of weights consists of weights spaced evenly over the length of total mass , and the total spring constant of the array , we can write the above equation as Taking the limit and assuming smoothness, one gets which is from the definition of a second derivative. is the square of the propagation speed in this particular case. Stress pulse in a bar In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness given by where is the cross-sectional area, and is the Young's modulus of the material. The wave equation becomes is equal to the volume of the bar, and therefore where is the density of the material. The wave equation reduces to The speed of a stress wave in a bar is therefore . General solution Algebraic approach For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables changes the wave equation into which leads to the general solution In other words, the solution is the sum of a right-traveling function and a left-traveling function . "Traveling" means that the shape of these individual arbitrary functions with respect to stays constant, however, the functions are translated left and right with time at the speed . This was derived by Jean le Rond d'Alembert. 
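For the initial-value problem with u(x, 0) = f(x) and u_t(x, 0) = g(x), the formula named after d'Alembert (referred to in the next paragraph, but whose displayed form is missing from this text) is, in the standard statement:

```latex
u(x, t) = \frac{f(x - c t) + f(x + c t)}{2}
        + \frac{1}{2c} \int_{x - c t}^{\,x + c t} g(s)\, \mathrm{d}s .
```

A minimal finite-difference check of the traveling-wave picture follows, written as an illustrative sketch rather than anything taken from the article; the grid, pulse shape and end time are arbitrary choices:

```python
# Leapfrog integration of u_tt = c^2 u_xx with zero initial velocity, compared with
# the corresponding d'Alembert solution u(x, t) = [f(x - ct) + f(x + ct)] / 2.
import numpy as np

c, L, N = 1.0, 10.0, 2000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.5 * dx / c                              # CFL number 0.5 < 1: stable scheme

def pulse(z):
    """Smooth initial bump f(z) centred in the domain."""
    return np.exp(-((z - L / 2.0) ** 2) / 0.1)

r2 = (c * dt / dx) ** 2
u_prev = pulse(x)                              # u(x, 0) = f(x)
u_curr = u_prev.copy()                         # Taylor start for u_t(x, 0) = 0
u_curr[1:-1] += 0.5 * r2 * (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2])

steps = int(round(2.0 / dt))                   # advance to t ~ 2 (pulse still far from the ends)
for _ in range(steps - 1):
    u_next = np.zeros_like(u_curr)             # fixed (zero) ends
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next

t = steps * dt
exact = 0.5 * (pulse(x - c * t) + pulse(x + c * t))
print("t =", round(t, 3),
      "  max |numerical - d'Alembert| =", float(np.max(np.abs(u_curr - exact))))
```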
Another way to arrive at this result is to factor the wave equation using two first-order differential operators: Then, for our original equation, we can define and find that we must have This advection equation can be solved by interpreting it as telling us that the directional derivative of in the direction is 0. This means that the value of is constant on characteristic lines of the form , and thus that must depend only on , that is, have the form . Then, to solve the first (inhomogenous) equation relating to , we can note that its homogenous solution must be a function of the form , by logic similar to the above. Guessing a particular solution of the form , we find that Expanding out the left side, rearranging terms, then using the change of variables simplifies the equation to This means we can find a particular solution of the desired form by integration. Thus, we have again shown that obeys . For an initial-value problem, the arbitrary functions and can be determined to satisfy initial conditions: The result is d'Alembert's formula: In the classical sense, if , and , then . However, the waveforms and may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left. The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components. Plane-wave eigenmodes Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency , so that the temporal part of the wave function takes the form , and the amplitude is a function of the spatial variable , giving a separation of variables for the wave function: This produces an ordinary differential equation for the spatial part : Therefore, which is precisely an eigenvalue equation for , hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions with wave number . The total wave function for this eigenmode is then the linear combination where complex numbers , depend in general on any initial and boundary conditions of the problem. Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor so that a full solution can be decomposed into an eigenmode expansion: or in terms of the plane waves, which is exactly in the same form as in the algebraic approach. Functions are known as the Fourier component and are determined by initial and boundary conditions. This is a so-called frequency-domain method, alternative to direct time-domain propagations, such as FDTD method, of the wave packet , which is complete for representing waves in absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of . 
The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source. Vectorial wave equation in three space dimensions The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. If the medium has a modulus of elasticity that is homogeneous (i.e. independent of ) within the volume element, then its stress tensor is given by , for a vectorial elastic deflection . The local equilibrium of: the tension force due to deflection , and the inertial force caused by the local acceleration can be written as By merging density and elasticity module the sound velocity results (material law). After insertion, follows the well-known governing wave equation for a homogeneous medium: (Note: Instead of vectorial only scalar can be used, i.e. waves are travelling only along the axis, and the scalar wave equation follows as .) The above vectorial partial differential equation of the 2nd order delivers two mutually independent solutions. From the quadratic velocity term can be seen that there are two waves travelling in opposite directions and are possible, hence results the designation “two-way wave equation”. It can be shown for plane longitudinal wave propagation that the synthesis of two one-way wave equations leads to a general two-way wave equation. For special two-wave equation with the d'Alembert operator results: For this simplifies to Therefore, the vectorial 1st-order one-way wave equation with waves travelling in a pre-defined propagation direction results as Scalar wave equation in three space dimensions A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then be also used to obtain the same solution in two space dimensions. Spherical waves To obtain a solution with constant frequencies, apply the Fourier transform which transforms the wave equation into an elliptic partial differential equation of the form: This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as: The angular part of the solution take the form of spherical harmonics and the radial function satisfies: independent of , with . Substituting transforms the equation into which is the Bessel equation. Example Consider the case . Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., . In this case, the wave equation reduces to or This equation can be rewritten as where the quantity satisfies the one-dimensional wave equation. Therefore, there are solutions in the form where and are general solutions to the one-dimensional wave equation and can be interpreted as respectively an outgoing and incoming spherical waves. The outgoing wave can be generated by a point source, and they make possible sharp signals whose form is altered only by a decrease in amplitude as increases (see an illustration of a spherical wave on the top right). Such waves exist only in cases of space with odd dimensions. 
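Written out, the spherically symmetric solutions described in the example above take the following form. This is a reconstruction in the notation of the one-dimensional solution; the substitution v = r u turns the radial equation into the one-dimensional wave equation for v:

```latex
u(r, t) = \frac{1}{r}\Bigl[\, F(r - c t) + G(r + c t) \,\Bigr]
% F: outgoing spherical wave,  G: incoming spherical wave;
% the waveform is preserved while the amplitude decays like 1/r.
```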
For physical examples of solutions to the 3D wave equation that possess angular dependence, see dipole radiation. Monochromatic spherical wave Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions. Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency , then the transformed function has simply plane-wave solutions: or From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude drops at the rate proportional to , an example of the inverse-square law. Solution of a general initial-value problem The wave equation is linear in and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let be an arbitrary function of three independent variables, and let the spherical wave form be a delta function. Let a family of spherical waves have center at , and let be the radial distance from that point. Thus If is a superposition of such waves with weighting function , then the denominator is a convenience. From the definition of the delta function, may also be written as where , , and are coordinates on the unit sphere , and is the area element on . This result has the interpretation that is times the mean value of on a sphere of radius centered at : It follows that The mean value is an even function of , and hence if then These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point , given depends only on the data on the sphere of radius that is intersected by the light cone drawn backwards from . It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is only true for odd numbers of space dimension, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure. Scalar wave equation in two space dimensions In two space dimensions, the wave equation is We can use the three-dimensional theory to solve this problem if we regard as a function in three dimensions that is independent of the third dimension. If then the three-dimensional solution formula becomes where and are the first two coordinates on the unit sphere, and is the area element on the sphere. This integral may be rewritten as a double integral over the disc with center and radius It is apparent that the solution at depends not only on the data on the light cone where but also on data that are interior to that cone. Scalar wave equation in general dimension and Kirchhoff's formulae We want to find solutions to for with and . Odd dimensions Assume is an odd integer, and , for . Let and let Then , in , , . Even dimensions Assume is an even integer and , , for . Let and let then in Green's function Consider the inhomogeneous wave equation in dimensionsBy rescaling time, we can set wave speed . Since the wave equation has order 2 in time, there are two impulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity . 
The effect of inflicting a velocity impulse is to suddenly change the wave displacement . For acceleration impulse, where is the Dirac delta function. The solution to this case is called the Green's function for the wave equation. For velocity impulse, , so if we solve the Green function , the solution for this case is just . Duhamel's principle The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case. Given the Green function , and initial conditions , the solution to the homogeneous wave equation iswhere the asterisk is convolution in space. More explicitly, For the inhomogeneous case, the solution has one additional term by convolution over spacetime: Solution by Fourier transform By a Fourier transform,The term can be integrated by the residue theorem. It would require us to perturb the integral slightly either by or by , because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution. The forward solution givesThe integral can be solved by analytically continuing the Poisson kernel, givingwhere is half the surface area of a -dimensional hypersphere. Solutions in particular dimensions We can relate the Green's function in dimensions to the Green's function in dimensions. Lowering dimensions Given a function and a solution of a differential equation in dimensions, we can trivially extend it to dimensions by setting the additional dimensions to be constant: Since the Green's function is constructed from and , the Green's function in dimensions integrates to the Green's function in dimensions: Raising dimensions The Green's function in dimensions can be related to the Green's function in dimensions. By spherical symmetry, Integrating in polar coordinates, where in the last equality we made the change of variables . Thus, we obtain the recurrence relation Solutions in D = 1, 2, 3 When , the integrand in the Fourier transform is the sinc function where is the sign function and is the unit step function. One solution is the forward solution, the other is the backward solution. The dimension can be raised to give the caseand similarly for the backward solution. This can be integrated down by one dimension to give the case Wavefronts and wakes In case, the Green's function solution is the sum of two wavefronts moving in opposite directions. In odd dimensions, the forward solution is nonzero only at . As the dimensions increase, the shape of wavefront becomes increasingly complex, involving higher derivatives of the Dirac delta function. For example,where , and the wave speed is restored. In even dimensions, the forward solution is nonzero in , the entire region behind the wavefront becomes nonzero, called a wake. The wake has equation:The wavefront itself also involves increasingly higher derivatives of the Dirac delta function. This means that a general Huygens' principle – the wave displacement at a point in spacetime depends only on the state at points on characteristic rays passing – only holds in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but distorted in even dimensions. Hadamard's conjecture states that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant. 
It is not strictly correct, but it is correct for certain families of coefficients Problems with boundaries One space dimension Reflection and transmission at the boundary of two media For an incident wave traveling from one medium (where the wave speed is ) to another medium (where the wave speed is ), one part of the wave will transmit into the second medium, while another part reflects back into the other direction and stays in the first medium. The amplitude of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary. Consider the component of the incident wave with an angular frequency of , which has the waveform At , the incident reaches the boundary between the two media at . Therefore, the corresponding reflected wave and the transmitted wave will have the waveforms The continuity condition at the boundary is This gives the equations and we have the reflectivity and transmissivity When , the reflected wave has a reflection phase change of 180°, since . The energy conservation can be verified by The above discussion holds true for any component, regardless of its angular frequency of . The limiting case of corresponds to a "fixed end" that does not move, whereas the limiting case of corresponds to a "free end". The Sturm–Liouville formulation A flexible string that is stretched between two points and satisfies the wave equation for and . On the boundary points, may satisfy a variety of boundary conditions. A general form that is appropriate for applications is where and are non-negative. The case where is required to vanish at an endpoint (i.e. "fixed end") is the limit of this condition when the respective or approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form A consequence is that The eigenvalue must be determined so that there is a non-trivial solution of the boundary-value problem This is a special case of the general problem of Sturm–Liouville theory. If and are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for and can be obtained from expansion of these functions in the appropriate trigonometric series. Several space dimensions The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain in -dimensional space, with boundary . Then the wave equation is to be satisfied if is in , and . On the boundary of , the solution shall satisfy where is the unit outward normal to , and is a non-negative function defined on . The case where vanishes on is a limiting case for approaching infinity. The initial conditions are where and are defined in . This problem may be solved by expanding and in the eigenfunctions of the Laplacian in , which satisfy the boundary conditions. Thus the eigenfunction satisfies in , and on . In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary . If is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle , multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation. If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order. 
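As a concrete illustration of the drumhead eigenmodes described above, the sketch below computes the lowest eigenfrequencies of a circular membrane with a fixed edge (the Dirichlet limit of the boundary condition discussed above). The radius, wave speed and number of modes are arbitrary choices made only for the example:

```python
# Fixed-edge circular membrane of radius a: eigenfunctions are
#   J_n(alpha_nk * r / a) * {cos, sin}(n * theta),
# where alpha_nk is the k-th positive zero of the Bessel function J_n,
# and the eigenfrequencies are omega_nk = c * alpha_nk / a.
from scipy.special import jn_zeros

a = 1.0          # membrane radius (assumed units)
c = 1.0          # wave speed in the membrane (assumed units)

alpha_01 = jn_zeros(0, 1)[0]                 # fundamental mode: first zero of J_0
print("  n  k   alpha_nk    omega_nk / omega_01")
for n in range(3):                           # angular index n = 0, 1, 2
    for k, alpha in enumerate(jn_zeros(n, 3), start=1):
        omega = c * alpha / a
        print(f"{n:3d} {k:2d}   {alpha:8.4f}    {omega / (c * alpha_01 / a):6.4f}")
```

Unlike those of an ideal string, the frequency ratios printed here are not integers, which is one reason a drumhead lacks a harmonic overtone series.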
Inhomogeneous wave equation in one dimension The inhomogeneous wave equation in one dimension is with initial conditions The function is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism. One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point , the value of depends only on the values of and and the values of the function between and . This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is , then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time. In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point as . Suppose we integrate the inhomogeneous wave equation over this region: To simplify this greatly, we can use Green's theorem to simplify the left side to get the following: The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute: In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus . For the other two sides of the region, it is worth noting that is a constant, namely , where the sign is chosen appropriately. Using this, we can get the relation , again choosing the right sign: And similarly for the final boundary segment: Adding the three results together and putting them back in the original integral gives Solving for , we arrive at In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source. Further generalizations Elastic waves The elastic wave equation (also known as the Navier–Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion: where: and are the so-called Lamé parameters describing the elastic properties of the medium, is the density, is the source function (driving force), is the displacement vector. By using , the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation. Note that in the elastic wave equation, both force and displacement are vector quantities. 
Thus, this equation is sometimes known as the vector wave equation. As an aid to understanding, the reader will observe that if and are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field , which has only transverse waves. Dispersion relation In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation where is the angular frequency, and is the wavevector describing plane-wave solutions. For light waves, the dispersion relation is , but in general, the constant speed gets replaced by a variable phase velocity: See also Acoustic attenuation Acoustic wave equation Bateman transform Electromagnetic wave equation Helmholtz equation Inhomogeneous electromagnetic wave equation Laplace operator Mathematics of oscillation Maxwell's equations Schrödinger equation Standing wave Vibrations of a circular membrane Wheeler–Feynman absorber theory Notes References Flint, H.T. (1929) "Wave Mechanics" Methuen & Co. Ltd. London. R. Courant, D. Hilbert, Methods of Mathematical Physics, vol II. Interscience (Wiley) New York, 1962. "Linear Wave Equations", EqWorld: The World of Mathematical Equations. "Nonlinear Wave Equations", EqWorld: The World of Mathematical Equations. William C. Lane, "MISN-0-201 The Wave Equation and Its Solutions", Project PHYSNET. External links Nonlinear Wave Equations by Stephen Wolfram and Rob Knapp, Nonlinear Wave Equation Explorer by Wolfram Demonstrations Project. Mathematical aspects of wave equations are discussed on the Dispersive PDE Wiki . Graham W Griffiths and William E. Schiesser (2009). Linear and nonlinear waves. Scholarpedia, 4(7):4308. doi:10.4249/scholarpedia.4308 Equations of physics Hyperbolic partial differential equations Wave mechanics Functions of space and time
Wave equation
[ "Physics", "Mathematics" ]
5,911
[ "Physical phenomena", "Equations of physics", "Functions of space and time", "Mathematical objects", "Classical mechanics", "Equations", "Waves", "Wave mechanics", "Spacetime" ]
33,721
https://en.wikipedia.org/wiki/Weakly%20interacting%20massive%20particle
Weakly interacting massive particles (WIMPs) are hypothetical particles that are one of the proposed candidates for dark matter. There exists no formal definition of a WIMP, but broadly, it is an elementary particle which interacts via gravity and any other force (or forces) which is as weak as or weaker than the weak nuclear force, but also non-vanishing in strength. Many WIMP candidates are expected to have been produced thermally in the early Universe, similarly to the particles of the Standard Model according to Big Bang cosmology, and usually will constitute cold dark matter. Obtaining the correct abundance of dark matter today via thermal production requires a self-annihilation cross section of , which is roughly what is expected for a new particle in the 100 GeV mass range that interacts via the electroweak force. Experimental efforts to detect WIMPs include the search for products of WIMP annihilation, including gamma rays, neutrinos and cosmic rays in nearby galaxies and galaxy clusters; direct detection experiments designed to measure the collision of WIMPs with nuclei in the laboratory, as well as attempts to directly produce WIMPs in colliders, such as the Large Hadron Collider at CERN. Because supersymmetric extensions of the Standard Model of particle physics readily predict a new particle with these properties, this apparent coincidence is known as the "WIMP miracle", and a stable supersymmetric partner has long been a prime WIMP candidate. However, in the early 2010s, results from direct-detection experiments and the lack of evidence for supersymmetry at the Large Hadron Collider (LHC) experiment have cast doubt on the simplest WIMP hypothesis. Theoretical framework and properties WIMP-like particles are predicted by R-parity-conserving supersymmetry, a type of extension to the Standard Model of particle physics, although none of the large number of new particles in supersymmetry have been observed. WIMP-like particles are also predicted by universal extra dimension and little Higgs theories. The main theoretical characteristics of a WIMP are: Interactions only through the weak nuclear force and gravity, or possibly other interactions with cross-sections no higher than the weak scale; Large mass compared to standard particles (WIMPs with sub-GeV masses may be considered to be light dark matter). Because of their lack of electromagnetic interaction with normal matter, WIMPs would be invisible through normal electromagnetic observations. Because of their large mass, they would be relatively slow moving and therefore "cold". Their relatively low velocities would be insufficient to overcome the mutual gravitational attraction, and as a result, WIMPs would tend to clump together. WIMPs are considered one of the main candidates for cold dark matter, the others being massive compact halo objects (MACHOs) and axions. These names were deliberately chosen for contrast, with MACHOs named later than WIMPs. In contrast to WIMPs, there are no known stable particles within the Standard Model of particle physics that have the properties of MACHOs. The particles that have little interaction with normal matter, such as neutrinos, are very light, and hence would be fast moving, or "hot". As dark matter A decade after the dark matter problem was established in the 1970s, WIMPs were suggested as a potential solution to the issue. Although the existence of WIMPs in nature is still hypothetical, it would resolve a number of astrophysical and cosmological problems related to dark matter. 
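A rough numerical sketch of the "WIMP miracle" described above uses the often-quoted order-of-magnitude relation between the relic abundance and the annihilation cross section. The prefactor below is the standard rule-of-thumb value and the cross section is the canonical thermal one; neither number is taken from this text, and a real calculation depends on the WIMP mass and the degrees of freedom at freeze-out:

```python
# Rule-of-thumb relic abundance for a thermally produced WIMP:
#   Omega_chi * h^2  ~  3e-27 cm^3 s^-1 / <sigma v>
# A cross section near the weak scale, <sigma v> ~ 3e-26 cm^3/s, lands close to the
# observed dark-matter density Omega_DM * h^2 ~ 0.12, which is the "WIMP miracle".

def relic_abundance(sigma_v_cm3_s):
    """Approximate Omega*h^2 for a thermal relic with annihilation cross section <sigma v>."""
    return 3.0e-27 / sigma_v_cm3_s

for sigma_v in (3.0e-25, 3.0e-26, 3.0e-27):
    print(f"<sigma v> = {sigma_v:.1e} cm^3/s   ->   Omega h^2 ~ {relic_abundance(sigma_v):.2f}")
```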
There is consensus today among astronomers that most of the mass in the Universe is indeed dark. Simulations of a universe full of cold dark matter produce galaxy distributions that are roughly similar to what is observed. By contrast, hot dark matter would smear out the large-scale structure of galaxies and thus is not considered a viable cosmological model. WIMPs fit the model of a relic dark matter particle from the early Universe, when all particles were in a state of thermal equilibrium. For sufficiently high temperatures, such as those existing in the early Universe, the dark matter particle and its antiparticle would have been both forming from and annihilating into lighter particles. As the Universe expanded and cooled, the average thermal energy of these lighter particles decreased and eventually became insufficient to form a dark matter particle-antiparticle pair. The annihilation of the dark matter particle-antiparticle pairs, however, would have continued, and the number density of dark matter particles would have begun to decrease exponentially. Eventually, however, the number density would become so low that the dark matter particle and antiparticle interaction would cease, and the number of dark matter particles would remain (roughly) constant as the Universe continued to expand. Particles with a larger interaction cross section would continue to annihilate for a longer period of time, and thus would have a smaller number density when the annihilation interaction ceases. Based on the current estimated abundance of dark matter in the Universe, if the dark matter particle is such a relic particle, the interaction cross section governing the particle-antiparticle annihilation can be no larger than the cross section for the weak interaction. If this model is correct, the dark matter particle would have the properties of the WIMP. Indirect detection Because WIMPs may only interact through gravitational and weak forces, they would be extremely difficult to detect. However, there are many experiments underway to attempt to detect WIMPs both directly and indirectly. Indirect detection refers to the observation of annihilation or decay products of WIMPs far away from Earth. Indirect detection efforts typically focus on locations where WIMP dark matter is thought to accumulate the most: in the centers of galaxies and galaxy clusters, as well as in the smaller satellite galaxies of the Milky Way. These are particularly useful since they tend to contain very little baryonic matter, reducing the expected background from standard astrophysical processes. Typical indirect searches look for excess gamma rays, which are predicted both as final-state products of annihilation, or are produced as charged particles interact with ambient radiation via inverse Compton scattering. The spectrum and intensity of a gamma ray signal depends on the annihilation products, and must be computed on a model-by-model basis. Experiments that have placed bounds on WIMP annihilation, via the non-observation of an annihilation signal, include the Fermi-LAT gamma ray telescope and the VERITAS ground-based gamma ray observatory. Although the annihilation of WIMPs into Standard Model particles also predicts the production of high-energy neutrinos, their interaction rate is thought to be too low to reliably detect a dark matter signal at present. 
Future observations from the IceCube observatory in Antarctica may be able to differentiate WIMP-produced neutrinos from standard astrophysical neutrinos; however, by 2014, only 37 cosmological neutrinos had been observed, making such a distinction impossible. Another type of indirect WIMP signal could come from the Sun. Halo WIMPs may, as they pass through the Sun, interact with solar protons, helium nuclei as well as heavier elements. If a WIMP loses enough energy in such an interaction to fall below the local escape velocity, it would theoretically not have enough energy to escape the gravitational pull of the Sun and would remain gravitationally bound. As more and more WIMPs thermalize inside the Sun, they would begin to annihilate with each other, theoretically forming a variety of particles, including high-energy neutrinos. These neutrinos may then travel to the Earth to be detected in one of the many neutrino telescopes, such as the Super-Kamiokande detector in Japan. The number of neutrino events detected per day at these detectors depends on the properties of the WIMP, as well as on the mass of the Higgs boson. Similar experiments are underway to attempt to detect neutrinos from WIMP annihilations within the Earth and from within the galactic center. Direct detection Direct detection refers to the observation of the effects of a WIMP-nucleus collision as the dark matter passes through a detector in an Earth laboratory. While most WIMP models indicate that a large enough number of WIMPs must be captured in large celestial bodies for indirect detection experiments to succeed, it remains possible that these models are either incorrect or only explain part of the dark matter phenomenon. Thus, even with the multiple experiments dedicated to providing indirect evidence for the existence of cold dark matter, direct detection measurements are also necessary to solidify the theory of WIMPs. Although most WIMPs encountering the Sun or the Earth are expected to pass through without any effect, it is hoped that a large number of dark matter WIMPs crossing a sufficiently large detector will interact often enough to be seen—at least a few events per year. The general strategy of current attempts to detect WIMPs is to find very sensitive systems that can be scaled to large volumes. This follows the lessons learned from the history of the discovery, and (by now routine) detection, of the neutrino. Experimental techniques Cryogenic crystal detectors – A technique used by the Cryogenic Dark Matter Search (CDMS) detector at the Soudan Mine relies on multiple very cold germanium and silicon crystals. The crystals (each about the size of a hockey puck) are cooled to about 50 mK. A layer of metal (aluminium and tungsten) at the surfaces is used to detect a WIMP passing through the crystal. This design hopes to detect vibrations in the crystal matrix generated by an atom being "kicked" by a WIMP. The tungsten transition edge sensors (TES) are held at the critical temperature so they are in the superconducting state. Large crystal vibrations will generate heat in the metal and are detectable because of a change in resistance. CRESST, CoGeNT, and EDELWEISS run similar setups. Noble gas scintillators – Another way of detecting atoms "knocked about" by a WIMP is to use scintillating material, so that light pulses are generated by the moving atom and detected, often with PMTs. 
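Before turning to the individual techniques, it helps to see the energy scale involved. The estimate below is illustrative only; the WIMP mass, halo speed and targets are assumptions chosen to set the scale, not values from the article. For elastic WIMP–nucleus scattering the maximum nuclear-recoil energy is 2μ²v²/m_N, with μ the WIMP–nucleus reduced mass, which is why the detectors described next must resolve recoils of only a few keV to a few tens of keV:

```python
# Maximum recoil energy for elastic WIMP-nucleus scattering:
#   E_R,max = 2 * mu^2 * v^2 / m_N,   mu = m_chi * m_N / (m_chi + m_N)
C_KM_S = 299_792.458                      # speed of light in km/s

def max_recoil_keV(m_chi_GeV, A, v_km_s):
    """Maximum recoil energy (keV) off a nucleus of mass number A."""
    m_N = 0.9315 * A                                 # nuclear mass in GeV (about A atomic mass units)
    mu = m_chi_GeV * m_N / (m_chi_GeV + m_N)         # reduced mass in GeV
    beta = v_km_s / C_KM_S                           # v / c
    return 2.0 * mu ** 2 * beta ** 2 / m_N * 1.0e6   # GeV -> keV

for target, A in (("Ar", 40), ("Ge", 73), ("Xe", 131)):
    e_max = max_recoil_keV(m_chi_GeV=100.0, A=A, v_km_s=230.0)
    print(f"{target}: E_R,max ~ {e_max:4.1f} keV for a 100 GeV WIMP at 230 km/s")
```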
Experiments such as DEAP at SNOLAB and DarkSide at the LNGS instrument a very large target mass of liquid argon for sensitive WIMP searches. ZEPLIN, and XENON used xenon to exclude WIMPs at higher sensitivity, with the most stringent limits to date provided by the XENON1T detector, utilizing 3.5 tons of liquid xenon. Even larger multi-ton liquid xenon detectors have been approved for construction from the XENON, LUX-ZEPLIN and PandaX collaborations. Crystal scintillators – Instead of a liquid noble gas, an in principle simpler approach is the use of a scintillating crystal such as NaI(Tl). This approach is taken by DAMA/LIBRA, an experiment that observed an annular modulation of the signal consistent with WIMP detection (see ). Several experiments are attempting to replicate those results, including ANAIS, COSINUS and DM-Ice, which is codeploying NaI crystals with the IceCube detector at the South Pole. KIMS is approaching the same problem using CsI(Tl) as a scintillator. Bubble chambers – The PICASSO (Project In Canada to Search for Supersymmetric Objects) experiment is a direct dark matter search experiment that is located at SNOLAB in Canada. It uses bubble detectors with Freon as the active mass. PICASSO is predominantly sensitive to spin-dependent interactions of WIMPs with the fluorine atoms in the Freon. COUPP, a similar experiment using trifluoroiodomethane(CF3I), published limits for mass above 20 GeV in 2011. The two experiments merged into PICO collaboration in 2012. A bubble detector is a radiation sensitive device that uses small droplets of superheated liquid that are suspended in a gel matrix. It uses the principle of a bubble chamber but, since only the small droplets can undergo a phase transition at a time, the detector can stay active for much longer periods. When enough energy is deposited in a droplet by ionizing radiation, the superheated droplet becomes a gas bubble. The bubble development is accompanied by an acoustic shock wave that is picked up by piezo-electric sensors. The main advantage of the bubble detector technique is that the detector is almost insensitive to background radiation. The detector sensitivity can be adjusted by changing the temperature, typically operated between 15 °C and 55 °C. There is another similar experiment using this technique in Europe called SIMPLE. PICASSO reports results (November 2009) for spin-dependent WIMP interactions on 19F, for masses of 24 Gev new stringent limits have been obtained on the spin-dependent cross section of 13.9 pb (90% CL). The obtained limits restrict recent interpretations of the DAMA/LIBRA annual modulation effect in terms of spin dependent interactions. PICO is an expansion of the concept planned in 2015. Other types of detectors – Time projection chambers (TPCs) filled with low pressure gases are being studied for WIMP detection. The Directional Recoil Identification From Tracks (DRIFT) collaboration is attempting to utilize the predicted directionality of the WIMP signal. DRIFT uses a carbon disulfide target, that allows WIMP recoils to travel several millimetres, leaving a track of charged particles. This charged track is drifted to an MWPC readout plane that allows it to be reconstructed in three dimensions and determine the origin direction. DMTPC is a similar experiment with CF4 gas. The DAMIC (DArk Matter In CCDs) and SENSEI (Sub Electron Noise Skipper CCD Experimental Instrument) collaborations employ the use of scientific Charge Coupled Devices (CCDs) to detect light Dark Matter. 
The CCDs act as both the detector target and the readout instrumentation. WIMP interactions with the bulk of the CCD can induce the creation of electron-hole pairs, which are then collected and read out by the CCDs. In order to decrease the noise and achieve detection of single electrons, the experiments make use of a type of CCD known as the Skipper CCD, which allows for averaging over repeated measurements of the same collected charge. Recent limits There are currently no confirmed detections of dark matter from direct detection experiments, with the strongest exclusion limits coming from the LUX and SuperCDMS experiments, as shown in figure 2. With 370 kilograms of xenon, LUX is more sensitive than XENON or CDMS. First results from October 2013 reported that no signals were seen, appearing to refute results obtained from less sensitive instruments, and this was confirmed after the final data run ended in May 2016. Historically, there have been four anomalous sets of data from different direct detection experiments, two of which have now been explained with backgrounds (CoGeNT and CRESST-II), and two which remain unexplained (DAMA/LIBRA and CDMS-Si). In February 2010, researchers at CDMS announced that they had observed two events that may have been caused by WIMP-nucleus collisions. CoGeNT, a smaller detector using a single germanium puck, designed to sense WIMPs with smaller masses, reported hundreds of detection events in 56 days. They observed an annual modulation in the event rate that could indicate light dark matter. However, a dark matter origin for the CoGeNT events has been refuted by more recent analyses, in favour of an explanation in terms of a background from surface events. Annual modulation is one of the predicted signatures of a WIMP signal, and on this basis the DAMA collaboration has claimed a positive detection. Other groups, however, have not confirmed this result. The CDMS data made public in May 2004 exclude the entire DAMA signal region given certain standard assumptions about the properties of the WIMPs and the dark matter halo, and this has been followed by many other experiments (see Figure 2). The COSINE-100 collaboration (a merging of the KIMS and DM-Ice groups) published their results on replicating the DAMA/LIBRA signal in December 2018 in the journal Nature; their conclusion was that "this result rules out WIMP–nucleon interactions as the cause of the annual modulation observed by the DAMA collaboration". In 2021, new results from COSINE-100 and ANAIS-112 both failed to replicate the DAMA/LIBRA signal, and in August 2022 COSINE-100 applied an analysis method similar to the one used by DAMA/LIBRA and found a similar annual modulation, suggesting the signal could be just a statistical artifact, supporting a hypothesis first put forward in 2020. The future of direct detection The 2020s should see the emergence of several multi-tonne direct detection experiments, which will probe WIMP-nucleus cross sections orders of magnitude smaller than the current state-of-the-art sensitivity. Examples of such next-generation experiments are LUX-ZEPLIN (LZ) and XENONnT, which are multi-tonne liquid xenon experiments, followed by DARWIN, another proposed liquid xenon direct detection experiment of 50–100 tonnes. Such multi-tonne experiments will also face a new background in the form of neutrinos, which will limit their ability to probe the WIMP parameter space beyond a certain point, known as the neutrino floor.
However, although its name may imply a hard limit, the neutrino floor represents the region of parameter space beyond which experimental sensitivity can only improve at best as the square root of exposure (the product of detector mass and running time). For WIMP masses below 10 GeV, the dominant source of neutrino background is the Sun, while for higher masses the background contains contributions from atmospheric neutrinos and the diffuse supernova neutrino background. In December 2021, results from PandaX found no signal in their data, with the lowest excluded cross section occurring at a WIMP mass of 40 GeV at the 90% confidence level. In July 2023, the XENONnT and LZ experiments published the first results of their searches for WIMPs, the former reporting its strongest exclusion at a WIMP mass of 28 GeV and the latter at 36 GeV, both at the 90% confidence level. See also Feebly interacting particle (FIP) Weakly interacting sub-eV / slender / slight particle (WISP) Theoretical candidates References Further reading External links Particle Data Group review article on WIMP search (S. Eidelman et al. (Particle Data Group), Physics Letters B 592, 1 (2004) Appendix: OMITTED FROM SUMMARY TABLE). Timothy J. Sumner, Experimental Searches for Dark Matter in Living Reviews in Relativity, Vol 5, 2002. Portraits of darkness, New Scientist, August 31, 2013. Preview only. Dark matter Physics beyond the Standard Model Astroparticle physics Exotic matter Hypothetical particles Physics experiments
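The "at least a few events per year" expectation mentioned in the direct-detection discussion above can be made concrete with a rough rate estimate. The following is a minimal, hedged sketch, not taken from the article: every parameter value is an illustrative assumption, and real analyses additionally include nuclear form factors, the WIMP velocity distribution and detector energy thresholds, all of which are ignored here.

```python
# Back-of-envelope WIMP-nucleus event-rate estimate, R ~ (rho / m_chi) * sigma * v * N_targets.
# All numbers below are illustrative assumptions, not values from any experiment.

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.15e7

def wimp_event_rate_per_year(m_chi_gev=100.0,          # assumed WIMP mass [GeV]
                             sigma_cm2=1e-39,          # assumed per-nucleus cross section [cm^2]
                             rho_gev_cm3=0.3,          # assumed local dark-matter density [GeV/cm^3]
                             v_cm_s=2.3e7,             # assumed mean WIMP speed (~230 km/s)
                             detector_mass_kg=1000.0,  # assumed target mass
                             target_molar_mass_g=131.0):  # xenon, for example
    n_chi = rho_gev_cm3 / m_chi_gev                    # WIMP number density [1/cm^3]
    n_targets = detector_mass_kg * 1e3 / target_molar_mass_g * AVOGADRO
    rate_per_s = n_chi * sigma_cm2 * v_cm_s * n_targets
    return rate_per_s * SECONDS_PER_YEAR

print(f"{wimp_event_rate_per_year():.1f} events per year")
```

With these made-up inputs the estimate lands at roughly ten events per tonne-year, which is why the text above emphasizes large target masses and extremely low backgrounds.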
Weakly interacting massive particle
[ "Physics", "Astronomy" ]
3,864
[ "Dark matter", "Hypothetical particles", "Unsolved problems in astronomy", "Physics experiments", "Concepts in astronomy", "Astroparticle physics", "Unsolved problems in physics", "Astrophysics", "Experimental physics", "Particle physics", "Subatomic particles", "Exotic matter", "Physics bey...
33,868
https://en.wikipedia.org/wiki/Casorati%E2%80%93Weierstrass%20theorem
In complex analysis, a branch of mathematics, the Casorati–Weierstrass theorem describes the behaviour of holomorphic functions near their essential singularities. It is named for Karl Theodor Wilhelm Weierstrass and Felice Casorati. In Russian literature it is called Sokhotski's theorem, because it was discovered independently by Sokhotski in 1868. Formal statement of the theorem Start with some open subset V in the complex plane containing the number z0, and a function f that is holomorphic on V \ {z0}, but has an essential singularity at z0. The Casorati–Weierstrass theorem then states that the image f(V \ {z0}) is dense in the complex plane. This can also be stated as follows: for every ε > 0, δ > 0 and complex number w, there exists a complex number z in V with 0 < |z − z0| < δ and |f(z) − w| < ε. Or in still more descriptive terms: f comes arbitrarily close to every complex value in every neighbourhood of z0. The theorem is considerably strengthened by Picard's great theorem, which states, in the notation above, that f assumes every complex value, with one possible exception, infinitely often on V. In the case that f is an entire function and z0 = ∞, the theorem says that the values f(z) approach every complex number and ∞, as z tends to infinity. It is remarkable that this does not hold for holomorphic maps in higher dimensions, as the famous example of Pierre Fatou shows. Examples The function f(z) = e^{1/z} has an essential singularity at 0, but a function such as g(z) = 1/z^3 does not (it has a pole at 0). Consider the function f(z) = e^{1/z}. This function has the following Laurent series about the essential singular point at 0: f(z) = Σ_{n=0}^∞ 1/(n! z^n). Because f′(z) = −e^{1/z}/z^2 exists at every point z ≠ 0, we know that f(z) is analytic in a punctured neighborhood of z = 0. Hence it is an isolated singularity, as well as being an essential singularity. Using a change of variable to polar coordinates z = re^{iθ}, our function f(z) = e^{1/z} becomes: f(z) = e^{(1/r)cos θ} · e^{−(i/r)sin θ}. Taking the absolute value of both sides: |f(z)| = e^{(1/r)cos θ}. Thus, for values of θ such that cos θ > 0, we have |f(z)| → ∞ as r → 0, and for cos θ < 0, |f(z)| → 0 as r → 0. Consider what happens, for example, when z takes values on a circle of diameter 1/R tangent to the imaginary axis. This circle is given by r = (1/R)cos θ. Then, f(z) = e^R [cos(R tan θ) − i sin(R tan θ)] and |f(z)| = e^R. Thus, |f(z)| may take any positive value other than zero by the appropriate choice of R. As z → 0 on the circle, θ → ±π/2 with R fixed. So this part of the equation, e^{−iR tan θ}, takes on all values on the unit circle infinitely often. Hence f(z) takes on the value of every number in the complex plane except for zero infinitely often. Proof of the theorem A short proof of the theorem is as follows: Take as given that the function f is meromorphic on some punctured neighborhood V \ {z0}, and that z0 is an essential singularity. Assume by way of contradiction that some value b exists that the function can never get close to; that is: assume that there is some complex value b and some ε > 0 such that |f(z) − b| ≥ ε for all z in V \ {z0} at which f is defined. Then the new function g(z) = 1/(f(z) − b) must be holomorphic on V \ {z0}, with zeroes at the poles of f, and bounded by 1/ε. It can therefore be analytically continued (or continuously extended, or holomorphically extended) to all of V by Riemann's analytic continuation theorem. So the original function can be expressed in terms of g: f(z) = 1/g(z) + b for all arguments z in V \ {z0}. Consider the two possible cases for lim_{z → z0} g(z). If the limit is 0, then f has a pole at z0. If the limit is not 0, then z0 is a removable singularity of f. Both possibilities contradict the assumption that the point z0 is an essential singularity of the function f. Hence the assumption is false and the theorem holds. History The history of this important theorem is described by Collingwood and Lohwater. It was published by Weierstrass in 1876 (in German) and by Sokhotski in 1868 in his Master thesis (in Russian). So it was called Sokhotski's theorem in the Russian literature and Weierstrass's theorem in the Western literature.
The same theorem was published by Casorati in 1868, and by Briot and Bouquet in the first edition of their book (1859). However, Briot and Bouquet removed this theorem from the second edition (1875). References Section 31, Theorem 2 (pp. 124–125) of Theorems in complex analysis Articles containing proofs
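As a small numerical companion to the example f(z) = e^{1/z} discussed above, the sketch below checks the density statement directly: for any nonzero target w and any δ > 0, choosing z = 1/(Log w + 2πik) with k large enough gives 0 < |z| < δ and e^{1/z} = w exactly, so 0 is the single omitted value (consistent with Picard's great theorem). This is an illustrative check, not part of the original article, and the function name is made up for the example.

```python
import cmath

def preimage_near_zero(w, delta):
    """Return z with 0 < |z| < delta and exp(1/z) == w (w must be nonzero).

    Uses 1/z = Log(w) + 2*pi*i*k; increasing k shrinks |z| toward 0.
    """
    if w == 0:
        raise ValueError("0 is the one omitted value of exp(1/z)")
    k = 1
    while True:
        z = 1.0 / (cmath.log(w) + 2j * cmath.pi * k)
        if abs(z) < delta:
            return z
        k += 1

z = preimage_near_zero(w=3 - 4j, delta=1e-3)
print(abs(z), cmath.exp(1 / z))   # |z| < 1e-3 and exp(1/z) equals 3-4j up to rounding
```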
Casorati–Weierstrass theorem
[ "Mathematics" ]
823
[ "Articles containing proofs", "Theorems in mathematical analysis", "Theorems in complex analysis" ]
33,894
https://en.wikipedia.org/wiki/Wheatstone%20bridge
A Wheatstone bridge is an electrical circuit used to measure an unknown electrical resistance by balancing two legs of a bridge circuit, one leg of which includes the unknown component. The primary benefit of the circuit is its ability to provide extremely accurate measurements (in contrast with something like a simple voltage divider). Its operation is similar to the original potentiometer. The Wheatstone bridge was invented by Samuel Hunter Christie (sometimes spelled "Christy") in 1833 and improved and popularized by Sir Charles Wheatstone in 1843. One of the Wheatstone bridge's initial uses was for soil analysis and comparison. Operation In the figure, Rx is the fixed, yet unknown, resistance to be measured. R1, R2, and R3 are resistors of known resistance and the resistance of R2 is adjustable; R1 and R2 form the known leg with midpoint B, while R3 and Rx form the unknown leg with midpoint D. The resistance R2 is adjusted until the bridge is "balanced" and no current flows through the galvanometer G. At this point, the potential difference between the two midpoints (B and D) will be zero. Therefore, the ratio of the two resistances in the known leg (R2/R1) is equal to the ratio of the two resistances in the unknown leg (Rx/R3). If the bridge is unbalanced, the direction of the current indicates whether R2 is too high or too low. At the point of balance, Rx = R3 · R2/R1. Detecting zero current with a galvanometer can be done to extremely high precision. Therefore, if R1, R2, and R3 are known to high precision, then Rx can be measured to high precision. Very small changes in Rx disrupt the balance and are readily detected. Alternatively, if R1, R2, and R3 are known, but R2 is not adjustable, the voltage difference across or current flow through the meter can be used to calculate the value of Rx, using Kirchhoff's circuit laws. This setup is frequently used in strain gauge and resistance thermometer measurements, as it is usually faster to read a voltage level off a meter than to adjust a resistance to zero the voltage. Derivation Quick derivation at balance At the point of balance, both the voltage and the current between the two midpoints (B and D) are zero. Therefore, I1 = I2, I3 = Ix, and VB = VD. Because of VB = VD, then I1 · R1 = I3 · R3 and I2 · R2 = Ix · Rx. Dividing the last two equations member by member and using the above current equalities, then Rx/R3 = R2/R1, that is, Rx = R3 · R2/R1. Full derivation using Kirchhoff's circuit laws First, Kirchhoff's first law is used to find the currents in junctions B and D: I1 − I2 − IG = 0 and I3 − Ix + IG = 0. Then, Kirchhoff's second law is used for finding the voltage in the loops ABDA and BCDB: I3 · R3 − IG · RG − I1 · R1 = 0 and Ix · Rx + IG · RG − I2 · R2 = 0. When the bridge is balanced, then IG = 0, so the second set of equations can be rewritten as: I3 · R3 = I1 · R1 (1) and Ix · Rx = I2 · R2 (2). Then, equation (2) is divided by equation (1) and the resulting equation is rearranged, giving: Rx = R3 · (R2 · I2 · I3)/(R1 · I1 · Ix). Due to I1 = I2 and I3 = Ix from Kirchhoff's first law, the current factors cancel out of the above equation. The desired value of Rx is now known to be given as: Rx = R3 · R2/R1. On the other hand, if the resistance of the galvanometer is high enough that IG is negligible, it is possible to compute Rx from the three other resistor values and the supply voltage (Vs), or the supply voltage from all four resistor values. To do so, one has to work out the voltage from each potential divider and subtract one from the other. The equation for this is: VG = (Rx/(R3 + Rx) − R2/(R1 + R2)) · Vs, where VG is the voltage of node D relative to node B.
In many cases, the significance of measuring the unknown resistance is related to measuring the impact of some physical phenomenon (such as force, temperature, pressure, etc.), which thereby allows the Wheatstone bridge to be used to measure those quantities indirectly. The concept was extended to alternating current measurements by James Clerk Maxwell in 1865 and further improved by Alan Blumlein in British Patent no. 323,037, 1928. Modifications of the basic bridge The Wheatstone bridge is the fundamental bridge, but there are other modifications that can be made to measure various kinds of resistances when the fundamental Wheatstone bridge is not suitable. Some of the modifications are: Carey Foster bridge, for measuring small resistances Kelvin bridge, for measuring small four-terminal resistances Maxwell bridge and Wien bridge, for measuring reactive components Anderson's bridge, for measuring the self-inductance of the circuit, an advanced form of Maxwell's bridge See also Diode bridge, product mixer – diode bridges Phantom circuit – a circuit using a balanced bridge Post office box (electricity) Potentiometer (measuring instrument) Potential divider Ohmmeter Resistance thermometer Strain gauge References External links DC Metering Circuits chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series. Test Set I-49 Electrical meters Bridge circuits Measuring instruments English inventions Impedance measurements
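As a numerical companion to the balance condition and divider equation above, here is a minimal sketch (not from the article). It assumes the labelling used in the Operation section: R1 and R2 form the known leg with midpoint B, R3 and Rx form the unknown leg with midpoint D, and the galvanometer draws negligible current.

```python
def bridge_output_voltage(vs, r1, r2, r3, rx):
    """Voltage of node D relative to node B for an ideal (high-impedance) galvanometer."""
    return vs * (rx / (r3 + rx) - r2 / (r1 + r2))

def unknown_resistance_at_balance(r1, r2, r3):
    """Rx that makes the bridge output zero: Rx/R3 = R2/R1."""
    return r3 * r2 / r1

rx = unknown_resistance_at_balance(r1=100.0, r2=250.0, r3=40.0)   # -> 100.0 ohms
print(rx, bridge_output_voltage(vs=5.0, r1=100.0, r2=250.0, r3=40.0, rx=rx))  # output ~0 V at balance
```

The same divider expression reproduces the balance condition: the output is zero exactly when Rx/R3 = R2/R1, and its sign for an unbalanced bridge indicates which way the adjustable resistor should move.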
Wheatstone bridge
[ "Physics", "Technology", "Engineering" ]
1,035
[ "Electrical resistance and conductance", "Physical quantities", "Measuring instruments", "Impedance measurements", "Electrical meters" ]
34,151
https://en.wikipedia.org/wiki/X-ray%20crystallography
X-ray crystallography is the experimental science of determining the atomic and molecular structure of a crystal, in which the crystalline structure causes a beam of incident X-rays to diffract in specific directions. By measuring the angles and intensities of the X-ray diffraction, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal and the positions of the atoms, as well as their chemical bonds, crystallographic disorder, and other information. X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences between various materials, especially minerals and alloys. The method has also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the primary method for characterizing the atomic structure of materials and in differentiating materials that appear similar in other experiments. X-ray crystal structures can also help explain unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases. Modern work involves a number of steps all of which are important. The preliminary steps include preparing good quality samples, careful recording of the diffracted intensities, and processing of the data to remove artifacts. A variety of different methods are then used to obtain an estimate of the atomic structure, generically called direct methods. With an initial estimate further computational techniques such as those involving difference maps are used to complete the structure. The final step is a numerical refinement of the atomic positions against the experimental data, sometimes assisted by ab-initio calculations. In almost all cases new structures are deposited in databases available to the international community. History Crystals, though long admired for their regularity and symmetry, were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. The Danish scientist Nicolas Steno (1669) pioneered experimental investigations of crystal symmetry. Steno showed that the angles between the faces are the same in every exemplar of a particular type of crystal (law of constancy of interfacial angles). René Just Haüy (1784) discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size (law of decrements). Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which remain in use for identifying crystal faces. Haüy's study led to the idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel, Auguste Bravais, Evgraf Fedorov, Arthur Schönflies and (belatedly) William Barlow (1894). 
Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive. Wilhelm Röntgen discovered X-rays in 1895. Physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation. The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the x-ray notation for sharp spectral lines, noting in 1909 two separate energies, at first naming them "A" and "B" and then supposing that there may be lines prior to "A", he started an alphabet numbering beginning with "K." Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. X-rays are not only waves but also have particle properties causing Sommerfeld to coin the name Bremsstrahlung for the continuous spectra when they were formed when electrons bombarded a material. Albert Einstein introduced the photon concept in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Bragg's view proved unpopular and the observation of X-ray diffraction by Max von Laue in 1912 confirmed that X-rays are a form of electromagnetic radiation. The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays). Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914. After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the scattering with evenly spaced planes within a crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. 
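Bragg's law mentioned above, n·λ = 2d·sin θ, can be illustrated with a short sketch. This is not from the article, and the numerical values are illustrative assumptions (Cu K-alpha radiation of about 1.54 Å on a 2.82 Å lattice spacing).

```python
import math

def bragg_angle_deg(d_spacing_angstrom, wavelength_angstrom, order=1):
    """Diffraction angle theta (degrees) satisfying n*lambda = 2*d*sin(theta)."""
    s = order * wavelength_angstrom / (2.0 * d_spacing_angstrom)
    if s > 1.0:
        raise ValueError("No reflection: n*lambda exceeds 2*d")
    return math.degrees(math.asin(s))

# Example with assumed values: ~1.54 Å X-rays on a 2.82 Å spacing gives theta of roughly 16 degrees
print(bragg_angle_deg(d_spacing_angstrom=2.82, wavelength_angstrom=1.54))
```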
The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms. The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined) in 1914 was that of table salt. The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the length of C–C single bond was about 1.52 angstroms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; pyrochroite (Mn(OH)2) and, by extension, brucite (Mg(OH)2) in 1919. Also in 1919, sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure was determined in 1920. The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium. Contributions in different areas Chemistry X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement. The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was rapidly followed by several studies of different long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll. In the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. 
These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide. The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds. In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host–guest chemistry. Materials science and mineralogy The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy also occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals. Many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry. The Cambridge Structural Database contains over 1,000,000 structures as of June 2019; most of these structures were determined by X-ray crystallography. On October 17, 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes. Biological macromolecular crystallography X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years. Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew, for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. Since that success, over 130,000 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. 
The nearest competing method in number of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved less than one tenth as many. Crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is used routinely to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other denaturants to solubilize them in isolation, and such detergents often interfere with crystallization. Membrane proteins are a large component of the genome, and include many proteins of great physiological importance, such as ion channels and receptors. Helium cryogenics are used to prevent radiation damage in protein crystals. Methods Overview Two limiting cases of X-ray crystallography—"small-molecule" (which includes continuous inorganic solids) and "macromolecular" crystallography—are often used. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. In contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved; the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses and proteins with hundreds of thousands of atoms, through improved crystallographic imaging and technology. The technique of single-crystal X-ray crystallography has three basic steps. The first—and often most difficult—step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning. In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. The angles and intensities of diffracted X-rays are measured, with each compound having a unique diffraction pattern. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections. In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement—now called a crystal structure—is usually stored in a public database. Crystallization Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. 
Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure. Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out so as to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded. Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of its component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Other approaches involve crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil, and water evaporates through the layer of oil. Different oils have different evaporation permeabilities, therefore yielding changes in concentration rates from different percipient/protein mixture. It is difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentration of the molecule(s) to be crystallized. 
Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops that are in the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (in the order of 1 microliter). Several factors are known to inhibit crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations; although recent advances in computational methods may allow solving the structure of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior. Data collection Mounting the crystal The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen. This freezing reduces the radiation damage of the X-rays, as well as thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. This pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error. The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer. Recording the reflections The relative intensities of the reflections provides information to determine the arrangement of molecules within the crystal in atomic detail. 
The intensities of these reflections may be recorded with photographic film, an area detector (such as a pixel detector) or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point. One set of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full three-dimensional set. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space. Multiple data sets may be necessary for certain phasing methods. For example, multi-wavelength anomalous dispersion phasing requires that the scattering be recorded at a minimum of three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken. Crystal symmetry, unit cell, and image scaling The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional set. Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 space groups of 230 possible are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection, and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality, i.e., what part of a given reflection was recorded on that image). A full data set may consist of hundreds of separate images taken at different orientations of the crystal. These have to be combined by identifying which peaks appear in two or more images (merging) and scaling the images so that there is a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined.
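The scaling step just described—bringing different images onto a consistent intensity scale—can be sketched as a tiny least-squares estimate of a per-image scale factor from reflections measured on both images. This is a hedged illustration only, not the algorithm used by any particular data-reduction package, and the intensity values are made up.

```python
def relative_scale(common_i_ref, common_i_img):
    """Least-squares scale k minimising sum (I_ref - k*I_img)^2 over shared reflections."""
    num = sum(a * b for a, b in zip(common_i_ref, common_i_img))
    den = sum(b * b for b in common_i_img)
    return num / den

# Intensities of the same reflections measured on a reference image and on a second image
i_ref = [1200.0, 850.0, 430.0, 2100.0]
i_img = [610.0, 420.0, 220.0, 1050.0]    # roughly half the reference scale (made-up numbers)
k = relative_scale(i_ref, i_img)
scaled = [k * x for x in i_img]          # second image brought onto the reference scale
print(round(k, 3), scaled)
```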
The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating the symmetry-related R-factor, a reliability index based upon how similar are the measured intensities of symmetry-equivalent reflections, thus assessing the quality of the data. Initial phasing The intensity of each diffraction 'spot' is proportional to the modulus squared of the structure factor. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem. Initial phase estimates can be obtained in a variety of ways: Ab initio phasing or direct methods – This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships between certain groups of reflections. Molecular replacement – if a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps. Anomalous X-ray scattering (MAD or SAD phasing) – the X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in a media rich in seleno-methionine, which contains selenium atoms. A multi-wavelength anomalous dispersion (MAD) experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases. Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in multi-wavelength anomalous dispersion phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by multi-wavelength anomalous dispersion phasing with selenomethionine. Model building and phase refinement Having obtained initial phases, an initial model can be built. 
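To make the structure factor and the phase problem described above concrete before turning to refinement, the sketch below computes F(hkl) = Σ_j f_j·exp(2πi(h·x_j + k·y_j + l·z_j)) for a toy model, omitting thermal (B-factor) damping for brevity; the measured intensity is proportional to |F|², while the phase of F is the quantity that cannot be recorded directly. The atoms and scattering factors here are made-up illustrations, not a real structure, and F_calc values of this kind are what refinement compares against the observed amplitudes.

```python
import cmath

def structure_factor(hkl, atoms):
    """F(hkl) for atoms given as (scattering_factor, x, y, z) in fractional coordinates."""
    h, k, l = hkl
    return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for f, x, y, z in atoms)

# Toy two-atom model (scattering factors and positions are illustrative only)
atoms = [(8.0, 0.00, 0.00, 0.00),
         (6.0, 0.25, 0.25, 0.25)]
F = structure_factor((1, 1, 1), atoms)
print(abs(F) ** 2, cmath.phase(F))   # observable intensity ~ |F|^2; the phase is lost in the experiment
```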
The atomic positions in the model and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and successive rounds of refinement are carried out. This iterative process continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R-factor defined as R = Σ ||Fobs| − |Fcalc|| / Σ |Fobs|, where F is the structure factor. A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in angstroms divided by 10; thus, a data-set with 2 Å resolution should yield a final Rfree ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and distribution of bond lengths and angles are complementary measures of the model quality. In iterative model building, it is common to encounter phase bias or model bias: because phase estimations come from the model, each round of map calculation tends to show density wherever the model has density, regardless of whether there truly is density there. This problem can be mitigated by maximum-likelihood weighting and checking using omit maps. It may not be possible to observe every atom in the asymmetric unit. In many cases, crystallographic disorder smears the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization. Disorder A common challenge in refinement of crystal structures results from crystallographic disorder. Disorder can take many forms but in general involves the coexistence of two or more species or conformations. Failure to recognize disorder results in flawed interpretation. Pitfalls from improper modeling of disorder are illustrated by the discounted hypothesis of bond stretch isomerism. Disorder is modelled with respect to the relative population of the components, often only two, and their identity. In structures of large molecules and ions, solvent and counterions are often disordered. Applied computational data analysis The use of computational methods for powder X-ray diffraction data analysis is now widespread. It typically compares the experimental data to the simulated diffractogram of a model structure, taking into account the instrumental parameters, and refines the structural or microstructural parameters of the model using a least-squares-based minimization algorithm. Most available tools allowing phase identification and structural refinement are based on the Rietveld method, some of them being open and free software such as FullProf Suite, Jana2006, MAUD, Rietan, GSAS, etc., while others are available under commercial licenses such as Diffrac.Suite TOPAS, Match!, etc.
Most of these tools also allow Le Bail refinement (also referred to as profile matching), that is, refinement of the cell parameters based on the Bragg peaks positions and peak profiles, without taking into account the crystallographic structure by itself. More recent tools allow the refinement of both structural and microstructural data, such as the FAULTS program included in the FullProf Suite, which allows the refinement of structures with planar defects (e.g. stacking faults, twinnings, intergrowths). Deposition of the structure Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for protein and sometimes nucleic acids). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases. Contribution of women to X-ray crystallography A number of women were pioneers in X-ray crystallography at a time when they were excluded from most other branches of physical science. Kathleen Lonsdale was a research student of William Henry Bragg, who had 11 women research students out of a total of 18. She is known for both her experimental and theoretical work. Lonsdale joined his crystallography research team at the Royal Institution in London in 1923, and after getting married and having children, went back to work with Bragg as a researcher. She confirmed the structure of the benzene ring, carried out studies of diamond, was one of the first two women to be elected to the Royal Society in 1945, and in 1949 was appointed the first female tenured professor of chemistry and head of the Department of crystallography at University College London. Lonsdale always advocated greater participation of women in science and said in 1970: "Any country that wants to make full use of all its potential scientists and technologists could do so, but it must not expect to get the women quite so simply as it gets the men.... It is utopian, then, to suggest that any country that really wants married women to return to a scientific career, when her children no longer need her physical presence, should make special arrangements to encourage her to do so?". During this period, Lonsdale began a collaboration with William T. Astbury on a set of 230 space group tables which was published in 1924 and became an essential tool for crystallographers. In 1932 Dorothy Hodgkin joined the laboratory of the physicist John Desmond Bernal, who was a former student of Bragg, in Cambridge, UK. She and Bernal took the first X-ray photographs of crystalline proteins. Hodgkin also played a role in the foundation of the International Union of Crystallography. She was awarded the Nobel Prize in Chemistry in 1964 for her work using X-ray techniques to study the structures of penicillin, insulin and vitamin B12. Her work on penicillin began in 1942 during the war and on vitamin B12 in 1948. While her group slowly grew, their predominant focus was on the X-ray analysis of natural products. She is the only British woman ever to have won a Nobel Prize in a science subject. Rosalind Franklin took the X-ray photograph of a DNA fibre that proved key to James Watson and Francis Crick's discovery of the double helix, for which they both won the Nobel Prize for Physiology or Medicine in 1962. 
Watson revealed in his autobiographic account of the discovery of the structure of DNA, The Double Helix, that he had used Franklin's X-ray photograph without her permission. Franklin died of cancer in her 30s, before Watson received the Nobel Prize. Franklin also carried out important structural studies of carbon in coal and graphite, and of plant and animal viruses. Isabella Karle of the United States Naval Research Laboratory developed an experimental approach to the mathematical theory of crystallography. Her work improved the speed and accuracy of chemical and biomedical analysis. Yet only her husband Jerome shared the 1985 Nobel Prize in Chemistry with Herbert Hauptman, "for outstanding achievements in the development of direct methods for the determination of crystal structures". Other prize-giving bodies have showered Isabella with awards in her own right. Women have written many textbooks and research papers in the field of X-ray crystallography. For many years Lonsdale edited the International Tables for Crystallography, which provide information on crystal lattices, symmetry, and space groups, as well as mathematical, physical and chemical data on structures. Olga Kennard of the University of Cambridge, founded and ran the Cambridge Crystallographic Data Centre, an internationally recognized source of structural data on small molecules, from 1965 until 1997. Jenny Pickworth Glusker, a British scientist, co-authored Crystal Structure Analysis: A Primer, first published in 1971 and as of 2010 in its third edition. Eleanor Dodson, an Australian-born biologist, who began as Dorothy Hodgkin's technician, was the main instigator behind CCP4, the collaborative computing project that currently shares more than 250 software tools with protein crystallographers worldwide. Nobel Prizes involving X-ray crystallography See also Beevers–Lipson strip Bragg diffraction Crystallographic database Crystallographic point groups Difference density map Electron diffraction Energy-dispersive X-ray diffraction Flack parameter Grazing incidence diffraction Henderson limit International Year of Crystallography Multipole density formalism Neutron diffraction Powder diffraction Ptychography Scherrer equation Small angle X-ray scattering (SAXS) Structure determination Ultrafast x-ray Wide angle X-ray scattering (WAXS) X-ray diffraction Notes References Further reading International Tables for Crystallography Bound collections of articles Textbooks Applied computational data analysis Historical External links Tutorials Learning Crystallography Simple, non technical introduction The Crystallography Collection, video series from the Royal Institution "Small Molecule Crystalization" (PDF) at Illinois Institute of Technology website International Union of Crystallography Crystallography 101 Interactive structure factor tutorial, demonstrating properties of the diffraction pattern of a 2D crystal. Picturebook of Fourier Transforms, illustrating the relationship between crystal and diffraction pattern in 2D. Lecture notes on X-ray crystallography and structure determination Online lecture on Modern X-ray Scattering Methods for Nanoscale Materials Analysis by Richard J. 
Matyi Interactive Crystallography Timeline from the Royal Institution Primary databases Crystallography Open Database (COD) Protein Data Bank (PDB) Nucleic Acid Databank (NDB) Cambridge Structural Database (CSD) Inorganic Crystal Structure Database (ICSD) Biological Macromolecule Crystallization Database (BMCD) Derivative databases PDBsum Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules RNABase HIC-Up database of PDB ligands Structural Classification of Proteins database CATH Protein Structure Classification List of transmembrane proteins with known 3D structure Orientations of Proteins in Membranes database Structural validation MolProbity structural validation suite ProSA-web NQ-Flipper (check for unfavorable rotamers of Asn and Gln residues) DALI server (identifies proteins similar to a given protein) Laboratory techniques in condensed matter physics Crystallography Diffraction Materials science Protein structure Protein methods Protein imaging Synchrotron-related techniques Articles containing video clips Crystallography
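Returning to the refinement metrics defined earlier in the article (R and Rfree), here is a minimal sketch of how they are computed from observed and calculated structure-factor amplitudes. The arrays are made up for illustration; in practice the free set is chosen once, before refinement begins, and is never used to drive the refinement.

```python
def r_factor(f_obs, f_calc):
    """R = sum | |Fobs| - |Fcalc| | / sum |Fobs| over the given reflections."""
    num = sum(abs(abs(o) - abs(c)) for o, c in zip(f_obs, f_calc))
    den = sum(abs(o) for o in f_obs)
    return num / den

# Made-up amplitudes; roughly 10% of reflections are set aside as the "free" set
work_obs, work_calc = [102.0, 55.0, 230.0, 18.0], [98.0, 60.0, 215.0, 21.0]
free_obs, free_calc = [77.0, 140.0], [69.0, 155.0]

print("R_work =", round(r_factor(work_obs, work_calc), 3))
print("R_free =", round(r_factor(free_obs, free_calc), 3))
```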
X-ray crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
8,556
[ "Biochemistry methods", "Applied and interdisciplinary physics", "X-rays", "Spectrum (physical sciences)", "Protein methods", "Protein biochemistry", "Electromagnetic spectrum", "Laboratory techniques in condensed matter physics", "Materials science", "Crystallography", "Diffraction", "Condens...
34,417
https://en.wikipedia.org/wiki/Zero-sum%20game
Zero-sum game is a mathematical representation in game theory and economic theory of a situation that involves two competing entities, where the result is an advantage for one side and an equivalent loss for the other. In other words, player one's gain is equivalent to player two's loss, with the result that the net change in total benefit of the game is zero. If the total gains of the participants are added up, and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a larger piece reduces the amount of cake available for others as much as it increases the amount available for that taker, is a zero-sum game if all participants value each unit of cake equally. Other examples of zero-sum games in daily life include games such as poker, chess, sports and bridge, where one person gains and another person loses, so the net benefit across the players is zero. In markets and financial instruments, futures contracts and options are zero-sum games as well. In contrast, non-zero-sum describes a situation in which the interacting parties' aggregate gains and losses can be less than or more than zero. A zero-sum game is also called a strictly competitive game, while non-zero-sum games can be either competitive or non-competitive. Zero-sum games are most often solved with the minimax theorem, which is closely related to linear programming duality, or with Nash equilibrium. The Prisoner's Dilemma is a classic non-zero-sum game. Definition The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal. Generally, any game where all strategies are Pareto optimal is called a conflict game. Zero-sum games are a specific example of constant-sum games where the sum of each outcome is always zero. Such games are distributive, not integrative; the pie cannot be enlarged by good negotiation. In situations where one decision maker's gain (or loss) does not necessarily result in the other decision makers' loss (or gain), they are referred to as non-zero-sum. Thus, a country with an excess of bananas trading with another country for their excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with. The idea of Pareto-optimal payoff in a zero-sum game gives rise to a generalized relative selfish rationality standard, the punishing-the-opponent standard, where both players always seek to minimize the opponent's payoff at a favourable cost to themselves rather than prefer more over less. The punishing-the-opponent standard can be used in both zero-sum games (e.g. warfare games, chess) and non-zero-sum games (e.g. pooling selection games). Each player simply wishes to maximise their own profit, and the opponent wishes to minimise it. Solution For two-player finite zero-sum games, if the players are allowed to play a mixed strategy, the game always has an equilibrium solution. The different game-theoretic solution concepts of Nash equilibrium, minimax, and maximin all give the same solution. Notice that this is not true for pure strategies. Example A game's payoff matrix is a convenient representation. As an example, consider the two-player zero-sum game described below.
The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices. Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points. In this example game, both players know the payoff matrix and attempt to maximize the number of their points. Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, and with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. If Blue anticipates Red's reasoning and choice of action 1, Blue may choose action B, so as to win 10 points. If Red, in turn, anticipates this trick and goes for action 2, this wins Red 20 points. Émile Borel and John von Neumann had the fundamental insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point-loss independent of the opponent's strategy. This leads to a linear programming problem whose solution gives the optimal strategies for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games. For the example given above, it turns out that Red should randomize between actions 1 and 2 with suitable probabilities, and Blue should assign probability 0 to action A and split the remaining probability between actions B and C; Red will then win a fixed expected number of points per game, the value of the game. Solving The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem. Suppose a zero-sum game has a payoff matrix M where element Mi,j is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found (Raghavan 1994, p. 740) by solving the following linear program to find a vector u: minimize the sum of the elements of u, subject to u ≥ 0 and M u ≥ 1. The first constraint says each element of the vector u must be nonnegative, and the second constraint says each element of the vector M u must be at least 1. For the resulting vector u, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each possible pure strategy. If the game matrix does not have all positive elements, add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant, and will not affect the equilibrium mixed strategies. The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program.
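The linear-programming procedure just described can be sketched in a few lines of code. The following is a minimal illustration, assuming SciPy is available; the payoff matrix, the function name solve_zero_sum, and the sample numbers are illustrative assumptions, not taken from the article.

import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(M):
    # Return (value, column_player_strategy) for payoff matrix M, where M[i][j]
    # is the payoff to the maximizing (column) player when the minimizing (row)
    # player picks row i and the column player picks column j.
    M = np.asarray(M, dtype=float)
    shift = 0.0
    if M.min() <= 0:                      # make every element positive, as described above
        shift = 1.0 - M.min()
        M = M + shift
    n_rows, n_cols = M.shape
    # Linear program: minimize sum(u) subject to M u >= 1 and u >= 0.
    res = linprog(c=np.ones(n_cols),
                  A_ub=-M, b_ub=-np.ones(n_rows),
                  bounds=[(0, None)] * n_cols, method="highs")
    u = res.x
    value = 1.0 / u.sum()                 # value of the (shifted) game
    strategy = u * value                  # maximizing player's mixed strategy
    return value - shift, strategy        # undo the shift to recover the original value

# Hypothetical 2x3 payoff matrix, for illustration only.
payoffs = [[3.0, -1.0, 2.0],
           [1.0, 2.0, -2.0]]
value, col_mix = solve_zero_sum(payoffs)
_, row_mix = solve_zero_sum(-np.transpose(payoffs))   # transpose-and-negate for the row player
print(value, col_mix, row_mix)

Here the maximizing (column) player's strategy comes from the primal program and the minimizing (row) player's from the transposed, negated matrix, mirroring the two procedures described in the text; in practice the row player's strategy can also be read off the dual variables of the same linear program.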
Alternatively, it can be found by using the above procedure to solve a modified payoff matrix which is the transpose and negation of (adding a constant so it is positive), then solving the resulting game. If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations and thus such games are equivalent to linear programs, in general. Universal solution If avoiding a zero-sum game is an action choice with some probability for players, avoiding is always an equilibrium strategy for at least one player at a zero-sum game. For any two players zero-sum game where a zero-zero draw is impossible or non-credible after the play is started, such as poker, there is no Nash equilibrium strategy other than avoiding the play. Even if there is a credible zero-zero draw after a zero-sum game is started, it is not better than the avoiding strategy. In this sense, it's interesting to find reward-as-you-go in optimal choice computation shall prevail over all two players zero-sum games concerning starting the game or not. The most common or simple example from the subfield of social psychology is the concept of "social traps". In some cases pursuing individual personal interest can enhance the collective well-being of the group, but in other situations, all parties pursuing personal interest results in mutually destructive behaviour. Copeland's review notes that an n-player non-zero-sum game can be converted into an (n+1)-player zero-sum game, where the n+1st player, denoted the fictitious player, receives the negative of the sum of the gains of the other n-players (the global gain / loss). Zero-sum three-person games It is clear that there are manifold relationships between players in a zero-sum three-person game, in a zero-sum two-person game, anything one player wins is necessarily lost by the other and vice versa; therefore, there is always an absolute antagonism of interests, and that is similar in the three-person game. A particular move of a player in a zero-sum three-person game would be assumed to be clearly beneficial to him and may disbenefits to both other players, or benefits to one and disbenefits to the other opponent. Particularly, parallelism of interests between two players makes a cooperation desirable; it may happen that a player has a choice among various policies: Get into a parallelism interest with another player by adjusting his conduct, or the opposite; that he can choose with which of other two players he prefers to build such parallelism, and to what extent. The picture on the left shows that a typical example of a zero-sum three-person game. If Player 1 chooses to defence, but Player 2 & 3 chooses to offence, both of them will gain one point. At the same time, Player 1 will lose two-point because points are taken away by other players, and it is evident that Player 2 & 3 has parallelism of interests. Real life example Economic benefits of low-cost airlines in saturated markets - net benefits or a zero-sum game Studies show that the entry of low-cost airlines into the Hong Kong market brought in $671 million in revenue and resulted in an outflow of $294 million. Therefore, the replacement effect should be considered when introducing a new model, which will lead to economic leakage and injection. Thus introducing new models requires caution. 
For example, if the number of new airlines departing from and arriving at the airport is the same, the economic contribution to the host city may be a zero-sum game. Because for Hong Kong, the consumption of overseas tourists in Hong Kong is income, while the consumption of Hong Kong residents in opposite cities is outflow. In addition, the introduction of new airlines can also have a negative impact on existing airlines. Consequently, when a new aviation model is introduced, feasibility tests need to be carried out in all aspects, taking into account the economic inflow and outflow and displacement effects caused by the model. Zero-sum games in financial markets Derivatives trading may be considered a zero-sum game, as each dollar gained by one party in a transaction must be lost by the other, hence yielding a net transfer of wealth of zero. An options contract - whereby a buyer purchases a derivative contract which provides them with the right to buy an underlying asset from a seller at a specified strike price before a specified expiration date – is an example of a zero-sum game. A futures contract – whereby a buyer purchases a derivative contract to buy an underlying asset from the seller for a specified price on a specified date – is also an example of a zero-sum game. This is because the fundamental principle of these contracts is that they are agreements between two parties, and any gain made by one party must be matched by a loss sustained by the other. If the price of the underlying asset increases before the expiration date the buyer may exercise/ close the options/ futures contract. The buyers gain and corresponding sellers loss will be the difference between the strike price and value of the underlying asset at that time. Hence, the net transfer of wealth is zero. Swaps, which involve the exchange of cash flows from two different financial instruments, are also considered a zero-sum game. Consider a standard interest rate swap whereby Firm A pays a fixed rate and receives a floating rate; correspondingly Firm B pays a floating rate and receives a fixed rate. If rates increase, then Firm A will gain, and Firm B will lose by the rate differential (floating rate – fixed rate). If rates decrease, then Firm A will lose, and Firm B will gain by the rate differential (fixed rate – floating rate). Whilst derivatives trading may be considered a zero-sum game, it is important to remember that this is not an absolute truth. The financial markets are complex and multifaceted, with a range of participants engaging in a variety of activities. While some trades may result in a simple transfer of wealth from one party to another, the market as a whole is not purely competitive, and many transactions serve important economic functions. The stock market is an excellent example of a positive-sum game, often erroneously labelled as a zero-sum game. This is a zero-sum fallacy: the perception that one trader in the stock market may only increase the value of their holdings if another trader decreases their holdings. The primary goal of the stock market is to match buyers and sellers, but the prevailing price is the one which equilibrates supply and demand. Stock prices generally move according to changes in future expectations, such as acquisition announcements, upside earnings surprises, or improved guidance. 
For instance, if Company C announces a deal to acquire Company D, and investors believe that the acquisition will result in synergies and hence increased profitability for Company C, there will be an increased demand for Company C stock. In this scenario, all existing holders of Company C stock will enjoy gains without incurring any corresponding measurable losses to other players. Furthermore, in the long run, the stock market is a positive-sum game. As economic growth occurs, demand increases, output increases, companies grow, and company valuations increase, leading to value creation and wealth addition in the market. Complexity It has been theorized by Robert Wright in his book Nonzero: The Logic of Human Destiny, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. Extensions In 1944, John von Neumann and Oskar Morgenstern proved that any non-zero-sum game for n players is equivalent to a zero-sum game with n + 1 players; the (n + 1)th player representing the global profit or loss. Misunderstandings Zero-sum games and particularly their solutions are commonly misunderstood by critics of game theory, usually with respect to the independence and rationality of the players, as well as to the interpretation of utility functions. Furthermore, the word "game" does not imply the model is valid only for recreational games. Politics is sometimes called zero sum because in common usage the idea of a stalemate is perceived to be "zero sum"; politics and macroeconomics are not zero sum games, however, because they do not constitute conserved systems. Zero-sum thinking In psychology, zero-sum thinking refers to the perception that a given situation is like a zero-sum game, where one person's gain is equal to another person's loss. See also Bimatrix game Comparative advantage Dutch disease Gains from trade Lump of labour fallacy Win–win game No-win situation References Further reading Misstating the Concept of Zero-Sum Games within the Context of Professional Sports Trading Strategies, series Pardon the Interruption (2010-09-23) ESPN, created by Tony Kornheiser and Michael Wilbon, performance by Bill Simmons Handbook of Game Theory – volume 2, chapter Zero-sum two-person games, (1994) Elsevier Amsterdam, by Raghavan, T. E. S., Edited by Aumann and Hart, pp. 735–759, Power: Its Forms, Bases and Uses (1997) Transaction Publishers, by Dennis Wrong, External links Play zero-sum games online by Elmer G. Wiens. Game Theory & its Applications – comprehensive text on psychology and game theory. (Contents and Preface to Second Edition.) A playable zero-sum game and its mixed strategy Nash equilibrium. Positive Sum Games Non-cooperative games International relations theory Game theory game classes
Zero-sum game
[ "Mathematics" ]
3,439
[ "Game theory game classes", "Game theory", "Non-cooperative games" ]
34,420
https://en.wikipedia.org/wiki/Zinc
Zinc is a chemical element with the symbol Zn and atomic number 30. It is a slightly brittle metal at room temperature and has a shiny-greyish appearance when oxidation is removed. It is the first element in group 12 (IIB) of the periodic table. In some respects, zinc is chemically similar to magnesium: both elements exhibit only one normal oxidation state (+2), and the Zn2+ and Mg2+ ions are of similar size. Zinc is the 24th most abundant element in Earth's crust and has five stable isotopes. The most common zinc ore is sphalerite (zinc blende), a zinc sulfide mineral. The largest workable lodes are in Australia, Asia, and the United States. Zinc is refined by froth flotation of the ore, roasting, and final extraction using electricity (electrowinning). Zinc is an essential trace element for humans, animals, plants and for microorganisms and is necessary for prenatal and postnatal development. It is the second most abundant trace metal in humans after iron and it is the only metal which appears in all enzyme classes. Zinc is also an essential nutrient element for coral growth as it is an important cofactor for many enzymes. Zinc deficiency affects about two billion people in the developing world and is associated with many diseases. In children, deficiency causes growth retardation, delayed sexual maturation, infection susceptibility, and diarrhea. Enzymes with a zinc atom in the reactive center are widespread in biochemistry, such as alcohol dehydrogenase in humans. Consumption of excess zinc may cause ataxia, lethargy, and copper deficiency. In marine biomes, notably within polar regions, a deficit of zinc can compromise the vitality of primary algal communities, potentially destabilizing the intricate marine trophic structures and consequently impacting biodiversity. Brass, an alloy of copper and zinc in various proportions, was used as early as the third millennium BC in the Aegean area and the region which currently includes Iraq, the United Arab Emirates, Kalmykia, Turkmenistan and Georgia. In the second millennium BC it was used in the regions currently including West India, Uzbekistan, Iran, Syria, Iraq, and Israel. Zinc metal was not produced on a large scale until the 12th century in India, though it was known to the ancient Romans and Greeks. The mines of Rajasthan have given definite evidence of zinc production going back to the 6th century BC. The oldest evidence of pure zinc comes from Zawar, in Rajasthan, as early as the 9th century AD when a distillation process was employed to make pure zinc. Alchemists burned zinc in air to form what they called "philosopher's wool" or "white snow". The element was probably named by the alchemist Paracelsus after the German word Zinke (prong, tooth). German chemist Andreas Sigismund Marggraf is credited with discovering pure metallic zinc in 1746. Work by Luigi Galvani and Alessandro Volta uncovered the electrochemical properties of zinc by 1800. Corrosion-resistant zinc plating of iron (hot-dip galvanizing) is the major application for zinc. Other applications are in electrical batteries, small non-structural castings, and alloys such as brass. A variety of zinc compounds are commonly used, such as zinc carbonate and zinc gluconate (as dietary supplements), zinc chloride (in deodorants), zinc pyrithione (anti-dandruff shampoos), zinc sulfide (in luminescent paints), and dimethylzinc or diethylzinc in the organic laboratory. 
Characteristics Physical properties Zinc is a bluish-white, lustrous, diamagnetic metal, though most common commercial grades of the metal have a dull finish. It is somewhat less dense than iron and has a hexagonal crystal structure, with a distorted form of hexagonal close packing, in which each atom has six nearest neighbors (at 265.9 pm) in its own plane and six others at a greater distance of 290.6 pm. The metal is hard and brittle at most temperatures but becomes malleable between 100 and 150 °C. Above 210 °C, the metal becomes brittle again and can be pulverized by beating. Zinc is a fair conductor of electricity. For a metal, zinc has relatively low melting (419.5 °C) and boiling point (907 °C). The melting point is the lowest of all the d-block metals aside from mercury and cadmium; for this reason among others, zinc, cadmium, and mercury are often not considered to be transition metals like the rest of the d-block metals. Many alloys contain zinc, including brass. Other metals long known to form binary alloys with zinc are aluminium, antimony, bismuth, gold, iron, lead, mercury, silver, tin, magnesium, cobalt, nickel, tellurium, and sodium. Although neither zinc nor zirconium is ferromagnetic, their alloy, , exhibits ferromagnetism below 35 K. Occurrence Zinc makes up about 75 ppm (0.0075%) of Earth's crust, making it the 24th most abundant element. It also makes up 312 ppm of the solar system, where it is the 22nd most abundant element. Typical background concentrations of zinc do not exceed 1 μg/m3 in the atmosphere; 300 mg/kg in soil; 100 mg/kg in vegetation; 20 μg/L in freshwater and 5 μg/L in seawater. The element is normally found in association with other base metals such as copper and lead in ores. Zinc is a chalcophile, meaning the element is more likely to be found in minerals together with sulfur and other heavy chalcogens, rather than with the light chalcogen oxygen or with non-chalcogen electronegative elements such as the halogens. Sulfides formed as the crust solidified under the reducing conditions of the early Earth's atmosphere. Sphalerite, which is a form of zinc sulfide, is the most heavily mined zinc-containing ore because its concentrate contains 60–62% zinc. Other source minerals for zinc include smithsonite (zinc carbonate), hemimorphite (zinc silicate), wurtzite (another zinc sulfide), and sometimes hydrozincite (basic zinc carbonate). With the exception of wurtzite, all these other minerals were formed by weathering of the primordial zinc sulfides. Identified world zinc resources total about 1.9–2.8 billion tonnes. Large deposits are in Australia, Canada and the United States, with the largest reserves in Iran. The most recent estimate of reserve base for zinc (meets specified minimum physical criteria related to current mining and production practices) was made in 2009 and calculated to be roughly 480 Mt. Zinc reserves, on the other hand, are geologically identified ore bodies whose suitability for recovery is economically based (location, grade, quality, and quantity) at the time of determination. Since exploration and mine development is an ongoing process, the amount of zinc reserves is not a fixed number and sustainability of zinc ore supplies cannot be judged by simply extrapolating the combined mine life of today's zinc mines. 
This concept is well supported by data from the United States Geological Survey (USGS), which illustrates that although refined zinc production increased 80% between 1990 and 2010, the reserve lifetime for zinc has remained unchanged. About 346 million tonnes have been extracted throughout history to 2002, and scholars have estimated that about 109–305 million tonnes are in use. Isotopes Five stable isotopes of zinc occur in nature, with 64Zn being the most abundant isotope (49.17% natural abundance). The other isotopes found in nature are 66Zn (27.73%), 67Zn (4.04%), 68Zn (18.45%), and 70Zn (0.61%). Several dozen radioisotopes have been characterized. 65Zn, which has a half-life of 243.66 days, is the longest-lived and hence least active radioisotope, followed by 72Zn with a half-life of 46.5 hours. Zinc has 10 nuclear isomers, of which 69mZn has the longest half-life, 13.76 h. The superscript m indicates a metastable isotope. The nucleus of a metastable isotope is in an excited state and will return to the ground state by emitting a photon in the form of a gamma ray. Some zinc radioisotopes have two or three excited metastable states, while others have only one. The most common decay mode of a radioisotope of zinc with a mass number lower than 66 is electron capture, which produces an isotope of copper: Zn + e− → Cu + νe (the mass number is unchanged while the atomic number decreases by one). The most common decay mode of a radioisotope of zinc with mass number higher than 66 is beta decay (β−), which produces an isotope of gallium: Zn → Ga + e− + ν̄e. Compounds and chemistry Reactivity Zinc has an electron configuration of [Ar] 3d10 4s2 and is a member of group 12 of the periodic table. It is a moderately reactive metal and strong reducing agent; in the reactivity series it is comparable to manganese. The surface of the pure metal tarnishes quickly, eventually forming a protective passivating layer of basic zinc carbonate by reaction with atmospheric carbon dioxide. Zinc burns in air with a bright bluish-green flame, giving off fumes of zinc oxide. Zinc reacts readily with acids, alkalis and other non-metals. Extremely pure zinc reacts only slowly at room temperature with acids. Strong acids, such as hydrochloric or sulfuric acid, can remove the passivating layer, and the subsequent reaction with the acid releases hydrogen gas. Zinc chemistry resembles that of the late first-row transition metals, nickel and copper, as well as certain main-group elements. Almost all zinc compounds have the element in the +2 oxidation state. When Zn2+ compounds form, the outer-shell s electrons are lost, yielding a bare zinc ion with the electronic configuration [Ar]3d10. The filled interior d shell generally does not participate in bonding, producing diamagnetic and mostly colorless compounds. In aqueous solution an octahedral complex, [Zn(H2O)6]2+, is the predominant species. The ionic radii of zinc and magnesium happen to be nearly identical. Consequently some of the equivalent salts have the same crystal structure, and in other circumstances where ionic radius is a determining factor, the chemistry of zinc has much in common with that of magnesium. Compared to the transition metals, zinc tends to form bonds with a greater degree of covalency, and its complexes with N- and S-donors are much more stable. Complexes of zinc are mostly 4- or 6-coordinate, although 5-coordinate complexes are known. Other oxidation states require unusual physical conditions, and the only positive oxidation states demonstrated are +1 and +2.
The volatilization of zinc in combination with zinc chloride at temperatures above 285 °C indicates the formation of , a zinc compound with a +1 oxidation state. Calculations indicate that a zinc compound with the oxidation state of +4 is unlikely to exist. Zn(III) is predicted to exist in the presence of strongly electronegative trianions; however, there exists some doubt around this possibility. Zinc(I) compounds Zinc(I) compounds are very rare. The [Zn2]2+ ion is implicated by the formation of a yellow diamagnetic glass by dissolving metallic zinc in molten ZnCl2. The [Zn2]2+ core would be analogous to the [Hg2]2+ cation present in mercury(I) compounds. The diamagnetic nature of the ion confirms its dimeric structure. The first zinc(I) compound containing the Zn–Zn bond, (η5-C5Me5)2Zn2. Zinc(II) compounds Binary compounds of zinc are known for most of the metalloids and all the nonmetals except the noble gases. The oxide ZnO is a white powder that is nearly insoluble in neutral aqueous solutions, but is amphoteric, dissolving in both strong basic and acidic solutions. The other chalcogenides (ZnS, ZnSe, and ZnTe) have varied applications in electronics and optics. Pnictogenides (, , and ), the peroxide (), the hydride (), and the carbide () are also known. Of the four halides, has the most ionic character, while the others (, , and ) have relatively low melting points and are considered to have more covalent character. In weak basic solutions containing ions, the hydroxide forms as a white precipitate. In stronger alkaline solutions, this hydroxide is dissolved to form zincates (). The nitrate , chlorate , sulfate , phosphate , molybdate , cyanide , arsenite , arsenate and the chromate (one of the few colored zinc compounds) are a few examples of other common inorganic compounds of zinc. Organozinc compounds are those that contain zinc–carbon covalent bonds. Diethylzinc () is a reagent in synthetic chemistry. It was first reported in 1848 from the reaction of zinc and ethyl iodide, and was the first compound known to contain a metal–carbon sigma bond. Test for zinc Cobalticyanide paper (Rinnmann's test for Zn) can be used as a chemical indicator for zinc. 4 g of K3Co(CN)6 and 1 g of KClO3 is dissolved on 100 ml of water. Paper is dipped in the solution and dried at 100 °C. One drop of the sample is dropped onto the dry paper and heated. A green disc indicates the presence of zinc. History Ancient use Various isolated examples of the use of impure zinc in ancient times have been discovered. Zinc ores were used to make the zinc–copper alloy brass thousands of years prior to the discovery of zinc as a separate element. Judean brass from the 14th to 10th centuries BC contains 23% zinc. Knowledge of how to produce brass spread to Ancient Greece by the 7th century BC, but few varieties were made. Ornaments made of alloys containing 80–90% zinc, with lead, iron, antimony, and other metals making up the remainder, have been found that are 2,500 years old. A possibly prehistoric statuette containing 87.5% zinc was found in a Dacian archaeological site. Strabo writing in the 1st century BC (but quoting a now lost work of the 4th century BC historian Theopompus) mentions "drops of false silver" which when mixed with copper make brass. This may refer to small quantities of zinc that is a by-product of smelting sulfide ores. Zinc in such remnants in smelting ovens was usually discarded as it was thought to be worthless. The manufacture of brass was known to the Romans by about 30 BC. 
They made brass by heating powdered calamine (zinc silicate or carbonate), charcoal and copper together in a crucible. The resulting calamine brass was then either cast or hammered into shape for use in weaponry. Some coins struck by Romans in the Christian era are made of what is probably calamine brass. The oldest known pills were made of the zinc carbonates hydrozincite and smithsonite. The pills were used for sore eyes and were found aboard the Roman ship Relitto del Pozzino, wrecked in 140 BC. The Berne zinc tablet is a votive plaque dating to Roman Gaul made of an alloy that is mostly zinc. The Charaka Samhita, thought to have been written between 300 and 500 AD, mentions a metal which, when oxidized, produces pushpanjan, thought to be zinc oxide. Zinc mines at Zawar, near Udaipur in India, have been active since the Mauryan period ( and 187 BCE). The smelting of metallic zinc here, however, appears to have begun around the 12th century AD. One estimate is that this location produced an estimated million tonnes of metallic zinc and zinc oxide from the 12th to 16th centuries. Another estimate gives a total production of 60,000 tonnes of metallic zinc over this period. The Rasaratna Samuccaya, written in approximately the 13th century AD, mentions two types of zinc-containing ores: one used for metal extraction and another used for medicinal purposes. Early studies and naming Zinc was distinctly recognized as a metal under the designation of Yasada or Jasada in the medical Lexicon ascribed to the Hindu king Madanapala (of Taka dynasty) and written about the year 1374. Smelting and extraction of impure zinc by reducing calamine with wool and other organic substances was accomplished in the 13th century in India. The Chinese did not learn of the technique until the 17th century. Alchemists burned zinc metal in air and collected the resulting zinc oxide on a condenser. Some alchemists called this zinc oxide lana philosophica, Latin for "philosopher's wool", because it collected in wooly tufts, whereas others thought it looked like white snow and named it nix album. The name of the metal was probably first documented by Paracelsus, a Swiss-born German alchemist, who referred to the metal as "zincum" or "zinken" in his book Liber Mineralium II, in the 16th century. The word is probably derived from the German , and supposedly meant "tooth-like, pointed or jagged" (metallic zinc crystals have a needle-like appearance). Zink could also imply "tin-like" because of its relation to German zinn meaning tin. Yet another possibility is that the word is derived from the Persian word seng meaning stone. The metal was also called Indian tin, tutanego, calamine, and spinter. German metallurgist Andreas Libavius received a quantity of what he called "calay" (from the Malay or Hindi word for tin) originating from Malabar off a cargo ship captured from the Portuguese in the year 1596. Libavius described the properties of the sample, which may have been zinc. Zinc was regularly imported to Europe from the Orient in the 17th and early 18th centuries, but was at times very expensive. Isolation Metallic zinc was isolated in India by 1300 AD. Before it was isolated in Europe, it was imported from India in about 1600 CE. Postlewayt's Universal Dictionary, a contemporary source giving technological information in Europe, did not mention zinc before 1751 but the element was studied before then. Flemish metallurgist and alchemist P. M. 
de Respour reported that he had extracted metallic zinc from zinc oxide in 1668. By the start of the 18th century, Étienne François Geoffroy described how zinc oxide condenses as yellow crystals on bars of iron placed above zinc ore that is being smelted. In Britain, John Lane is said to have carried out experiments to smelt zinc, probably at Landore, prior to his bankruptcy in 1726. In 1738 in Great Britain, William Champion patented a process to extract zinc from calamine in a vertical retort-style smelter. His technique resembled that used at Zawar zinc mines in Rajasthan, but no evidence suggests he visited the Orient. Champion's process was used through 1851. German chemist Andreas Marggraf normally gets credit for isolating pure metallic zinc in the West, even though Swedish chemist Anton von Swab had distilled zinc from calamine four years previously. In his 1746 experiment, Marggraf heated a mixture of calamine and charcoal in a closed vessel without copper to obtain a metal. This procedure became commercially practical by 1752. Later work William Champion's brother, John, patented a process in 1758 for calcining zinc sulfide into an oxide usable in the retort process. Prior to this, only calamine could be used to produce zinc. In 1798, Johann Christian Ruberg improved on the smelting process by building the first horizontal retort smelter. Jean-Jacques Daniel Dony built a different kind of horizontal zinc smelter in Belgium that processed even more zinc. Italian doctor Luigi Galvani discovered in 1780 that connecting the spinal cord of a freshly dissected frog to an iron rail attached by a brass hook caused the frog's leg to twitch. He incorrectly thought he had discovered an ability of nerves and muscles to create electricity and called the effect "animal electricity". The galvanic cell and the process of galvanization were both named for Luigi Galvani, and his discoveries paved the way for electrical batteries, galvanization, and cathodic protection. Galvani's friend, Alessandro Volta, continued researching the effect and invented the Voltaic pile in 1800. Volta's pile consisted of a stack of simplified galvanic cells, each being one plate of copper and one of zinc connected by an electrolyte. By stacking these units in series, the Voltaic pile (or "battery") as a whole had a higher voltage, which could be used more easily than single cells. Electricity is produced because the Volta potential between the two metal plates makes electrons flow from the zinc to the copper and corrode the zinc. The non-magnetic character of zinc and its lack of color in solution delayed discovery of its importance to biochemistry and nutrition. This changed in 1940 when carbonic anhydrase, an enzyme that scrubs carbon dioxide from blood, was shown to have zinc in its active site. The digestive enzyme carboxypeptidase became the second known zinc-containing enzyme in 1955. Production Mining and processing Zinc is the fourth most common metal in use, trailing only iron, aluminium, and copper with an annual production of about 13 million tonnes. The world's largest zinc producer is Nyrstar, a merger of the Australian OZ Minerals and the Belgian Umicore. About 70% of the world's zinc originates from mining, while the remaining 30% comes from recycling secondary zinc. Commercially pure zinc is known as Special High Grade, often abbreviated SHG, and is 99.995% pure. 
Worldwide, 95% of new zinc is mined from sulfidic ore deposits, in which sphalerite (ZnS) is nearly always mixed with the sulfides of copper, lead and iron. Zinc mines are scattered throughout the world, with the main areas being China, Australia, and Peru. China produced 38% of the global zinc output in 2014. Zinc metal is produced using extractive metallurgy. The ore is finely ground, then put through froth flotation to separate minerals from gangue (exploiting differences in hydrophobicity), to give a zinc sulfide ore concentrate consisting of about 50% zinc, 32% sulfur, 13% iron, and 5% SiO2. Roasting converts the zinc sulfide concentrate to zinc oxide: 2 ZnS + 3 O2 → 2 ZnO + 2 SO2. The sulfur dioxide is used for the production of sulfuric acid, which is necessary for the leaching process. If deposits of zinc carbonate, zinc silicate, or zinc spinel (like the Skorpion Deposit in Namibia) are used for zinc production, the roasting can be omitted. For further processing two basic methods are used: pyrometallurgy or electrowinning. Pyrometallurgy reduces zinc oxide with carbon or carbon monoxide at about 950 °C into the metal, which is distilled as zinc vapor to separate it from other metals, which are not volatile at those temperatures. The zinc vapor is collected in a condenser. The equations below describe this process: ZnO + C → Zn + CO (at 950 °C) and ZnO + CO → Zn + CO2 (at 950 °C). In electrowinning, zinc is leached from the ore concentrate by sulfuric acid and impurities are precipitated: ZnO + H2SO4 → ZnSO4 + H2O. Finally, the zinc is reduced by electrolysis: 2 ZnSO4 + 2 H2O → 2 Zn + O2 + 2 H2SO4. The sulfuric acid is regenerated and recycled to the leaching step. A rough stoichiometric sketch of these steps appears at the end of this section. When galvanised feedstock is fed to an electric arc furnace, the zinc is recovered from the dust by a number of processes, predominantly the Waelz process (90% as of 2014). Environmental impact Refinement of sulfidic zinc ores produces large volumes of sulfur dioxide and cadmium vapor. Smelter slag and other residues contain significant quantities of metals. About 1.1 million tonnes of metallic zinc and 130 thousand tonnes of lead were mined and smelted in the Belgian towns of La Calamine and Plombières between 1806 and 1882. The dumps of the past mining operations leach zinc and cadmium, and the sediments of the Geul River contain non-trivial amounts of metals. About two thousand years ago, emissions of zinc from mining and smelting totaled 10 thousand tonnes a year. After increasing 10-fold from 1850, zinc emissions peaked at 3.4 million tonnes per year in the 1980s and declined to 2.7 million tonnes in the 1990s, although a 2005 study of the Arctic troposphere found that the concentrations there did not reflect the decline. Man-made and natural emissions occur at a ratio of 20 to 1. Zinc in rivers flowing through industrial and mining areas can be as high as 20 ppm. Effective sewage treatment greatly reduces this; treatment along the Rhine, for example, has decreased zinc levels to 50 ppb. Concentrations of zinc as low as 2 ppm adversely affect the amount of oxygen that fish can carry in their blood. Soils contaminated with zinc from mining, refining, or fertilizing with zinc-bearing sludge can contain several grams of zinc per kilogram of dry soil. Levels of zinc in excess of 500 ppm in soil interfere with the ability of plants to absorb other essential metals, such as iron and manganese. Zinc levels of 2000 ppm to 180,000 ppm (18%) have been recorded in some soil samples.
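As a rough numerical illustration of the roasting and reduction equations given earlier in this section, the sketch below computes the ideal masses involved per tonne of pure sphalerite. It assumes pure ZnS and complete conversion, so the figures are illustrative only; the function name and the 1000 kg input are arbitrary choices, not plant data.

M_Zn, M_S, M_O, M_C = 65.38, 32.06, 16.00, 12.01   # approximate molar masses in g/mol

def zinc_from_sphalerite(mass_zns_kg):
    # Ideal yields from: 2 ZnS + 3 O2 -> 2 ZnO + 2 SO2, then ZnO + C -> Zn + CO.
    mol_zns = mass_zns_kg * 1000.0 / (M_Zn + M_S)   # moles of ZnS in the feed
    return {
        "Zn (kg)": mol_zns * M_Zn / 1000.0,              # one Zn per ZnS
        "SO2 (kg)": mol_zns * (M_S + 2 * M_O) / 1000.0,  # one SO2 per ZnS
        "C needed (kg)": mol_zns * M_C / 1000.0,         # one C per ZnO reduced
    }

print(zinc_from_sphalerite(1000.0))   # per tonne of ZnS: roughly 670 kg Zn, 660 kg SO2, 120 kg C

Real concentrates are only about 50% zinc and conversions are incomplete, so actual plant yields are substantially lower than these ideal figures.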
Applications Major applications of zinc include, with percentages given for the US Galvanizing (55%) Brass and bronze (16%) Other alloys (21%) Miscellaneous (8%) Anti-corrosion and batteries Zinc is most commonly used as an anti-corrosion agent, and galvanization (coating of iron or steel) is the most familiar form. In 2009 in the United States, 55% or 893,000 tons of the zinc metal was used for galvanization. Zinc is more reactive than iron or steel and thus will attract almost all local oxidation until it completely corrodes away. A protective surface layer of oxide and carbonate ( forms as the zinc corrodes. This protection lasts even after the zinc layer is scratched but degrades through time as the zinc corrodes away. The zinc is applied electrochemically or as molten zinc by hot-dip galvanizing or spraying. Galvanization is used on chain-link fencing, guard rails, suspension bridges, lightposts, metal roofs, heat exchangers, and car bodies. The relative reactivity of zinc and its ability to attract oxidation to itself makes it an efficient sacrificial anode in cathodic protection (CP). For example, cathodic protection of a buried pipeline can be achieved by connecting anodes made from zinc to the pipe. Zinc acts as the anode (negative terminus) by slowly corroding away as it passes electric current to the steel pipeline. Zinc is also used to cathodically protect metals that are exposed to sea water. A zinc disc attached to a ship's iron rudder will slowly corrode while the rudder stays intact. Similarly, a zinc plug attached to a propeller or the metal protective guard for the keel of the ship provides temporary protection. With a standard electrode potential (SEP) of −0.76 volts, zinc is used as an anode material for batteries. (More reactive lithium (SEP −3.04 V) is used for anodes in lithium batteries ). Powdered zinc is used in this way in alkaline batteries and the case (which also serves as the anode) of zinc–carbon batteries is formed from sheet zinc. Zinc is used as the anode or fuel of the zinc–air battery/fuel cell. The zinc-cerium redox flow battery also relies on a zinc-based negative half-cell. Alloys A widely used zinc alloy is brass, in which copper is alloyed with anywhere from 3% to 45% zinc, depending upon the type of brass. Brass is generally more ductile and stronger than copper, and has superior corrosion resistance. These properties make it useful in communication equipment, hardware, musical instruments, and water valves. Other widely used zinc alloys include nickel silver, typewriter metal, soft and aluminium solder, and commercial bronze. Zinc is also used in contemporary pipe organs as a substitute for the traditional lead/tin alloy in pipes. Alloys of 85–88% zinc, 4–10% copper, and 2–8% aluminium find limited use in certain types of machine bearings. Zinc has been the primary metal in American one cent coins (pennies) since 1982. The zinc core is coated with a thin layer of copper to give the appearance of a copper coin. In 1994, of zinc were used to produce 13.6 billion pennies in the United States. Alloys of zinc with small amounts of copper, aluminium, and magnesium are useful in die casting as well as spin casting, especially in the automotive, electrical, and hardware industries. These alloys are marketed under the name Zamak. An example of this is zinc aluminium. The low melting point together with the low viscosity of the alloy makes possible the production of small and intricate shapes. 
The low working temperature leads to rapid cooling of the cast products and fast production for assembly. Another alloy, marketed under the brand name Prestal, contains 78% zinc and 22% aluminium, and is reported to be nearly as strong as steel but as malleable as plastic. This superplasticity of the alloy allows it to be molded using die casts made of ceramics and cement. Similar alloys with the addition of a small amount of lead can be cold-rolled into sheets. An alloy of 96% zinc and 4% aluminium is used to make stamping dies for low production run applications for which ferrous metal dies would be too expensive. For building facades, roofing, and other applications for sheet metal formed by deep drawing, roll forming, or bending, zinc alloys with titanium and copper are used. Unalloyed zinc is too brittle for these manufacturing processes. As a dense, inexpensive, easily worked material, zinc is used as a lead replacement. In the wake of lead concerns, zinc appears in weights for various applications ranging from fishing to tire balances and flywheels. Cadmium zinc telluride (CZT) is a semiconductive alloy that can be divided into an array of small sensing devices. These devices are similar to an integrated circuit and can detect the energy of incoming gamma ray photons. When behind an absorbing mask, the CZT sensor array can determine the direction of the rays. Other industrial uses Roughly one quarter of all zinc output in the United States in 2009 was consumed in zinc compounds; a variety of which are used industrially. Zinc oxide is widely used as a white pigment in paints and as a catalyst in the manufacture of rubber to disperse heat. Zinc oxide is used to protect rubber polymers and plastics from ultraviolet radiation (UV). The semiconductor properties of zinc oxide make it useful in varistors and photocopying products. The zinc zinc-oxide cycle is a two step thermochemical process based on zinc and zinc oxide for hydrogen production. Zinc chloride is often added to lumber as a fire retardant and sometimes as a wood preservative. It is used in the manufacture of other chemicals. Zinc methyl () is used in a number of organic syntheses. Zinc sulfide (ZnS) is used in luminescent pigments such as on the hands of clocks, X-ray and television screens, and luminous paints. Crystals of ZnS are used in lasers that operate in the mid-infrared part of the spectrum. Zinc sulfate is a chemical in dyes and pigments. Zinc pyrithione is used in antifouling paints. Zinc powder is sometimes used as a propellant in model rockets. When a compressed mixture of 70% zinc and 30% sulfur powder is ignited there is a violent chemical reaction. This produces zinc sulfide, together with large amounts of hot gas, heat, and light. Zinc sheet metal is used as a durable covering for roofs, walls, and countertops, the last often seen in bistros and oyster bars, and is known for the rustic look imparted by its surface oxidation in use to a blue-gray patina and susceptibility to scratching. , the most abundant isotope of zinc, is very susceptible to neutron activation, being transmuted into the highly radioactive , which has a half-life of 244 days and produces intense gamma radiation. Because of this, zinc oxide used in nuclear reactors as an anti-corrosion agent is depleted of before use, this is called depleted zinc oxide. For the same reason, zinc has been proposed as a salting material for nuclear weapons (cobalt is another, better-known salting material). 
A jacket of isotopically enriched would be irradiated by the intense high-energy neutron flux from an exploding thermonuclear weapon, forming a large amount of significantly increasing the radioactivity of the weapon's fallout. Such a weapon is not known to have ever been built, tested, or used. is used as a tracer to study how alloys that contain zinc wear out, or the path and the role of zinc in organisms. Zinc dithiocarbamate complexes are used as agricultural fungicides; these include Zineb, Metiram, Propineb and Ziram. Zinc naphthenate is used as wood preservative. Zinc in the form of ZDDP, is used as an anti-wear additive for metal parts in engine oil. Organic chemistry Organozinc chemistry is the science of compounds that contain carbon-zinc bonds, describing the physical properties, synthesis, and chemical reactions. Many organozinc compounds are commercially important. Among important applications are: The Frankland-Duppa Reaction in which an oxalate ester (ROCOCOOR) reacts with an alkyl halide R'X, zinc and hydrochloric acid to form α-hydroxycarboxylic esters RR'COHCOOR Organozincs have similar reactivity to Grignard reagents but are much less nucleophilic, and they are expensive and difficult to handle. Organozincs typically perform nucleophilic addition on electrophiles such as aldehydes, which are then reduced to alcohols. Commercially available diorganozinc compounds include dimethylzinc, diethylzinc and diphenylzinc. Like Grignard reagents, organozincs are commonly produced from organobromine precursors. Zinc has found many uses in catalysis in organic synthesis including enantioselective synthesis, being a cheap and readily available alternative to precious metal complexes. Quantitative results (yield and enantiomeric excess) obtained with chiral zinc catalysts can be comparable to those achieved with palladium, ruthenium, iridium and others. Dietary supplement In most single-tablet, over-the-counter, daily vitamin and mineral supplements, zinc is included in such forms as zinc oxide, zinc acetate, zinc gluconate, or zinc amino acid chelate. Generally, zinc supplement is recommended where there is high risk of zinc deficiency (such as low and middle income countries) as a preventive measure. Although zinc sulfate is a commonly used zinc form, zinc citrate, gluconate and picolinate may be valid options as well. These forms are better absorbed than zinc oxide. Gastroenteritis Zinc is an inexpensive and effective part of treatment of diarrhea among children in the developing world. Zinc becomes depleted in the body during diarrhea and replenishing zinc with a 10- to 14-day course of treatment can reduce the duration and severity of diarrheal episodes and may also prevent future episodes for as long as three months. Gastroenteritis is strongly attenuated by ingestion of zinc, possibly by direct antimicrobial action of the ions in the gastrointestinal tract, or by the absorption of the zinc and re-release from immune cells (all granulocytes secrete zinc), or both. Common cold Weight gain Zinc deficiency may lead to loss of appetite. The use of zinc in the treatment of anorexia has been advocated since 1979. At least 15 clinical trials have shown that zinc improved weight gain in anorexia. A 1994 trial showed that zinc doubled the rate of body mass increase in the treatment of anorexia nervosa. Deficiency of other nutrients such as tyrosine, tryptophan and thiamine could contribute to this phenomenon of "malnutrition-induced malnutrition". 
A meta-analysis of 33 prospective intervention trials regarding zinc supplementation and its effects on the growth of children in many countries showed that zinc supplementation alone had a statistically significant effect on linear growth and body weight gain, indicating that other deficiencies that may have been present were not responsible for growth retardation. Other People taking zinc supplements may slow the progression of age-related macular degeneration. Zinc supplementation is an effective treatment for acrodermatitis enteropathica, a genetic disorder affecting zinc absorption that was previously fatal to affected infants. Zinc deficiency has been associated with major depressive disorder (MDD), and zinc supplements may be an effective treatment. Zinc may also help improve sleep. Topical use Topical preparations of zinc include those used on the skin, often in the form of zinc oxide. Zinc oxide is generally recognized by the FDA as safe and effective and is considered very photo-stable. Zinc oxide is one of the most common active ingredients formulated into a sunscreen to mitigate sunburn. Applied thinly to a baby's diaper area (perineum) with each diaper change, it can protect against diaper rash. Chelated zinc is used in toothpastes and mouthwashes to prevent bad breath; zinc citrate helps reduce the build-up of calculus (tartar). Zinc pyrithione is widely included in shampoos to prevent dandruff. Topical zinc has also been shown to effectively treat genital herpes and to prolong its remission. Biological role Zinc is an essential trace element for humans and other animals, for plants and for microorganisms. Zinc is required for the function of over 300 enzymes and 1000 transcription factors, and is stored and transferred in metallothioneins. It is the second most abundant trace metal in humans after iron and it is the only metal which appears in all enzyme classes. In proteins, zinc ions are often coordinated to the amino acid side chains of aspartic acid, glutamic acid, cysteine and histidine. The theoretical and computational description of this zinc binding in proteins (as well as that of other transition metals) is difficult. Roughly 2–4 grams of zinc are distributed throughout the human body. Most zinc is in the brain, muscle, bones, kidney, and liver, with the highest concentrations in the prostate and parts of the eye. Semen is particularly rich in zinc, a key factor in prostate gland function and reproductive organ growth. Zinc homeostasis of the body is mainly controlled by the intestine. Here, ZIP4 and especially TRPM7 have been linked to intestinal zinc uptake essential for postnatal survival. In humans, the biological roles of zinc are ubiquitous. It interacts with "a wide range of organic ligands", and has roles in the metabolism of RNA and DNA, signal transduction, and gene expression. It also regulates apoptosis. A review from 2015 indicated that about 10% of human proteins (~3000) bind zinc, in addition to hundreds more that transport and traffic zinc; a similar in silico study in the plant Arabidopsis thaliana found 2367 zinc-related proteins. In the brain, zinc is stored in specific synaptic vesicles by glutamatergic neurons and can modulate neuronal excitability. It plays a key role in synaptic plasticity and so in learning. Zinc homeostasis also plays a critical role in the functional regulation of the central nervous system.
Dysregulation of zinc homeostasis in the central nervous system that results in excessive synaptic zinc concentrations is believed to induce neurotoxicity through mitochondrial oxidative stress (e.g., by disrupting certain enzymes involved in the electron transport chain, including complex I, complex III, and α-ketoglutarate dehydrogenase), the dysregulation of calcium homeostasis, glutamatergic neuronal excitotoxicity, and interference with intraneuronal signal transduction. L- and D-histidine facilitate brain zinc uptake. SLC30A3 is the primary zinc transporter involved in cerebral zinc homeostasis. Enzymes Zinc is an efficient Lewis acid, making it a useful catalytic agent in hydroxylation and other enzymatic reactions. The metal also has a flexible coordination geometry, which allows proteins using it to rapidly shift conformations to perform biological reactions. Two examples of zinc-containing enzymes are carbonic anhydrase and carboxypeptidase, which are vital to the processes of carbon dioxide () regulation and digestion of proteins, respectively. In vertebrate blood, carbonic anhydrase converts into bicarbonate and the same enzyme transforms the bicarbonate back into for exhalation through the lungs. Without this enzyme, this conversion would occur about one million times slower at the normal blood pH of 7 or would require a pH of 10 or more. The non-related β-carbonic anhydrase is required in plants for leaf formation, the synthesis of indole acetic acid (auxin) and alcoholic fermentation. Carboxypeptidase cleaves peptide linkages during digestion of proteins. A coordinate covalent bond is formed between the terminal peptide and a C=O group attached to zinc, which gives the carbon a positive charge. This helps to create a hydrophobic pocket on the enzyme near the zinc, which attracts the non-polar part of the protein being digested. Signalling Zinc has been recognized as a messenger, able to activate signalling pathways. Many of these pathways provide the driving force in aberrant cancer growth. They can be targeted through ZIP transporters. Other proteins Zinc serves a purely structural role in zinc fingers, twists and clusters. Zinc fingers form parts of some transcription factors, which are proteins that recognize DNA base sequences during the replication and transcription of DNA. Each of the nine or ten ions in a zinc finger helps maintain the finger's structure by coordinately binding to four amino acids in the transcription factor. In blood plasma, zinc is bound to and transported by albumin (60%, low-affinity) and transferrin (10%). Because transferrin also transports iron, excessive iron reduces zinc absorption, and vice versa. A similar antagonism exists with copper. The concentration of zinc in blood plasma stays relatively constant regardless of zinc intake. Cells in the salivary gland, prostate, immune system, and intestine use zinc signaling to communicate with other cells. Zinc may be held in metallothionein reserves within microorganisms or in the intestines or liver of animals. Metallothionein in intestinal cells is capable of adjusting absorption of zinc by 15–40%. However, inadequate or excessive zinc intake can be harmful; excess zinc particularly impairs copper absorption because metallothionein absorbs both metals. The human dopamine transporter contains a high affinity extracellular zinc binding site which, upon zinc binding, inhibits dopamine reuptake and amplifies amphetamine-induced dopamine efflux in vitro. 
The human serotonin transporter and norepinephrine transporter do not contain zinc binding sites. Some EF-hand calcium binding proteins such as S100 or NCS-1 are also able to bind zinc ions. Nutrition Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for zinc in 2001. The current EARs for zinc for women and men ages 14 and up are 6.8 and 9.4 mg/day, respectively. The RDAs are 8 and 11 mg/day. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. The RDA for pregnancy is 11 mg/day. The RDA for lactation is 12 mg/day. For infants up to 12 months the RDA is 3 mg/day. For children ages 1–13 years the RDA increases with age from 3 to 8 mg/day. As for safety, the IOM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of zinc the adult UL is 40 mg/day, including both food and supplements combined (lower for children). Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 18 and older the PRI calculations are complex, as the EFSA has set progressively higher values as the phytate content of the diet increases. For women, PRIs increase from 7.5 to 12.7 mg/day as phytate intake increases from 300 to 1200 mg/day; for men the range is 9.4 to 16.3 mg/day. These PRIs are higher than the U.S. RDAs. The EFSA reviewed the same safety question and set its UL at 25 mg/day, which is much lower than the U.S. value. For U.S. food and dietary supplement labeling purposes the amount in a serving is expressed as a percent of Daily Value (%DV). For zinc labeling purposes 100% of the Daily Value was 15 mg, but on May 27, 2016, it was revised to 11 mg. A table of the old and new adult daily values is provided at Reference Daily Intake. Dietary intake Animal products such as meat, fish, shellfish, fowl, eggs, and dairy contain zinc. The concentration of zinc in plants varies with the level in the soil. With adequate zinc in the soil, the food plants that contain the most zinc are wheat (germ and bran) and various seeds, including sesame, poppy, alfalfa, celery, and mustard. Zinc is also found in beans, nuts, almonds, whole grains, pumpkin seeds, sunflower seeds, and blackcurrant. Other sources include fortified food and dietary supplements in various forms. A 1998 review concluded that zinc oxide, one of the most common supplements in the United States, and zinc carbonate are nearly insoluble and poorly absorbed in the body. This review cited studies that found lower plasma zinc concentrations in the subjects who consumed zinc oxide and zinc carbonate than in those who took zinc acetate and sulfate salts. For fortification, however, a 2003 review recommended cereals (containing zinc oxide) as a cheap, stable source that is as easily absorbed as the more expensive forms. A 2005 study found that various compounds of zinc, including oxide and sulfate, did not show statistically significant differences in absorption when added as fortificants to maize tortillas. Deficiency Nearly two billion people in the developing world are deficient in zinc. 
Groups at risk include children in developing countries and elderly with chronic illnesses. In children, it causes an increase in infection and diarrhea and contributes to the death of about 800,000 children worldwide per year. The World Health Organization advocates zinc supplementation for severe malnutrition and diarrhea. Zinc supplements help prevent disease and reduce mortality, especially among children with low birth weight or stunted growth. However, zinc supplements should not be administered alone, because many in the developing world have several deficiencies, and zinc interacts with other micronutrients. While zinc deficiency is usually due to insufficient dietary intake, it can be associated with malabsorption, acrodermatitis enteropathica, chronic liver disease, chronic renal disease, sickle cell disease, diabetes, malignancy, and other chronic illnesses. In the United States, a federal survey of food consumption determined that for women and men over the age of 19, average consumption was 9.7 and 14.2 mg/day, respectively. For women, 17% consumed less than the EAR, for men 11%. The percentages below EAR increased with age. The most recent published update of the survey (NHANES 2013–2014) reported lower averages – 9.3 and 13.2 mg/day – again with intake decreasing with age. Symptoms of mild zinc deficiency are diverse. Clinical outcomes include depressed growth, diarrhea, impotence and delayed sexual maturation, alopecia, eye and skin lesions, impaired appetite, altered cognition, impaired immune functions, defects in carbohydrate use, and reproductive teratogenesis. Zinc deficiency depresses immunity, but excessive zinc does also. Despite some concerns, western vegetarians and vegans do not suffer any more from overt zinc deficiency than meat-eaters. Major plant sources of zinc include cooked dried beans, sea vegetables, fortified cereals, soy foods, nuts, peas, and seeds. However, phytates in many whole-grains and fibers may interfere with zinc absorption and marginal zinc intake has poorly understood effects. The zinc chelator phytate, found in seeds and cereal bran, can contribute to zinc malabsorption. Some evidence suggests that more than the US RDA (8 mg/day for adult women; 11 mg/day for adult men) may be needed in those whose diet is high in phytates, such as some vegetarians. The European Food Safety Authority (EFSA) guidelines attempt to compensate for this by recommending higher zinc intake when dietary phytate intake is greater. These considerations must be balanced against the paucity of adequate zinc biomarkers, and the most widely used indicator, plasma zinc, has poor sensitivity and specificity. Soil remediation Species of Calluna, Erica and Vaccinium can grow in zinc-metalliferous soils, because translocation of toxic ions is prevented by the action of ericoid mycorrhizal fungi. Agriculture Zinc deficiency appears to be the most common micronutrient deficiency in crop plants; it is particularly common in high-pH soils. Zinc-deficient soil is cultivated in the cropland of about half of Turkey and India, a third of China, and most of Western Australia. Substantial responses to zinc fertilization have been reported in these areas. Plants that grow in soils that are zinc-deficient are more susceptible to disease. 
Zinc is added to the soil primarily through the weathering of rocks, but humans have added zinc through fossil fuel combustion, mine waste, phosphate fertilizers, pesticide (zinc phosphide), limestone, manure, sewage sludge, and particles from galvanized surfaces. Excess zinc is toxic to plants, although zinc toxicity is far less widespread. Precautions Toxicity Although zinc is an essential requirement for good health, excess zinc can be harmful. Excessive absorption of zinc suppresses copper and iron absorption. The free zinc ion in solution is highly toxic to plants, invertebrates, and even vertebrate fish. The Free Ion Activity Model is well-established in the literature, and shows that just micromolar amounts of the free ion kill some organisms. A recent example showed 6 micromolar killing 93% of all Daphnia in water. The free zinc ion is a powerful Lewis acid up to the point of being corrosive. Stomach acid contains hydrochloric acid, in which metallic zinc dissolves readily to give corrosive zinc chloride. Swallowing a post-1982 American one cent piece (97.5% zinc) can cause damage to the stomach lining through the high solubility of the zinc ion in the acidic stomach. Evidence shows that people taking 100–300 mg of zinc daily may suffer induced copper deficiency. A 2007 trial observed that elderly men taking 80 mg daily were hospitalized for urinary complications more often than those taking a placebo. Levels of 100–300 mg may interfere with the use of copper and iron or adversely affect cholesterol. Zinc in excess of 500 ppm in soil interferes with the plant absorption of other essential metals, such as iron and manganese. A condition called the zinc shakes or "zinc chills" can be induced by inhalation of zinc fumes while brazing or welding galvanized materials. Zinc is a common ingredient of denture cream which may contain between 17 and 38 mg of zinc per gram. Disability and even deaths from excessive use of these products have been claimed. The U.S. Food and Drug Administration (FDA) states that zinc damages nerve receptors in the nose, causing anosmia. Cases of anosmia were also reported in the 1930s when zinc preparations were used in a failed attempt to prevent polio infections. On June 16, 2009, the FDA ordered removal of zinc-based intranasal cold products from store shelves. The FDA said the loss of smell can be life-threatening because people with impaired smell cannot detect leaking gas or smoke, and cannot tell if food has spoiled before they eat it. Recent research suggests that the topical antimicrobial zinc pyrithione is a potent heat shock response inducer that may impair genomic integrity with induction of a PARP-dependent energy crisis in cultured human keratinocytes and melanocytes. Poisoning In 1982, the US Mint began minting pennies coated in copper but containing primarily zinc. Zinc pennies pose a risk of zinc toxicosis, which can be fatal. One reported case of chronic ingestion of 425 pennies (over 1 kg of zinc) resulted in death due to gastrointestinal bacterial and fungal sepsis. Another patient who ingested 12 grams of zinc showed only lethargy and ataxia (gross lack of coordination of muscle movements). Several other cases have been reported of humans suffering zinc intoxication by the ingestion of zinc coins. Pennies and other small coins are sometimes ingested by dogs, requiring veterinary removal of the foreign objects. 
The zinc content of some coins can cause zinc toxicity, commonly fatal in dogs through severe hemolytic anemia and liver or kidney damage; vomiting and diarrhea are possible symptoms. Zinc is highly toxic in parrots and poisoning can often be fatal. The consumption of fruit juices stored in galvanized cans has resulted in mass parrot poisonings with zinc. See also List of countries by zinc production Spelter Wet storage stain Zinc alloy electroplating Metal fume fever Piotr Steinkeller Notes References Bibliography External links Zinc Fact Sheet from the U.S. National Institutes of Health History & Etymology of Zinc Statistics and Information from the U.S. Geological Survey Reducing Agents > Zinc American Zinc Association Information about the uses and properties of zinc. ISZB International Society for Zinc Biology, founded in 2008. An international, nonprofit organization bringing together scientists working on the biological actions of zinc. Zinc-UK Founded in 2010 to bring together scientists in the United Kingdom working on zinc. Zinc at The Periodic Table of Videos (University of Nottingham) ZincBind – a database of biological zinc binding sites. Chemical elements Dietary minerals Transition metals Reducing agents Chemical elements with hexagonal close-packed structure Pyrotechnic fuels Native element minerals Alchemical substances
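To make the labeling arithmetic in the Dietary recommendations section above concrete, here is a minimal Python sketch (not part of the article; the constants are the U.S. adult values quoted there, 11 mg for 100% Daily Value and a 40 mg/day Tolerable Upper Intake Level) that converts a serving's zinc content into a percent Daily Value and flags daily totals above the UL.

```python
# Illustrative constants from the "Dietary recommendations" section (U.S. adult values).
ZINC_DV_MG = 11.0   # 100% Daily Value for labeling, as revised in 2016
ZINC_UL_MG = 40.0   # adult Tolerable Upper Intake Level, food plus supplements

def percent_dv(zinc_mg):
    """Percent of the Daily Value supplied by one serving."""
    return 100.0 * zinc_mg / ZINC_DV_MG

def exceeds_ul(total_daily_mg):
    """True if the combined daily intake is above the adult UL."""
    return total_daily_mg > ZINC_UL_MG

if __name__ == "__main__":
    serving = 8.0  # mg of zinc in a hypothetical serving
    print(f"{serving} mg is {percent_dv(serving):.0f}% DV")
    print("Above UL?", exceeds_ul(serving * 6))
```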
Zinc
[ "Physics", "Chemistry" ]
11,697
[ "Chemical elements", "Redox", "Alchemical substances", "Reducing agents", "Atoms", "Matter" ]
34,477
https://en.wikipedia.org/wiki/Ziegler%E2%80%93Natta%20catalyst
A Ziegler–Natta catalyst, named after Karl Ziegler and Giulio Natta, is a catalyst used in the synthesis of polymers of 1-alkenes (alpha-olefins). Two broad classes of Ziegler–Natta catalysts are employed, distinguished by their solubility: Heterogeneous supported catalysts based on titanium compounds are used in polymerization reactions in combination with cocatalysts, organoaluminum compounds such as triethylaluminium, Al(C2H5)3. This class of catalyst dominates the industry. Homogeneous catalysts are usually based on complexes of the group 4 metals titanium, zirconium or hafnium. They are usually used in combination with a different organoaluminum cocatalyst, methylaluminoxane (or methylalumoxane, MAO). These catalysts traditionally contain metallocenes but also feature multidentate oxygen- and nitrogen-based ligands. Ziegler–Natta catalysts are used to polymerize terminal alkenes (ethylene and alkenes with the vinyl double bond): n CH2=CHR → −[CH2−CHR]n− History The 1963 Nobel Prize in Chemistry was awarded to German Karl Ziegler, for his discovery of the first titanium-based catalysts, and Italian Giulio Natta, for using them to prepare stereoregular polymers from propylene. Ziegler–Natta catalysts have been used in the commercial manufacture of various polyolefins since 1956. As of 2010, the total volume of plastics, elastomers, and rubbers produced from alkenes with these and related (especially Phillips) catalysts worldwide exceeds 100 million tonnes. Together, these polymers represent the largest-volume commodity plastics as well as the largest-volume commodity chemicals in the world. In the early 1950s workers at Phillips Petroleum discovered that chromium catalysts are highly effective for the low-temperature polymerization of ethylene, which launched major industrial technologies culminating in the Phillips catalyst. A few years later, Ziegler discovered that a combination of titanium tetrachloride (TiCl4) and diethylaluminium chloride (Al(C2H5)2Cl) gave comparable activities for the production of polyethylene. Natta used crystalline α-TiCl3 in combination with Al(C2H5)3 to produce the first isotactic polypropylene. Usually Ziegler catalysts refer to titanium-based systems for conversions of ethylene and Ziegler–Natta catalysts refer to systems for conversions of propylene. Also, in the 1960s, BASF developed a gas-phase, mechanically-stirred polymerization process for making polypropylene. In that process, the particle bed in the reactor was either not fluidized or not fully fluidized. In 1968, the first gas-phase fluidized-bed polymerization process, the Unipol process, was commercialized by Union Carbide to produce polyethylene. In the mid-1980s, the Unipol process was further extended to produce polypropylene. In the 1970s, magnesium chloride (MgCl2) was discovered to greatly enhance the activity of the titanium-based catalysts. These catalysts were so active that the removal of unwanted amorphous polymer and residual titanium from the product (so-called deashing) was no longer necessary, enabling the commercialization of linear low-density polyethylene (LLDPE) resins and allowing the development of fully amorphous copolymers. The fluidized-bed process remains one of the two most widely used processes for producing polypropylene. Stereochemistry of poly-1-alkenes Natta first used polymerization catalysts based on titanium chlorides to polymerize propylene and other 1-alkenes. 
He discovered that these polymers are crystalline materials and ascribed their crystallinity to a special feature of the polymer structure called stereoregularity. The concept of stereoregularity in polymer chains is illustrated in the picture on the left with polypropylene. Stereoregular poly(1-alkene) can be isotactic or syndiotactic depending on the relative orientation of the alkyl groups in polymer chains consisting of units −[CH2−CHR]−, like the CH3 groups in the figure. In the isotactic polymers, all stereogenic centers CHR share the same configuration. The stereogenic centers in syndiotactic polymers alternate their relative configuration. A polymer that lacks any regular arrangement in the position of its alkyl substituents (R) is called atactic. Both isotactic and syndiotactic polypropylene are crystalline, whereas atactic polypropylene, which can also be prepared with special Ziegler–Natta catalysts, is amorphous. The stereoregularity of the polymer is determined by the catalyst used to prepare it. Classes Heterogeneous catalysts The first and dominant class of titanium-based catalysts (and some vanadium-based catalysts) for alkene polymerization can be roughly subdivided into two subclasses: catalysts suitable for homopolymerization of ethylene and for ethylene/1-alkene copolymerization reactions leading to copolymers with a low 1-alkene content, 2–4 mol% (LLDPE resins), and catalysts suitable for the synthesis of isotactic 1-alkenes. The overlap between these two subclasses is relatively small because the requirements to the respective catalysts differ widely. Commercial catalysts are supported by being bound to a solid with a high surface area. Both TiCl4 and TiCl3 give active catalysts. The support in the majority of the catalysts is MgCl2. A third component of most catalysts is a carrier, a material that determines the size and the shape of catalyst particles. The preferred carrier is microporous spheres of amorphous silica with a diameter of 30–40 mm. During the catalyst synthesis, both the titanium compounds and MgCl2 are packed into the silica pores. All these catalysts are activated with organoaluminum compounds such as Al(C2H5)3. All modern supported Ziegler–Natta catalysts designed for polymerization of propylene and higher 1-alkenes are prepared with TiCl4 as the active ingredient and MgCl2 as a support. Another component of all such catalysts is an organic modifier, usually an ester of an aromatic diacid or a diether. The modifiers react both with inorganic ingredients of the solid catalysts as well as with organoaluminum cocatalysts. These catalysts polymerize propylene and other 1-alkenes to highly crystalline isotactic polymers. Homogeneous catalysts A second class of Ziegler–Natta catalysts are soluble in the reaction medium. Traditionally such homogeneous catalysts were derived from metallocenes, but the structures of active catalysts have been significantly broadened to include nitrogen-based ligands. Metallocene catalysts These catalysts are metallocenes together with a cocatalyst, typically MAO, −[O−Al(CH3)]n−. The idealized metallocene catalysts have the composition Cp2MCl2 (M = Ti, Zr, Hf) such as titanocene dichloride. Typically, the organic ligands are derivatives of cyclopentadienyl. In some complexes, the two cyclopentadiene (Cp) rings are linked with bridges, like −CH2−CH2− or >SiPh2. 
Depending on the type of their cyclopentadienyl ligands, for example by using an ansa-bridge, metallocene catalysts can produce either isotactic or syndiotactic polymers of propylene and other 1-alkenes. Non-metallocene catalysts Ziegler–Natta catalysts of the third class, non-metallocene catalysts, use a variety of complexes of various metals, ranging from scandium to lanthanoid and actinoid metals, and a large variety of ligands containing oxygen (O), nitrogen (N), phosphorus (P), and sulfur (S). The complexes are activated using MAO, as is done for metallocene catalysts. Most Ziegler–Natta catalysts and all the alkylaluminium cocatalysts are unstable in air, and the alkylaluminium compounds are pyrophoric. The catalysts, therefore, are always prepared and handled under an inert atmosphere. Mechanism of Ziegler–Natta polymerization The structure of active centers in Ziegler–Natta catalysts is well established only for metallocene catalysts. An idealized and simplified metallocene complex Cp2ZrCl2 represents a typical precatalyst. It is unreactive toward alkenes. The dihalide reacts with MAO and is transformed into a metallocenium ion Cp2Zr+–CH3, which is ion-paired to some derivative(s) of MAO. A polymer molecule grows by numerous insertion reactions of C=C bonds of 1-alkene molecules into the Zr–C bond in the ion: Many thousands of alkene insertion reactions occur at each active center resulting in the formation of long polymer chains attached to the center. The Cossee–Arlman mechanism describes the growth of stereospecific polymers. This mechanism states that the polymer grows through alkene coordination at a vacant site at the titanium atom, which is followed by insertion of the C=C bond into the Ti−C bond at the active center. Termination processes On occasion, the polymer chain is disengaged from the active centers in the chain termination reaction. Several pathways exist for termination: Cp2Zr+−(CH2−CHR)n−CH3 + CH2=CHR → Cp2Zr+−CH2−CH2R + CH2=CR–polymer Another type of chain termination reaction called a β-hydride elimination reaction also occurs periodically: Cp2Zr+−(CH2−CHR)n−CH3 → Cp2Zr+−H + CH2=CR–polymer Polymerization reactions of alkenes with solid titanium-based catalysts occur at special titanium centers located on the exterior of the catalyst crystallites. Some titanium atoms in these crystallites react with organoaluminum cocatalysts with the formation of Ti–C bonds. The polymerization reaction of alkenes occurs similarly to the reactions in metallocene catalysts: LnTi–CH2−CHR–polymer + CH2=CHR → LnTi–CH2−CHR–CH2−CHR–polymer The two chain termination reactions occur quite rarely in Ziegler–Natta catalysis and the formed polymers have too high a molecular weight to be of commercial use. To reduce the molecular weight, hydrogen is added to the polymerization reaction: LnTi–CH2−CHR–polymer + H2 → LnTi−H + CH3−CHR–polymer Another termination process involves the action of protic (acidic) reagents, which can be intentionally added or adventitious. Commercial polymers prepared with Ziegler–Natta catalysts Polyethylene Polypropylene Copolymers of ethylene and 1-alkenes Polybutene-1 Polymethylpentene Polycycloolefins Polybutadiene Polyisoprene Amorphous poly-alpha-olefins (APAO) Polyacetylene References Further reading Coordination complexes Catalysts Polymer chemistry Industrial processes 1953 in science 1953 in West Germany
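The Termination processes subsection above notes that hydrogen is deliberately added so that chain transfer shortens the polymer; a rough way to see the effect is the toy Monte Carlo sketch below (an illustration under simplified assumptions, not a model given in the article), in which each step of a growing chain is either a monomer insertion or a chain-terminating transfer, so the mean chain length is roughly the inverse of the transfer probability.

```python
import random

def mean_chain_length(p_transfer, n_chains=10000, seed=0):
    """Toy kinetic model: at every step a growing chain either inserts one
    monomer (probability 1 - p_transfer) or is terminated by chain transfer,
    e.g. to hydrogen (probability p_transfer)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_chains):
        length = 1
        while rng.random() > p_transfer:
            length += 1
        total += length
    return total / n_chains

# A larger transfer probability (more hydrogen) gives shorter chains:
for p in (0.001, 0.01, 0.1):
    print(p, mean_chain_length(p))   # mean length is about 1 / p_transfer
```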
Ziegler–Natta catalyst
[ "Chemistry", "Materials_science", "Engineering" ]
2,477
[ "Catalysis", "Catalysts", "Coordination complexes", "Coordination chemistry", "Materials science", "Polymer chemistry", "Chemical kinetics" ]
34,521
https://en.wikipedia.org/wiki/Z%20notation
The Z notation is a formal specification language used for describing and modelling computing systems. It is targeted at the clear specification of computer programs and computer-based systems in general. History In 1974, Jean-Raymond Abrial published "Data Semantics". He used a notation that would later be taught in the University of Grenoble until the end of the 1980s. While at EDF (Électricité de France), working with Bertrand Meyer, Abrial also worked on developing Z. The Z notation is used in the 1980 book Méthodes de programmation. Z was originally proposed by Abrial in 1977 with the help of Steve Schuman and Bertrand Meyer. It was developed further at the Programming Research Group at Oxford University, where Abrial worked in the early 1980s, having arrived at Oxford in September 1979. Abrial has said that Z is so named "Because it is the ultimate language!" although the name "Zermelo" is also associated with the Z notation through its use of Zermelo–Fraenkel set theory. In 1992, the Z User Group (ZUG) was established to oversee activities concerning the Z notation, especially meetings and conferences. Usage and notation Z is based on the standard mathematical notation used in axiomatic set theory, lambda calculus, and first-order predicate logic. All expressions in Z notation are typed, thereby avoiding some of the paradoxes of naive set theory. Z contains a standardized catalogue (called the mathematical toolkit) of commonly used mathematical functions and predicates, defined using Z itself. It is augmented with Z schema boxes, which can be combined using their own operators, based on standard logical operators, and also by including schemas within other schemas. This allows Z specifications to be built up into large specifications in a convenient manner. Because Z notation (just like the APL language, long before it) uses many non-ASCII symbols, the specification includes suggestions for rendering the Z notation symbols in ASCII and in LaTeX. There are also Unicode encodings for all standard Z symbols. Standards ISO completed a Z standardization effort in 2002. This standard and a technical corrigendum are available from ISO free: the standard is publicly available from the ISO ITTF site free of charge and, separately, available for purchase from the ISO site; the technical corrigendum is available from the ISO site free of charge. Award In 1992, Oxford University Computing Laboratory and IBM were jointly awarded The Queen's Award for Technological Achievement "for the development of ... the Z notation, and its application in the IBM Customer Information Control System (CICS) product." See also Z User Group (ZUG) Community Z Tools (CZT) project Other formal methods (and languages using formal specifications): VDM-SL, the main alternative to Z B-Method, developed by Jean-Raymond Abrial (creator of Z notation) Z++ and Object-Z, object extensions for the Z notation Alloy, a specification language inspired by Z notation and implementing the principles of Object Constraint Language (OCL). Verus, a proprietary tool built by Compion, Champaign, Illinois (later purchased by Motorola), for use in the multi-level secure UNIX project pioneered by its Addamax division. Fastest, a model-based testing tool for the Z notation. Unified Modeling Language, a software system design modeling tool by Object Management Group References Further reading Computer-related introductions in 1977 Specification languages Formal specification languages Oxford University Computing Laboratory
Z notation
[ "Mathematics", "Engineering" ]
718
[ "Software engineering", "Specification languages", "Z notation" ]
1,197,531
https://en.wikipedia.org/wiki/Hamiltonian%20system
A Hamiltonian system is a dynamical system governed by Hamilton's equations. In physics, this dynamical system describes the evolution of a physical system such as a planetary system or an electron in an electromagnetic field. These systems can be studied in both Hamiltonian mechanics and dynamical systems theory. Overview Informally, a Hamiltonian system is a mathematical formalism developed by Hamilton to describe the evolution equations of a physical system. The advantage of this description is that it gives important insights into the dynamics, even if the initial value problem cannot be solved analytically. One example is the planetary movement of three bodies: while there is no closed-form solution to the general problem, Poincaré showed for the first time that it exhibits deterministic chaos. Formally, a Hamiltonian system is a dynamical system characterised by the scalar function H(q, p, t), also known as the Hamiltonian. The state of the system, r, is described by the generalized coordinates p and q, corresponding to generalized momentum and position respectively. Both p and q are real-valued vectors with the same dimension N. Thus, the state is completely described by the 2N-dimensional vector r = (q, p), and the evolution equations are given by Hamilton's equations: dq/dt = +∂H/∂p, dp/dt = −∂H/∂q. The trajectory r(t) is the solution of the initial value problem defined by Hamilton's equations and the initial condition r(0) = r0. Time-independent Hamiltonian systems If the Hamiltonian is not explicitly time-dependent, i.e. if H = H(q, p), then the Hamiltonian does not vary with time at all: dH/dt = (∂H/∂q)·(dq/dt) + (∂H/∂p)·(dp/dt) = (∂H/∂q)·(∂H/∂p) − (∂H/∂p)·(∂H/∂q) = 0, and thus the Hamiltonian is a constant of motion, whose constant equals the total energy of the system: H = E. Examples of such systems are the undamped pendulum, the harmonic oscillator, and dynamical billiards. Example An example of a time-independent Hamiltonian system is the harmonic oscillator. Consider the system defined by the coordinates p = mv and q = x. Then the Hamiltonian is given by H = p²/(2m) + kq²/2. The Hamiltonian of this system does not depend on time and thus the energy of the system is conserved. Symplectic structure One important property of a Hamiltonian dynamical system is that it has a symplectic structure. Writing r = (q, p), the evolution equation of the dynamical system can be written as dr/dt = J ∇H(r), where J = [[0, IN], [−IN, 0]] and IN is the N×N identity matrix. One important consequence of this property is that an infinitesimal phase-space volume is preserved. A corollary of this is Liouville's theorem, which states that on a Hamiltonian system, the phase-space volume of a closed surface is preserved under time evolution: d/dt Vol(V) = d/dt ∫V dV = ∮S (dr/dt · n̂) dS = ∫V ∇·(J ∇H) dV = 0, where the third equality comes from the divergence theorem and the final step follows because the divergence ∇·(J ∇H) vanishes identically for a Hamiltonian flow. Hamiltonian chaos Certain Hamiltonian systems exhibit chaotic behavior. When the evolution of a Hamiltonian system is highly sensitive to initial conditions, and the motion appears random and erratic, the system is said to exhibit Hamiltonian chaos. Origins The concept of chaos in Hamiltonian systems has its roots in the works of Henri Poincaré, who in the late 19th century made pioneering contributions to the understanding of the three-body problem in celestial mechanics. Poincaré showed that even a simple gravitational system of three bodies could exhibit complex behavior that could not be predicted over the long term. His work is considered to be one of the earliest explorations of chaotic behavior in physical systems. Characteristics Hamiltonian chaos is characterized by the following features: Sensitivity to Initial Conditions: A hallmark of chaotic systems, small differences in initial conditions can lead to vastly different trajectories. 
This is known as the butterfly effect. Mixing: Over time, the phases of the system become uniformly distributed in phase space. Recurrence: Though unpredictable, the system eventually revisits states that are arbitrarily close to its initial state, known as Poincaré recurrence. Hamiltonian chaos is also associated with the presence of chaotic invariants such as the Lyapunov exponent and Kolmogorov-Sinai entropy, which quantify the rate at which nearby trajectories diverge and the complexity of the system, respectively. Applications Hamiltonian chaos is prevalent in many areas of physics, particularly in classical mechanics and statistical mechanics. For instance, in plasma physics, the behavior of charged particles in a magnetic field can exhibit Hamiltonian chaos, which has implications for nuclear fusion and astrophysical plasmas. Moreover, in quantum mechanics, Hamiltonian chaos is studied through quantum chaos, which seeks to understand the quantum analogs of classical chaotic behavior. Hamiltonian chaos also plays a role in astrophysics, where it is used to study the dynamics of star clusters and the stability of galactic structures. Examples Dynamical billiards Planetary systems, more specifically, the n-body problem. Canonical general relativity See also Action-angle coordinates Liouville's theorem Integrable system Symplectic manifold Kolmogorov–Arnold–Moser theorem Poincaré recurrence theorem Lyapunov exponent Three-body problem Ergodic theory References Further reading Almeida, A. M. (1992). Hamiltonian systems: Chaos and quantization. Cambridge monographs on mathematical physics. Cambridge (u.a.: Cambridge Univ. Press) Audin, M., (2008). Hamiltonian systems and their integrability. Providence, R.I: American Mathematical Society, Dickey, L. A. (2003). Soliton equations and Hamiltonian systems. Advanced series in mathematical physics, v. 26. River Edge, NJ: World Scientific. Treschev, D., & Zubelevich, O. (2010). Introduction to the perturbation theory of Hamiltonian systems. Heidelberg: Springer Zaslavsky, G. M. (2007). The physics of chaos in Hamiltonian systems. London: Imperial College Press. External links Hamiltonian mechanics
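To accompany the harmonic-oscillator example and Hamilton's equations given above, here is a minimal Python sketch (illustrative only; the unit mass and spring constant are arbitrary choices) that integrates dq/dt = ∂H/∂p and dp/dt = −∂H/∂q with a symplectic Euler step and checks that the time-independent Hamiltonian stays nearly constant, as the conservation argument requires.

```python
def hamiltonian(q, p, m=1.0, k=1.0):
    """H = p^2/(2m) + k*q^2/2 for the harmonic oscillator."""
    return p * p / (2.0 * m) + 0.5 * k * q * q

def symplectic_euler(q, p, dt, m=1.0, k=1.0):
    """One step of dp/dt = -dH/dq, dq/dt = +dH/dp, updating p first;
    this scheme preserves the symplectic structure of the flow."""
    p = p - dt * k * q
    q = q + dt * p / m
    return q, p

q, p, dt = 1.0, 0.0, 0.01
e0 = hamiltonian(q, p)
for _ in range(100_000):
    q, p = symplectic_euler(q, p, dt)
print(f"relative energy drift after 100000 steps: {(hamiltonian(q, p) - e0) / e0:.2e}")
```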
Hamiltonian system
[ "Physics", "Mathematics" ]
1,157
[ "Hamiltonian mechanics", "Theoretical physics", "Classical mechanics", "Dynamical systems" ]
1,197,767
https://en.wikipedia.org/wiki/Thermal%20interface%20material
A thermal interface material (shortened to TIM) is any material that is inserted between two components in order to enhance the thermal coupling between them. A common use is heat dissipation, in which the TIM is inserted between a heat-producing device (e.g. an integrated circuit) and a heat-dissipating device (e.g. a heat sink). Several kinds of TIM are being intensively studied for different target applications. At each interface, a thermal resistance exists and impedes heat dissipation. In addition, the electronic performance and device lifetime can degrade dramatically under continuous overheating and large thermal stress at the interfaces. Many recent efforts have been dedicated to developing and improving TIMs: these efforts include minimizing the thermal boundary resistance between layers and enhancing thermal management performance, while addressing application requirements such as low thermal stress between materials of different thermal expansion coefficients, low elastic modulus or viscosity, as well as ensuring flexibility and reusability. Thermal paste: Mostly used in the electronics industry, thermal pastes provide a very thin bond line and therefore a very small thermal resistance. They have no mechanical strength (other than the surface tension of the paste and the resulting adhesive effect) and require an external mechanical fixation mechanism. Because they do not cure, thermal pastes are typically only used where the material can be contained, or in thin applications where the viscosity of the paste will allow it to stay in position during use. Thermal adhesive: As with thermal pastes, thermal adhesives provide a very thin bond line, but provide additional mechanical strength to the bond after curing. While curing TIMs like thermal adhesives may be used outside of a semiconductor package, often they are used inside a thermal package, as their curing properties can improve reliability over different thermal stresses. Thermal adhesives come in both single-part formulations as well as two-part formulations, often containing additives to improve thermal conductivity, including solid fillers (metal oxides, carbon black, carbon nanotubes, etc.), or liquid metal droplets. Thermal gap filler: This could be described as "curing thermal paste" or "non-adhesive thermal glue". It provides thicker bond lines than thermal paste, as it cures while still allowing easy disassembly, thanks to limited adhesiveness. Thermally conductive pad: As opposed to previous TIMs that come in a fluidic form, thermal pads are manufactured and used in a solid state (albeit often soft). Mostly made of silicone or silicone-like material, thermal pads have the advantage of being easy to apply. They provide thicker bond lines (ranging in thickness from larger than a few hundred μm to a few mm) to accommodate non-flat interfaces and even multi-component interfaces, but will usually need higher force to press the heat sink onto the heat source, so that the thermal pad conforms to the bonded surfaces. Thermal tape: These materials adhere to the bonded surfaces, require no curing time, and are easy to apply. Similar to thermal pads, they are typically shipped in a solid but flexible form and come in a variety of thicknesses larger than a few hundred μm. Phase-change materials (PCM): Naturally sticky materials, used in place of thermal pastes. Their application is similar to that of solid pads. 
After reaching its melting point of 55–60 °C, the material changes to a semi-liquid state and fills all gaps between the heat source and the heat sink. Metal thermal interface materials (metal TIMs): Metallic materials offer substantially higher bulk thermal conductivity as well as the lowest thermal interface resistance. This high conductivity translates to less sensitivity to bondline thicknesses and coplanarity issues than polymeric TIMs. Common metals used as TIMs include the relatively soft and compliant indium alloys, as well as sintered silver. See also Heat sink Heat spreader References Computer hardware cooling Heat exchangers Heat transfer Materials science
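As a rough numerical illustration of why bond line thickness matters (a sketch with invented numbers, not data from the article), the temperature drop across a TIM layer can be estimated from its bulk conductivity, thickness, and contact area using the one-dimensional conduction resistance R = t / (k·A), ignoring the contact resistances at the two surfaces.

```python
def tim_temperature_rise(power_w, thickness_m, conductivity_w_mk, area_m2):
    """Temperature drop across a TIM layer from the 1-D conduction
    resistance R = t / (k * A); contact resistances are neglected."""
    resistance_k_per_w = thickness_m / (conductivity_w_mk * area_m2)
    return power_w * resistance_k_per_w

# Hypothetical 100 W device with a 15 mm x 15 mm contact area and k = 5 W/(m*K):
area = 0.015 * 0.015
print(tim_temperature_rise(100.0, 100e-6, 5.0, area))  # thin, paste-like bond line
print(tim_temperature_rise(100.0, 1e-3, 5.0, area))    # thick, pad-like bond line
```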
Thermal interface material
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
825
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Applied and interdisciplinary physics", "Chemical equipment", "Materials science", "Thermodynamics", "Heat exchangers", "nan" ]
1,197,980
https://en.wikipedia.org/wiki/Downregulation%20and%20upregulation
In biochemistry, in the biological context of organisms' regulation of gene expression and production of gene products, downregulation is the process by which a cell decreases the production and quantities of its cellular components, such as RNA and proteins, in response to an external stimulus. The complementary process that involves increase in quantities of cellular components is called upregulation. An example of downregulation is the cellular decrease in the expression of a specific receptor in response to its increased activation by a molecule, such as a hormone or neurotransmitter, which reduces the cell's sensitivity to the molecule. This is an example of a locally acting (negative feedback) mechanism. An example of upregulation is the response of liver cells exposed to such xenobiotic molecules as dioxin. In this situation, the cells increase their production of cytochrome P450 enzymes, which in turn increases degradation of these dioxin molecules. Downregulation or upregulation of an RNA or protein may also arise by an epigenetic alteration. Such an epigenetic alteration can cause expression of the RNA or protein to no longer respond to an external stimulus. This occurs, for instance, during drug addiction or progression to cancer. Downregulation and upregulation of receptors All living cells have the ability to receive and process signals that originate outside their membranes, which they do by means of proteins called receptors, often located at the cell's surface imbedded in the plasma membrane. When such signals interact with a receptor, they effectively direct the cell to do something, such as dividing, dying, or allowing substances to be created, or to enter or exit the cell. A cell's ability to respond to a chemical message depends on the presence of receptors tuned to that message. The more receptors a cell has that are tuned to the message, the more the cell will respond to it. Receptors are created, or expressed, from instructions in the DNA of the cell, and they can be increased, or upregulated, when the signal is weak, or decreased, or downregulated, when it is strong. Their level can also be up or down regulated by modulation of systems that degrade receptors when they are no longer required by the cell. Downregulation of receptors can also occur when receptors have been chronically exposed to an excessive amount of a ligand, either from endogenous mediators or from exogenous drugs. This results in ligand-induced desensitization or internalization of that receptor. This is typically seen in animal hormone receptors. Upregulation of receptors, on the other hand, can result in super-sensitized cells, especially after repeated exposure to an antagonistic drug or prolonged absence of the ligand. Some receptor agonists may cause downregulation of their respective receptors, while most receptor antagonists temporarily upregulate their respective receptors. The disequilibrium caused by these changes often causes withdrawal when the long-term use of a drug is discontinued. Upregulation and downregulation can also happen as a response to toxins or hormones. An example of upregulation in pregnancy is hormones that cause cells in the uterus to become more sensitive to oxytocin. Example: Insulin receptor downregulation Elevated levels of the hormone insulin in the blood trigger downregulation of the associated receptors. 
When insulin binds to its receptors on the surface of a cell, the hormone receptor complex undergoes endocytosis and is subsequently attacked by intracellular lysosomal enzymes. The internalization of the insulin molecules provides a pathway for degradation of the hormone, as well as for regulation of the number of sites that are available for binding on the cell surface. At high plasma concentrations, the number of surface receptors for insulin is gradually reduced by the accelerated rate of receptor internalization and degradation brought about by increased hormonal binding. The rate of synthesis of new receptors within the endoplasmic reticulum and their insertion in the plasma membrane do not keep pace with their rate of destruction. Over time, this self-induced loss of target cell receptors for insulin reduces the target cell's sensitivity to the elevated hormone concentration. This process is illustrated by the insulin receptor sites on target cells, e.g. liver cells, in a person with type 2 diabetes. Due to the elevated levels of blood glucose in an individual, the β-cells (islets of Langerhans) in the pancreas must release more insulin than normal to meet the demand and return the blood to homeostatic levels. The near-constant increase in blood insulin levels results from an effort to match the increase in blood glucose, which will cause receptor sites on the liver cells to downregulate and decrease the number of receptors for insulin, increasing the subject's resistance by decreasing sensitivity to this hormone. There is also a hepatic decrease in sensitivity to insulin. This can be seen in the continuing gluconeogenesis in the liver even when blood glucose levels are elevated. This is the more common process of insulin resistance, which leads to adult-onset diabetes. Another example can be seen in diabetes insipidus, in which the kidneys become insensitive to arginine vasopressin. Drug addiction Family-based, adoption, and twin studies have indicated that there is a strong (50%) heritable component to vulnerability to substance abuse addiction. Especially among genetically vulnerable individuals, repeated exposure to a drug of abuse in adolescence or adulthood causes addiction by inducing stable downregulation or upregulation in expression of specific genes and microRNAs through epigenetic alterations. Such downregulation or upregulation has been shown to occur in the brain's reward regions, such as the nucleus accumbens. Cancer DNA damage appears to be the primary underlying cause of cancer. DNA damage can also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations can give rise to cancer (see malignant neoplasms). Investigation of epigenetic down- or upregulation of repaired DNA genes as possibly central to progression of cancer has been regularly undertaken since 2000. Epigenetic downregulation of the DNA repair gene MGMT occurs in 93% of bladder cancers, 88% of stomach cancers, 74% of thyroid cancers, 40–90% of colorectal cancers, and 50% of brain cancers. Similarly, epigenetic downregulation of LIG4 occurs in 82% of colorectal cancers and epigenetic downregulation of NEIL1 occurs in 62% of head and neck cancers and in 42% of non-small-cell lung cancers. Epigenetic upregulation of the DNA repair genes PARP1 and FEN1 occurs in numerous cancers (see Regulation of transcription in cancer). PARP1 and FEN1 are essential genes in the error-prone and mutagenic DNA repair pathway microhomology-mediated end joining. 
If this pathway is upregulated, the excess mutations it causes can lead to cancer. PARP1 is over-expressed in tyrosine kinase-activated leukemias, in neuroblastoma, in testicular and other germ cell tumors, and in Ewing's sarcoma. FEN1 is upregulated in the majority of cancers of the breast, prostate, stomach, neuroblastomas, pancreas, and lung. See also Regulation of gene expression Transcriptional regulation Enhancer (genetics) References External links Molecular biology Genetics Cell biology
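The insulin example above turns on the balance between receptor synthesis and ligand-driven internalization; one minimal way to picture that balance (an illustrative model, not one given in the article) is a single ordinary differential equation for the surface receptor count with a removal rate that grows with ligand concentration.

```python
def steady_state_receptors(ligand, synthesis=10.0, k_basal=0.01, k_ligand=0.05,
                           r0=1000.0, dt=0.1, steps=20000):
    """Forward-Euler integration of dR/dt = synthesis - (k_basal + k_ligand*ligand)*R.
    Higher ligand means faster internalization and a lower steady-state
    receptor number, i.e. downregulation.  All parameter values are arbitrary."""
    r = r0
    for _ in range(steps):
        r += dt * (synthesis - (k_basal + k_ligand * ligand) * r)
    return r

for ligand in (0.0, 0.5, 2.0):   # arbitrary ligand levels
    print(ligand, round(steady_state_receptors(ligand)))
```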
Downregulation and upregulation
[ "Chemistry", "Biology" ]
1,555
[ "Biochemistry", "Cell biology", "Genetics", "Molecular biology" ]
1,198,458
https://en.wikipedia.org/wiki/Boiling%20chip
A boiling chip, boiling stone, porous bit, or anti-bumping granule is a tiny, unevenly shaped piece of substance added to liquids to make them boil more calmly. Boiling chips are frequently employed in distillation and heating. When a liquid becomes superheated, a speck of dust or a stirring rod can cause violent flash boiling. Boiling chips provide nucleation sites so the liquid boils smoothly without becoming superheated or bumping. Use Boiling chips should not be added to liquid that is already near its boiling point, as this could also induce flash boiling. Boiling chips should not be used when cooking unless they are suitable for food-grade applications. The structure of a boiling chip traps liquid while in use, meaning that boiling chips cannot be re-used in laboratory setups. They also do not work well under vacuum; if a solution is boiling under vacuum, it is best to constantly stir it instead. Materials Boiling chips are typically made of a porous material, such as alumina, silicon carbide, calcium carbonate, calcium sulfate, porcelain or carbon, and often have a nonreactive coating of PTFE. This ensures that the boiling chips will provide effective nucleation sites, yet are chemically inert. In less demanding situations, like school laboratories, pieces of broken porcelainware or glassware are often used. References Laboratory equipment Phase transitions
Boiling chip
[ "Physics", "Chemistry" ]
283
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Statistical mechanics", "Matter" ]
1,198,956
https://en.wikipedia.org/wiki/Conditional%20convergence
In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely. Definition More precisely, a series of real numbers Σ an is said to converge conditionally if the limit lim (m→∞) Σ (n=1 to m) an exists (as a finite real number, i.e. not +∞ or −∞), but Σ (n=1 to ∞) |an| = ∞. A classic example is the alternating harmonic series given by 1 − 1/2 + 1/3 − 1/4 + 1/5 − ... = Σ (n=1 to ∞) (−1)^(n+1)/n, which converges to ln 2, but is not absolutely convergent (see Harmonic series). Bernhard Riemann proved that a conditionally convergent series may be rearranged to converge to any value at all, including ∞ or −∞; see Riemann series theorem. Agnew's theorem describes rearrangements that preserve convergence for all convergent series. The Lévy–Steinitz theorem identifies the set of values to which a series of terms in Rn can converge. A typical conditionally convergent integral is that on the non-negative real axis of sin(x²) (see Fresnel integral). See also Absolute convergence Unconditional convergence References Walter Rudin, Principles of Mathematical Analysis (McGraw-Hill: New York, 1964). Mathematical series Integral calculus Convergence (mathematics) Summability theory
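A quick numerical illustration of the definition above (not part of the original entry): the partial sums of the alternating harmonic series settle toward ln 2, while the partial sums of the absolute values keep growing.

```python
import math

def alternating_partial_sum(m):
    """Sum of (-1)^(n+1)/n for n = 1..m."""
    return sum((-1) ** (n + 1) / n for n in range(1, m + 1))

def absolute_partial_sum(m):
    """Sum of 1/n for n = 1..m (the harmonic series, which diverges)."""
    return sum(1.0 / n for n in range(1, m + 1))

for m in (10, 1000, 100000):
    print(m, alternating_partial_sum(m), absolute_partial_sum(m))
print("ln 2 =", math.log(2))
```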
Conditional convergence
[ "Mathematics" ]
232
[ "Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Series (mathematics)", "Calculus", "Mathematical objects", "Mathematical relations", "Integral calculus" ]
1,199,421
https://en.wikipedia.org/wiki/Frequency%20multiplier
In electronics, a frequency multiplier is an electronic circuit that generates an output signal whose frequency is a harmonic (multiple) of its input frequency. Frequency multipliers consist of a nonlinear circuit that distorts the input signal and consequently generates harmonics of the input signal. A subsequent bandpass filter selects the desired harmonic frequency and removes the unwanted fundamental and other harmonics from the output. Frequency multipliers are often used in frequency synthesizers and communications circuits. It can be more economical to develop a lower frequency signal with lower power and less expensive devices, and then use a frequency multiplier chain to generate an output frequency in the microwave or millimeter wave range. Some modulation schemes, such as frequency modulation, survive the nonlinear distortion without ill effect (but schemes such as amplitude modulation do not). Frequency multiplication is also used in nonlinear optics. The nonlinear distortion in crystals can be used to generate harmonics of laser light. Theory A pure sine wave has a single frequency f. If the sine wave is applied to a linear circuit, such as a distortion-free amplifier, the output is still a sine wave (but may acquire a phase shift). However, if the sine wave is applied to a nonlinear circuit, the resulting distortion creates harmonics; frequency components at integer multiples nf of the fundamental frequency f. The distorted signal can be described by a Fourier series in f: x(t) = Σ (k=−∞ to ∞) ck e^(j2πkft). The nonzero ck represent the generated harmonics. The Fourier coefficients are given by integrating over the fundamental period T: ck = (1/T) ∫ (0 to T) x(t) e^(−j2πkft) dt. So a frequency multiplier can be built from a nonlinear electronic component which generates a series of harmonics, followed by a bandpass filter which passes one of the harmonics to the output and blocks the others. From a conversion efficiency standpoint, the nonlinear circuit should maximize the coefficient for the desired harmonic and minimize the others. Consequently, the transfer function is often specially chosen. Easy choices are to use an even function to generate even harmonics or an odd function for odd harmonics. See Even and odd functions#Harmonics. A full wave rectifier, for example, is good for making a doubler. To produce a times-3 multiplier, the original signal may be input to an amplifier that is overdriven to produce nearly a square wave. This signal is high in third-order harmonics and can be filtered to produce the desired ×3 outcome. YIG multipliers often need to select an arbitrary harmonic, so they use a stateful distortion circuit that converts the input sine wave into an approximate impulse train. The ideal (but impractical) impulse train generates an infinite number of (weak) harmonics. In practice, an impulse train generated by a monostable circuit will have many usable harmonics. YIG multipliers using step recovery diodes may, for example, take an input frequency of 1 to 2 GHz and produce outputs up to 18 GHz. Sometimes the frequency multiplier circuit will adjust the width of the impulses to improve conversion efficiency for a specific harmonic. Circuits Diode Clipping circuits. Full wave bridge doubler. Class C amplifier and multiplier Efficiently generating power becomes more important at high power levels. Linear Class A amplifiers are at best 25 percent efficient. Push-pull Class B amplifiers are at best 50 percent efficient. The basic problem is that the amplifying element dissipates power. 
Switching Class C amplifiers are nonlinear, but they can be better than 50 percent efficient because an ideal switch does not dissipate any power. A clever design can use the nonlinear Class C amplifier for both gain and as a frequency multiplier. Step recovery diode Generating a large number of useful harmonics requires a fast nonlinear device, such as a step recovery diode. Microwave generators may use a step recovery diode impulse generator followed by a tunable YIG filter. The YIG filter has a yttrium iron garnet sphere that is tuned with a magnetic field. The step recovery diode impulse generator is driven at a subharmonic of the desired output frequency. An electromagnet then tunes the YIG filter to select the desired harmonic. Varactor diode Resistive loaded varactors. Regenerative varactors. Penfield. Frequency multipliers have much in common with frequency mixers, and some of the same nonlinear devices are used for both: transistors operated in Class C and diodes. In transmitting circuits many of the amplifying devices (vacuum tubes or transistors) operate nonlinearly and create harmonics, so an amplifier stage can be made a multiplier by tuning the tuned circuit at the output to a multiple of the input frequency. Usually the power (gain) produced by the nonlinear device drops off rapidly at the higher harmonics, so most frequency multipliers just double or triple the frequency, and multiplication by higher factors is accomplished by cascading doubler and tripler stages. Previous uses Frequency multipliers use circuits tuned to a harmonic of the input frequency. Non-linear elements such as diodes may be added to enhance the production of harmonic frequencies. Since the power in the harmonics declines rapidly, usually a frequency multiplier is tuned to only a small multiple (twice, three times, or five times) of the input frequency. Usually amplifiers are inserted in a chain of frequency multipliers to ensure adequate signal level at the final frequency. Since the tuned circuits have a limited bandwidth, if the base frequency is changed significantly (more than one percent or so), the multiplier stages may have to be adjusted; this can take significant time if there are many stages. Microelectromechanical (MEMS) frequency doubler An electric-field driven micromechanical cantilever resonator is one of the most fundamental and widely studied structures in MEMS, which can provide a high Q and narrow bandpass filtering function. The inherent square-law nonlinearity of the voltage-to-force transfer function of a cantilever resonator's capacitive transducer can be employed for the realization of frequency doubling effect. Due to the low-loss attribute (or equivalently, a high Q) offered by MEMS devices, improved circuit performance can be expected from a micromechanical frequency doubler than semiconductor devices utilized for the same task. Graphene based frequency multipliers Graphene based FETs have also been employed for frequency doubling with more than 90% converting efficiency. In fact, all ambipolar transistors can be used for designing frequency multiplier circuits. Graphene can work over a large frequency range due to its unique characteristics. Phase-locked loops with frequency dividers A phase-locked loop (PLL) uses a reference frequency to generate a multiple of that frequency. A voltage controlled oscillator (VCO) is initially tuned roughly to the range of the desired frequency multiple. 
The signal from the VCO is divided down using frequency dividers by the multiplication factor. The divided signal and the reference frequency are fed into a phase comparator. The output of the phase comparator is a voltage that is proportional to the phase difference. After passing through a low pass filter and being converted to the proper voltage range, this voltage is fed to the VCO to adjust the frequency. This adjustment increases the frequency as the phase of the VCO's signal lags that of the reference signal and decreases the frequency as the lag decreases (or lead increases). The VCO will stabilize at the desired frequency multiple. This type of PLL is a type of frequency synthesizer. Fractional-N synthesizer In some PLLs the reference frequency may also be divided by an integer multiple before being input to the phase comparator. This allows the synthesis of frequencies that are N/M times the reference frequency. This can be accomplished in a different manner by periodically changing the integer value of an integer-N frequency divider, effectively resulting in a multiplier with both whole number and fractional component. Such a multiplier is called a fractional-N synthesizer after its fractional component. Fractional-N synthesizers provide an effective means of achieving fine frequency resolution with lower values of N, allowing loop architectures with tens of thousands of times less phase noise than alternative designs with lower reference frequencies and higher integer N values. They also allow a faster settling time because of their higher reference frequencies, allowing wider closed and open loop bandwidths. Delta sigma synthesizer A delta sigma synthesizer adds a randomization to programmable-N frequency divider of the fractional-N synthesizer. This is done to shrink sidebands created by periodic changes of an integer-N frequency divider. PLL References Egan, William F. 2000. Frequency Synthesis by Phase-lock, 2nd Ed., John Wiley & Sons, Fractional N frequency synthesizer with modulation compensation U.S. Patent 4,686,488, Attenborough, C. (1987, August 11) Programmable fractional-N frequency synthesizer U.S. Patent 5,224,132, Bar-Giora Goldberg, (1993, June 29) See also Heterostructure barrier varactor CPU multiplier References Communication circuits
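As a numerical companion to the Theory section above (a sketch, not part of the article), clipping a sine wave and inspecting its spectrum shows energy appearing at integer multiples of the input frequency; because symmetric clipping is an odd function, only odd harmonics appear, matching the even/odd rule stated there. A bandpass filter tuned to one of these multiples would complete the multiplier.

```python
import numpy as np

fs, f0, n = 8000.0, 100.0, 8000            # sample rate, input frequency, samples
t = np.arange(n) / fs
x = np.clip(np.sin(2 * np.pi * f0 * t), -0.3, 0.3)   # overdriven (clipped) sine

spectrum = np.abs(np.fft.rfft(x)) / n
fundamental = spectrum[int(round(f0 * n / fs))]

# Relative level of the first few harmonics of f0 (even ones are ~0).
for k in range(1, 6):
    idx = int(round(k * f0 * n / fs))
    print(f"{k * f0:6.0f} Hz  level {spectrum[idx] / fundamental:.3f}")
```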
Frequency multiplier
[ "Engineering" ]
1,884
[ "Telecommunications engineering", "Communication circuits" ]
1,199,510
https://en.wikipedia.org/wiki/Anomaly-based%20intrusion%20detection%20system
An anomaly-based intrusion detection system is an intrusion detection system for detecting both network and computer intrusions and misuse by monitoring system activity and classifying it as either normal or anomalous. The classification is based on heuristics or rules, rather than patterns or signatures, and attempts to detect any type of misuse that falls outside normal system operation. This is as opposed to signature-based systems, which can only detect attacks for which a signature has previously been created. In order to positively identify attack traffic, the system must be taught to recognize normal system activity. The two phases of a majority of anomaly detection systems consist of the training phase (where a profile of normal behaviors is built) and testing phase (where current traffic is compared with the profile created in the training phase). Anomalies are detected in several ways, most often with artificial intelligence type techniques. Systems using artificial neural networks have been used to great effect. Another method is to define what normal usage of the system comprises using a strict mathematical model, and flag any deviation from this as an attack. This is known as strict anomaly detection. Other techniques used to detect anomalies include data mining methods, grammar-based methods, and artificial immune systems. Network-based anomalous intrusion detection systems often provide a second line of defense to detect anomalous traffic at the physical and network layers after it has passed through a firewall or other security appliance on the border of a network. Host-based anomalous intrusion detection systems are one of the last layers of defense and reside on computer end points. They allow for fine-tuned, granular protection of end points at the application level. Anomaly-based intrusion detection at both the network and host levels has a few shortcomings; namely a high false-positive rate and the ability to be fooled by a correctly delivered attack. Attempts have been made to address these issues through techniques used by PAYL and MCPAD. See also fail2ban Cfengine – 'cfenvd' can be utilized to do 'anomaly detection' Change detection DNS analytics Hogzilla IDS – a free software (GPL) anomaly-based intrusion detection system. RRDtool – can be configured to flag anomalies Sqrrl – threat hunting based on NetFlow and other collected data References Computer network security
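As a toy illustration of the two phases described above, a training phase that builds a profile of normal activity and a testing phase that flags deviations from it (this is a generic sketch, not a description of any named system):

```python
import statistics

def train_profile(normal_samples):
    """Training phase: summarize normal activity by the mean and standard
    deviation of a single numeric feature, e.g. requests per minute."""
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_anomalous(value, profile, threshold=3.0):
    """Testing phase: flag observations more than `threshold` standard
    deviations away from the learned mean."""
    mean, stdev = profile
    return abs(value - mean) > threshold * stdev

normal = [52, 48, 50, 47, 53, 49, 51, 50, 46, 54]   # hypothetical baseline traffic
profile = train_profile(normal)
for observed in (51, 58, 120):
    print(observed, is_anomalous(observed, profile))
```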
Anomaly-based intrusion detection system
[ "Engineering" ]
477
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
1,200,324
https://en.wikipedia.org/wiki/Convective%20available%20potential%20energy
In meteorology, convective available potential energy (commonly abbreviated as CAPE) is a measure of the capacity of the atmosphere to support upward air movement that can lead to cloud formation and storms. Some atmospheric conditions, such as very warm, moist air in an atmosphere that cools rapidly with height, can promote strong and sustained upward air movement, possibly stimulating the formation of cumulus clouds or cumulonimbus (thunderstorm clouds). In that situation the potential energy of the atmosphere to cause upward air movement is very high, so CAPE (a measure of potential energy) would be high and positive. By contrast, other conditions, such as a less warm air parcel or a parcel in an atmosphere with a temperature inversion (in which the temperature increases above a certain height) have much less capacity to support vigorous upward air movement, thus the potential energy level (CAPE) would be much lower, as would the probability of thunderstorms. More technically, CAPE is the integrated amount of work that the upward (positive) buoyancy force would perform on a given mass of air (called an air parcel) if it rose vertically through the entire atmosphere. Positive CAPE will cause the air parcel to rise, while negative CAPE will cause the air parcel to sink. Nonzero CAPE is an indicator of atmospheric instability in any given atmospheric sounding, a necessary condition for the development of cumulus and cumulonimbus clouds with attendant severe weather hazards. Mechanics CAPE exists within the conditionally unstable layer of the troposphere, the free convective layer (FCL), where an ascending air parcel is warmer than the ambient air. CAPE is measured in joules per kilogram of air (J/kg). Any value greater than 0 J/kg indicates instability and an increasing possibility of thunderstorms and hail. Generic CAPE is calculated by integrating vertically the local buoyancy of a parcel from the level of free convection (LFC) to the equilibrium level (EL): $\mathrm{CAPE} = \int_{z_\mathrm{f}}^{z_\mathrm{n}} g \left( \frac{T_\mathrm{v,parcel} - T_\mathrm{v,env}}{T_\mathrm{v,env}} \right) dz$, where $z_\mathrm{f}$ is the height of the level of free convection and $z_\mathrm{n}$ is the height of the equilibrium level (neutral buoyancy), where $T_\mathrm{v,parcel}$ is the virtual temperature of the specific parcel, where $T_\mathrm{v,env}$ is the virtual temperature of the environment (note that temperatures must be in the Kelvin scale), and where $g$ is the acceleration due to gravity. This integral is the work done by the buoyant force minus the work done against gravity, hence it's the excess energy that can become kinetic energy. CAPE for a given region is most often calculated from a thermodynamic or sounding diagram (e.g., a Skew-T log-P diagram) using air temperature and dew point data usually measured by a weather balloon. CAPE is effectively positive buoyancy, expressed B+ or simply B; the opposite of convective inhibition (CIN), which is expressed as B-, and can be thought of as "negative CAPE". As with CIN, CAPE is usually expressed in J/kg but may also be expressed as m²/s², as the values are equivalent. In fact, CAPE is sometimes referred to as positive buoyant energy (PBE). This type of CAPE is the maximum energy available to an ascending parcel and to moist convection. When a layer of CIN is present, the layer must be eroded by surface heating or mechanical lifting, so that convective boundary layer parcels may reach their level of free convection (LFC). On a sounding diagram, CAPE is the positive area above the LFC, the area between the parcel's virtual temperature line and the environmental virtual temperature line where the ascending parcel is warmer than the environment.
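The integral above can be evaluated numerically from a sounding. The following is a minimal sketch (with an invented sounding, not real data) that applies trapezoidal integration to the positively buoyant layers only, which effectively restricts the contribution to the region between the LFC and the EL.

```python
# A minimal numerical sketch of the CAPE integral defined above: trapezoidal
# integration of g * (Tv_parcel - Tv_env) / Tv_env, keeping only levels where
# the parcel is positively buoyant.  The sounding values are invented for
# illustration; real CAPE is computed from observed or model soundings.
g = 9.81  # m/s^2

def cape(z, tv_parcel, tv_env):
    """z in metres, virtual temperatures in kelvin, all lists of equal length."""
    total = 0.0
    for i in range(len(z) - 1):
        b0 = g * (tv_parcel[i] - tv_env[i]) / tv_env[i]
        b1 = g * (tv_parcel[i + 1] - tv_env[i + 1]) / tv_env[i + 1]
        b0, b1 = max(b0, 0.0), max(b1, 0.0)      # only positive buoyancy contributes
        total += 0.5 * (b0 + b1) * (z[i + 1] - z[i])
    return total                                  # J/kg

# Invented sounding: parcel 2 K warmer than the environment from 1 km to 9 km.
z = [i * 500.0 for i in range(25)]                # 0 to 12 km every 500 m
tv_env = [288.0 - 0.0065 * h for h in z]
tv_parcel = [t + (2.0 if 1000.0 <= h <= 9000.0 else -1.0) for t, h in zip(tv_env, z)]
print(f"CAPE ≈ {cape(z, tv_parcel, tv_env):.0f} J/kg")
```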
Neglecting the virtual temperature correction may result in substantial relative errors in the calculated value of CAPE for small CAPE values. CAPE may also exist below the LFC, but if a layer of CIN (subsidence) is present, it is unavailable to deep, moist convection until CIN is exhausted. When there is mechanical lift to saturation, cloud base begins at the lifted condensation level (LCL); absent forcing, cloud base begins at the convective condensation level (CCL) where heating from below causes spontaneous buoyant lifting to the point of condensation when the convective temperature is reached. When CIN is absent or is overcome, saturated parcels at the LCL or CCL, which had been small cumulus clouds, will rise to the LFC, and then spontaneously rise until hitting the stable layer of the equilibrium level. The result is deep, moist convection (DMC), or simply, a thunderstorm. When a parcel is unstable, it will continue to move vertically, in either direction, dependent on whether it receives upward or downward forcing, until it reaches a stable layer (though momentum, gravity, and other forcing may cause the parcel to continue). There are multiple types of CAPE, downdraft CAPE (DCAPE), estimates the potential strength of rain and evaporatively cooled downdrafts. Other types of CAPE may depend on the depth being considered. Other examples are surface based CAPE (SBCAPE), mixed layer or mean layer CAPE (MLCAPE), most unstable or maximum usable CAPE (MUCAPE), and normalized CAPE (NCAPE). Fluid elements displaced upwards or downwards in such an atmosphere expand or compress adiabatically in order to remain in pressure equilibrium with their surroundings, and in this manner become less or more dense. If the adiabatic decrease or increase in density is less than the decrease or increase in the density of the ambient (not moved) medium, then the displaced fluid element will be subject to downwards or upwards pressure, which will function to restore it to its original position. Hence, there will be a counteracting force to the initial displacement. Such a condition is referred to as convective stability. On the other hand, if adiabatic decrease or increase in density is greater than in the ambient fluid, the upwards or downwards displacement will be met with an additional force in the same direction exerted by the ambient fluid. In these circumstances, small deviations from the initial state will become amplified. This condition is referred to as convective instability. Convective instability is also termed static instability, because the instability does not depend on the existing motion of the air; this contrasts with dynamic instability where instability is dependent on the motion of air and its associated effects such as dynamic lifting. Significance to thunderstorms Thunderstorms form when air parcels are lifted vertically. Deep, moist convection requires a parcel to be lifted to the LFC where it then rises spontaneously until reaching a layer of non-positive buoyancy. The atmosphere is warm at the surface and lower levels of the troposphere where there is mixing (the planetary boundary layer (PBL)), but becomes substantially cooler with height. The temperature profile of the atmosphere, the change in temperature, the degree that it cools with height, is the lapse rate. When the rising air parcel cools more slowly than the surrounding atmosphere, it remains warmer and less dense. 
The parcel continues to rise freely (convectively; without mechanical lift) through the atmosphere until it reaches an area of air less dense (warmer) than itself. The amount, and shape, of the positive-buoyancy area modulates the speed of updrafts, thus extreme CAPE can result in explosive thunderstorm development; such rapid development usually occurs when CAPE stored by a capping inversion is released when the "lid" is broken by heating or mechanical lift. The amount of CAPE also modulates how low-level vorticity is entrained and then stretched in the updraft, with importance to tornadogenesis. The most important CAPE for tornadoes is within the lowest 1 to 3 km (0.6 to 1.9 mi) of the atmosphere, whilst deep layer CAPE and the width of CAPE at mid-levels is important for supercells. Tornado outbreaks tend to occur within high CAPE environments. Large CAPE is required for the production of very large hail, owing to updraft strength, although a rotating updraft may be stronger with less CAPE. Large CAPE also promotes lightning activity. Two notable days for severe weather exhibited CAPE values over 5 kJ/kg. Two hours before the 1999 Oklahoma tornado outbreak occurred on May 3, 1999, the CAPE value sounding at Oklahoma City was at 5.89 kJ/kg. A few hours later, an F5 tornado ripped through the southern suburbs of the city. Also on May 4, 2007, CAPE values of 5.5 kJ/kg were reached and an EF5 tornado tore through Greensburg, Kansas. On these days, it was apparent that conditions were ripe for tornadoes and CAPE wasn't a crucial factor. However, extreme CAPE, by modulating the updraft (and downdraft), can allow for exceptional events, such as the deadly F5 tornadoes that hit Plainfield, Illinois on August 28, 1990, and Jarrell, Texas on May 27, 1997, on days which weren't readily apparent as conducive to large tornadoes. CAPE was estimated to exceed 8 kJ/kg in the environment of the Plainfield storm and was around 7 kJ/kg for the Jarrell storm. Severe weather and tornadoes can develop in an area of low CAPE values. The surprise severe weather event that occurred in Illinois and Indiana on April 20, 2004, is a good example. Importantly in that case, was that although overall CAPE was weak, there was strong CAPE in the lowest levels of the troposphere which enabled an outbreak of minisupercells producing large, long-track, intense tornadoes. Example from meteorology A good example of convective instability can be found in our own atmosphere. If dry mid-level air is drawn over very warm, moist air in the lower troposphere, a hydrolapse (an area of rapidly decreasing dew point temperatures with height) results in the region where the moist boundary layer and mid-level air meet. As daytime heating increases mixing within the moist boundary layer, some of the moist air will begin to interact with the dry mid-level air above it. Owing to thermodynamic processes, as the dry mid-level air is slowly saturated its temperature begins to drop, increasing the adiabatic lapse rate. Under certain conditions, the lapse rate can increase significantly in a short amount of time, resulting in convection. High convective instability can lead to severe thunderstorms and tornadoes as moist air which is trapped in the boundary layer eventually becomes highly negatively buoyant relative to the adiabatic lapse rate and escapes as a rapidly rising bubble of humid air triggering the development of a cumulus or cumulonimbus cloud. 
Limitations As with most parameters used in meteorology, there are some caveats to keep in mind, one of which is what CAPE represents physically and in what instances CAPE can be used. One example where the more common method for determining CAPE might start to break down is in the presence of tropical cyclones (TCs), such as tropical depressions, tropical storms, or hurricanes. The more common method of determining CAPE can break down near tropical cyclones because CAPE assumes that liquid water is lost instantaneously during condensation. This process is thus irreversible upon adiabatic descent. This process is not realistic for tropical cyclones. To make the process more realistic for tropical cyclones is to use Reversible CAPE (RCAPE for short). RCAPE assumes the opposite extreme to the standard convention of CAPE and is that no liquid water will be lost during the process. This new process gives parcels a greater density related to water loading. RCAPE is calculated using the same formula as CAPE, the difference in the formula being in the virtual temperature. In this new formulation, we replace the parcel saturation mixing ratio (which leads to the condensation and vanishing of liquid water) with the parcel water content. This slight change can drastically change the values we get through the integration. RCAPE does have some limitations, one of which is that RCAPE assumes no evaporation keeping consistent for the use within a TC but should be used sparingly elsewhere. Another limitation of both CAPE and RCAPE is that currently, both systems do not consider entrainment. See also Atmospheric thermodynamics Lifted index Maximum potential intensity References Further reading Barry, R.G. and Chorley, R.J. Atmosphere, weather and climate (7th ed) Routledge 1998 p. 80-81 External links Map of current global CAPE Severe weather and convection Atmospheric thermodynamics Fluid dynamics Meteorological quantities
Convective available potential energy
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
2,586
[ "Physical quantities", "Chemical engineering", "Quantity", "Meteorological quantities", "Piping", "Fluid dynamics" ]
1,200,465
https://en.wikipedia.org/wiki/Inverse%20agonist
In pharmacology, an inverse agonist is a drug that binds to the same receptor as an agonist but induces a pharmacological response opposite to that of the agonist. A neutral antagonist has no activity in the absence of an agonist or inverse agonist but can block the activity of either; they are in fact sometimes called blockers (examples include alpha blockers, beta blockers, and calcium channel blockers). Inverse agonists have opposite actions to those of agonists but the effects of both of these can be blocked by antagonists. A prerequisite for an inverse agonist response is that the receptor must have a constitutive (also known as intrinsic or basal) level of activity in the absence of any ligand. An agonist increases the activity of a receptor above its basal level, whereas an inverse agonist decreases the activity below the basal level. The efficacy of a full agonist is by definition 100%, a neutral antagonist has 0% efficacy, and an inverse agonist has < 0% (i.e., negative) efficacy. Examples Receptors for which inverse agonists have been identified include the GABAA, melanocortin, mu opioid, histamine and beta adrenergic receptors. Both endogenous and exogenous inverse agonists have been identified, as have drugs at ligand-gated ion channels and at G protein-coupled receptors. Ligand-gated ion channel inverse agonists An example of a receptor site that possesses basal activity and for which inverse agonists have been identified is the GABAA receptor. Agonists for GABAA receptors (such as muscimol) create a relaxant effect, whereas inverse agonists have agitation effects (for example, Ro15-4513) or even convulsive and anxiogenic effects (certain beta-carbolines). G protein-coupled receptor inverse agonists Two known endogenous inverse agonists are the Agouti-related peptide (AgRP) and its associated peptide Agouti signalling peptide (ASIP). AgRP and ASIP appear naturally in humans and bind melanocortin receptors 4 and 1 (Mc4R and Mc1R), respectively, with nanomolar affinities. The opioid antagonists naloxone and naltrexone act as neutral antagonists of the mu opioid receptors under basal conditions, but as inverse agonists when an opioid such as morphine is bound to the same receptor. 6α-naltrexol, 6β-naltrexol, 6β-naloxol, and 6β-naltrexamine acted as neutral antagonists regardless of opioid binding and caused significantly reduced withdrawal jumping when compared to naloxone and naltrexone. Nearly all antihistamines acting at H1 receptors and H2 receptors have been shown to be inverse agonists. The beta blockers carvedilol and bucindolol have been shown to be low-level inverse agonists at beta adrenoceptors. Mechanisms of action Like agonists, inverse agonists have their own unique ways of inducing pharmacological and physiological responses depending on many factors, such as the type of inverse agonist, the type of receptor, mutants of receptors, binding affinities and whether the effects are exerted acutely or chronically based on receptor population density. Because of this, they exhibit a spectrum of activity below the intrinsic activity level. Changes in constitutive activity of receptors affect response levels from ligands like inverse agonists. To illustrate, mechanistic models have been made for how inverse agonists induce their responses on G protein-coupled receptors (GPCRs). Many types of inverse agonists for GPCRs have been shown to exhibit the following conventionally accepted mechanism.
Based on the Extended Ternary complex model, the mechanism contends that inverse agonists switch the receptor from an active state to an inactive state by undergoing conformational changes. Under this model, current thinking is that the GPCRs can exist in a continuum of active and inactive states when no ligand is present. Inverse agonists stabilize the inactive states, thereby suppressing agonist-independent activity. However, the implementation of 'constitutively active mutants' of GPCRs change their intrinsic activity. Thus, the effect an inverse agonist has on a receptor depends on the basal activity of the receptor, assuming the inverse agonist has the same binding affinity (as shown in the figure 2). See also Agonist Receptor antagonist Autoreceptor References External links Inverse Agonists: An Illustrated Tutorial Panesar K, Guzman F. Pharmacology Corner. 2012 Pharmacodynamics Receptor agonists
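The efficacy spectrum described above (full agonist at +100%, neutral antagonist at 0%, inverse agonist below 0%) can be illustrated with a toy calculation. The sketch below is an invented numerical illustration, not one of the mechanistic GPCR models referred to in the article: receptor activity is taken as a basal level shifted up or down in proportion to ligand occupancy.

```python
# Toy numerical sketch of the efficacy spectrum: response = basal activity
# shifted by efficacy * occupancy, where occupancy follows a simple one-site
# binding curve.  Efficacy +1.0 mimics a full agonist, 0.0 a neutral
# antagonist, and -1.0 a full inverse agonist acting on a constitutively
# active receptor.  All numbers are invented for illustration.

def occupancy(conc, kd):
    return conc / (conc + kd)

def response(conc, kd=1.0, efficacy=1.0, basal=0.2):
    """Fractional receptor activity between 0 and 1."""
    occ = occupancy(conc, kd)
    if efficacy >= 0:
        r = basal + efficacy * (1.0 - basal) * occ   # push activity up towards 1
    else:
        r = basal + efficacy * basal * occ           # pull activity down towards 0
    return max(0.0, min(1.0, r))

for name, eff in [("full agonist", 1.0), ("neutral antagonist", 0.0), ("inverse agonist", -1.0)]:
    print(name, [round(response(c, efficacy=eff), 2) for c in (0.0, 0.1, 1.0, 10.0)])
# The agonist raises activity above the basal 0.2, the neutral antagonist
# leaves it unchanged, and the inverse agonist suppresses it below basal.
```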
Inverse agonist
[ "Chemistry" ]
1,004
[ "Receptor agonists", "Pharmacology", "Neurochemistry", "Pharmacodynamics" ]
1,200,537
https://en.wikipedia.org/wiki/Comodule
In mathematics, a comodule or corepresentation is a concept dual to a module. The definition of a comodule over a coalgebra is formed by dualizing the definition of a module over an associative algebra. Formal definition Let K be a field, and C be a coalgebra over K. A (right) comodule over C is a K-vector space M together with a linear map $\rho: M \to M \otimes C$ such that $(\mathrm{id}_M \otimes \Delta) \circ \rho = (\rho \otimes \mathrm{id}_C) \circ \rho$ and $(\mathrm{id}_M \otimes \varepsilon) \circ \rho = \mathrm{id}_M$, where Δ is the comultiplication for C, and ε is the counit. Note that in the second rule we have identified $M \otimes K$ with $M$. Examples A coalgebra is a comodule over itself. If M is a finite-dimensional module over a finite-dimensional K-algebra A, then the set of linear functions from A to K forms a coalgebra, and the set of linear functions from M to K forms a comodule over that coalgebra. A graded vector space V can be made into a comodule. Let I be the index set for the graded vector space, and let $C_I$ be the vector space with basis $e_i$ for $i \in I$. We turn $C_I$ into a coalgebra and V into a $C_I$-comodule, as follows: Let the comultiplication on $C_I$ be given by $\Delta(e_i) = e_i \otimes e_i$. Let the counit on $C_I$ be given by $\varepsilon(e_i) = 1$. Let the map $\rho$ on V be given by $\rho(v) = \sum_i v_i \otimes e_i$, where $v_i$ is the i-th homogeneous piece of $v$. In algebraic topology One important result in algebraic topology is the fact that homology over the dual Steenrod algebra forms a comodule. This comes from the fact that the Steenrod algebra $\mathcal{A}$ has a canonical action on the cohomology, $\mathcal{A} \otimes H^*(X) \to H^*(X)$. When we dualize to the dual Steenrod algebra $\mathcal{A}_*$, this gives a comodule structure $H_*(X) \to \mathcal{A}_* \otimes H_*(X)$. This result extends to other cohomology theories as well, such as complex cobordism, and is instrumental in computing its cohomology ring. The main reason for considering the comodule structure on homology instead of the module structure on cohomology lies in the fact the dual Steenrod algebra is a commutative ring, and the setting of commutative algebra provides more tools for studying its structure. Rational comodule If M is a (right) comodule over the coalgebra C, then M is a (left) module over the dual algebra C∗, but the converse is not true in general: a module over C∗ is not necessarily a comodule over C. A rational comodule is a module over C∗ which becomes a comodule over C in the natural way. Comodule morphisms Let R be a ring, M, N, and C be R-modules, and $\rho_M: M \to M \otimes C$ and $\rho_N: N \to N \otimes C$ be right C-comodules. Then an R-linear map $f: M \to N$ is called a (right) comodule morphism, or (right) C-colinear, if $\rho_N \circ f = (f \otimes \mathrm{id}_C) \circ \rho_M$. This notion is dual to the notion of a linear map between vector spaces, or, more generally, of a homomorphism between R-modules. See also Divided power structure References Module theory Coalgebras
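As a short worked check, in the notation used above and assuming the standard right-comodule axioms, the graded-vector-space example satisfies both conditions:

```latex
% Counit axiom (identifying V \otimes K with V) and coassociativity for the
% coaction \rho(v) = \sum_i v_i \otimes e_i on a graded vector space V.
\[
  (\mathrm{id}_V \otimes \varepsilon)\,\rho(v)
    = \sum_{i \in I} v_i\,\varepsilon(e_i)
    = \sum_{i \in I} v_i
    = v,
\qquad
  (\rho \otimes \mathrm{id}_{C_I})\,\rho(v)
    = \sum_{i \in I} v_i \otimes e_i \otimes e_i
    = (\mathrm{id}_V \otimes \Delta)\,\rho(v).
\]
```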
Comodule
[ "Mathematics" ]
627
[ "Mathematical structures", "Fields of abstract algebra", "Algebraic structures", "Coalgebras", "Module theory" ]
1,201,310
https://en.wikipedia.org/wiki/Accumulated%20cyclone%20energy
Accumulated cyclone energy (ACE) is a metric used to compare overall activity of tropical cyclones, utilizing the available records of windspeeds at six-hour intervals to synthesize storm duration and strength into a single index value. The ACE index may refer to a single storm or to groups of storms such as those within a particular month, a full season or combined seasons. It is calculated by summing the square of tropical cyclones' maximum sustained winds, as recorded every six hours, but only for windspeeds of at least tropical storm strength (≥ 34 kn; 63 km/h; 39 mph); the resulting figure is divided by 10,000 to place it on a more manageable scale. The calculation originated as the Hurricane Destruction Potential (HDP) index, which sums the squares of tropical cyclones' maximum sustained winds while at hurricane strength, at least 64 knots (≥ 119 km/h; 74 mph) at six-hour recorded intervals across an entire season. The HDP index was later modified to further include tropical storms, that is, all wind speeds of at least 34 knots (≥ 63 km/h; 39 mph), to become the accumulated cyclone energy index. The highest ACE calculated for a single tropical cyclone on record worldwide is 87.01, set by Cyclone Freddy in 2023. History The ACE index is an offshoot of Hurricane Destruction Potential (HDP), an index created in 1988 by William Gray and his associates at Colorado State University who argued the destructiveness of a hurricane's wind and storm surge is better related to the square of the maximum wind speed ($v_{max}^2$) than simply to the maximum wind speed ($v_{max}$). The HDP index is calculated by squaring the estimated maximum sustained wind speeds for tropical cyclones while at hurricane strength, that is, wind speeds of at least 64 knots (≥ 119 km/h; 74 mph). The squared windspeeds from six-hourly recorded intervals are then summed across an entire season. This scale was subsequently modified in 1999 by the United States National Oceanic and Atmospheric Administration (NOAA) to include not only hurricanes but also tropical storms, that is, all cyclones while windspeeds are at least 34 knots (≥ 63 km/h; 39 mph). Since the calculation was more broadly adjusted by NOAA, the index has been used in a number of different ways such as to compare individual storms, and by various agencies and researchers including the Australian Bureau of Meteorology and the India Meteorological Department. The purposes of the ACE index include to categorize how active tropical cyclone seasons were as well as to identify possible long-term trends in a certain area such as the Lesser Antilles. Calculation Accumulated cyclone energy is calculated by summing the squares of the estimated maximum sustained velocity of tropical cyclones when wind speeds are at least tropical storm strength (≥ 34 kn; 63 km/h; 39 mph) at recorded six-hour intervals. The sums are usually divided by 10,000 to make them more manageable. One unit of ACE equals $10^4$ kn², and for use as an index the unit is assumed. Thus: $\mathrm{ACE} = 10^{-4} \sum v_{max}^2$ (for $v_{max}$ ≥ 34 kn), where $v_{max}$ is estimated sustained wind speed in knots at six-hour intervals. Kinetic energy is proportional to the square of velocity. However, unlike the measure defined above, kinetic energy is also proportional to the mass (corresponding to the size of the storm) and represents an integral of force, equal to mass times acceleration ($F = ma$), where velocity is the antiderivative of acceleration, $v = \int a \, dt$. The integral is a difference at the limits of the square antiderivative, rather than a sum of squares at regular intervals.
Thus, the term applied to the index, accumulated cyclone energy, is a misnomer since the index is neither a measure of kinetic energy nor "accumulated energy." Atlantic Ocean Within the Atlantic Ocean, the United States National Oceanic and Atmospheric Administration and others use the ACE index of a season to classify the season into one of four categories. These four categories are extremely active, above-normal, near-normal, and below-normal, and are worked out using an approximate quartile partitioning of seasons based on the ACE index over the 70 years between 1951 and 2020. The median value of the ACE index from 1951 to 2020 is 96.7 × 10⁴ kt². Individual storms in the Atlantic The highest ever ACE estimated for a single storm in the Atlantic is 73.6, for the San Ciriaco hurricane in 1899. A Category 4 hurricane which lasted for four weeks, this single storm had an ACE higher than many whole Atlantic storm seasons. Other Atlantic storms with high ACEs include Hurricane Ivan in 2004, with an ACE of 70.4, Hurricane Irma in 2017, with an ACE of 64.9, the Great Charleston Hurricane in 1893, with an ACE of 63.5, Hurricane Isabel in 2003, with an ACE of 63.3, and the 1932 Cuba hurricane, with an ACE of 59.8. Since 1950, the highest ACE of a tropical storm was Tropical Storm Philippe in 2023, which attained an ACE of 9.4. The highest ACE of a Category 1 hurricane was Hurricane Nadine in 2012, which attained an ACE of 26.3. The record for lowest ACE of a tropical storm is jointly held by Tropical Storm Chris in 2000 and Tropical Storm Philippe in 2017, both of which were tropical storms for only six hours and had an ACE of just 0.1225. The lowest ACE of any hurricane was 2005's Hurricane Cindy, which was only a hurricane for six hours, and 2007's Hurricane Lorenzo, which was a hurricane for twelve hours; Cindy had an ACE of just 1.5175 and Lorenzo had a lower ACE of only 1.475. The lowest ACE of a major hurricane (Category 3 or higher) was Hurricane Gerda in 1969, with an ACE of 5.3. The following table shows those storms in the Atlantic basin from 1851–2021 that have attained over 50 points of ACE. Historical ACE in recorded Atlantic hurricane history There is an undercount bias of tropical storms, hurricanes, and major hurricanes before the satellite era (prior to the mid-1960s), due to the difficulty in identifying storms. Classification criteria Eastern Pacific Within the Eastern Pacific Ocean, the United States National Oceanic and Atmospheric Administration and others use the ACE index of a season to classify the season into one of four categories. These four categories are extremely active, above-, near-, and below-normal and are worked out using an approximate tercile partitioning of seasons based on the ACE index and the number of tropical storms, hurricanes, and major hurricanes over the 30 years between 1991 and 2020. For a season to be defined as above-normal, the ACE index criterion and two or more of the other criteria given in the table below must be satisfied. The mean value of the ACE index from 1991 to 2020 is 108.7 × 10⁴ kt², while the median value is 97.2 × 10⁴ kt². Individual storms in the Pacific The highest ever ACE estimated for a single storm in the Eastern or Central Pacific, while located east of the International Date Line is 62.8, for Hurricane Fico of 1978. Other Eastern Pacific storms with high ACEs include Hurricane John in 1994, with an ACE of 54.0, Hurricane Kevin in 1991, with an ACE of 52.1, and Hurricane Hector of 2018, with an ACE of 50.5.
The following table shows those storms in the Eastern and Central Pacific basins from 1971 through 2023 that have attained over 30 points of ACE. – Indicates that the storm formed in the Eastern/Central Pacific, but crossed 180°W at least once; therefore, only the ACE and number of days spent in the Eastern/Central Pacific are included. Historical ACE in recorded Pacific hurricane history Data on ACE is considered reliable starting with the 1971 season. Classification criteria Western Pacific Historical ACE in recorded Western Pacific typhoon history There is an undercount bias of tropical storms, typhoons, and super typhoon before the satellite era (prior to the mid–1950s), due to the difficulty in identifying storms. Classification criteria North Indian There are various agencies over the North Indian Ocean that monitor and forecast tropical cyclones, including the United States Joint Typhoon Warning Center, as well as the Bangladesh, Pakistan and India Meteorological Department. As a result, the track and intensity of tropical cyclones differ from each other, and as a result, the accumulated cyclone energy also varies over the region. However, the India Meteorological Department has been designated as the official Regional Specialised Meteorological Centre by the WMO for the region and has worked out the ACE for all cyclonic systems above based on their best track analysis which goes back to 1982. Historical ACE in recorded North Indian cyclonic history See also Atlantic hurricane Cyclone Freddy – Produced the highest accumulated cyclone energy amount worldwide Hurricane/Typhoon Ioke - The second-most ACE producing tropical cyclone on record, most in the Northern Hemisphere Saffir–Simpson scale – Alternative intensity scale References External links The International Best Track Archive for Climate Stewardship (IBTrACS) Colorado State University's Real Time Tropical Cyclone Statistics Ryan Maue's Global Tropical Cyclone Activity Meteorological quantities Tropical cyclone meteorology
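The calculation defined earlier in this article reduces to a one-line sum. The sketch below applies it to an invented sequence of 6-hourly winds; only values of at least 34 knots contribute, and the sum is divided by 10,000.

```python
# A minimal sketch of the ACE calculation defined in this article: sum the
# squares of the 6-hourly maximum sustained winds (in knots) while the system
# is at least tropical-storm strength, then divide by 10,000.  The wind list
# below is invented purely for illustration.

def ace(six_hourly_winds_kt):
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 34) / 10_000

# Invented storm: spins up to hurricane strength and decays over several days.
winds = [25, 30, 35, 45, 60, 75, 90, 100, 95, 80, 65, 50, 40, 35, 30, 25]
print(f"ACE = {ace(winds):.2f} x 10^4 kt^2")
```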
Accumulated cyclone energy
[ "Physics", "Mathematics" ]
1,857
[ "Quantity", "Physical quantities", "Meteorological quantities" ]
1,201,321
https://en.wikipedia.org/wiki/Superposition%20principle
The superposition principle, also known as superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually, so that if input A produces response X and input B produces response Y, then input (A + B) produces response (X + Y). A function that satisfies the superposition principle is called a linear function. Superposition can be defined by two simpler properties: additivity $F(x_1 + x_2) = F(x_1) + F(x_2)$ and homogeneity $F(a x) = a F(x)$ for scalar $a$. This principle has many applications in physics and engineering because many physical systems can be modeled as linear systems. For example, a beam can be modeled as a linear system where the input stimulus is the load on the beam and the output response is the deflection of the beam. The importance of linear systems is that they are easier to analyze mathematically; there is a large body of mathematical techniques, frequency-domain linear transform methods such as Fourier and Laplace transforms, and linear operator theory, that are applicable. Because physical systems are generally only approximately linear, the superposition principle is only an approximation of the true physical behavior. The superposition principle applies to any linear system, including algebraic equations, linear differential equations, and systems of equations of those forms. The stimuli and responses could be numbers, functions, vectors, vector fields, time-varying signals, or any other object that satisfies certain axioms. Note that when vectors or vector fields are involved, a superposition is interpreted as a vector sum. If the superposition holds, then it automatically also holds for all linear operations applied on these functions (due to definition), such as gradients, differentials or integrals (if they exist). Relation to Fourier analysis and similar methods By writing a very general stimulus (in a linear system) as the superposition of stimuli of a specific and simple form, often the response becomes easier to compute. For example, in Fourier analysis, the stimulus is written as the superposition of infinitely many sinusoids. Due to the superposition principle, each of these sinusoids can be analyzed separately, and its individual response can be computed. (The response is itself a sinusoid, with the same frequency as the stimulus, but generally a different amplitude and phase.) According to the superposition principle, the response to the original stimulus is the sum (or integral) of all the individual sinusoidal responses. As another common example, in Green's function analysis, the stimulus is written as the superposition of infinitely many impulse functions, and the response is then a superposition of impulse responses. Fourier analysis is particularly common for waves. For example, in electromagnetic theory, ordinary light is described as a superposition of plane waves (waves of fixed frequency, polarization, and direction). As long as the superposition principle holds (which is often but not always; see nonlinear optics), the behavior of any light wave can be understood as a superposition of the behavior of these simpler plane waves. Wave superposition Waves are usually described by variations in some parameters through space and time—for example, height in a water wave, pressure in a sound wave, or the electromagnetic field in a light wave.
The value of this parameter is called the amplitude of the wave and the wave itself is a function specifying the amplitude at each point. In any system with waves, the waveform at a given time is a function of the sources (i.e., external forces, if any, that create or affect the wave) and initial conditions of the system. In many cases (for example, in the classic wave equation), the equation describing the wave is linear. When this is true, the superposition principle can be applied. That means that the net amplitude caused by two or more waves traversing the same space is the sum of the amplitudes that would have been produced by the individual waves separately. For example, two waves traveling towards each other will pass right through each other without any distortion on the other side. (See image at the top.) Wave diffraction vs. wave interference With regard to wave superposition, Richard Feynman wrote: Other authors elaborate: Yet another source concurs: Wave interference The phenomenon of interference between waves is based on this idea. When two or more waves traverse the same space, the net amplitude at each point is the sum of the amplitudes of the individual waves. In some cases, such as in noise-canceling headphones, the summed variation has a smaller amplitude than the component variations; this is called destructive interference. In other cases, such as in a line array, the summed variation will have a bigger amplitude than any of the components individually; this is called constructive interference. Departures from linearity In most realistic physical situations, the equation governing the wave is only approximately linear. In these situations, the superposition principle only approximately holds. As a rule, the accuracy of the approximation tends to improve as the amplitude of the wave gets smaller. For examples of phenomena that arise when the superposition principle does not exactly hold, see the articles nonlinear optics and nonlinear acoustics. Quantum superposition In quantum mechanics, a principal task is to compute how a certain type of wave propagates and behaves. The wave is described by a wave function, and the equation governing its behavior is called the Schrödinger equation. A primary approach to computing the behavior of a wave function is to write it as a superposition (called "quantum superposition") of (possibly infinitely many) other wave functions of a certain type—stationary states whose behavior is particularly simple. Since the Schrödinger equation is linear, the behavior of the original wave function can be computed through the superposition principle this way. The projective nature of quantum-mechanical-state space causes some confusion, because a quantum mechanical state is a ray in projective Hilbert space, not a vector. According to Dirac: "if the ket vector corresponding to a state is multiplied by any complex number, not zero, the resulting ket vector will correspond to the same state [italics in original]." However, the sum of two rays to compose a superpositioned ray is undefined. As a result, Dirac himself uses ket vector representations of states to decompose or split, for example, a ket vector $|\psi\rangle$ into superposition of component ket vectors $|\phi_j\rangle$ as: $|\psi\rangle = \sum_j C_j |\phi_j\rangle$, where the $C_j \in \mathbb{C}$. The equivalence class of the $|\psi\rangle$ allows a well-defined meaning to be given to the relative phases of the $C_j$, but an absolute (same amount for all the $C_j$) phase change on the $C_j$ does not affect the equivalence class of the $|\psi\rangle$.
There are exact correspondences between the superposition presented in the main on this page and the quantum superposition. For example, the Bloch sphere to represent pure state of a two-level quantum mechanical system (qubit) is also known as the Poincaré sphere representing different types of classical pure polarization states. Nevertheless, on the topic of quantum superposition, Kramers writes: "The principle of [quantum] superposition ... has no analogy in classical physics". According to Dirac: "the superposition that occurs in quantum mechanics is of an essentially different nature from any occurring in the classical theory [italics in original]." Though reasoning by Dirac includes atomicity of observation, which is valid, as for phase, they actually mean phase translation symmetry derived from time translation symmetry, which is also applicable to classical states, as shown above with classical polarization states. Boundary-value problems A common type of boundary value problem is (to put it abstractly) finding a function y that satisfies some equation with some boundary specification For example, in Laplace's equation with Dirichlet boundary conditions, F would be the Laplacian operator in a region R, G would be an operator that restricts y to the boundary of R, and z would be the function that y is required to equal on the boundary of R. In the case that F and G are both linear operators, then the superposition principle says that a superposition of solutions to the first equation is another solution to the first equation: while the boundary values superpose: Using these facts, if a list can be compiled of solutions to the first equation, then these solutions can be carefully put into a superposition such that it will satisfy the second equation. This is one common method of approaching boundary-value problems. Additive state decomposition Consider a simple linear system: By superposition principle, the system can be decomposed into with Superposition principle is only available for linear systems. However, the additive state decomposition can be applied to both linear and nonlinear systems. Next, consider a nonlinear system where is a nonlinear function. By the additive state decomposition, the system can be additively decomposed into with This decomposition can help to simplify controller design. Other example applications In electrical engineering, in a linear circuit, the input (an applied time-varying voltage signal) is related to the output (a current or voltage anywhere in the circuit) by a linear transformation. Thus, a superposition (i.e., sum) of input signals will yield the superposition of the responses. In physics, Maxwell's equations imply that the (possibly time-varying) distributions of charges and currents are related to the electric and magnetic fields by a linear transformation. Thus, the superposition principle can be used to simplify the computation of fields that arise from a given charge and current distribution. The principle also applies to other linear differential equations arising in physics, such as the heat equation. In engineering, superposition is used to solve for beam and structure deflections of combined loads when the effects are linear (i.e., each load does not affect the results of the other loads, and the effect of each load does not significantly alter the geometry of the structural system). Mode superposition method uses the natural frequencies and mode shapes to characterize the dynamic response of a linear structure. 
In hydrogeology, the superposition principle is applied to the drawdown of two or more water wells pumping in an ideal aquifer. This principle is used in the analytic element method to develop analytical elements capable of being combined in a single model. In process control, the superposition principle is used in model predictive control. The superposition principle can be applied when small deviations from a known solution to a nonlinear system are analyzed by linearization. History According to Léon Brillouin, the principle of superposition was first stated by Daniel Bernoulli in 1753: "The general motion of a vibrating system is given by a superposition of its proper vibrations." The principle was rejected by Leonhard Euler and then by Joseph Lagrange. Bernoulli argued that any sonorous body could vibrate in a series of simple modes with a well-defined frequency of oscillation. As he had earlier indicated, these modes could be superposed to produce more complex vibrations. In his reaction to Bernoulli's memoirs, Euler praised his colleague for having best developed the physical part of the problem of vibrating strings, but denied the generality and superiority of the multi-modes solution. Later it became accepted, largely through the work of Joseph Fourier. See also Additive state decomposition Beat (acoustics) Coherence (physics) Convolution Green's function Impulse response Interference Quantum superposition References Further reading Superposition of sound waves External links Mathematical physics Waves Systems theory
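The defining property can be checked numerically for a simple linear, time-invariant system. In the sketch below the "system" is discrete convolution with a fixed kernel (an arbitrary example, not taken from the article), and the response to the sum of two inputs equals the sum of the individual responses to within rounding error.

```python
# A small numerical check of the superposition principle for a linear,
# time-invariant system: the "system" is discrete convolution with a fixed
# kernel.  Signals and kernel are arbitrary example values.

def respond(signal, kernel):
    """Full discrete convolution of `signal` with `kernel` (a linear operation)."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

kernel = [0.5, 0.3, 0.2]
x1 = [1.0, 0.0, -2.0, 4.0]
x2 = [0.5, 1.5, 0.0, -1.0]

y_sum_of_inputs = respond([a + b for a, b in zip(x1, x2)], kernel)
y_sum_of_outputs = [a + b for a, b in zip(respond(x1, kernel), respond(x2, kernel))]
print(all(abs(a - b) < 1e-12 for a, b in zip(y_sum_of_inputs, y_sum_of_outputs)))  # True
```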
Superposition principle
[ "Physics", "Mathematics" ]
2,349
[ "Physical phenomena", "Applied mathematics", "Theoretical physics", "Waves", "Motion (physics)", "Mathematical physics" ]
1,201,430
https://en.wikipedia.org/wiki/DLVO%20theory
In physical chemistry, the Derjaguin–Landau–Verwey–Overbeek (DLVO) theory explains the aggregation and kinetic stability of aqueous dispersions quantitatively and describes the force between charged surfaces interacting through a liquid medium. It combines the effects of the van der Waals attraction and the electrostatic repulsion due to the so-called double layer of counterions. The electrostatic part of the DLVO interaction is computed in the mean field approximation in the limit of low surface potentials, that is, when the potential energy of an elementary charge on the surface is much smaller than the thermal energy scale, $k_B T$. For two spheres of radius $a$ each having a charge $Z$ (expressed in units of the elementary charge) separated by a center-to-center distance $r$ in a fluid of dielectric constant $\epsilon_r$ containing a concentration $n$ of monovalent ions, the electrostatic potential takes the form of a screened-Coulomb or Yukawa potential, $\beta U(r) = Z^2 \lambda_B \left( \frac{e^{\kappa a}}{1 + \kappa a} \right)^2 \frac{e^{-\kappa r}}{r}$, where $\lambda_B$ is the Bjerrum length, $U(r)$ is the potential energy, $e \approx 2.71828$ is Euler's number, $\kappa$ is the inverse of the Debye–Hückel screening length ($\lambda_D$); $\kappa$ is given by $\kappa^2 = 4\pi \lambda_B n$, and $\beta^{-1} = k_B T$ is the thermal energy scale at absolute temperature $T$. The DLVO theory is named after Boris Derjaguin and Lev Landau, Evert Verwey and Theodoor Overbeek who developed it between 1941 and 1948. Overview DLVO theory is a theory of colloidal dispersion stability in which zeta potential is used to explain that as two particles approach one another their ionic atmospheres begin to overlap and a repulsion force is developed. In this theory, two forces are considered to impact colloidal stability: Van der Waals forces and electrical double layer forces. The total potential energy is described as the sum of the attraction potential and the repulsion potential. When two particles approach each other, electrostatic repulsion increases and the interference between their electrical double layers increases. However, the Van der Waals attraction also increases as they get closer. At each distance, the net potential energy is obtained by subtracting the smaller value from the larger value. At very close distances, the combination of these forces results in a deep attractive well, which is referred to as the primary minimum. At larger distances, the energy profile goes through a maximum, or energy barrier, and subsequently passes through a shallow minimum, which is referred to as the secondary minimum. At the maximum of the energy barrier, repulsion is greater than attraction. Particles rebound after interparticle contact, and remain dispersed throughout the medium. The maximum energy needs to be greater than the thermal energy. Otherwise, particles will aggregate due to the attraction potential. The height of the barrier indicates how stable the system is. Since particles have to overcome this barrier in order to aggregate, two particles on a collision course must have sufficient kinetic energy due to their velocity and mass. If the barrier is cleared, then the net interaction is all attractive, and as a result the particles aggregate. This inner region is often referred to as an energy trap since the colloids can be considered to be trapped together by Van der Waals forces. For a colloidal system, the thermodynamic equilibrium state may be reached when the particles are in the deep primary minimum. At the primary minimum, attractive forces overpower the repulsive forces at small separation distances. Particles coagulate and this process is not reversible.
However, when the maximum energy barrier is too high to overcome, the colloid particles may stay in the secondary minimum, where particles are held together but more weakly than in the primary minimum. Particles form weak attractions but are easily redispersed. Thus, the adhesion at secondary minimum can be reversible. History In 1923, Peter Debye and Erich Hückel reported the first successful theory for the distribution of charges in ionic solutions. The framework of linearized Debye–Hückel theory subsequently was applied to colloidal dispersions by S. Levine and G. P. Dube who found that charged colloidal particles should experience a strong medium-range repulsion and a weaker long-range attraction. This theory did not explain the observed instability of colloidal dispersions against irreversible aggregation in solutions of high ionic strength. In 1941, Boris Derjaguin and Lev Landau introduced a theory for the stability of colloidal dispersions that invoked a fundamental instability driven by strong but short-ranged van der Waals attractions countered by the stabilizing influence of electrostatic repulsions. In 1948, Evert Verwey and Theodor Overbeek independently arrived at the same result. This so-called DLVO theory resolved the failure of the Levine–Dube theory to account for the dependence of colloidal dispersions' stability on the ionic strength of the electrolyte. Derivation DLVO theory is the combined effect of van der Waals and double layer force. For the derivation, different conditions must be taken into account and different equations can be obtained. But some useful assumptions can effectively simplify the process, which are suitable for ordinary conditions. The simplified way to derive it is to add the two parts together. van der Waals attraction van der Waals force is actually the total name of dipole-dipole force, dipole-induced dipole force and dispersion forces, in which dispersion forces are the most important part because they are always present. Assume that the pair potential between two atoms or small molecules is purely attractive and of the form w = −C/rn, where C is a constant for interaction energy, decided by the molecule's property and n = 6 for van der Waals attraction. With another assumption of additivity, the net interaction energy between a molecule and planar surface made up of like molecules will be the sum of the interaction energy between the molecule and every molecule in the surface body. So the net interaction energy for a molecule at a distance D away from the surface will therefore be where is the interaction energy between the molecule and the surface, is the number density of the surface, is the axis perpendicular to the surface and passesding across the molecule, with at the point where the molecule is, and at the surface, is the axis perpendicular to the axis, with at the intersection. Then the interaction energy of a large sphere of radius R and a flat surface can be calculated as where W(D) is the interaction energy between the sphere and the surface, is the number density of the sphere. For convenience, Hamaker constant A is given as and the equation becomes With a similar method and according to Derjaguin approximation, the van der Waals interaction energy between particles with different shapes can be calculated, such as energy between two spheres: sphere and surface: two surfaces: per unit area. Double layer force A surface in a liquid may be charged by dissociation of surface groups (e.g. 
silanol groups for glass or silica surfaces) or by adsorption of charged molecules such as polyelectrolyte from the surrounding solution. This results in the development of a wall surface potential which will attract counterions from the surrounding solution and repel co-ions. In equilibrium, the surface charge is balanced by oppositely charged counterions in solution. The region near the surface of enhanced counterion concentration is called the electrical double layer (EDL). The EDL can be approximated by a sub-division into two regions. Ions in the region closest to the charged wall surface are strongly bound to the surface. This immobile layer is called the Stern or Helmholtz layer. The region adjacent to the Stern layer is called the diffuse layer and contains loosely associated ions that are comparatively mobile. The total electrical double layer due to the formation of the counterion layers results in electrostatic screening of the wall charge and minimizes the Gibbs free energy of EDL formation. The thickness of the diffuse electric double layer is known as the Debye screening length . At a distance of two Debye screening lengths the electrical potential energy is reduced to 2 percent of the value at the surface wall. with unit of , where is the number density of ion i in the bulk solution, is the valency of the ion (for example, H+ has a valency of +1, and Ca2+ has a valency of +2), is the vacuum permittivity, is the relative static permittivity, is the Boltzmann constant. The repulsive free energy per unit area between two planar surfaces is shown as where is the reduced surface potential, , is the potential on the surface. The interaction free energy between two spheres of radius R is Combining the van der Waals interaction energy and the double layer interaction energy, the interaction between two particles or two surfaces in a liquid can be expressed as where W(D)R is the repulsive interaction energy due to electric repulsion, and W(D)A is the attractive interaction energy due to van der Waals interaction. Effect of shear flows Alessio Zaccone and collaborators investigated the effects of shear-flow on particle aggregation which can play an important role in applications e.g. microfluidics, chemical reactors, atmospheric and environmental flows. Their work showed a characteristic lag-time in the shear-induced aggregation of the particles, which decreases exponentially with the shear rate. Application Since the 1940s, the DLVO theory has been used to explain phenomena found in colloidal science, adsorption and many other fields. Due to the more recent popularity of nanoparticle research, DLVO theory has become even more popular because it can be used to explain behavior of both material nanoparticles such as fullerene particles and microorganisms. For example, DLVO theory has been widely applied to assess the degree of particle-particle interactions at controlled chemical conditions. For example, it has been used to investigate the colloidal stability of BaSO4 (barium sulfate). and particle-particle interactions between magnesite, dolomite, quartz and serpentine. Shortcomings Additional forces beyond the DLVO construct have been reported to also play a major role in determining colloid stability. DLVO theory is not effective in describing ordering processes such as the evolution of colloidal crystals in dilute dispersions with low salt concentrations. It also cannot explain the relation between the formation of colloidal crystals and salt concentrations. 
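The qualitative energy profile described in the Overview (primary minimum, energy barrier, secondary minimum) can be sketched numerically. The example below is an assumption-laden illustration: it uses commonly quoted Derjaguin-approximation forms for the double-layer repulsion and the van der Waals attraction between two equal spheres rather than the article's own expressions, and every parameter value is invented.

```python
# Rough numerical sketch of a DLVO-type interaction energy between two equal
# spheres (an assumption, not the article's exact expressions): weak-overlap
# double-layer repulsion W_el = 2*pi*eps*R*psi0^2*exp(-kappa*D) plus van der
# Waals attraction W_vdW = -A*R/(12*D).  Parameter values are illustrative only.
import math

kB, T, e, NA = 1.380649e-23, 298.15, 1.602177e-19, 6.02214e23
eps = 78.5 * 8.854e-12          # permittivity of water, F/m
R = 100e-9                      # particle radius, m
psi0 = 25e-3                    # surface potential, V
A_H = 1e-20                     # Hamaker constant, J
c_salt = 1e-3                   # 1 mM of a 1:1 electrolyte, mol/L
n = 2 * c_salt * 1e3 * NA       # total ion number density, 1/m^3
kappa = math.sqrt(n * e**2 / (eps * kB * T))   # inverse Debye length, 1/m

def w_total(D):
    w_el = 2 * math.pi * eps * R * psi0**2 * math.exp(-kappa * D)
    w_vdw = -A_H * R / (12 * D)
    return w_el + w_vdw

# Scan separations and report the height of the energy barrier in units of kT.
barrier = max(w_total(D) for D in (i * 0.1e-9 for i in range(1, 1000)))
print(f"Debye length ~ {1e9 / kappa:.1f} nm, barrier ~ {barrier / (kB * T):.1f} kT")
```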
References Physical chemistry Colloidal chemistry
DLVO theory
[ "Physics", "Chemistry" ]
2,141
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Colloids", "Surface science", "nan", "Physical chemistry" ]
17,173,535
https://en.wikipedia.org/wiki/Austempering
Austempering is heat treatment that is applied to ferrous metals, most notably steel and ductile iron. In steel it produces a bainite microstructure whereas in cast irons it produces a structure of acicular ferrite and high carbon, stabilized austenite known as ausferrite. It is primarily used to improve mechanical properties or reduce / eliminate distortion. Austempering is defined by both the process and the resultant microstructure. Typical austempering process parameters applied to an unsuitable material will not result in the formation of bainite or ausferrite and thus the final product will not be called austempered. Both microstructures may also be produced via other methods. For example, they may be produced as-cast or air cooled with the proper alloy content. These materials are also not referred to as austempered. History The austempering of steel was first pioneered in the 1930s by Edgar C. Bain and Edmund S. Davenport, who were working for the United States Steel Corporation at that time. Bainite must have been present in steels long before its acknowledged discovery date, but was not identified because of the limited metallographic techniques available and the mixed microstructures formed by the heat treatment practices of the time. Coincidental circumstances inspired Bain to study isothermal phase transformations. Austenite and the higher temperature phases of steel were becoming more and more understood and it was already known that austenite could be retained at room temperature. Through his contacts at the American Steel and Wire Company, Bain was aware of isothermal transformations being used in industry and he began to conceive new experiments Further research into the isothermal transformation of steels was a result of Bain and Davenport's discovery of a new microstructure consisting of an "acicular, dark etching aggregate." This microstructure was found to be "tougher for the same hardness than tempered Martensite". Commercial exploitation of bainitic steel was not rapid. Common heat-treating practices at the time featured continuous cooling methods and were not capable, in practice, of producing fully bainitic microstructures. The range of alloys available produced either mixed microstructures or excessive amounts of Martensite. The advent of low-carbon steels containing boron and molybdenum in 1958 allowed fully bainitic steel to be produced by continuous cooling. Commercial use of bainitic steel thus came about as a result of the development of new heat-treating methods, with those that involve a step in which the workpiece is held at a fixed temperature for a period of time sufficient to allow transformation becoming collectively known as austempering. One of the first uses of austempered steel was in rifle bolts during World War II. The high impact strength possible at high hardnesses, and the relatively small section size of the components made austempered steel ideal for this application. Over subsequent decades austempering revolutionized the spring industry followed by clips and clamps. These components, which are usually thin, formed parts, do not require expensive alloys and generally possess better elastic properties than their tempered Martensite counterparts. Eventually austempered steel made its way into the automotive industry, where one of its first uses was in safety critical components. The majority of car seat brackets and seat belt components are made of austempered steel because of its high strength and ductility. 
These properties allow it to absorb more energy during a crash without the risk of brittle failure. Currently, austempered steel is also used in bearings, mower blades, transmission gear, wave plate, and turf aeration tines. In the second half of the 20th century the austempering process began to be applied commercially to cast irons. Austempered ductile iron (ADI) was first commercialized in the early 1970s and has since become a major industry. Process The most notable difference between austempering and conventional quench and tempering is that it involves holding the workpiece at the quenching temperature for an extended period of time. The basic steps are the same whether applied to cast iron or steel and are as follows: Austenitizing In order for any transformation to take place, the microstructure of the metal must be austenite structure. The exact boundaries of the austenite phase region depend on the chemistry of the alloy being heat treated. However, austenitizing temperatures are typically between . The amount of time spent at this temperature will vary with the alloy and process specifics for a through-hardened part. The best results are achieved when austenitization is long enough to produce a fully austenitic metal microstructure (there will still be graphite present in cast irons) with a consistent carbon content. In steels this may take only a few minutes after the austenitizing temperature has been reached throughout the part section, but in cast irons it takes longer. This is because carbon must diffuse out of the graphite until it has reached the equilibrium concentration dictated by the temperature and the phase diagram. This step may be done in many types of furnaces, in a high-temperature salt bath, or via direct flame or induction heating. Numerous patents describe specific methods and variations. Quenching As with conventional quench and tempering the material being heat treated must be cooled from the austenitizing temperature quickly enough to avoid the formation of pearlite. The specific cooling rate that is necessary to avoid the formation of pearlite is a product of the chemistry of the austenite phase and thus the alloy being processed. The actual cooling rate is a product of both the quench severity, which is influenced by quench media, agitation, load (quenchant ratio, etc.), and the thickness and geometry of the part. As a result, heavier section components required greater hardenability. In austempering the heat treat load is quenched to a temperature which is typically above the Martensite start of the austenite and held. In some patented processes the parts are quenched just below the Martensite start so that the resulting microstructure is a controlled mixture of Martensite and Bainite. The two important aspects of quenching are the cooling rate and the holding time. The most common practice is to quench into a bath of liquid nitrite-nitrate salt and hold in the bath. Because of the restricted temperature range for processing it is not usually possible to quench in water or brine, but high temperature oils are used for a narrow temperature range. Some processes feature quenching and then removal from the quench media, then holding in a furnace. The quench and holding temperature are primary processing parameters that control the final hardness, and thus properties of the material. Cooling After quenching and holding there is no danger of cracking; parts are typically air cooled or put directly into a room temperature wash system. 
Tempering No tempering is required after austempering if the part is through hardened and fully transformed to either Bainite or ausferrite. Tempering adds another stage and thus cost to the process; it does not provide the same property modification and stress relief in Bainite or ausferrite that it does for virgin Martensite. Advantages Austempering offers many manufacturing and performance advantages over conventional material/process combinations. It may be applied to numerous materials, and each combination has its own advantages, which are listed below. One of the advantages that is common to all austempered materials is a lower rate of distortion than for quenching and tempering. This can be translated into cost savings by adjustment of the entire manufacturing process. The most immediate cost savings are realized by machining before heat treatment. There are many such savings possible in the specific case of converting a quench-and-tempered steel component to austempered ductile iron (ADI). Ductile iron is 10% less dense than steel and can be cast near to net shape, both characteristics that reduce the casting weight. Near-net-shape casting also further reduces the machining cost, which is already reduced by machining soft ductile iron instead of hardened steel. A lighter finished part reduces freight charges and the streamlined production flow often reduces lead time. In many cases strength and wear resistance can also be improved. Process/material combinations include: Austempered steel Carbo-austempered steel Marbain steel Austempered ductile iron (ADI) Locally austempered ductile iron (LADI) Austempered gray iron (AGI) Carbidic austempered ductile iron (CADI) Intercritically Austempered Steel Intercritically Austempered Ductile Iron With respect to performance improvements, austempered materials are typically compared to conventionally quench-and-tempered materials with a tempered Martensite microstructure. In steels above 40 Rc these improvements include: Higher ductility, impact strength and wear resistance for a given hardness, A low-distortion, repeatable dimensional response, Increased fatigue strength, Resistance to hydrogen and environmental embrittlement. In cast irons (from 250-550 HBW) these improvements include: Higher ductility and impact resistance for a given hardness, A low-distortion, repeatable dimensional response, Increased fatigue strength, Increased wear resistance for a given hardness. References Metal heat treatments
Austempering
[ "Chemistry" ]
1,926
[ "Metallurgical processes", "Metal heat treatments" ]
17,177,316
https://en.wikipedia.org/wiki/Weather%20testing%20of%20polymers
Accelerated photo-ageing of polymers in SEPAP units refers to the controlled degradation of polymers and polymer coatings under laboratory or natural conditions. The prediction of the ageing of plastic materials is a subject that concerns both users and manufacturers. It concerns plastic materials (polymers, fillers and various additives) as well as the intermediaries, namely the converters, who use their thermoplastic properties for the manufacture of objects by processes such as extrusion, injection molding, etc. The reliability of the materials is one of the many guarantees that are increasingly required for all manufactured objects. It can be integrated into the "sustainable development" approach. However, predicting the behavior of a material or an industrial part over time is a delicate process because many parameters must be taken into account. The resistance to "natural" ageing itself is variable. It depends on temperature, sunshine (climate, latitude, humidity, ...) and on many other factors (physical constraints, level of pollution, ...) that are difficult to assess accurately. The simulation of this ageing by the use of artificial light sources and other physical constraints (temperature, sprinkling of water simulating rain, ...) has been the subject of developments that are the basis of several standards, ISO, ASTM, etc. Finally, accelerating this ageing in order, for example, to offer ten-year guarantees or to validate stabilizing agents is a complex approach that must be based on solid science. Other applications, such as those of materials that must degrade quickly in the environment, are also addressed by this approach. Mechanistic approach It has long been known that most ageing of these materials is based on a chemical reaction called "radical oxidation". Under the influence of external stresses that generate primary radicals attacking chemical bonds (especially the most abundant ones, between carbon and hydrogen), reactions occur with atmospheric oxygen. This leads to the formation of many chemical entities, among which hydroperoxides and peroxides are the key products; they are both stable enough to be detected and reactive enough to break down into many by-products such as ketones, alcohols, acids, ... which are easily detectable by spectroscopic methods. Another important element is that the decomposition of one of these peroxidized groups (like hydrogen peroxide, H2O2) generates two new radicals, which leads to a self-acceleration of ageing. These elementary chemical reactions lead more or less quickly to a deterioration of the physical properties of polymer materials, and their precise analysis using infrared spectroscopy methods makes it possible both to understand the degradation mechanism and to make predictions about the long-term behavior of polymers. Polypropylene, a common material in our everyday environment, is a very significant example of this approach. Its chemical structure, in which many tertiary carbons are present (bound to three carbon atoms and only one hydrogen), makes it a material particularly sensitive to ageing. Its use in the absence of stabilizing agents, in the form of film for example, is practically impossible without degradation appearing (within a few days it becomes opaque and brittle). Photo-ageing of polymers Sunlight (whose wavelengths on Earth are greater than 295 nm) is among the main factors affecting the natural ageing of plastics, along with temperature and atmospheric oxygen. 
However, while the influence of temperature can be analyzed separately (ageing in the dark), this is not possible for photo-ageing, which is always associated with a temperature effect and is therefore often rightly qualified as "photo-thermal". The simulation of photothermal ageing is generally done by exposing samples in centers approved both for their geographical location (Arizona, Florida, the South of France) and for their ability to record the exposure conditions precisely (duration and intensity of sunshine, temperature, humidity level, etc.). Sometimes mirror systems make it possible to intensify the radiation. The simulation can also be carried out in the laboratory, generally with xenon lamps whose spectrum, after elimination of the short wavelengths, is very similar to that of the sun. Most instruments allow control of the light intensity, the temperature of the surrounding environment and the humidity level, and water sprinklers can be programmed to simulate the effect of rain. The use of xenon lamps is based on their similarity to the solar spectrum, but the principles of photochemistry (in particular the existence of vibrational relaxations of excited states) do not exclude the use of other light sources to simulate or accelerate photothermal ageing. Mercury-vapor lamps, properly filtered, have a discontinuous spectrum with discrete radiations (unlike the spectra of xenon and the sun, which are continuous). This UV emission of Hg lamps also makes it possible to predict the durability of formulated polymer materials under conditions of use. SEPAP accelerated artificial photo-ageing units As early as 1978, the principles mentioned above led to the design of specific units by the Laboratory of Molecular and Macromolecular Photochemistry, now integrated into the Institute of Chemistry of Clermont-Ferrand (https://iccf.uca.fr). One of these units, referenced SEPAP 12-24, was long built and marketed by ATLAS MTT, until the release of a new SEPAP MHE model in 2014 (https://www.atlas-mts.com). In the SEPAP 12-24 unit, light excitation is provided by four 400-watt medium-pressure mercury vapor lamps placed at the four corners of a parallelepiped. These lamps, whose shortest wavelengths are eliminated by a borosilicate glass envelope, have a lifetime of 5000 hours. The temperature of the exposed surfaces (and not of the surrounding environment) is maintained and controlled by a thermoprobe in contact with a reference film of the same composition as the samples to be exposed. This temperature can vary from 45 °C to 80 °C, and a good compromise between photochemical excitation and thermal excitation is always ensured at the level of the samples. 24 samples of about 1 × 5 cm are positioned on a metallic sample holder rotating at a constant speed in the center of the unit to ensure homogeneous illumination of all samples. The sample size is suitable for monitoring chemical evolution, at a low conversion rate, by infrared spectroscopy. SEPAP 12-24 enclosures must be calibrated using polyethylene calibration films. The detailed analysis of the mechanism of chemical evolution that controls degradation could be proposed for a large number of polymers [3,4], and it could be verified that this mechanism was identical to that which intervened in natural ageing at approved sites or during real outdoor use. Today, a dozen French and European standards refer to these enclosures (agricultural films, cables) and about twenty companies have included SEPAP tests in their specifications for their subcontractors. 
The new SEPAP MHE (Medium and High Energy) unit is equipped with a single medium-pressure mercury source with variable power, allowing a first level of acceleration corresponding to that of the SEPAP 12-24 unit and a second level allowing an acceleration about 3 times higher (Ultra-Acceleration). It was developed by CNEP, Renault, PSA, PolyOne and Atlas-Ametek. The source has a central position and the samples are fixed on a sample holder that rotates uniformly around the source. The analysis of the chemical evolution under the accelerated conditions of a SEPAP 12-24 or MHE unit, together with the analysis of the chemical evolution in an early phase of exposure in outdoor use in the field (1 year or more), makes it possible to define an acceleration factor, provided that the formation of a "critical product" representative of the reaction pattern can be discerned in the mechanism. This acceleration factor cannot be universal for all families of formulated materials, which evolve according to very different reaction mechanisms, but it can be determined for each family of polymers. For example, it is close to 12 (1 month = 1 year in the field in the South of France) for the reference polyethylene. These acceleration factors have indeed been determined in very specific cases of polymers of well-defined formulations, exposed in forms that take into account the diffusion of oxygen (avoiding any oxygen starvation) and the migration of stabilizers ("reservoir" effect). The SEPAP MHE unit makes it possible, for example, to simulate a year of exposure of a polypropylene in the South of France in 300 hours (medium acceleration) or 100 hours (ultra-acceleration mode). Medium and Ultra-acceleration Can photo-ageing be further accelerated? There are many ways to achieve this, but there is a great risk of no longer being representative of natural ageing. From the photochemical point of view, multi-photonic effects are to be feared, for example, and oxygen starvation may occur very quickly and strongly disrupt the degradation mechanisms. The ultra-accelerated approach developed in the SEPAP MHE unit makes it possible to address in particular the problem of the very long-term stability required for certain applications (cable-stayed bridges, photovoltaic panels, wind turbines, ...) or the need to be able to homologate a new material very quickly (automotive industry, ...). Role of water The physical role of water (leaching) is the first to have been highlighted, in particular in polyolefins (polyethylene, polypropylene). Polar degradation products of low molecular weight can be removed from the surface of the material and thus mask the ageing phenomenon. It is possible to operate the SEPAP MHE with periodic water sprinkling, provided that over-abundant sprinkling, which can lead to an underestimation of ageing, is avoided. Too frequent water sprinkling can also lead to premature extraction of low-molecular-weight stabilizers and wrongly disqualify polymeric materials. To examine the combined role of water with other physico-chemical constraints (ultraviolet, heat, oxygen), a prototype SEPAP 12-24 H unit was developed. In this unit the sample holder is immersed in temperature-controlled liquid water that is re-oxygenated in an external circulation loop. 
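As an illustration of how such acceleration factors are used, the short sketch below converts a time of accelerated exposure into an equivalent outdoor exposure time. It is a minimal sketch assuming a constant, material-specific acceleration factor; the factor of 12 used in the example follows the figure quoted above for the reference polyethylene in a SEPAP 12-24 unit, and any other material or unit would require its own experimentally determined factor.

```python
# Minimal sketch: converting accelerated SEPAP exposure into an equivalent
# outdoor exposure, assuming a constant, material-specific acceleration factor.
# The factor below (12 for the reference polyethylene, i.e. one month in the
# unit ~ one year outdoors) follows the figure quoted in the text and is
# illustrative only.

HOURS_PER_MONTH = 30 * 24  # nominal month of continuous exposure in the unit

def equivalent_outdoor_months(hours_in_unit: float, acceleration_factor: float) -> float:
    """Equivalent outdoor exposure, in months, for a given time in the unit."""
    months_in_unit = hours_in_unit / HOURS_PER_MONTH
    return months_in_unit * acceleration_factor

if __name__ == "__main__":
    factor_pe = 12.0  # reference polyethylene, SEPAP 12-24 (from the text)
    for hours in (100.0, 300.0, 720.0):
        months = equivalent_outdoor_months(hours, factor_pe)
        print(f"{hours:6.0f} h in the unit ~ {months:4.1f} months outdoors")
```

On the same basis, the 300 hours quoted above for a polypropylene in the SEPAP MHE, corresponding to one year in the South of France, amount to an effective acceleration factor of roughly 29 (8760 h / 300 h).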
Centre National d'Evaluation de Photoprotection (called CNEP) In 1986, the work of the Laboratory of Molecular and Macromolecular Photochemistry led to the creation of a transfer center, the CNEP, to put its skills in the photo-ageing of polymer materials at the service of manufacturers, either to analyze failures of their materials or to conduct studies of collective interest. Studies to predict the behavior of polymeric materials subjected to different environmental constraints (sunlight, heat with or without moisture) or failure analyses of polymer parts can be carried out in collaboration with the R&D departments of manufacturers. The CNEP can also be a partner in collaborative projects led by industrialists on an innovative research theme. The Centre National d'Evaluation de Photoprotection is now associated with about sixty companies and annually carries out more than 450 studies covering all areas of application of polymers, including works of art. It is also approved at the French national level as a "Technological Resources Center". Notes and references Jacques Lacoste, Sandrine Therias, "Vieillissement des matériaux polymères et des composites", in L'actualité chimique, 2015, 395, 38-43. Jacques Lacoste, David Carlsson, "Gamma-, photo-, and thermally-initiated oxidation of linear low density polyethylene: a quantitative comparison of oxidation products", in J. Polym. Sci., Polym. Chem. Ed. A, 1992, 30, 493-500 and 1993, 31, 715-722 (polypropylène). Jacques Lemaire, "Predicting polymer durability", in Chemtech, October 1996, 42-47. Jacques Lemaire, René Arnaud, Jean Luc Gardette, Jacques Lacoste, Henri Seinera, "Zuverlässigkeit der Methode der Photo-Schnellalterung bei Polymeren (Reliability of the accelerated photo-ageing method)", Kunststoffe, German Plastics (Int. Ed.), 1986, 76, 149-153. See also References (cnep-fr.com) ASTM STANDARDS B117: Standard Method of Salt Spray (fog) Testing, ASTM D1014 (45° North): Test Method for Conducting Exterior Exposure Tests of Paints on Steel ASTM G90: Standard Practice for Performing Accelerated Outdoor Weathering of Nonmetallic Materials Using Concentrated Natural Sunlight ASTM G154: Standard Practice for Operating Fluorescent Light Apparatus for UV Exposure of Non-metallic Materials Q.U.V Accelerated Weathering Tester operation manual, Q-Lab Corporation, Cleveland, OH, US, www.q-lab.com. UV Weathering and Related Test Methods, Cabot Corporation, www.cabot-corp.com G.C. Eastwood, A. Ledwith, S. Russo, P. Sigwalt, "Polymer Reactions", vol. 6 of Comprehensive Polymer Science, Pergamon Press, 1989. Olivier Haillant, "Polymer weathering: a mix of empiricism and science", Material Testing Product and Technology News, 2006, 36 (76), 3-12. Jacques Lemaire, "Predicting polymer durability", in Chemtech, October 1996, 42-47. materials degradation polymers tests
Weather testing of polymers
[ "Chemistry", "Materials_science", "Engineering" ]
2,762
[ "Polymers", "Materials degradation", "Materials science", "Polymer chemistry" ]
17,181,411
https://en.wikipedia.org/wiki/Multilayer%20soft%20lithography
Multilayer soft lithography (MSL) is a fabrication process in which microscopic chambers, channels, valves and vias are molded within bonded layers of elastomer. Commercial PDMS stamps can mold materials such as optical adhesive in a sequential process to create the bonded layers. See also Soft lithography References Lithography (microfabrication)
Multilayer soft lithography
[ "Materials_science" ]
76
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
17,182,647
https://en.wikipedia.org/wiki/Hunter%E2%80%93Saxton%20equation
In mathematical physics, the Hunter–Saxton equation is an integrable PDE that arises in the theoretical study of nematic liquid crystals. If the molecules in the liquid crystal are initially all aligned, and some of them are then wiggled slightly, this disturbance in orientation will propagate through the crystal, and the Hunter–Saxton equation describes certain aspects of such orientation waves. Physical background In the models for liquid crystals considered here, it is assumed that there is no fluid flow, so that only the orientation of the molecules is of interest. Within the elastic continuum theory, the orientation is described by a field of unit vectors n(x,y,z,t). For nematic liquid crystals, there is no difference between orienting a molecule in the n direction or in the −n direction, and the vector field n is then called a director field. The potential energy density of a director field is usually assumed to be given by the Oseen–Frank energy functional where the positive coefficients , , are known as the elastic coefficients of splay, twist, and bend, respectively. The kinetic energy is often neglected because of the high viscosity of liquid crystals. Derivation of the Hunter–Saxton equation Hunter and Saxton investigated the case when viscous damping is ignored and a kinetic energy term is included in the model. Then the governing equations for the dynamics of the director field are the Euler–Lagrange equations for the Lagrangian where is a Lagrange multiplier corresponding to the constraint |n|=1. They restricted their attention to "splay waves" where the director field takes the special form This assumption reduces the Lagrangian to and then the Euler–Lagrange equation for the angle φ becomes There are trivial constant solutions φ=φ0 corresponding to states where the molecules in the liquid crystal are perfectly aligned. Linearization around such an equilibrium leads to the linear wave equation which allows wave propagation in both directions with speed , so the nonlinear equation can be expected to behave similarly. In order to study right-moving waves for large t, one looks for asymptotic solutions of the form where Inserting this into the equation, one finds at the order that A simple renaming and rescaling of the variables (assuming that ) transforms this into the Hunter–Saxton equation. Generalization The analysis was later generalized by Alì and Hunter, who allowed the director field to point in any direction, but with the spatial dependence still only in the x direction: Then the Lagrangian is where The corresponding Euler–Lagrange equations are coupled nonlinear wave equations for the angles φ and ψ, with φ corresponding to "splay waves" and ψ to "twist waves". The previous Hunter–Saxton case (pure splay waves) is recovered by taking ψ constant, but one can also consider coupled splay-twist waves where both φ and ψ vary. Asymptotic expansions similar to that above lead to a system of equations, which, after renaming and rescaling the variables, takes the form where u is related to φ and v to ψ. This system implies that u satisfies so (rather remarkably) the Hunter–Saxton equation arises in this context too, but in a different way. 
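For reference, in terms of the rescaled variable u(x,t) obtained in the derivations above, the Hunter–Saxton equation is usually written in the following standard form (the second expression is simply the first with the outer x-derivative carried out):

```latex
% Standard form of the Hunter–Saxton equation for u = u(x,t)
\left( u_t + u\,u_x \right)_x = \tfrac{1}{2}\,u_x^{\,2},
\qquad\text{equivalently}\qquad
u_{xt} + u\,u_{xx} + \tfrac{1}{2}\,u_x^{\,2} = 0 .
```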
Variational structures and integrability The integrability of the Hunter–Saxton equation, or, more precisely, that of its x derivative was shown by Hunter and Zheng, who exploited that this equation is obtained from the Camassa–Holm equation in the "high frequency limit" Applying this limiting procedure to a Lagrangian for the Camassa–Holm equation, they obtained a Lagrangian which produces the Hunter–Saxton equation after elimination of v and w from the Euler–Lagrange equations for u, v, w. Since there is also the more obvious Lagrangian the Hunter–Saxton has two inequivalent variational structures. Hunter and Zheng also obtained a bihamiltonian formulation and a Lax pair from the corresponding structures for the Camassa–Holm equation in a similar way. The fact that the Hunter–Saxton equation arises physically in two different ways (as shown above) was used by Alì and Hunter to explain why it has this bivariational (or bihamiltonian) structure. Geometric Formulation The periodic Hunter-Saxton equation can be given a geometric interpretation as the geodesic equation on an infinite-dimensional Lie group, endowed with an appropriate Riemannian metric. In more detail, consider the group of diffeomorphisms of the unit circle . Choose some and denote by the subgroup of consisting diffeomorphisms which fix : The group is an infinite-dimensional Lie group, whose Lie algebra consists of vector fields on which vanish at : Here is the standard coordinate on . Endow with the homogeneous inner product: where the subscript denotes differentiation. This inner product defines a right-invariant Riemannian metric on (on the full group this is only a semi-metric, since constant vector fields have norm 0 with respect to . Note that is isomorphic to the right quotient of by the subgroup of translations, which is generated by constant vector fields). Let be a time-dependent vector field on such that for all , and let be the flow of , i.e. the solution to: Then is a periodic solution to the Hunter-Saxton equation if and only if the path is a geodesic on with respect to the right-invariant metric. In the non-periodic case, one can similarly construct a subgroup of the group of diffeomorphisms of the real line, with a Riemannian metric whose geodesics correspond to non-periodic solutions of the Hunter-Saxton equation with appropriate decay conditions at infinity. Notes References Further reading Mathematical physics Solitons Partial differential equations Equations of fluid dynamics
Hunter–Saxton equation
[ "Physics", "Chemistry", "Mathematics" ]
1,208
[ "Equations of fluid dynamics", "Equations of physics", "Applied mathematics", "Theoretical physics", "Mathematical physics", "Fluid dynamics" ]
15,449,295
https://en.wikipedia.org/wiki/Normalization%20process%20model
The normalization process model is a sociological model, developed by Carl R. May, that describes the adoption of new technologies in health care. The model provides a framework for process evaluation using three components (actors, objects, and contexts) that are compared across four constructs: interactional workability, relational integration, skill-set workability, and contextual integration. This model helped build the normalization process theory. Development The normalization process model is a theory that explains how new technologies are embedded in health care work. The model was developed by Carl R. May and co-workers, and is an empirically derived grounded theory in medical sociology and science and technology studies (STS), based on qualitative methods. Carl May developed the model after he appeared as a witness at a British House of Commons Health Committee Inquiry on New Medical Technologies in the NHS in 2005. He asked how new technologies became routinely embedded, and taken for granted, in everyday work, in view of the increasing corporate organization and regulation of healthcare. The model explains embedding by looking at the work that people do to make it possible. The model is a theory in sociology that fits well with macro approaches to innovation like the diffusion of innovations theory developed by Everett Rogers. Although the normalization process model is limited in scope to healthcare settings, recent work by May and colleagues has led to the development of normalization process theory, which presents a general sociological theory of implementation and integration of technological and organizational innovations. Normalization process theory has now superseded the more limited normalization process model. The normalization process model provides a framework for process evaluation and also for comparative studies of complex interventions, especially of randomized controlled trials. Clinical trials and other evaluations of healthcare interventions often focus on the complex relationships between actors, objects and contexts, making a simple explanatory model that fits well with other frameworks a necessary tool for clinical and health service researchers. In the normalization process model, a complex intervention is defined as a deliberately initiated attempt to introduce new, or modify existing, patterns of collective action in health care. Components A complex intervention has three kinds of components: Actors are the individuals and groups that encounter each other in health care settings. They can include physicians, other health professionals, managers, patients, and family members. The aims of interventions aimed at actors are often to change people's behaviour and its intended outcomes. Objects are the institutionally sanctioned means by which knowledge and practice are enacted. They can include procedures, protocols, hardware, and software. The aims of interventions aimed at objects often include changing people's expertise and actions. Contexts are the physical, organisational, institutional, and legislative structures that enable and constrain, and resource and realize, people and procedures. The aims of interventions aimed at contexts are often to change the ways that people organize their work to achieve goals in health care (or other) services. 
Constructs The normalization process model explains the embedding of complex interventions by reference to four constructs of collective action that are demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration) in a rigorous and sound theory. Interactional workability: This describes how a complex intervention is operationalized by the people using it. A complex intervention will affect co-operative interactions over work (its congruence), and the normal pattern of outcomes of this work (its disposal). Therefore: a complex intervention is disposed to normalization if it confers an interactional advantage in flexibly accomplishing congruence and disposal of work. Relational integration: This describes how knowledge and work is mediated and understood within the social networks of people around it. A complex intervention will affect not only the knowledge required by its users (its accountability), but also the ways that they understand the actions of people around them (its confidence). Therefore: a complex intervention is disposed to normalization if it equals or improves accountability and confidence within networks. Skill-set workability: This describes the distribution and conduct of work in a division of labor. A complex intervention will affect the ways that work is defined and distributed (its allocation), and the ways in which it is undertaken and evaluated (its performance). Therefore: a complex intervention is disposed to normalization if it is calibrated to an agreed skill-set at a recognizable location in the division of labor. Contextual integration: This refers to the incorporation of work within an organizational setting. A complex intervention will affect the mechanisms that link work to existing structures and procedures (its execution), and for allocating and organizing resources for them (its realization). Therefore: a complex intervention is disposed to normalization if it confers an advantage on an organization in flexibly executing and realizing work. References External links Normalization Process Theory Website Medical sociology Innovation Diffusion
Normalization process model
[ "Physics", "Chemistry" ]
991
[ "Transport phenomena", "Physical phenomena", "Diffusion" ]
15,452,231
https://en.wikipedia.org/wiki/Wind%20stress
In physical oceanography and fluid dynamics, the wind stress is the shear stress exerted by the wind on the surface of large bodies of water – such as oceans, seas, estuaries and lakes. When wind is blowing over a water surface, the wind applies a wind force on the water surface. The wind stress is the component of this wind force that is parallel to the surface, per unit area. The wind stress can also be described as the flux of horizontal momentum applied by the wind on the water surface. The wind stress causes a deformation of the water body whereby wind waves are generated. The wind stress also drives ocean currents and is therefore an important driver of the large-scale ocean circulation. The wind stress is affected by the wind speed, the shape of the wind waves and the atmospheric stratification. It is one of the components of the air–sea interaction, with others being the atmospheric pressure on the water surface, as well as the exchange of energy and mass between the water and the atmosphere. Background Stress is the quantity that describes the magnitude of a force that is causing a deformation of an object. Therefore, stress is defined as the force per unit area and its SI unit is the pascal. When the deforming force acts parallel to the object's surface, this force is called a shear force and the stress it causes is called a shear stress. Dynamics Wind blowing over an ocean at rest first generates small-scale wind waves, which extract energy and momentum from the wind. As a result, the momentum flux (the rate of momentum transfer per unit area per unit time) generates a current. These surface currents are able to transport energy (e.g. heat) and mass (e.g. water or nutrients) around the globe. The different processes described here are depicted in the sketches shown in figures 1.1 to 1.4. Interactions between wind, wind waves and currents are an essential part of world ocean dynamics. Eventually, the wind waves also influence the wind field, leading to a complex interaction between wind and water for which research into a correct theoretical description is still ongoing. The Beaufort scale quantifies the correspondence between wind speed and different sea states. Only the top layer of the ocean (mixed layer) is stirred by the wind stress. This upper layer of the ocean has a depth on the order of 10 m. The wind blowing parallel to a water surface deforms that surface as a result of shear action caused by the fast wind blowing over the stagnant water. The wind blowing over the surface applies a shear force on the surface. The wind stress is the component of this force that acts parallel to the surface, per unit area. This wind force exerted on the water surface due to shear stress is given by: Here, F represents the shear force per unit mass, represents the air density and represents the wind shear stress. Furthermore, z is the vertical direction, while x corresponds to the zonal direction and y corresponds to the meridional direction. The vertical derivatives of the wind stress components are also called the vertical eddy viscosity. The equation describes how the force exerted on the water surface decreases for a denser atmosphere or, to be more precise, a denser atmospheric boundary layer (this is the layer of a fluid where the influence of friction is felt). On the other hand, the exerted force on the water surface increases when the vertical eddy viscosity increases. 
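The relation referred to in the preceding paragraph has the standard form shown below, in which the force per unit mass follows from the vertical derivative of the stress divided by the density; the symbols are chosen here to match the description in the text, and notational details vary between sources.

```latex
% Shear force per unit mass from the vertical derivative of the wind stress
% (tau_x, tau_y: zonal and meridional wind stress components, rho: air density)
F_x = \frac{1}{\rho}\,\frac{\partial \tau_x}{\partial z},
\qquad
F_y = \frac{1}{\rho}\,\frac{\partial \tau_y}{\partial z}.
```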
The wind stress can also be described as a downward transfer of momentum and energy from the air to the water. The magnitude of the wind stress () is often parametrized as a function of wind speed at a certain height above the surface in the form Here, is the density of the surface air and CD is a dimensionless wind drag coefficient which is a repository function for all remaining dependencies. An often used value for the drag coefficient is . Since the exchange of energy, momentum and moisture is often parametrized using bulk atmospheric formulae, the equation above is the semi-empirical bulk formula for the surface wind stress. The height at which the wind speed is referred to in wind drag formulas is usually 10 meters above the water surface. The formula for the wind stress explains how the stress increases for a denser atmosphere and higher wind speeds. When shear force caused by stress is in balance with the Coriolis force, this can be written as: where f is the Coriolis parameter, u and v are respectively the zonal and meridional currents and and are respectively the zonal Coriolis forces and meridional Coriolis forces. This balance of forces is known as the Ekman balance. Some important assumptions that underlie the Ekman balance are that there are no boundaries, an infinitely deep water layer, constant vertical eddy viscosity, barotropic conditions with no geostrophic flow and a constant Coriolis parameter. The oceanic currents that are generated by this balance are referred to as Ekman currents. In the Northern Hemisphere, Ekman currents at the surface are directed with an angle of 45° to the right of the wind stress direction and in the Southern Hemisphere they are directed with the same angle to the left of the wind stress direction. Flow directions of deeper positioned currents are deflected even more to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. This phenomenon is called the Ekman spiral. The Ekman transport can be obtained from vertically integrating the Ekman balance, giving: where D is the depth of the Ekman layer. Depth-averaged Ekman transport is directed perpendicular to the wind stress and, again, directed to the right of the wind stress direction in the Northern Hemisphere and to the left of the wind stress direction in the Southern Hemisphere. Alongshore winds therefore generate transport towards or away from the coast. For small values of D, water can return from or to deeper water layers, resulting in Ekman up- or downwelling. Upwelling due to Ekman transport can also happen at the equator due to the change of sign of the Coriolis parameter in the Northern and Southern Hemisphere and the stable easterly winds that are blowing to the North and South of the equator. Due to the strong temporal variability of the wind, the wind forcing on the ocean surface is also highly variable. This is one of the causes of the internal variability of ocean flows as these changes in the wind forcing cause changes in the wave field and the thereby generated currents. Variability of ocean flows also occurs because the changes of the wind forcing are disturbances of the mean ocean flow, which leads to instabilities. A well known phenomenon that is caused by changes in surface wind stress over the tropical Pacific is the El Niño-Southern Oscillation (ENSO). Global wind stress patterns The global annual mean wind stress forces the global ocean circulation. 
Typical values for the wind stress are about 0.1 Pa and, in general, the zonal wind stress is stronger than the meridional wind stress, as can be seen in figures 2.1 and 2.2. It can also be seen that the largest values of the wind stress occur in the Southern Ocean for the zonal direction, with values of about 0.3 Pa. Figures 2.3 and 2.4 show that monthly variations in the wind stress patterns are only minor and the general patterns stay the same during the whole year. It can be seen that there are strong easterly winds (i.e. blowing toward the west), called easterlies or trade winds, near the equator, very strong westerly winds at midlatitudes (between ±30° and ±60°), called westerlies, and weaker easterly winds at polar latitudes. Also, on a large annual scale, the wind-stress field is fairly zonally homogeneous. Important meridional wind stress patterns are northward (southward) currents on the eastern (western) coasts of continents in the Northern Hemisphere and on the western (eastern) coast in the Southern Hemisphere, since these generate coastal upwelling, which causes biological activity. Examples of such patterns can be observed in figure 2.2 on the east coast of North America and on the west coast of South America. Large-scale ocean circulation Wind stress is one of the drivers of the large-scale ocean circulation, with other drivers being the gravitational pull exerted by the Moon and Sun, differences in atmospheric pressure at sea level, and convection resulting from atmospheric cooling and evaporation. However, the contribution of the wind stress to the forcing of the oceanic general circulation is largest. Ocean waters respond to the wind stress because of their low resistance to shear and the relative consistency with which winds blow over the ocean. The combination of easterly winds near the equator and westerly winds at midlatitudes drives significant circulations in the North and South Atlantic Oceans, the North and South Pacific Oceans and the Indian Ocean, with westward currents near the equator and eastward currents at midlatitudes. This results in characteristic gyre flows in the Atlantic and Pacific, consisting of a subpolar and a subtropical gyre. The strong westerlies in the Southern Ocean drive the Antarctic Circumpolar Current, which is the dominant current in the Southern Hemisphere; no comparable current exists in the Northern Hemisphere. The equations to describe large-scale ocean dynamics were formulated by Harald Sverdrup and came to be known as Sverdrup dynamics. Particularly important is the Sverdrup balance, which describes the relation between the wind stress and the vertically integrated meridional transport of water. Other significant contributions to the description of large-scale ocean circulation were made by Henry Stommel, who formulated the first correct theory for the Gulf Stream and theories of the abyssal circulation. Long before these theories were formulated, mariners had been aware of the major surface ocean currents. As an example, Benjamin Franklin published a map of the Gulf Stream as early as 1770, and European discovery of the Gulf Stream dates back to the 1512 expedition of Juan Ponce de León. Apart from such hydrographic measurements, there are two methods to measure ocean currents directly. Firstly, the Eulerian velocity can be measured using a current meter along a rope in the water column. Secondly, a drifter can be used, which is an object that moves with the currents and whose velocity can be measured. 
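As a concrete illustration of the bulk formula and the Ekman transport relation discussed in the sections above, the following sketch computes the stress for a few 10 m wind speeds and the magnitude of the corresponding depth-integrated Ekman volume transport. The numerical values of the drag coefficient, densities and Coriolis parameter are typical mid-latitude figures chosen for illustration only.

```python
# Minimal sketch: surface wind stress from the bulk formula tau = rho_air * C_D * U10^2
# and the magnitude of the depth-integrated Ekman volume transport tau / (rho_sea * f).
# All numerical values are typical, illustrative figures.

RHO_AIR = 1.2        # kg/m^3, density of surface air
RHO_SEA = 1025.0     # kg/m^3, density of sea water
C_D = 1.3e-3         # dimensionless drag coefficient (order of 10^-3)
F_CORIOLIS = 1.0e-4  # 1/s, Coriolis parameter at mid-latitudes

def wind_stress(u10: float) -> float:
    """Wind stress magnitude in Pa from the 10 m wind speed in m/s."""
    return RHO_AIR * C_D * u10 ** 2

def ekman_transport(tau: float) -> float:
    """Magnitude of the depth-integrated Ekman volume transport per unit width
    (m^2/s); it is directed perpendicular to the wind stress."""
    return tau / (RHO_SEA * F_CORIOLIS)

if __name__ == "__main__":
    for u10 in (5.0, 10.0, 20.0):
        tau = wind_stress(u10)
        print(f"U10 = {u10:4.1f} m/s -> tau = {tau:5.2f} Pa, "
              f"Ekman transport = {ekman_transport(tau):5.2f} m^2/s")
```

For a 10 m/s wind this gives a stress of roughly 0.16 Pa, of the same order as the typical value of about 0.1 Pa quoted above.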
Wind-driven upwelling Wind-driven upwelling brings nutrients from deep waters to the surface, which leads to biological productivity. Therefore, wind stress impacts biological activity around the globe. Two important forms of wind-driven upwelling are coastal upwelling and equatorial upwelling. Coastal upwelling occurs when the wind stress is directed with the coast on its left (right) in the Northern (Southern) Hemisphere. If so, Ekman transport is directed away from the coast, forcing waters from below to move upward. Well-known coastal upwelling areas are the Canary Current, the Benguela Current, the California Current, the Humboldt Current, and the Somali Current. All of these currents support major fisheries due to the increased biological activity. Equatorial upwelling occurs due to the trade winds blowing towards the west in both the Northern Hemisphere and the Southern Hemisphere. However, the Ekman transport that is associated with these trade winds is directed 90° to the right of the winds in the Northern Hemisphere and 90° to the left of the winds in the Southern Hemisphere. As a result, water is transported away from the equator both to the north and to the south of it. This horizontal divergence of mass has to be compensated and hence upwelling occurs. Wind waves Wind waves are waves at the water surface that are generated by the shear action of the wind stress on the water surface, with gravity acting as a restoring force that returns the water surface to its equilibrium position. Wind waves in the ocean are also known as ocean surface waves. The wind waves interact with both the air and water flows above and below the waves. Therefore, the characteristics of wind waves are determined by the coupling processes between the boundary layers of both the atmosphere and ocean. Wind waves also play an important role themselves in the interaction processes between the ocean and the atmosphere. Wind waves in the ocean can travel thousands of kilometers. A proper description of the physical mechanisms that cause the growth of wind waves, and that is in accordance with observations, has yet to be completed. A necessary condition for wind waves to grow is a minimum wind speed of 0.05 m/s. Expressions for the drag coefficient The drag coefficient is a dimensionless quantity which quantifies the resistance of the water surface. Because the drag coefficient depends on the history of the wind, it is expressed differently for different time and spatial scales. A general expression for the drag coefficient does not yet exist and the value is unknown for unsteady and non-ideal conditions. In general, the drag coefficient increases with increasing wind speed and is greater for shallower waters. The geostrophic drag coefficient is expressed as: where is the geostrophic wind which is given by: In global climate models, a drag coefficient appropriate for a spatial scale of 1° by 1° and a monthly time scale is often used. On such a timescale, the wind can fluctuate strongly. The monthly mean shear stress can be expressed as: where is the density, is the drag coefficient, is the monthly mean wind and U' is the fluctuation from the monthly mean. Measurements It is not possible to directly measure the wind stress on the ocean surface. 
To obtain measurements of the wind stress, an easily measurable quantity such as the wind speed is measured instead, and the wind stress is then obtained via a parametrization. Still, measurements of the wind stress are important, as the value of the drag coefficient is not known for unsteady and non-ideal conditions. Measurements of the wind stress for such conditions can resolve the issue of the unknown drag coefficient. Four known methods of measuring the drag coefficient are the Reynolds stress method, the dissipation method, the profile method and radar remote sensing. Wind stress on land surface The wind can also exert a shear stress on the land surface, which can lead to erosion of the ground. References Fluid dynamics Physical oceanography
Wind stress
[ "Physics", "Chemistry", "Engineering" ]
2,897
[ "Applied and interdisciplinary physics", "Chemical engineering", "Physical oceanography", "Piping", "Fluid dynamics" ]
12,647,205
https://en.wikipedia.org/wiki/Digital%20Library%20of%20Mathematical%20Functions
The Digital Library of Mathematical Functions (DLMF) is an online project at the National Institute of Standards and Technology (NIST) to develop a database of mathematical reference data for special functions and their applications. It is intended as an update of Abramowitz and Stegun's Handbook of Mathematical Functions (A&S). It was published online on 7 May 2010, though some chapters appeared earlier. In the same year it was published by Cambridge University Press under the title NIST Handbook of Mathematical Functions. In contrast to A&S, whose initial print run was done by the U.S. Government Printing Office and was in the public domain, NIST asserts that it holds copyright to the DLMF under Title 17 USC 105 of the U.S. Code. See also NIST Dictionary of Algorithms and Data Structures References Further reading External links NIST Digital Library of Mathematical Functions Corrected errors in NIST DLMF Handbooks and manuals Mathematics websites Mathematical tables Numerical analysis Special functions Mathematical databases
Digital Library of Mathematical Functions
[ "Mathematics" ]
210
[ "Special functions", "Computational mathematics", "Combinatorics", "Mathematical tables", "Mathematical relations", "Numerical analysis", "Approximations" ]
12,647,784
https://en.wikipedia.org/wiki/Meuse/Haute%20Marne%20Underground%20Research%20Laboratory
The Meuse/Haute Marne Underground Research Laboratory is a laboratory located 500 metres underground in Bure in the Meuse département. It allows study of the geological formation in order to evaluate its suitability for a deep geological repository for high-level and long-lived medium-level radioactive waste. It is managed by the Agence nationale pour la gestion des déchets radioactifs, the French nuclear waste management authority. Since radioactive waste needs to be safely stored for extreme lengths of time, the geology of the area is of utmost importance. Geologically, this site chiefly consists of Kimmeridgian claystone 500 metres underground in the Paris Basin. The exploratory work was for the Cigéo project, which would store medium-level waste at Bure from 2025 onwards. These plans have been met with protests. History The first practical geological studies on locations for deep geological repository in France date back to the 1960s. In the 1980s Andra, at that time a branch of the CEA, was given the task of investigating possible locations for an underground research laboratory. Site selection Two geological formations were initially considered in the 1990s: clay and granite. The 1991 law thus dictated that research would be done at several possible sites. In 1994, work by Andra investigated a wide range of locations in four separate départements, and further narrowed down the choice to three locations. Layout All above- and below-ground facilities at the site are organized around two wells. Surface installations There are headframes above each well for transporting equipment and people in and out. There are also numerous other surface buildings and research facilities, which occupy a total of 170,000 square metres. The reception building has a green roof. Tunnels As of 2007, a 40 metre long tunnel had been completed at the 445 m underground level, while almost 500 m of tunnels had been excavated at the 490 m underground level. Further extensions were built between 2007 and 2009 and more are scheduled to be completed by 2015. Cigéo After 20 years of exploratory research, ANDRA intends to file a request in 2019 to build Cigéo (French: Centre Industriel de Stockage Géologique), which will store underground the most radioactive waste from French nuclear power stations. The Nuclear Safety Authority has confirmed that the rock has not moved for several million years, although it wants a solution to be found to the problem of bitumen deposits. The future storage centre would have an area of 600 hectares, for 250 kilometres of galleries. It is proposed to store 70,000 cubic metres of intermediate-level waste and 10,000 cubic metres of long-lived high-level vitrified waste. The French nuclear energy industry produces around 13,000 cubic metres of toxic radioactive waste every year. The project was initially estimated to cost between €13.5 and €16.5 billion in 2005. In 2009 costs were re-estimated at €36 billion. In 2012 ANDRA revised costs to €34.4 billion, including taxes and operational costs for 100 years; however, EDF and the CEA estimated €20 billion. The French government budgeted €25 billion in 2016. Retrievability French law stipulates that for the first few hundred years the stored material must be safely retrievable, insofar as future generations may find it useful. The storage facility is therefore being designed for this purpose. 
Protests Several groups have opposed the building of the waste storage facility, including Burestop 55, Bure Zone Libre and EODRA (Élus opposés à l'enfouissement des déchets radioactifs). A Maison de la Résistance (House of Resistance) was set up by anti-nuclear activists in the centre of Bure in 2004. The forest of Mandres-en-Barrois, the site of proposed air vents for the expanded site, was occupied in 2015. It became a ZAD (Zone to Defend) before being evicted in 2018. See also Mont Terri Rock Laboratory (swisstopo, Saint-Ursanne, CH) Grimsel Test Site (GTS, Rock Laboratory in granite, CH) HADES Underground Research Laboratory (SCK CEN, Mol, BE) Bedretto Underground Laboratory for Geoenergies (ETH Zurich, CH) References External links Radioactive waste repositories Underground laboratories Nuclear research institutes Nuclear research institutes in France Laboratories in France
Meuse/Haute Marne Underground Research Laboratory
[ "Engineering" ]
901
[ "Nuclear research institutes", "Nuclear organizations" ]
12,649,653
https://en.wikipedia.org/wiki/BOP%20reagent
BOP (benzotriazol-1-yloxytris(dimethylamino)phosphonium hexafluorophosphate) is a reagent commonly used for the synthesis of amides from carboxylic acids and amines in peptide synthesis. It can be prepared from 1-hydroxybenzotriazole and a chlorophosphonium reagent under basic conditions. This reagent has advantages in peptide synthesis since it avoids side reactions like the dehydration of asparagine or glutamine residues. BOP has been used for the synthesis of esters from carboxylic acids and alcohols. BOP has also been used in the reduction of carboxylic acids to primary alcohols with sodium borohydride (NaBH4). Its use raises safety concerns since the carcinogenic compound HMPA is produced as a stoichiometric by-product. See also PyBOP, a related phosphonium reagent for amide bond formation PyAOP, a related phosphonium reagent for amide bond formation References Hexafluorophosphates Peptide coupling reagents Benzotriazoles Biochemistry Biochemistry methods Reagents for biochemistry Quaternary phosphonium compounds Organophosphorus compounds
BOP reagent
[ "Chemistry", "Biology" ]
280
[ "Biochemistry methods", "Peptide coupling reagents", "Functional groups", "Organic compounds", "Organophosphorus compounds", "nan", "Reagents for organic chemistry", "Biochemistry", "Reagents for biochemistry", "Organic compound stubs", "Organic chemistry stubs" ]
3,431,092
https://en.wikipedia.org/wiki/Zero%20force%20member
In the field of engineering mechanics, a zero force member is a member (a single truss segment) in a truss which, for a given load, carries no force: it is neither in tension nor in compression. Description In a truss, a zero-force member is often found at pins (any connections within the truss) where no external load is applied and three or fewer truss members meet. Basic zero-force members can be identified by analyzing the forces acting on an individual pin in a physical system. If the pin has an external force or moment applied to it, then all of the members attached to that pin are not zero-force members unless the external force acts in a manner that fulfills one of the rules: If two non-collinear members meet in an unloaded joint, both are zero-force members. If three members meet in an unloaded joint, of which two are collinear, then the third member is a zero-force member. Restated for clarity, when there are no external loads at a pin joint, the two rules that determine zero-force members are: If a joint in a truss has only two non-collinear members and no external load or support reaction is applied at that joint, then both members are zero-force members. If three members form a joint and two of these members are collinear while the third member is not, and no external load or support reaction is applied at the joint, the third non-collinear member is a zero-force member. Reasons to include zero force members in a truss system It is a common practice to eliminate zero force members from a truss to simplify analysis. Although an absolute minimalist design might eliminate all zero force elements from a truss, there are still sound reasons to retain some of these components in actual built systems: These members can contribute to the stability of the structure by preventing buckling of long, slender members under compressive forces. These members can increase rigidity when variations are introduced in the normal external loading configuration, including dynamic and variable forces. See also Structural engineering Neutral plane External links Truss Overview Another Truss Overview References Structural analysis Statics
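The two rules stated above can be applied mechanically; the sketch below does so for a single joint, given the direction vectors of the members meeting there. The joint representation, function names and the collinearity test are illustrative choices, not part of any standard analysis package.

```python
# Minimal sketch: applying the two zero-force-member rules at one unloaded
# truss joint. Each member is given as a 2D direction vector pointing away
# from the joint. Illustrative only.
from math import isclose

def _collinear(d1, d2, tol=1e-9):
    """True if two member directions lie along the same line (parallel or opposite)."""
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    return isclose(cross, 0.0, abs_tol=tol)

def zero_force_members(directions, loaded=False):
    """Return indices of members identified as zero-force by the two rules.

    directions -- list of (dx, dy) direction vectors of the members at the joint
    loaded     -- True if an external load or support reaction acts at the joint
    """
    if loaded:
        return []                      # the rules apply only to unloaded joints
    if len(directions) == 2 and not _collinear(*directions):
        return [0, 1]                  # rule 1: both members are zero-force
    if len(directions) == 3:
        for k in range(3):             # rule 2: look for a non-collinear third member
            others = [d for i, d in enumerate(directions) if i != k]
            if _collinear(*others) and not _collinear(directions[k], others[0]):
                return [k]
    return []

if __name__ == "__main__":
    # Two members meeting at a right angle at an unloaded joint -> both zero-force.
    print(zero_force_members([(1.0, 0.0), (0.0, 1.0)]))               # [0, 1]
    # Two collinear members plus a perpendicular third -> the third is zero-force.
    print(zero_force_members([(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0)]))  # [2]
```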
Zero force member
[ "Physics", "Engineering" ]
431
[ "Structural engineering", "Statics", "Structural analysis", "Classical mechanics", "Mechanical engineering", "Aerospace engineering" ]
3,431,556
https://en.wikipedia.org/wiki/Turbosteamer
A turbosteamer is a BMW combined cycle engine using a waste heat recovery unit. Waste heat energy from the internal combustion engine is used to generate steam for a steam engine, which creates supplemental power for the vehicle. The turbosteamer device is affixed to the exhaust and cooling system. It salvages the heat wasted in the exhaust and radiator (as much as 80% of heat energy) and uses a steam piston or turbine to relay that power to the crankshaft. The steam circuit produces and of torque at peak (for a 1.8 straight-4 engine), yielding an estimated 15% gain in fuel efficiency. Unlike gasoline-electric hybrids, these gains increase at higher, steadier speeds. Timescale BMW pioneered this concept as early as 2000 under the direction of Dr. Raymond Freymann, and although the system was designed to fit most current BMW models, the technology did not reach production. See also COGAS Cogeneration Exhaust heat recovery system Still engine Turbo-compound engine Publications R. Freymann, W. Strobl, A. Obieglo: The Turbosteamer: A System Introducing the Principle of Cogeneration in Automotive Applications. Motortechnische Zeitschrift, MTZ 05/2008 Jahrgang 69, pp. 404-412. References External links Gizmag article discussing BMW's turbosteamer Article on BMW's alternative Combined Cycle Hybrid technology Looking for the next gram. BMW Group. Retrieved 5 December 2011. Engine technology Steam power
Turbosteamer
[ "Physics", "Technology" ]
319
[ "Physical quantities", "Engines", "Steam power", "Engine technology", "Power (physics)" ]
3,432,612
https://en.wikipedia.org/wiki/Cembrene%20A
Cembrene A, or sometimes neocembrene, is a natural monocyclic diterpene isolated from corals of the genus Nephthea. It is a colorless oil with a faint wax-like odor. Cembrene A itself has little importance as a chemical entity, being a trail pheromone for termites; however, the chemical structure of cembrene is central to a very wide variety of other natural products found both in plants and in animals. Pinus leucodermis tree bark and wood essential oils contain a high percentage of cembrene. Cembrenes are biosynthesized by macrocyclization of geranylgeranyl pyrophosphate. References Diterpenes Alkene derivatives Insect ecology Insect pheromones Cycloalkenes
Cembrene A
[ "Chemistry" ]
172
[ "Insect pheromones", "Chemical ecology" ]
3,432,674
https://en.wikipedia.org/wiki/AFP-L3
In oncology, AFP-L3 is an isoform of alpha-fetoprotein (AFP), a substance typically used in the triple test during pregnancy and for screening chronic liver disease patients for hepatocellular carcinoma (HCC). AFP can be fractionated by affinity electrophoresis into three glycoforms: L1, L2, and L3, based on their reactivity with the lectin Lens culinaris agglutinin (LCA). AFP-L3 binds strongly to LCA via an additional α(1-6) fucose residue attached at the reducing terminus of N-acetylglucosamine; this is in contrast to the L1 isoform. It is the L1 isoform that is typically associated with non-HCC inflammatory liver disease. The L3 isoform is specific to malignant tumors, and its detected presence can serve to identify patients who need increased monitoring for the development of HCC in high-risk populations (i.e. chronic hepatitis B and C and/or liver cirrhosis). AFP-L3% is now being considered as a tumor marker for the North American demographic. AFP-L3% assay AFP-L3 is isolated via an immunoassay and quantified using chemiluminescence on an automated platform. Results for AFP-L3 are represented as a ratio of LCA-reactive AFP to total AFP (AFP-L3%). The AFP-L3% assay, a liquid-phase binding assay, will help to identify at-risk subjects earlier, allowing for more intense evaluation for evidence of HCC according to existing practice guidelines in oncology. AFP-L3% is the standard for quantifying the L3 isoform of AFP in the serum of high-risk chronic liver disease (CLD) patients. Studies have shown that AFP-L3% test results of more than 10% can be indicative of early HCC or early nonseminomatous germ cell tumor. Early testimonials from hepatologists indicate that there is a target patient population for the AFP-L3% assay. This target population consists of those CLD patients who have AFP concentrations in the indeterminate range of 20-200+ ng/mL and a small or indeterminate mass on imaging. It is in this range that doctors experience trouble differentiating non-HCC fluctuations in AFP from those indicating HCC. In such patients these hepatologists recommend utilizing AFP-L3% to clarify the disease state. Some hepatologists also use a positive result to urge insurance companies to pay for more frequent and intensive imaging. Ultimately AFP-L3% may be used as a rule-in or rule-out assay for transplantation consideration and/or an intermediate step in surveillance, precluding costly imaging on patients with fluctuating AFP results but negative for HCC. References AFP-L3: a new generation of tumor marker for hepatocellular carcinoma. Li D, et al., Clin Chim Acta. 2001 Nov;313(1-2):15-9. Clinical evaluation of lentil lectin-reactive alpha-fetoprotein-L3 in histology-proven hepatocellular carcinoma. Khien VV, et al., Int J Biol Markers. 2001 Apr-Jun;16(2):105-11. Usefulness of measurement of Lens culinaris agglutinin-reactive fraction of alpha-fetoprotein as a marker of prognosis and recurrence of small hepatocellular carcinoma. Hayashi K, et al., Am J Gastroenterol. 1999 Oct;94(10):3028-33. A collaborative study for the evaluation of lectin-reactive alpha-fetoproteins in early detection of hepatocellular carcinoma. Takata, K., et al., Cancer Res., 53, 5419–5423, 1993. Utility of lentil lectin affinity of alpha-fetoprotein in the diagnosis of hepatocellular carcinoma. Wang, S., et al., J. Hepatology, 25, 166–171, 1996. Early recognition of hepatocellular carcinoma based on altered profiles of alpha-fetoprotein. Sato, Y., et al., N. Engl. J. Med., 328, 1802–1806, 1993. 
A clinical study of lectin-reactive alpha-fetoprotein as an early indicator of hepatocellular carcinoma in the follow-up of cirrhotic patients. Shiraki, K., Hepatology, 22, 802–807, 1985. Prognostic significance of lens culinaris agglutinin A-reactive alpha-fetoprotein in small hepatocellular carcinoma. Yamashita, F., et al., Gastroenterology, 111, 996–1001, 1996. The fucosylation index of alpha-fetoprotein as a possible prognostic indicator for patients with hepatocellular carcinoma. Aoyagi, Y., et al., Am. Cancer Soc., 83, 2076–2082, 1998. Monitoring of lectin-reactive alpha-fetoproteins in patients with hepatocellular carcinoma treated using transcatheter arterial embolization. Yamashita, F., Eur. J. Gastroenterol. Hepatol., 7, 627–633, 1995. Evaluation of curability and prediction of prognosis after surgical treatment for hepatocellular carcinoma by lens culinaris agglutinin-reactive alpha-fetoprotein. Okuda, K., et al., Inter. J. Oncol., 14, 265–271, 1999. Usefulness of lens culinaris agglutinin A-reactive fraction of alpha-fetoprotein (AFP-L3) as a marker of distant metastasis from hepatocellular carcinoma. Yamashiki, N., et al., Oncology Reports, 6, 1229–1232, 1999. Relationship between lens culinaris agglutinin reactive alpha-fetoprotein and biological features of hepatocellular carcinoma. Kusaba, T., Kurume Med. J., 45, 113–120, 1998. Tumor vascularity and lens culinaris agglutinin reactive alpha-fetoprotein are predictors of long-term prognosis in patients with hepatocellular carcinoma after percutaneous ethanol injection therapy. Fukuda, H., Kurume Med. J., 45, 187–193, 1998. Clinical utility of lens culinaris agglutinin-reactive alpha-fetoprotein in small hepatocellular carcinoma: Special reference to imaging diagnosis. Kumada, T., et al., J. Hepatol., 30, 125–130, 1999. Deletion of serum lectin-reactive alpha-fetoprotein by Acyclic Retinoid: A potent biomarker in the chemoprevention of second primary hepatoma. Moriwaki, H., Clin. Cancer Res., 3, 727–731, 1997. Clinical utility of simultaneous measurement of serum high-sensitivity des-gamma-carboxy prothrombin and lens culinaris agglutinin A-reactive alpha-fetoprotein in patients with small hepatocellular carcinoma. Sassa, T., et al., Eur. J. Gastroenterol. Hepatol. 11, 1387–1392, 1999. A simultaneous monitoring of lens culinaris agglutinin A-reactive alpha-fetoprotein and des-gamma-carboxy prothrombin as an early diagnosis of hepatocellular carcinoma in the follow-up of cirrhotic patients. Shimauchi, Y., et al., Oncology Reports, 7, 249–256, 2000. Tumor markers
AFP-L3
[ "Chemistry", "Biology" ]
1,826
[ "Chemical pathology", "Tumor markers", "Biomarkers" ]
3,432,792
https://en.wikipedia.org/wiki/Amine%20oxide
In chemistry, an amine oxide, also known as an amine N-oxide or simply N-oxide, is a chemical compound that has the chemical formula R3N+−O−. It contains a nitrogen–oxygen coordinate covalent bond with three additional hydrogen and/or substituent groups attached to nitrogen. Sometimes it is written as R3N→O or, alternatively, as R3N=O. In the strict sense, the term amine oxide applies only to oxides of tertiary amines. Sometimes it is also used for the analogous derivatives of primary and secondary amines. Examples of amine oxides include pyridine-N-oxide, a water-soluble crystalline solid with melting point 62–67 °C, and N-methylmorpholine N-oxide, which is an oxidant. Applications Amine oxides are surfactants commonly used in consumer products such as shampoos, conditioners, detergents, and hard surface cleaners. Alkyl dimethyl amine oxide (chain lengths C10–C16) is the most commercially used amine oxide. They are considered a high production volume class of compounds in more than one member country of the Organisation for Economic Co-operation and Development (OECD); with annual production over in the US, Europe, and Japan, respectively. In North America, more than 95% of amine oxides are used in home cleaning products. They serve as stabilizers, thickeners, emollients, emulsifiers, and conditioners with active concentrations in the range of 0.1–10%. The remainder (< 5%) is used in personal care, institutional, commercial products and for unique patented uses such as photography. Properties Amine oxides are used as a protecting group for amines and as chemical intermediates. Long-chain alkyl amine oxides are used as amphoteric surfactants and foam stabilizers. Amine oxides are highly polar molecules and have a polarity close to that of quaternary ammonium salts. Small amine oxides are very hydrophilic, with excellent water solubility and very poor solubility in most organic solvents. Amine oxides are weak bases with a pKb of around 4.5 that form R3N+−OH, cationic hydroxylamines, upon protonation at a pH below their pKa. Synthesis Almost all amine oxides are prepared by the oxidation of either tertiary aliphatic amines or aromatic N-heterocycles. Hydrogen peroxide is the most common reagent both industrially and in academia; however, peracids are also important. More specialised oxidising agents can see niche use, for instance Caro's acid or mCPBA. Spontaneous or catalysed reactions using molecular oxygen are rare. Certain other reactions will also produce amine oxides, such as the retro-Cope elimination; however, they are rarely employed. Reactions Amine oxides exhibit many kinds of reactions. Pyrolytic elimination. Amine oxides, when heated to 150–200 °C, undergo a Cope reaction to form a hydroxylamine and an alkene. The reaction requires the alkyl groups to have hydrogens at the beta-carbon (i.e. works with ethyl and above, but not methyl) Reduction to amines. Amine oxides are readily converted to the parent amine by common reduction reagents including lithium aluminium hydride, sodium borohydride, catalytic reduction, zinc / acetic acid, and iron / acetic acid. Pyridine N-oxides can be deoxygenated by phosphorus oxychloride. Sacrificial catalysis. Oxidants can be regenerated by reduction of N-oxides, as in the case of regeneration of osmium tetroxide by N-methylmorpholine N-oxide in the Upjohn dihydroxylation. O-Alkylation. 
Pyridine N-oxides react with alkyl halides to give the O-alkylated product. Bis-ter-pyridine derivatives adsorbed on silver surfaces have been reported to react with oxygen to form the bis-ter-pyridine N-oxide. This reaction can be followed by video-scanning tunneling microscopy with sub-molecular resolution. In the Meisenheimer rearrangement (after Jakob Meisenheimer) certain N-oxides rearrange to hydroxylamines, via either a 1,2-rearrangement or a 2,3-rearrangement. In the Polonovski reaction a tertiary N-oxide is cleaved by acetic anhydride to the corresponding acetamide and aldehyde. Metabolites Amine oxides are common metabolites of medications and psychoactive drugs. Examples include nicotine, zolmitriptan, and morphine. Amine oxides of anti-cancer drugs have been developed as prodrugs that are metabolized in the oxygen-deficient cancer tissue to the active drug. Human safety Amine oxides (AO) are not known to be carcinogens, dermal sensitizers, or reproductive toxicants. They are readily metabolized and excreted if ingested. Chronic ingestion by rabbits found lower body weight, diarrhea, and lenticular opacities at lowest observed adverse effect levels (LOAEL) in the range of 87–150 mg AO/kg bw/day. Tests of human skin exposure have found that after 8 hours less than 1% is absorbed into the body. Eye irritation due to amine oxides and other surfactants is moderate and temporary with no lasting effects. Environmental safety Amine oxides with an average chain length of 12.6 have been measured to be water-soluble at ~410 g/L. They are considered to have low bioaccumulation potential in aquatic species based on log Kow data from chain lengths less than C14 (bioconcentration factor < 87%). Levels of AO in untreated influent were found to be 2.3–27.8 μg/L, while in effluent they were found to be 0.4–2.91 μg/L. The highest effluent concentrations were found in oxidation ditch and trickling filter treatment plants. On average, over 96% removal has been found with secondary activated sludge treatment. Acute toxicity in fish, as indicated by 96-h LC50 tests, is in the range of 1,000–3,000 μg/L for carbon chain lengths less than C14. LC50 values for chain lengths greater than C14 range from 600 to 1,400 μg/L. The chronic toxicity value for fish is 420 μg/L. When normalized to C12.9, the NOEC is 310 μg/L for growth and hatchability. See also Functional group Amine, NR3 Hydroxylamine, NR2OH Phosphine oxide, PR3=O Sulfoxide, R2S=O Azoxy, RN=N+(O−)R Aminoxyl group, radicals with the general structure R2N–O• Category:Amine oxides, containing all articles on specific amine-oxide compounds References External links Chemistry of amine oxides Surfactants, types and uses (pdf) The amine oxides homepage Nomenclature of nitrogen compounds IUPAC definition Functional groups
Amine oxide
[ "Chemistry" ]
1,550
[ "Amine oxides", "Functional groups" ]
3,434,894
https://en.wikipedia.org/wiki/Surface-enhanced%20Raman%20spectroscopy
Surface-enhanced Raman spectroscopy or surface-enhanced Raman scattering (SERS) is a surface-sensitive technique that enhances Raman scattering by molecules adsorbed on rough metal surfaces or by nanostructures such as plasmonic-magnetic silica nanotubes. The enhancement factor can be as much as 10¹⁰ to 10¹¹, which means the technique may detect single molecules. History SERS from pyridine adsorbed on electrochemically roughened silver was first observed by Martin Fleischmann, Patrick J. Hendra and A. James McQuillan at the Department of Chemistry at the University of Southampton, UK in 1973. This initial publication has been cited over 6000 times. The 40th Anniversary of the first observation of the SERS effect has been marked by the Royal Society of Chemistry by the award of a National Chemical Landmark plaque to the University of Southampton. In 1977, two groups independently noted that the concentration of scattering species could not account for the enhanced signal and each proposed a mechanism for the observed enhancement. Their theories are still accepted as explaining the SERS effect. Jeanmaire and Richard Van Duyne proposed an electromagnetic effect, while Albrecht and Creighton proposed a charge-transfer effect. Rufus Ritchie, of Oak Ridge National Laboratory's Health Sciences Research Division, predicted the existence of the surface plasmon. Mechanisms The exact mechanism of the enhancement effect of SERS is still a matter of debate in the literature. There are two primary theories and while their mechanisms differ substantially, distinguishing them experimentally has not been straightforward. The electromagnetic theory proposes the excitation of localized surface plasmons, while the chemical theory proposes the formation of charge-transfer complexes. The chemical theory is based on resonance Raman spectroscopy, in which the frequency coincidence (or resonance) of the incident photon energy and electron transition greatly enhances Raman scattering intensity. Research in 2015 on a more powerful extension of the SERS technique called SLIPSERS (Slippery Liquid-Infused Porous SERS) has further supported the EM theory. Electromagnetic theory The increase in intensity of the Raman signal for adsorbates on particular surfaces occurs because of an enhancement in the electric field provided by the surface. When the incident light in the experiment strikes the surface, localized surface plasmons are excited. The field enhancement is greatest when the plasmon frequency, ωp, is in resonance with the radiation (ω = ωp/√3 for spherical particles). In order for scattering to occur, the plasmon oscillations must be perpendicular to the surface; if they are in-plane with the surface, no scattering will occur. It is because of this requirement that roughened surfaces or arrangements of nanoparticles are typically employed in SERS experiments as these surfaces provide an area on which these localized collective oscillations can occur. SERS enhancement can occur even when an excited molecule is relatively far apart from the surface which hosts metallic nanoparticles enabling surface plasmon phenomena. The light incident on the surface can excite a variety of phenomena in the surface, yet the complexity of this situation can be minimized by surfaces with features much smaller than the wavelength of the light, as only the dipolar contribution will be recognized by the system. The dipolar term contributes to the plasmon oscillations, which leads to the enhancement. 
The SERS effect is so pronounced because the field enhancement occurs twice. First, the field enhancement magnifies the intensity of incident light, which will excite the Raman modes of the molecule being studied, therefore increasing the signal of the Raman scattering. The Raman signal is then further magnified by the surface due to the same mechanism that excited the incident light, resulting in a greater increase in the total output. At each stage the electric field is enhanced as E², for a total enhancement of E⁴. The enhancement is not equal for all frequencies. For those frequencies for which the Raman signal is only slightly shifted from the incident light, both the incident laser light and the Raman signal can be near resonance with the plasmon frequency, leading to the E⁴ enhancement. When the frequency shift is large, the incident light and the Raman signal cannot both be on resonance with ωp, thus the enhancement at both stages cannot be maximal. The choice of surface metal is also dictated by the plasmon resonance frequency. Visible and near-infrared radiation (NIR) are used to excite Raman modes. Silver and gold are typical metals for SERS experiments because their plasmon resonance frequencies fall within these wavelength ranges, providing maximal enhancement for visible and NIR light. Copper's absorption spectrum also falls within the range acceptable for SERS experiments. Platinum and palladium nanostructures also display plasmon resonance within visible and NIR frequencies. Chemical theory Resonance Raman spectroscopy explains the huge enhancement of Raman scattering intensity. Intermolecular and intramolecular charge transfers significantly enhance Raman spectrum peaks. In particular, the enhancement is huge for species adsorbed on the metal surface due to the high-intensity charge transfers from the metal surface with wide band to the adsorbing species. This resonance Raman enhancement is dominant in SERS for species on small nanoclusters with considerable band gaps, because surface plasmons appear only on metal surfaces with near-zero band gaps. This chemical mechanism probably occurs in concert with the electromagnetic mechanism for metal surfaces. Surfaces While SERS can be performed in colloidal solutions, today the most common method for performing SERS measurements is by depositing a liquid sample onto a silicon or glass surface with a nanostructured noble metal surface. While the first experiments were performed on electrochemically roughened silver, now surfaces are often prepared using a distribution of metal nanoparticles on the surface as well as using lithography or porous silicon as a support. Two dimensional silicon nanopillars decorated with silver have also been used to create SERS active substrates. The most common metals used for plasmonic surfaces in visible light SERS are silver and gold; however, aluminium has recently been explored as an alternative plasmonic material, because its plasmon band is in the UV region, contrary to silver and gold. Hence, there is great interest in using aluminium for UV SERS. It has, however, surprisingly also been shown to have a large enhancement in the infrared, which is not fully understood. In the current decade, it has been recognized that the cost of SERS substrates must be reduced in order to become a commonly used analytical chemistry measurement technique. 
To meet this need, plasmonic paper has experienced widespread attention in the field, with highly sensitive SERS substrates being formed through approaches such as soaking, in-situ synthesis, screen printing and inkjet printing. The shape and size of the metal nanoparticles strongly affect the strength of the enhancement because these factors influence the ratio of absorption and scattering events. There is an ideal size for these particles, and an ideal surface thickness for each experiment. If concentration and particle size can be tuned better for each experiment this will go a long way in the cost reduction of substrates. Particles that are too large allow the excitation of multipoles, which are nonradiative. As only the dipole transition leads to Raman scattering, the higher-order transitions will cause a decrease in the overall efficiency of the enhancement. Particles that are too small lose their electrical conductance and cannot enhance the field. When the particle size approaches a few atoms, the definition of a plasmon does not hold, as there must be a large collection of electrons to oscillate together. An ideal SERS substrate must possess high uniformity and high field enhancement. Such substrates can be fabricated on a wafer scale and label-free superresolution microscopy has also been demonstrated using the fluctuations of surface enhanced Raman scattering signal on such highly uniform, high-performance plasmonic metasurfaces. Due to their unique physical and chemical properties, two-dimensional (2D) materials have gained significant attention as alternative substrates for surface-enhanced Raman spectroscopy (SERS). The use of 2D materials as SERS substrates offers several advantages over traditional metal substrates, including high sensitivity, reproducibility, and chemical stability. Graphene is one of the most widely studied 2D materials for SERS applications. Graphene has a high surface area, high electron mobility, and excellent chemical stability, making it an attractive substrate for SERS. Graphene-based SERS sensors have also been shown to be highly reproducible and stable, making them attractive for real-world applications. In addition to graphene, other 2D materials, especially MXenes, have also been investigated for SERS applications. MXenes have a high surface area, good electrical conductivity, and chemical stability, making them attractive for SERS applications. As a result, MXene-based SERS sensors have been used to detect various analytes, including organic molecules, drugs and their metabolites. As research and development continue, 2D materials-based SERS sensors will likely be more widely used in various industries, including environmental monitoring, healthcare, and food safety. Applications SERS substrates are used to detect the presence of low-abundance biomolecules, and can therefore detect proteins in bodily fluids. Early detection of pancreatic cancer biomarkers was accomplished using SERS-based immunoassay approach. A SERS-base multiplex protein biomarker detection platform in a microfluidic chip is used to detect several protein biomarkers to predict the type of disease and critical biomarkers and increase the chance of differentiating diseases with similar biomarkers like pancreatic cancer, ovarian cancer, and pancreatitis. This technology has been utilized to detect urea and blood plasma label free in human serum and may become the next generation in cancer detection and screening. 
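As a rough numerical illustration of the twofold field enhancement described under the electromagnetic theory above, the sketch below models the local-field enhancement near a plasmon resonance as a simple Lorentzian and multiplies the squared enhancements at the incident and Stokes-shifted frequencies to obtain an |E|⁴-type factor. All numbers (resonance position, linewidth, peak enhancement, Raman shift) are assumed, illustrative values rather than measured data, and the Lorentzian profile is only a convenient toy model, not a description of any specific substrate.

```python
def field_enhancement(omega, omega_res=2.4, gamma=0.2, g_max=30.0):
    """Toy Lorentzian model of the local-field enhancement |E_loc / E_0|
    near a plasmon resonance at omega_res (eV) with linewidth gamma (eV)."""
    return 1.0 + (g_max - 1.0) * (gamma / 2.0) ** 2 / ((omega - omega_res) ** 2 + (gamma / 2.0) ** 2)

omega_laser = 2.41           # eV, excitation assumed close to the resonance (~514 nm)
raman_shift = 0.12           # eV, roughly a 1000 cm^-1 Raman mode
omega_stokes = omega_laser - raman_shift

g_in = field_enhancement(omega_laser)     # enhancement of the incident field
g_out = field_enhancement(omega_stokes)   # enhancement of the scattered (Stokes) field
total = g_in ** 2 * g_out ** 2            # the E^4-type SERS enhancement factor

print(f"incident-field factor squared:  {g_in**2:.1f}")
print(f"scattered-field factor squared: {g_out**2:.1f}")
print(f"approximate total enhancement:  {total:.2e}")
```

The drop in the second factor as the Stokes line is detuned from the resonance mirrors the statement above that, for large Raman shifts, both stages cannot be simultaneously on resonance.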
The ability to analyze the composition of a mixture at the nanoscale makes SERS substrates beneficial for environmental analysis, pharmaceuticals, materials science, art and archaeological research, forensic science, drug and explosives detection, food quality analysis, and single algal cell detection. SERS combined with plasmonic sensing can be used for high-sensitivity quantitative analysis of small molecules in human biofluids, the quantitative detection of biomolecular interaction, the detection of low-level cancer biomarkers via sandwich immunoassay platforms, the label-free characterization of exosomes, and the study of redox processes at a single-molecule level. SERS is a powerful technique for determining structural information about molecular systems. It has found a wide range of applications in ultra-sensitive chemical sensing and environmental analyses. A review of the present and future applications of SERS was published in 2020. Selection rules The term surface enhanced Raman spectroscopy implies that it provides the same information that traditional Raman spectroscopy does, simply with a greatly enhanced signal. While the spectra of most SERS experiments are similar to the non-surface enhanced spectra, there are often differences in the number of modes present. Additional modes not found in the traditional Raman spectrum can be present in the SERS spectrum, while other modes can disappear. The modes observed in any spectroscopic experiment are dictated by the symmetry of the molecules and are usually summarized by selection rules. When molecules are adsorbed to a surface, the symmetry of the system can change, slightly modifying the symmetry of the molecule, which can lead to differences in mode selection. One common way in which selection rules are modified arises from the fact that many molecules that have a center of symmetry lose that feature when adsorbed to a surface. The loss of a center of symmetry eliminates the requirements of the mutual exclusion rule, which dictates that modes can only be either Raman or infrared active. Thus modes that would normally appear only in the infrared spectrum of the free molecule can appear in the SERS spectrum. A molecule's symmetry can be changed in different ways depending on the orientation in which the molecule is attached to the surface. In some experiments, it is possible to determine the orientation of adsorption to the surface from the SERS spectrum, as different modes will be present depending on how the symmetry is modified. Remote SERS Remote surface-enhanced Raman spectroscopy (SERS) consists of using metallic nanowaveguides supporting propagating surface plasmon polaritons (SPPs) to perform SERS at a location distant from that of the incident laser. Propagating SPPs supported by nanowires have been used to demonstrate remote excitation, as well as remote detection, of SERS. A silver nanowire was also used to show remote excitation and detection using graphene as the Raman scatterer. Applications Different plasmonic systems have already been used to show Raman detection of biomolecules in vivo in cells and remote excitation of surface catalytic reactions. Immunoassays SERS-based immunoassays can be used for detection of low-abundance biomarkers. For example, antibodies and gold particles can be used to quantify proteins in serum with high sensitivity and specificity.
Oligonucleotide targeting SERS can be used to target specific DNA and RNA sequences using a combination of gold and silver nanoparticles and Raman-active dyes, such as Cy3. Specific single nucleotide polymorphisms (SNP) can be identified using this technique. The gold nanoparticles facilitate the formation of a silver coating on the dye-labelled regions of DNA or RNA, allowing SERS to be performed. This has several potential applications: for example, Cao et al. report that gene sequences for HIV, Ebola, hepatitis, and Bacillus anthracis can be uniquely identified using this technique. Each spectrum was specific, which is advantageous over fluorescence detection; some fluorescent markers overlap and interfere with other gene markers. An advantage of this technique for identifying gene sequences is that several Raman dyes are commercially available, which could lead to the development of non-overlapping probes for gene detection. See also Tip-enhanced Raman spectroscopy References Surface science Raman scattering Raman spectroscopy Plasmonics
Surface-enhanced Raman spectroscopy
[ "Physics", "Chemistry", "Materials_science" ]
2,921
[ "Plasmonics", "Surface science", "Condensed matter physics", "Nanotechnology", "Solid state engineering" ]
3,436,139
https://en.wikipedia.org/wiki/Pondcrete
Pondcrete is a mixture of cement and sludge. Its role is to immobilize hazardous waste and, in some cases, low-level and mixed-level radioactive waste, in the form of solid material. The material was used by the United States Department of Energy and its contractor, Rockwell International, in an attempt to handle the radioactive waste from contaminated ponds at the Rocky Flats Plant for burial in the Nevada desert. Portland cement is mixed with sludge to solidify into "pondcrete" blocks, which are placed into large, plastic-lined boxes. The sludge is taken from solar evaporation ponds, which are used to remove moisture from waste materials and thereby reduce their weight. To do this, liquid waste is poured into artificial, shallow ponds. The waste is heated by solar radiation and the moisture evaporates, leaving the waste behind. These ponds contained low-level radioactive process waste as well as sanitary sewage sludge and wastes, which categorizes them and the pondcrete as mixed waste. Radioactive waste Because the blocks were classified as mixed-level radioactive waste, including plutonium, Rockwell International was unable to store the blocks at the Nevada Test Site. The Nevada Test Site did not have a permit to store mixed-level radioactive waste, so the blocks were left in temporary storage at Rocky Flats. Due to problems in production, many of the blocks did not harden correctly and eventually began to seep from the boxes, causing large-scale environmental contamination of the area. The blocks, which contained plutonium-239, a radioactive isotope with a half-life of 24,100 years, had failed within a year. Despite warnings by engineer Jim Stone that the blocks would most likely fail earlier than expected, the blocks were still produced. It was later Jim Stone who filed a lawsuit against the company, claiming that it had concealed environmental, safety, and health problems from the United States Department of Energy. Investigation The contamination led to an investigation of the plant by the Federal Bureau of Investigation and the Environmental Protection Agency, which eventually resulted in its shutdown. In 1993, Federal Judge Sherman Finesilver approved the release of the Colorado Federal District Court Special Grand Jury Report on the investigation. The report found that oversight by the Department of Energy and the Environmental Protection Agency was not adequate to protect the environment, and that Rockwell did not comply with environmental laws at the Rocky Flats Plant. The United States Department of Energy (DOE) and Rockwell violated the Resource Conservation and Recovery Act (RCRA) by illegally storing, treating and disposing of residues in more than 17,000 blocks of pondcrete and saltcrete in plastic-lined cardboard containers outdoors on the 904 Pad at the plant. Each pondcrete block, containing mixed wastes (radionuclides, cadmium, methylene chloride and acetone), weighed between 1,500 and 1,800 pounds. These blocks did not harden sufficiently like concrete, instead maintaining the consistency of wet clay. Many of the boxes ruptured, possibly due to extreme temperature fluctuations in that region of Colorado, spilling these wastes onto the asphalt pad at Site 904. Rain and wind carried these wastes into drainage areas and into the air and soil. See also Saltcrete Rocky Flats Plant Radioactive Waste Radioactive contamination from the Rocky Flats Plant References Concrete Radioactive waste Waste treatment technology
Pondcrete
[ "Chemistry", "Technology", "Engineering" ]
669
[ "Structural engineering", "Water treatment", "Environmental impact of nuclear power", "Hazardous waste", "Radioactivity", "Environmental engineering", "Concrete", "Waste treatment technology", "Radioactive waste" ]
3,436,583
https://en.wikipedia.org/wiki/Atomic%20packing%20factor
In crystallography, atomic packing factor (APF), packing efficiency, or packing fraction is the fraction of volume in a crystal structure that is occupied by constituent particles. It is a dimensionless quantity and always less than unity. In atomic systems, by convention, the APF is determined by assuming that atoms are rigid spheres. The radius of the spheres is taken to be the maximum value such that the atoms do not overlap. For one-component crystals (those that contain only one type of particle), the packing fraction is represented mathematically by APF = Nparticle Vparticle / Vunit cell, where Nparticle is the number of particles in the unit cell, Vparticle is the volume of each particle, and Vunit cell is the volume occupied by the unit cell. It can be proven mathematically that for one-component structures, the most dense arrangement of atoms has an APF of about 0.74 (see Kepler conjecture), obtained by the close-packed structures. For multiple-component structures (such as with interstitial alloys), the APF can exceed 0.74. The atomic packing factor of a unit cell is relevant to the study of materials science, where it explains many properties of materials. For example, metals with a high atomic packing factor will have a higher "workability" (malleability or ductility), similar to how a road is smoother when the stones are closer together, allowing metal atoms to slide past one another more easily. Single component crystal structures Common sphere packings taken on by atomic systems are listed below with their corresponding packing fraction. Hexagonal close-packed (HCP): 0.74 Face-centered cubic (FCC): 0.74 (also called cubic close-packed, CCP) Body-centered cubic (BCC): 0.68 Simple cubic: 0.52 Diamond cubic: 0.34 The majority of metals take on either the HCP, FCC, or BCC structure. Simple cubic For a simple cubic packing, the number of atoms per unit cell is one. The side of the unit cell is of length 2r, where r is the radius of the atom. Face-centered cubic For a face-centered cubic unit cell, the number of atoms is four. A line can be drawn from the top corner of a cube diagonally to the bottom corner on the same side of the cube, which is equal to 4r. Using geometry, and the side length, a can be related to r as a = 2√2·r. Knowing this and the formula for the volume of a sphere, it becomes possible to calculate the APF as follows: APF = 4·(4/3)πr³ / (2√2·r)³ = π/(3√2) ≈ 0.74. Body-centered cubic The primitive unit cell for the body-centered cubic crystal structure contains several fractions taken from nine atoms (if the particles in the crystal are atoms): one on each corner of the cube and one atom in the center. Because the volume of each of the eight corner atoms is shared between eight adjacent cells, each BCC cell contains the equivalent volume of two atoms (one central and one on the corner). Each corner atom touches the center atom. A line that is drawn from one corner of the cube through the center and to the other corner passes through 4r, where r is the radius of an atom. By geometry, the length of the diagonal is a√3. Therefore, the length of each side of the BCC structure can be related to the radius of the atom by a = 4r/√3. Knowing this and the formula for the volume of a sphere, it becomes possible to calculate the APF as follows: APF = 2·(4/3)πr³ / (4r/√3)³ = π√3/8 ≈ 0.68. Hexagonal close-packed For the hexagonal close-packed structure the derivation is similar. Here the unit cell (equivalent to 3 primitive unit cells) is a hexagonal prism containing six atoms (if the particles in the crystal are atoms). 
Indeed, three atoms are in the middle layer (inside the prism); in addition, for the top and bottom layers (on the bases of the prism), the central atom is shared with the adjacent cell, and each of the six atoms at the vertices is shared with other six adjacent cells. So the total number of atoms in the cell is 3 + (1/2)×2 + (1/6)×6×2 = 6. Each atom touches twelve other atoms. Now let a be the side length of the base of the prism and c be its height. The latter is twice the distance between adjacent layers, i.e., twice the height of the regular tetrahedron whose vertices are occupied by (say) the central atom of the lower layer, two adjacent non-central atoms of the same layer, and one atom of the middle layer "resting" on the previous three. Obviously, the edge of this tetrahedron is a. If a = 2r, then its height can be easily calculated to be √(2/3)·a, and, therefore, c = 2√(2/3)·a. So the volume of the hcp unit cell turns out to be (3√3/2)·a²·c, that is 24√2·r³. It is then possible to calculate the APF as follows: APF = 6·(4/3)πr³ / (24√2·r³) = π/(3√2) ≈ 0.74. See also Crystal Packing density Random close packing Cubic crystal system Diamond cubic Percolation threshold References Further reading Crystallography
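The geometric relations derived above can be checked numerically. The short Python sketch below (an illustration added here, not part of the original article) computes the APF for the four single-component structures discussed, expressing each unit-cell volume in units of r³.

```python
import math

def apf(atoms_per_cell: int, cell_volume_in_r3: float) -> float:
    """Atomic packing factor for rigid spheres of radius r:
    (atoms per cell) * (4/3)*pi*r^3 / (unit-cell volume), volumes in units of r^3."""
    return atoms_per_cell * (4.0 / 3.0) * math.pi / cell_volume_in_r3

# Unit-cell volumes in units of r^3, taken from the relations above:
#   simple cubic: a = 2r          -> V = 8 r^3
#   BCC:          a = 4r/sqrt(3)  -> V = 64/(3*sqrt(3)) r^3
#   FCC:          a = 2*sqrt(2) r -> V = 16*sqrt(2) r^3
#   HCP:          hexagonal prism with 6 atoms -> V = 24*sqrt(2) r^3
structures = {
    "simple cubic (1 atom)": (1, 8.0),
    "BCC (2 atoms)":         (2, 64.0 / (3.0 * math.sqrt(3))),
    "FCC (4 atoms)":         (4, 16.0 * math.sqrt(2)),
    "HCP (6 atoms)":         (6, 24.0 * math.sqrt(2)),
}
for name, (n, vol) in structures.items():
    print(f"{name}: APF = {apf(n, vol):.3f}")
```

Running the sketch reproduces the values listed earlier: 0.524, 0.680, 0.740 and 0.740 respectively.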
Atomic packing factor
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,031
[ "Crystallography", "Condensed matter physics", "Materials science" ]
238,181
https://en.wikipedia.org/wiki/Gibbs%20free%20energy
In thermodynamics, the Gibbs free energy (or Gibbs energy as the recommended name; symbol G) is a thermodynamic potential that can be used to calculate the maximum amount of work, other than pressure–volume work, that may be performed by a thermodynamically closed system at constant temperature and pressure. It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions. The Gibbs free energy is expressed as G = U + pV − TS = H − TS, where: U is the internal energy of the system, H is the enthalpy of the system, S is the entropy of the system, T is the temperature of the system, V is the volume of the system, and p is the pressure of the system (which must be equal to that of the surroundings for mechanical equilibrium). The Gibbs free energy change (ΔG, measured in joules in SI) is the maximum amount of non-volume expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process. When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system to its surroundings, minus the work of the pressure forces. The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in G is necessary for a reaction to be spontaneous under these conditions. The concept of Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs. In 1873, Gibbs introduced the concept of this "available energy". The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes". In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances, a graphical analysis of multi-phase chemical systems, he engaged his thoughts on chemical-free energy in full. If the reactants and products are all in their thermodynamic standard states, then the defining equation is written as ΔG° = ΔH° − TΔS°, where H is enthalpy, T is absolute temperature, and S is entropy. Overview According to the second law of thermodynamics, for systems reacting at fixed temperature and pressure without input of non-Pressure Volume (pV) work, there is a general natural tendency to achieve a minimum of the Gibbs free energy. A quantitative measure of the favorability of a given reaction under these conditions is the change ΔG (sometimes written "delta G" or "dG") in Gibbs free energy that is (or would be) caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-pressure-volume (non-pV, e.g. electrical) work, which is often equal to zero (then ΔG must be negative). ΔG equals the maximum amount of non-pV work that can be performed as a result of the chemical reaction for the case of a reversible process. If analysis indicates a positive ΔG for a reaction, then energy — in the form of electrical or other non-pV work — would have to be added to the reacting system for ΔG to be smaller than the non-pV work and make it possible for the reaction to occur. 
One can think of ∆G as the amount of "free" or "useful" energy available to do non-pV work at constant temperature and pressure. The equation can be also seen from the perspective of the system taken together with its surroundings (the rest of the universe). First, one assumes that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative ΔG, and the reaction is called an exergonic process. If two chemical reactions are coupled, then an otherwise endergonic reaction (one with positive ΔG) can be made to happen. The input of heat into an inherently endergonic reaction, such as the elimination of cyclohexanol to cyclohexene, can be seen as coupling an unfavorable reaction (elimination) to a favorable one (burning of coal or other provision of heat) such that the total entropy change of the universe is greater than or equal to zero, making the total Gibbs free energy change of the coupled reactions negative. In traditional use, the term "free" was included in "Gibbs free energy" to mean "available in the form of useful work". The characterization becomes more precise if we add the qualification that it is the energy available for non-pressure-volume work. (An analogous, but slightly different, meaning of "free" applies in conjunction with the Helmholtz free energy, for systems at constant temperature). However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the removal of the adjective "free" was recommended. This standard, however, has not yet been universally adopted. The name "free enthalpy" was also used for G in the past. History The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in the earlier years of physical chemistry to describe the force that caused chemical reactions. In 1873, Josiah Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he sketched the principles of his new equation that was able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies composed of part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue. Further, Gibbs stated: In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body... Thereafter, in 1882, the German scientist Hermann von Helmholtz characterized the affinity as the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. 
The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions. Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world. Definitions The Gibbs free energy is defined as G(p,T) = U + pV − TS, which is the same as G(p,T) = H − TS, where: U is the internal energy (SI unit: joule), p is pressure (SI unit: pascal), V is volume (SI unit: m³), T is the temperature (SI unit: kelvin), S is the entropy (SI unit: joule per kelvin), H is the enthalpy (SI unit: joule). The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T, for an open system, subjected to the operation of external forces (for instance, electrical or magnetic) Xi, which cause the external parameters of the system ai to change by an amount dai, can be derived as follows from the first law for reversible processes: dG = V dp − S dT + Σi μi dNi + Σi Xi dai, where: μi is the chemical potential of the ith chemical component. (SI unit: joules per particle or joules per mole) Ni is the number of particles (or number of moles) composing the ith chemical component. This is one form of the Gibbs fundamental equation. In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system or for a closed, chemically reacting system where the Ni are changing. For a closed, non-reacting system, this term may be dropped. Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work, a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −dl under a force f would result in a term f dl being added. If a quantity of charge −de is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψ de, which would be included in the infinitesimal expression. Other work terms are added on per system requirements. Each quantity in the equations above can be divided by the amount of substance, measured in moles, to form molar Gibbs free energy. The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell, and the equilibrium constant for a reversible reaction. In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.
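To make the competition between the enthalpic and entropic terms concrete, here is a minimal numerical sketch of ΔG = ΔH − TΔS for a hypothetical exothermic, entropy-decreasing reaction; the ΔH and ΔS values are assumed purely for illustration and do not refer to any specific reaction discussed in the text.

```python
def gibbs_free_energy_change(delta_H: float, T: float, delta_S: float) -> float:
    """ΔG = ΔH − T·ΔS, with ΔH in J/mol, T in K, ΔS in J/(mol·K); result in J/mol."""
    return delta_H - T * delta_S

# Hypothetical reaction: ΔH = −92 kJ/mol, ΔS = −199 J/(mol·K)  (illustrative values only)
dH, dS = -92_000.0, -199.0
for T in (298.15, 500.0, 800.0):
    dG = gibbs_free_energy_change(dH, T, dS)
    verdict = "spontaneous (ΔG < 0)" if dG < 0 else "non-spontaneous (ΔG > 0)"
    print(f"T = {T:6.1f} K: ΔG = {dG/1000:7.1f} kJ/mol -> {verdict}")
```

With these assumed numbers the reaction is favorable at room temperature but becomes unfavorable as T rises and the −TΔS penalty outweighs the enthalpy release, which is exactly the "competition" the preceding paragraph describes.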
The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation, and its pressure dependence is given by or more conveniently as its chemical potential: In non-ideal systems, fugacity comes into play. Derivation The Gibbs free energy total differential with respect to natural variables may be derived by Legendre transforms of the internal energy. The definition of G from above is . Taking the total differential, we have Replacing dU with the result from the first law gives The natural variables of G are then p, T, and {Ni}. Homogeneous systems Because S, V, and Ni are extensive variables, an Euler relation allows easy integration of dU: Because some of the natural variables of G are intensive, dG may not be integrated using Euler relations as is the case with internal energy. However, simply substituting the above integrated result for U into the definition of G gives a standard expression for G: This result shows that the chemical potential of a substance is its (partial) mol(ecul)ar Gibbs free energy. It applies to homogeneous, macroscopic systems, but not to all thermodynamic systems. Gibbs free energy of reactions The system under consideration is held at constant temperature and pressure, and is closed (no matter can come in or out). The Gibbs energy of any system is and an infinitesimal change in G, at constant temperature and pressure, yields By the first law of thermodynamics, a change in the internal energy U is given by where is energy added as heat, and is energy added as work. The work done on the system may be written as , where is the mechanical work of compression/expansion done on or by the system and is all other forms of work, which may include electrical, magnetic, etc. Then and the infinitesimal change in G is The second law of thermodynamics states that for a closed system at constant temperature (in a heat bath), and so it follows that Assuming that only mechanical work is done, this simplifies to This means that for such a system when not in equilibrium, the Gibbs energy will always be decreasing, and in equilibrium, the infinitesimal change dG will be zero. In particular, this will be true if the system is experiencing any number of internal chemical reactions on its path to equilibrium. In electrochemical thermodynamics When electric charge dQele is passed between the electrodes of an electrochemical cell generating an emf , an electrical work term appears in the expression for the change in Gibbs energy: where S is the entropy, V is the system volume, p is its pressure and T is its absolute temperature. The combination (, Qele) is an example of a conjugate pair of variables. At constant pressure the above equation produces a Maxwell relation that links the change in open cell voltage with temperature T (a measurable quantity) to the change in entropy S when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is: If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is where n0 is the number of electrons/ion, and F0 is the Faraday constant and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable. 
Useful identities to derive the Nernst equation During a reversible electrochemical reaction at constant temperature and pressure, the following equations involving the Gibbs free energy hold: (see chemical equilibrium), (for a system at chemical equilibrium), (for a reversible electrochemical process at constant temperature and pressure), (definition of ), and rearranging gives which relates the cell potential resulting from the reaction to the equilibrium constant and reaction quotient for that reaction (Nernst equation), where , Gibbs free energy change per mole of reaction, , Gibbs free energy change per mole of reaction for unmixed reactants and products at standard conditions (i.e. 298K, 100kPa, 1M of each reactant and product), , gas constant, , absolute temperature, , natural logarithm, , reaction quotient (unitless), , equilibrium constant (unitless), , electrical work in a reversible process (chemistry sign convention), , number of moles of electrons transferred in the reaction, , Faraday constant (charge per mole of electrons), , cell potential, , standard cell potential. Moreover, we also have which relates the equilibrium constant with Gibbs free energy. This implies that at equilibrium and Standard Gibbs energy change of formation The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, in their standard states (the most stable form of the element at 25 °C and 100 kPa). Its symbol is ΔfG˚. All elements in their standard states (diatomic oxygen gas, graphite, etc.) have standard Gibbs free energy change of formation equal to zero, as there is no change involved. ΔfG = ΔfG˚ + RT ln Qf, where Qf is the reaction quotient. At equilibrium, ΔfG = 0, and Qf = K, so the equation becomes ΔfG˚ = −RT ln K, where K is the equilibrium constant of the formation reaction of the substance from the elements in their standard states. Graphical interpretation by Gibbs Gibbs free energy was originally defined graphically. In 1873, American scientist Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance. Thus, in order to understand the concept of Gibbs free energy, it may help to understand its interpretation by Gibbs as section AB on his figure 3, and as Maxwell sculpted that section on his 3D surface figure. 
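The identities discussed above, linking ΔG°, the standard cell potential, and the equilibrium constant, can be illustrated numerically. The sketch below uses the commonly quoted textbook values E° ≈ 1.10 V and n = 2 for the Daniell cell mentioned earlier; treat the numbers as illustrative rather than authoritative.

```python
import math

R = 8.314462618   # J/(mol*K), gas constant
F = 96485.33212   # C/mol, Faraday constant

def delta_G_from_cell_potential(n: int, E_cell: float) -> float:
    """ΔG = −n·F·E, in J per mole of reaction."""
    return -n * F * E_cell

def equilibrium_constant(delta_G_standard: float, T: float = 298.15) -> float:
    """K from ΔG° = −R·T·ln K."""
    return math.exp(-delta_G_standard / (R * T))

# Daniell cell: E° ≈ 1.10 V, n = 2 electrons transferred (textbook values)
dG0 = delta_G_from_cell_potential(2, 1.10)   # ≈ −212 kJ/mol
K = equilibrium_constant(dG0)                # very large: reaction lies far to the right
print(f"ΔG° ≈ {dG0/1000:.0f} kJ/mol, K ≈ {K:.2e}")
```

The very large equilibrium constant obtained from a modest cell voltage illustrates how strongly the exponential relation ΔG° = −RT ln K amplifies free-energy differences.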
See also Bioenergetics Calphad (CALculation of PHAse Diagrams) Critical point (thermodynamics) Electron equivalent Enthalpy-entropy compensation Free entropy Gibbs–Helmholtz equation Grand potential Non-random two-liquid model (NRTL model) – Gibbs energy of excess and mixing calculation and activity coefficients Spinodal – Spinodal Curves (Hessian matrix) Standard molar entropy Thermodynamic free energy UNIQUAC model – Gibbs energy of excess and mixing calculation and activity coefficients Notes and references External links IUPAC definition (Gibbs energy) Gibbs Free Energy – Georgia State University Physical quantities State functions Thermodynamic free energy
Gibbs free energy
[ "Physics", "Chemistry", "Mathematics" ]
3,743
[ "State functions", "Thermodynamic properties", "Physical phenomena", "Physical quantities", "Quantity", "Energy (physics)", "Thermodynamic free energy", "Wikipedia categories named after physical quantities", "Physical properties" ]
238,199
https://en.wikipedia.org/wiki/Henry%27s%20law
In physical chemistry, Henry's law is a gas law that states that the amount of dissolved gas in a liquid is directly proportional at equilibrium to its partial pressure above the liquid. The proportionality factor is called Henry's law constant. It was formulated by the English chemist William Henry, who studied the topic in the early 19th century. In simple words, we can say that the partial pressure of a gas in the vapour phase is directly proportional to the mole fraction of that gas in solution. An example where Henry's law is at play is the depth-dependent dissolution of oxygen and nitrogen in the blood of underwater divers, which changes during decompression and can lead to decompression sickness. An everyday example is carbonated soft drinks, which contain dissolved carbon dioxide. Before opening, the gas above the drink in its container is almost pure carbon dioxide, at a pressure higher than atmospheric pressure. After the bottle is opened, this gas escapes, so that the partial pressure of carbon dioxide above the liquid becomes much lower, resulting in degassing as the dissolved carbon dioxide comes out of solution. History In his 1803 publication about the quantity of gases absorbed by water, William Henry described the results of his experiments. Charles Coulston Gillispie states that John Dalton "supposed that the separation of gas particles one from another in the vapor phase bears the ratio of a small whole number to their interatomic distance in solution. Henry's law follows as a consequence if this ratio is a constant for each gas at a given temperature." Applications In production of carbonated beverages Under high pressure, the solubility of CO2 increases. On opening a container of a carbonated beverage under pressure, the pressure decreases to atmospheric, so that the solubility decreases and the carbon dioxide forms bubbles that are released from the liquid. In the service of cask-conditioned beer It is often noted that beer served by gravity (that is, directly from a tap in the cask) is less heavily carbonated than the same beer served via a hand-pump (or beer-engine). This is because beer is pressurised on its way to the point of service by the action of the beer engine, causing carbon dioxide to dissolve in the beer. This then comes out of solution once the beer has left the pump, causing a higher level of perceptible 'condition' in the beer. For climbers or people living at high altitude The concentration of O2 in the blood and tissues is so low that they feel weak and are unable to think properly, a condition called hypoxia. In underwater diving In underwater diving, gas is breathed at the ambient pressure, which increases with depth due to the hydrostatic pressure. Solubility of gases increases with greater depth (greater pressure) according to Henry's law, so the body tissues take on more gas over time at greater depths of water. When ascending, the diver is decompressed and the solubility of the gases dissolved in the tissues decreases accordingly. If the supersaturation is too great, bubbles may form and grow, and the presence of these bubbles can cause blockages in capillaries, or distortion in the more solid tissues which can cause damage known as decompression sickness. To avoid this injury the diver must ascend slowly enough that the excess dissolved gas is carried away by the blood and released into the lung gas. 
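A minimal numerical sketch of the carbonated-drink example above: with an assumed Henry solubility constant for CO2 in water of about 3.3 × 10⁻⁴ mol/(m³·Pa) (a commonly quoted order of magnitude near 25 °C, used here only for illustration), the dissolved CO2 concentration scales linearly with the partial pressure above the liquid, so opening the bottle forces most of the gas out of solution.

```python
H_CO2 = 3.3e-4   # mol/(m^3*Pa), assumed Henry solubility of CO2 in water near 25 degC
M_CO2 = 44.01    # g/mol

def dissolved_co2_g_per_L(p_partial_pa: float) -> float:
    """Equilibrium dissolved CO2 (g/L) at a given CO2 partial pressure, using c = H * p."""
    c_mol_per_m3 = H_CO2 * p_partial_pa
    return c_mol_per_m3 / 1000.0 * M_CO2   # mol/m^3 -> mol/L -> g/L

sealed   = dissolved_co2_g_per_L(250_000.0)  # assumed ~2.5 bar of CO2 in the sealed headspace
open_air = dissolved_co2_g_per_L(40.0)       # roughly the CO2 partial pressure in ambient air
print(f"sealed bottle: ~{sealed:.1f} g/L dissolved CO2")
print(f"open to air:   ~{open_air:.4f} g/L dissolved CO2")
```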
Fundamental types and variants of Henry's law constants There are many ways to define the proportionality constant of Henry's law, which can be subdivided into two fundamental types: One possibility is to put the aqueous phase into the numerator and the gaseous phase into the denominator ("aq/gas"). This results in the Henry's law solubility constant . Its value increases with increased solubility. Alternatively, numerator and denominator can be switched ("gas/aq"), which results in the Henry's law volatility constant . The value of decreases with increased solubility. IUPAC describes several variants of both fundamental types. This results from the multiplicity of quantities that can be chosen to describe the composition of the two phases. Typical choices for the aqueous phase are molar concentration (), molality (), and molar mixing ratio (). For the gas phase, molar concentration () and partial pressure () are often used. It is not possible to use the gas-phase mixing ratio () because at a given gas-phase mixing ratio, the aqueous-phase concentration depends on the total pressure and thus the ratio is not a constant. To specify the exact variant of the Henry's law constant, two superscripts are used. They refer to the numerator and the denominator of the definition. For example, refers to the Henry solubility defined as . Henry's law solubility constants Hs Henry solubility defined via concentration (Hscp) Atmospheric chemists often define the Henry solubility as . Here is the concentration of a species in the aqueous phase, and is the partial pressure of that species in the gas phase under equilibrium conditions. The SI unit for is mol/(m3·Pa); however, often the unit M/atm is used, since is usually expressed in M (1M = 1 mol/dm3) and in atm (1atm = 101325Pa). The dimensionless Henry solubility Hscc The Henry solubility can also be expressed as the dimensionless ratio between the aqueous-phase concentration of a species and its gas-phase concentration : . For an ideal gas, the conversion is: where is the gas constant, and is the temperature. Sometimes, this dimensionless constant is called the water–air partitioning coefficient . It is closely related to the various, slightly different definitions of the Ostwald coefficient , as discussed by Battino (1984). Henry solubility defined via aqueous-phase mixing ratio (Hsxp) Another Henry's law solubility constant is: . Here is the molar mixing ratio in the aqueous phase. For a dilute aqueous solution the conversion between and is: . where is the density of water and is the molar mass of water. Thus . The SI unit for is Pa−1, although atm−1 is still frequently used. Henry solubility defined via molality (Hsbp) It can be advantageous to describe the aqueous phase in terms of molality instead of concentration. The molality of a solution does not change with , since it refers to the mass of the solvent. In contrast, the concentration does change with , since the density of a solution and thus its volume are temperature-dependent. Defining the aqueous-phase composition via molality has the advantage that any temperature dependence of the Henry's law constant is a true solubility phenomenon and not introduced indirectly via a density change of the solution. Using molality, the Henry solubility can be defined as Here is used as the symbol for molality (instead of ) to avoid confusion with the symbol for mass. The SI unit for is mol/(kg·Pa). 
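As a numerical sketch of the concentration-based solubility constant defined above and its conversion to the dimensionless (aqueous/gas) form via the factor RT, consider oxygen in water; the constant used here (≈1.3 × 10⁻⁵ mol/(m³·Pa) at 298 K) is an assumed, typical-order value rather than a tabulated reference number.

```python
R = 8.314462618   # J/(mol*K) = Pa*m^3/(mol*K)
T = 298.15        # K
H_cp = 1.3e-5     # mol/(m^3*Pa), assumed Henry solubility of O2 in water

# Henry's law in the concentration/pressure form: c_aq = H_cp * p
p_O2 = 21_000.0                           # Pa, roughly the O2 partial pressure in air at sea level
c_aq = H_cp * p_O2                        # mol/m^3
mg_per_L = c_aq / 1000.0 * 32.0 * 1000.0  # convert to mg/L using M(O2) ~ 32 g/mol
print(f"dissolved O2: {c_aq:.3f} mol/m^3  (~{mg_per_L:.1f} mg/L)")

# Conversion to the dimensionless solubility for an ideal gas: H_cc = H_cp * R * T
H_cc = H_cp * R * T
print(f"dimensionless H_cc = c_aq / c_gas = {H_cc:.3f}")
```

With these assumptions the result is of the order of a few milligrams of dissolved oxygen per litre of air-saturated water, which is the expected order of magnitude.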
There is no simple way to calculate Hsbp from Hscp, since the conversion between concentration ca and molality b involves all solutes of a solution. For a solution with a total of n solutes with indices i = 1, …, n, the conversion is: b = ca/(ρ − Σi ci Mi), where ρ is the density of the solution, and Mi are the molar masses. Here ca is identical to one of the ci in the denominator. If there is only one solute, the equation simplifies to b = ca/(ρ − ca M). Henry's law is only valid for dilute solutions where ca M ≪ ρ and ρ ≈ ρH2O. In this case the conversion reduces further to b ≈ ca/ρH2O, and thus Hsbp ≈ Hscp/ρH2O. The Bunsen coefficient α According to Sazonov and Shaw, the dimensionless Bunsen coefficient α is defined as "the volume of saturating gas, V1, reduced to T° = 273.15 K, p° = 1 bar, which is absorbed by unit volume V2* of pure solvent at the temperature of measurement and partial pressure of 1 bar." If the gas is ideal, the pressure cancels out, and the conversion to Hscp is simply Hscp = α/(R T°), with T° = 273.15 K. Note that, according to this definition, the conversion factor is not temperature-dependent. Independent of the temperature that the Bunsen coefficient refers to, 273.15 K is always used for the conversion. The Bunsen coefficient, which is named after Robert Bunsen, has been used mainly in the older literature, and IUPAC considers it to be obsolete. The Kuenen coefficient S According to Sazonov and Shaw, the Kuenen coefficient S is defined as "the volume of saturating gas V(g), reduced to T° = 273.15 K, p° = 1 bar, which is dissolved by unit mass of pure solvent at the temperature of measurement and partial pressure 1 bar." If the gas is ideal, the relation to Hscp is Hscp = S ρ/(R T°), where ρ is the density of the solvent, and T° = 273.15 K. The SI unit for S is m3/kg. The Kuenen coefficient, which is named after Johannes Kuenen, has been used mainly in the older literature, and IUPAC considers it to be obsolete. Henry's law volatility constants Hv The Henry volatility defined via concentration (Hvpc) A common way to define a Henry volatility is dividing the partial pressure by the aqueous-phase concentration: Hvpc = p/ca. The SI unit for Hvpc is Pa·m3/mol. The Henry volatility defined via aqueous-phase mixing ratio (Hvpx) Another Henry volatility is Hvpx = p/x. The SI unit for Hvpx is Pa. However, atm is still frequently used. The dimensionless Henry volatility Hvcc The Henry volatility can also be expressed as the dimensionless ratio between the gas-phase concentration cg of a species and its aqueous-phase concentration ca: Hvcc = cg/ca = 1/Hscc. In chemical engineering and environmental chemistry, this dimensionless constant is often called the air–water partitioning coefficient KAW. Values of Henry's law constants A large compilation of Henry's law constants has been published by Sander (2023), which tabulates values for a wide range of species. Temperature dependence When the temperature of a system changes, the Henry constant also changes. The temperature dependence of equilibrium constants can generally be described with the van 't Hoff equation, which also applies to Henry's law constants: d ln Hs/d(1/T) = −ΔsolH/R, where ΔsolH is the enthalpy of dissolution. Note that the letter H in the symbol ΔsolH refers to enthalpy and is not related to the letter H for Henry's law constants. This applies to the Henry's law solubility constant Hs; for the Henry's law volatility constant Hv, the sign of the right-hand side must be reversed. Integrating the above equation and creating an expression based on Hs(Tref) at the reference temperature Tref = 298.15 K yields: Hs(T) = Hs(Tref) exp(−(ΔsolH/R)(1/T − 1/Tref)). The van 't Hoff equation in this form is only valid for a limited temperature range in which ΔsolH does not change much with temperature (around 20 K of variation).
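As a worked illustration of the integrated van 't Hoff expression above, the following Python sketch (not from the article; the enthalpy and reference values are assumed, illustrative numbers) scales a Henry solubility constant from 298.15 K to another temperature, and shows the sign reversal that applies to a volatility constant.

```python
# Sketch of the integrated van 't Hoff scaling above:
#   Hs(T) = Hs(Tref) * exp(-(dsol_H/R) * (1/T - 1/Tref))
# All parameter values are assumed, illustrative numbers.
import math

R = 8.314462618  # J/(mol*K)

def solubility_at_T(hs_ref, dsol_H, T, T_ref=298.15):
    """Henry solubility at T from its value at T_ref; dsol_H in J/mol (usually negative)."""
    return hs_ref * math.exp(-(dsol_H / R) * (1.0 / T - 1.0 / T_ref))

def volatility_at_T(hv_ref, dsol_H, T, T_ref=298.15):
    """Same scaling for a volatility constant: the sign of the exponent is reversed."""
    return hv_ref * math.exp((dsol_H / R) * (1.0 / T - 1.0 / T_ref))

if __name__ == "__main__":
    hs_25C = 3.4e-4   # mol/(m3*Pa), illustrative value at 298.15 K
    dsol_H = -20.0e3  # J/mol, assumed (exothermic) enthalpy of dissolution
    print(solubility_at_T(hs_25C, dsol_H, T=278.15))        # larger: gases dissolve better in cold water
    print(volatility_at_T(1.0 / hs_25C, dsol_H, T=278.15))  # correspondingly smaller
```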
Temperature-dependence parameters of this kind have been tabulated for many species. Solubility of permanent gases usually decreases with increasing temperature at around room temperature. However, for aqueous solutions, the Henry's law solubility constant for many species goes through a minimum. For most permanent gases, the minimum is below 120 °C. Often, the smaller the gas molecule (and the lower the gas solubility in water), the lower the temperature of this minimum. Thus, the minimum is at about 30 °C for helium, 92 to 93 °C for argon, nitrogen and oxygen, and 114 °C for xenon. Effective Henry's law constants The Henry's law constants mentioned so far do not consider any chemical equilibria in the aqueous phase. This type is called the intrinsic, or physical, Henry's law constant. For example, the intrinsic Henry's law solubility constant of formaldehyde can be defined as Hscp = c(H2CO)/p(H2CO). In aqueous solution, formaldehyde is almost completely hydrated: H2CO + H2O <=> H2C(OH)2 The total concentration of dissolved formaldehyde is ctot = c(H2CO) + c(H2C(OH)2). Taking this equilibrium into account, an effective Henry's law constant Hseff can be defined as Hseff = ctot/p(H2CO) = (c(H2CO) + c(H2C(OH)2))/p(H2CO). For acids and bases, the effective Henry's law constant is not a useful quantity because it depends on the pH of the solution. In order to obtain a pH-independent constant, the product of the intrinsic Henry's law constant and the acidity constant KA is often used for strong acids like hydrochloric acid (HCl): Heff = KA Hscp = c(H+) c(Cl−)/p(HCl). Although Heff is usually also called a Henry's law constant, it is a different quantity and it has different units than Hscp. Dependence on ionic strength (Sechenov equation) Values of Henry's law constants for aqueous solutions depend on the composition of the solution, i.e., on its ionic strength and on dissolved organics. In general, the solubility of a gas decreases with increasing salinity ("salting out"). However, a "salting in" effect has also been observed, for example for the effective Henry's law constant of glyoxal. The effect can be described with the Sechenov equation, named after the Russian physiologist Ivan Sechenov (sometimes the German transliteration "Setschenow" of the Cyrillic name Се́ченов is used). There are many alternative ways to define the Sechenov equation, depending on how the aqueous-phase composition is described (based on concentration, molality, or molar fraction) and which variant of the Henry's law constant is used. Describing the solution in terms of molality is preferred because molality is invariant to temperature and to the addition of dry salt to the solution. Thus, the Sechenov equation can be written as log(Hsbp,0/Hsbp) = ks b(salt), where Hsbp,0 is the Henry's law constant in pure water, Hsbp is the Henry's law constant in the salt solution, ks is the molality-based Sechenov constant, and b(salt) is the molality of the salt. Non-ideal solutions Henry's law has been shown to apply to a wide range of solutes in the limit of infinite dilution (x → 0), including non-volatile substances such as sucrose. In these cases, it is necessary to state the law in terms of chemical potentials. For a solute in an ideal dilute solution, the chemical potential depends only on the concentration. For non-ideal solutions, the activity coefficients of the components must be taken into account: μ = μc° + RT ln(γc c/c°), where γc = p/(Hvpc c) for a volatile solute; c° = 1 mol/L. For non-ideal solutions, the activity coefficient γc depends on the concentration and must be determined at the concentration of interest.
The activity coefficient can also be obtained for non-volatile solutes, where the vapor pressure of the pure substance is negligible, by using the Gibbs-Duhem relation: Σi ni dμi = 0. By measuring the change in vapor pressure (and hence chemical potential) of the solvent, the chemical potential of the solute can be deduced. The standard state for a dilute solution is also defined in terms of infinite-dilution behavior. Although the standard concentration c° is taken to be 1 mol/L by convention, the standard state is a hypothetical solution of 1 mol/L in which the solute has its limiting infinite-dilution properties. This has the effect that all non-ideal behavior is described by the activity coefficient: the activity coefficient at 1 mol/L is not necessarily unity (and is frequently quite different from unity). All the relations above can also be expressed in terms of molalities b rather than concentrations, e.g.: μ = μm° + RT ln(γm b/b°), where γm = p/(Hvpb b) for a volatile solute; b° = 1 mol/kg. The standard chemical potential μm°, the activity coefficient γm and the Henry's law constant Hvpb all have different numerical values when molalities are used in place of concentrations. Solvent mixtures Henry's law solubility constant Hs,2,M for a gas 2 in a mixture M of two solvents 1 and 3 depends on the individual constants for each solvent, Hs,2,1 and Hs,2,3, according to: ln Hs,2,M = x1 ln Hs,2,1 + x3 ln Hs,2,3 + a13 x1 x3, where x1 and x3 are the molar ratios of each solvent in the mixture and a13 is the interaction parameter of the solvents from the Wohl expansion of the excess chemical potential of the ternary mixture. A similar relationship can be found for the volatility constant Hv,2,M, by remembering that Hv = 1/Hs and that, both being positive real numbers, ln Hv = −ln Hs, thus: ln Hv,2,M = x1 ln Hv,2,1 + x3 ln Hv,2,3 − a13 x1 x3. For a water-ethanol mixture, the interaction parameter a13 has been determined empirically for ethanol concentrations (volume/volume) between 5% and 25%. Miscellaneous In geochemistry In geochemistry, a version of Henry's law applies to the solubility of a noble gas in contact with silicate melt. One equation used is Cmelt/Cgas = exp(−β(μEmelt − μEgas)), where C is the number concentration of the solute gas in the melt and gas phases, β = 1/kBT is an inverse temperature parameter (kB is the Boltzmann constant), and μE is the excess chemical potential of the solute gas in the respective phase. Comparison to Raoult's law Henry's law is a limiting law that only applies for "sufficiently dilute" solutions, while Raoult's law is generally valid when the liquid phase is almost pure or for mixtures of similar substances. The range of concentrations in which Henry's law applies becomes narrower the more the system diverges from ideal behavior. Roughly speaking, that is the more chemically "different" the solute is from the solvent. For a dilute solution, the concentration of the solute is approximately proportional to its mole fraction x, and Henry's law can be written as p = Hvpx x. This can be compared with Raoult's law: p = p* x, where p* is the vapor pressure of the pure component. At first sight, Raoult's law appears to be a special case of Henry's law, where Hvpx = p*. This is true for pairs of closely related substances, such as benzene and toluene, which obey Raoult's law over the entire composition range: such mixtures are called ideal mixtures. The general case is that both laws are limit laws, and they apply at opposite ends of the composition range. The vapor pressure of the component in large excess, such as the solvent for a dilute solution, is proportional to its mole fraction, and the constant of proportionality is the vapor pressure of the pure substance (Raoult's law).
The vapor pressure of the solute is also proportional to the solute's mole fraction, but the constant of proportionality is different and must be determined experimentally (Henry's law). In mathematical terms: Raoult's law: p = p* x (component in large excess) Henry's law: p = Hvpx x (dilute solute) Raoult's law can also be related to non-gas solutes. See also References External links EPA On-line Tools for Site Assessment Calculation – Henry's law conversion Physical chemistry Eponymous laws of physics Equilibrium chemistry Engineering thermodynamics Gas laws Underwater diving physics
Henry's law
[ "Physics", "Chemistry", "Engineering" ]
3,946
[ "Applied and interdisciplinary physics", "Underwater diving physics", "Engineering thermodynamics", "Equilibrium chemistry", "Thermodynamics", "Gas laws", "nan", "Mechanical engineering", "Physical chemistry" ]
238,301
https://en.wikipedia.org/wiki/Gel%20electrophoresis%20of%20nucleic%20acids
Gel electrophoresis of nucleic acids is an analytical technique to separate DNA or RNA fragments by size and reactivity. Nucleic acid molecules are placed on a gel, where an electric field induces the nucleic acids (which are negatively charged due to their sugar-phosphate backbone) to migrate toward the positively charged anode. The molecules separate as they travel through the gel based on each molecule's size and shape. Longer molecules move more slowly because the gel resists their movement more forcefully than it resists shorter molecules. After some time, the electricity is turned off and the positions of the different molecules are analyzed. The nucleic acid to be separated can be prepared in several ways before separation by electrophoresis. In the case of large DNA molecules, the DNA is frequently cut into smaller fragments using a DNA restriction endonuclease (or restriction enzyme). In other instances, such as PCR amplified samples, enzymes present in the sample that might affect the separation of the molecules are removed through various means before analysis. Once the nucleic acid is properly prepared, the samples of the nucleic acid solution are placed in the wells of the gel and a voltage is applied across the gel for a specified amount of time. The DNA fragments of different lengths are visualized using a fluorescent dye specific for DNA, such as ethidium bromide. The gel shows bands corresponding to different populations of nucleic acid molecules with different molecular weights. Fragment size is usually reported in "nucleotides", "base pairs" or "kb" (for thousands of base pairs) depending upon whether single- or double-stranded nucleic acid has been separated. Fragment size determination is typically done by comparison to commercially available DNA markers containing linear DNA fragments of known length. The types of gel most commonly used for nucleic acid electrophoresis are agarose (for relatively long DNA molecules) and polyacrylamide (for high resolution of short DNA molecules, for example in DNA sequencing). Gels have conventionally been run in a "slab" format, but capillary electrophoresis has become important for applications such as high-throughput DNA sequencing. Electrophoresis techniques used in the assessment of DNA damage include alkaline gel electrophoresis and pulsed field gel electrophoresis. For short DNA segments, such as 20 to 60 bp double-stranded DNA, running them in a polyacrylamide gel (PAGE) gives better resolution (under native conditions). Similarly, RNA and single-stranded DNA can be run and visualised by PAGE gels containing denaturing agents such as urea. PAGE gels are widely used in techniques such as DNA footprinting, EMSA and other DNA-protein interaction techniques. The measurement and analysis are mostly done with specialized gel analysis software. Capillary electrophoresis results are typically displayed in a trace view called an electropherogram. Factors affecting migration of nucleic acids A number of factors can affect the migration of nucleic acids: the dimension of the gel pores, the voltage used, the ionic strength of the buffer, and the concentration of intercalating dye such as ethidium bromide, if used during electrophoresis. Size of DNA The gel sieves the DNA by molecular size, whereby smaller molecules travel faster. Double-stranded DNA moves at a rate that is approximately inversely proportional to the logarithm of the number of base pairs.
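Because migration distance is, over the useful range, roughly linear in the logarithm of fragment length, size estimation against a marker ladder can be automated with a simple semilog fit. The sketch below is only an illustration: the ladder sizes and migration distances are invented example numbers, and the helper functions are not taken from any specific gel-analysis package.

```python
# Sketch: estimate fragment size from migration distance by fitting
# distance ~ a + b*log10(size) to a ladder of known standards.
# Ladder sizes and distances below are invented example numbers.
import math

def fit_ladder(sizes_bp, distances_mm):
    """Least-squares fit of distance = a + b*log10(size)."""
    xs = [math.log10(s) for s in sizes_bp]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(distances_mm) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, distances_mm)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def size_from_distance(distance_mm, a, b):
    """Invert the fit: size = 10**((distance - a)/b)."""
    return 10 ** ((distance_mm - a) / b)

if __name__ == "__main__":
    ladder_bp = [10000, 5000, 2000, 1000, 500]   # example ladder fragment sizes
    ladder_mm = [8.0, 14.0, 22.5, 29.0, 35.5]    # example migration distances
    a, b = fit_ladder(ladder_bp, ladder_mm)
    print(round(size_from_distance(25.0, a, b)))  # size estimate for an unknown band
```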
This relationship, however, breaks down with very large DNA fragments, which cannot be separated using standard agarose gel electrophoresis. The limit of resolution depends on gel composition and field strength. Separation of very large DNA fragments requires pulsed field gel electrophoresis (PFGE). In field inversion gel electrophoresis (FIGE, a kind of PFGE), it is possible to have "band inversion", where large molecules may move faster than small molecules. Conformation of DNA The conformation of the DNA molecule can significantly affect the movement of the DNA, for example, supercoiled DNA usually moves faster than relaxed DNA because it is tightly coiled and hence more compact. In a normal plasmid DNA preparation, multiple forms of DNA may be present, and a gel from the electrophoresis of the plasmids would normally show a main band which would be the negatively supercoiled form, while other forms of DNA may appear as minor fainter bands. These minor bands may be nicked DNA (open circular form) and the relaxed closed circular form, which normally run slower than supercoiled DNA, and the single-stranded form (which can sometimes appear depending on the preparation methods) may move ahead of the supercoiled DNA. The rate at which the various forms move, however, can change under different electrophoresis conditions; for example, linear DNA may run faster or slower than supercoiled DNA depending on conditions, and the mobility of larger circular DNA may be more strongly affected than linear DNA by the pore size of the gel. Unless supercoiled DNA markers are used, the size of a circular DNA such as a plasmid may therefore be more accurately gauged after it has been linearized by restriction digest. DNA damage due to increased cross-linking will also reduce electrophoretic DNA migration in a dose-dependent way. Concentration of ethidium bromide Circular DNA is more strongly affected by ethidium bromide concentration than linear DNA if ethidium bromide is present in the gel during electrophoresis. All naturally occurring DNA circles are underwound, but ethidium bromide, which intercalates into circular DNA, can change the charge, length, and superhelicity of the DNA molecule; therefore its presence during electrophoresis can affect its movement in the gel. Increasing the amount of ethidium bromide intercalated into the DNA can change it from a negatively supercoiled molecule into a fully relaxed form, and then into a positively coiled superhelix at maximum intercalation. Agarose gel electrophoresis can be used to resolve circular DNA with different supercoiling topology. Gel concentration The concentration of the gel determines the pore size of the gel, which affects the migration of DNA. The resolution of the DNA changes with the percentage concentration of the gel. Increasing the agarose concentration of a gel reduces the migration speed and improves separation of smaller DNA molecules, while lowering gel concentration permits large DNA molecules to be separated. For a standard agarose gel electrophoresis, 0.7% gel concentration gives good separation or resolution of large 5–10 kb DNA fragments, while 2% gel concentration gives good resolution for small 0.2–1 kb fragments. Up to 3% gel concentration can be used for separating very tiny fragments, but a vertical polyacrylamide gel would be more appropriate for resolving small fragments.
High-concentration gels, however, require longer run times (sometimes days), and high-percentage gels are often brittle and may not set evenly. High percentage agarose gels should be run with PFGE or FIGE. Low percentage gels (0.1−0.2%) are fragile and may break. 1% gels are common for many applications. Applied field At low voltages, the rate of migration of the DNA is proportional to the voltage applied, i.e. the higher the voltage, the faster the DNA moves. However, with increasing electric field strength, the mobility of high-molecular-weight DNA fragments increases differentially, the effective range of separation decreases, and resolution is therefore lower at high voltage. For optimal resolution of DNA greater than 2 kb in size in standard gel electrophoresis, 5 to 8 V/cm is recommended. Voltage is also limited by the fact that it heats the gel and may cause the gel to melt if a gel is run at high voltage for a prolonged period, particularly for low-melting point agarose gel. The mobility of DNA, however, may change in an unsteady field. In a field that is periodically reversed, the mobility of DNA of a particular size may drop significantly at a particular cycling frequency. This phenomenon can result in band inversion, whereby larger DNA fragments move faster than smaller ones in PFGE. Mechanism of migration and separation The negative charge of its phosphate backbone moves the DNA towards the positively charged anode during electrophoresis. However, the migration of DNA molecules in solution, in the absence of a gel matrix, is independent of molecular weight during electrophoresis, i.e. there is no separation by size without a gel matrix. Hydrodynamic interactions between different parts of the DNA are cut off by streaming counterions moving in the opposite direction, so no mechanism exists to generate a dependence of velocity on length on a scale larger than the screening length of about 10 nm. This makes it different from other processes such as sedimentation or diffusion, where long-ranged hydrodynamic interactions are important. The gel matrix is therefore responsible for the separation of DNA by size during electrophoresis, but the precise mechanism responsible for the separation is not entirely clear. A number of models exist for the mechanism of separation of biomolecules in a gel matrix; a widely accepted one is the Ogston model, which treats the polymer matrix as a sieve consisting of a randomly distributed network of interconnected pores. A globular protein or a random-coil DNA moves through the connected pores that are large enough to accommodate its passage; the movement of larger molecules is more likely to be impeded and slowed down by collisions with the gel matrix, and molecules of different sizes can therefore be separated in this process of sieving. The Ogston model, however, breaks down for large molecules for which the pores are significantly smaller than the size of the molecule. For DNA molecules of size greater than 1 kb, a reptation model (or its variants) is most commonly used. This model assumes that the DNA can crawl in a "snake-like" fashion (hence "reptation") through the pores as an elongated molecule. At higher electric field strength, this turns into a biased reptation model, whereby the leading end of the molecule becomes strongly biased in the forward direction and pulls the rest of the molecule along. In the fully biased mode, the mobility reaches a saturation point and DNA beyond a certain size cannot be separated.
Perfect parallel alignment of the chain with the field, however, is not observed in practice, as that would mean the same mobility for long and short molecules. Further refinement of the biased reptation model takes into account the internal fluctuations of the chain. The biased reptation model has also been used to explain the mobility of DNA in PFGE. The orientation of the DNA is progressively built up by reptation after the onset of a field, and the time it takes to reach the steady-state velocity depends on the size of the molecule. When the field is changed, larger molecules take longer to reorientate; it is therefore possible to discriminate the long chains that cannot reach their steady-state velocity from the short ones that travel most of the time at steady velocity. Other models, however, also exist. Real-time fluorescence microscopy of stained molecules showed more subtle dynamics during electrophoresis, with the DNA showing considerable elasticity as it alternately stretches in the direction of the applied field and then contracts into a ball, or becomes hooked into a U-shape when it gets caught on the polymer fibres. This observation may be termed the "caterpillar" model. Another model proposes that the DNA gets entangled with the polymer matrix, and the larger the molecule, the more likely it is to become entangled and its movement impeded. Visualization The most common dye used to make DNA or RNA bands visible for agarose gel electrophoresis is ethidium bromide, usually abbreviated as EtBr. It fluoresces under UV light when intercalated into the major groove of DNA (or RNA). By running DNA through an EtBr-treated gel and visualizing it with UV light, any band containing more than ~20 ng DNA becomes distinctly visible. EtBr is a known mutagen, and safer alternatives are available, such as GelRed, produced by Biotium, which binds to the minor groove. SYBR Green I is another dsDNA stain, produced by Invitrogen. It is more expensive, but 25 times more sensitive, and possibly safer than EtBr, though there is no data addressing its mutagenicity or toxicity in humans. SYBR Safe is a variant of SYBR Green that has been shown to have low enough levels of mutagenicity and toxicity to be deemed nonhazardous waste under U.S. Federal regulations. It has similar sensitivity levels to EtBr, but, like SYBR Green, is significantly more expensive. In countries where safe disposal of hazardous waste is mandatory, the costs of EtBr disposal can easily outstrip the initial price difference, however. Since EtBr stained DNA is not visible in natural light, scientists mix DNA with negatively charged loading buffers before adding the mixture to the gel. Loading buffers are useful because they are visible in natural light (as opposed to UV light for EtBr stained DNA), and they co-sediment with DNA (meaning they move at the same speed as DNA of a certain length). Xylene cyanol and bromophenol blue are common dyes found in loading buffers; they run at about the same speed as DNA fragments that are 5000 bp and 300 bp in length respectively, but the precise position varies with the percentage of the gel. Other less frequently used progress markers are Cresol Red and Orange G, which run at about 125 bp and 50 bp, respectively. Visualization can also be achieved by transferring DNA after SDS-PAGE to a nitrocellulose membrane followed by exposure to a hybridization probe. This process is termed Southern blotting.
For fluorescent dyes, after electrophoresis the gel is illuminated with an ultraviolet lamp (usually by placing it on a light box, while using protective gear to limit exposure to ultraviolet radiation). The illuminator apparatus usually also contains an imaging apparatus that takes an image of the gel after illumination with UV radiation. The ethidium bromide fluoresces reddish-orange in the presence of DNA, since it has intercalated with the DNA. The DNA band can also be cut out of the gel, and can then be dissolved to retrieve the purified DNA. The gel can then be photographed, usually with a digital or Polaroid camera. Although the stained nucleic acid fluoresces reddish-orange, images are usually shown in black and white. UV damage to the DNA sample can reduce the efficiency of subsequent manipulation of the sample, such as ligation and cloning. Shorter-wavelength UV radiation (302 or 312 nm) causes greater damage; for example, exposure for as little as 45 seconds can significantly reduce transformation efficiency. Therefore, if the DNA is to be used for downstream procedures, exposure to shorter-wavelength UV radiation should be limited; instead, higher-wavelength UV radiation (365 nm), which causes less damage, should be used. Higher-wavelength radiation, however, produces weaker fluorescence, so if it is necessary to capture the gel image, shorter-wavelength UV light can be used for a short time. Addition of cytidine or guanosine to the electrophoresis buffer at 1 mM concentration may protect the DNA from damage. Alternatively, a blue-light excitation source with a blue-excitable stain such as SYBR Green or GelGreen may be used. Gel electrophoresis research often takes advantage of software-based image analysis tools, such as ImageJ. References Molecular biology Electrophoresis DNA
Gel electrophoresis of nucleic acids
[ "Chemistry", "Biology" ]
3,258
[ "Instrumental analysis", "Biochemical separation processes", "Molecular biology techniques", "Molecular biology", "Biochemistry", "Electrophoresis" ]
238,560
https://en.wikipedia.org/wiki/Ilya%20Prigogine
Viscount Ilya Romanovich Prigogine (25 January 1917 – 28 May 2003) was a Belgian physical chemist of Russian-Jewish origin, noted for his work on dissipative structures, complex systems, and irreversibility. Prigogine's work most notably earned him the 1977 Nobel Prize in Chemistry "for his contributions to non-equilibrium thermodynamics, particularly the theory of dissipative structures", as well as the Francqui Prize in 1955, and the Rumford Medal in 1976. Biography Early life and studies Prigogine was born in Moscow a few months before the October Revolution of 1917, into a Jewish family. His father, Ruvim (Roman) Abramovich Prigogine, was a chemist who studied at the Imperial Moscow Technical School and owned a soap factory; his mother, Yulia Vikhman, was a pianist who attended the Moscow Conservatory. In 1921, the factory having been nationalized by the new Soviet regime and the feeling of insecurity rising amidst the civil war, the family left Russia. After a brief period in Lithuania, they went to Germany and settled in Berlin; eight years later, due to the poor economic situation and the creeping emergence of Nazism, they moved on to Brussels, where Prigogine received Belgian nationality in 1949. His brother Alexandre (1913–1991) became an ornithologist. As a teenager, Prigogine was interested in music, history and archeology. He graduated from the Athenée d'Ixelles in 1935, majoring in Greek and Latin. His parents encouraged him to become a lawyer, and he initially enrolled in law studies at the Free University of Brussels. At that time he developed an interest in psychology and the study of behavior; in turn, reading about these subjects triggered an interest in chemistry, as chemical processes impact the mind and body; this also triggered a more fundamental interest in physics, as physics explains chemistry. He ended up dropping out from the law faculty. Prigogine afterwards simultaneously enrolled in chemistry and physics at the Free University of Brussels, something he achieved with "uncommon success"; he earned the equivalent of a Master's degree in both disciplines in 1939, and a PhD in chemistry in 1941 under Théophile de Donder. Early career, World War II He started his research career under the German occupation of Belgium. From 1940 onwards he gave clandestine lectures to students. In 1941, the university formally closed to protest the forced appointment of Flemish pro-Nazi New Order professors by the occupiers; he continued giving clandestine lectures until the Liberation of Belgium in 1944. During that period he also published 21 articles. In 1943, Prigogine and his future wife Hélène Jofé were arrested by the Germans; after multiple interventions, including by Queen Elisabeth, they were eventually released a couple of weeks later. Later career In 1951, he became a full professor at his alma mater; at 34 years old, he was the youngest ever full professor at the science faculty in Brussels. In 1959, he was appointed director of the International Solvay Institute in Brussels, Belgium. In that year, he also started teaching at the University of Texas at Austin in the United States, where he later was appointed Regental Professor and Ashbel Smith Professor of Physics and Chemical Engineering. From 1961 until 1966 he was affiliated with the Enrico Fermi Institute at the University of Chicago and was a visiting professor at Northwestern University. In Austin, in 1967, he co-founded the Center for Thermodynamics and Statistical Mechanics, now the Center for Complex Quantum Systems.
In that year, he also returned to Belgium, where he became director of the Center for Statistical Mechanics and Thermodynamics. He was a member of numerous scientific organizations, and received numerous awards, prizes and 53 honorary degrees. In 1955, Prigogine was awarded the Francqui Prize for Exact Sciences. For his study in irreversible thermodynamics, he received the Rumford Medal in 1976, and in 1977, the Nobel Prize in Chemistry "for his contributions to non-equilibrium thermodynamics, particularly the theory of dissipative structures". In 1989, he was awarded the title of viscount in the Belgian nobility by the King of the Belgians. Until his death, he was president of the International Academy of Science, Munich and was in 1997, one of the founders of the International Commission on Distance Education (CODE), a worldwide accreditation agency. Prigogine received an Honorary Doctorate from Heriot-Watt University in 1985 and in 1998 he was awarded an honoris causa doctorate by the UNAM in Mexico City. Prigogine was first married to belgian poet Hélène Jofé (as an author also known as Hélène Prigogine) and in 1945 they had a son Yves. After their divorce, he married Polish-born chemist Maria Prokopowicz (also known as Maria Prigogine) in 1961. In 1970 they had a son, Pascal. In 2003 he was one of 22 Nobel Laureates who signed the Humanist Manifesto. Research Prigogine defined dissipative structures and their role in thermodynamic systems far from equilibrium, a discovery that won him the Nobel Prize in Chemistry in 1977. In summary, Ilya Prigogine discovered that importation and dissipation of energy into chemical systems could result in the emergence of new structures (hence dissipative structures) due to internal self reorganization. In his 1955 text, Prigogine drew connections between dissipative structures and the Rayleigh-Bénard instability and the Turing mechanism. And his 1977 work on self-reorganization was recognized as relevant for psychology. Dissipative structures theory Dissipative structure theory led to pioneering research in self-organizing systems, as well as philosophical inquiries into the formation of complexity in biological entities and the quest for a creative and irreversible role of time in the natural sciences. With professor Robert Herman, he also developed the basis of the two fluid model, a traffic model in traffic engineering for urban networks, analogous to the two fluid model in classical statistical mechanics, a common problem that had attracted Prigogine's attention some years before. Prigogine's formal concept of self-organization was used also as a "complementary bridge" between general systems theory and thermodynamics, conciliating the cloudiness of some important systems theory concepts such as entropy instead of molecular disorder, and emergence, fluctuations and irreversibility instead of “birth and death” with scientific rigor. Work on unsolved problems in physics In his later years, his work concentrated on the fundamental role of indeterminism in nonlinear systems on both the classical and quantum level. Prigogine and coworkers proposed a Liouville space extension of quantum mechanics. A Liouville space is the vector space formed by the set of (self-adjoint) linear operators, equipped with an inner product, that act on a Hilbert space. 
There exists a mapping of each linear operator into Liouville space, yet not every self-adjoint operator of Liouville space has a counterpart in Hilbert space, and in this sense Liouville space has a richer structure than Hilbert space. The Liouville space extension proposal by Prigogine and co-workers aimed to solve the arrow of time problem of thermodynamics and the measurement problem of quantum mechanics. Prigogine co-authored several books with Isabelle Stengers, including The End of Certainty and La Nouvelle Alliance (Order out of Chaos). The End of Certainty In his 1996 book, La Fin des certitudes, written in collaboration with Isabelle Stengers and published in English in 1997 as The End of Certainty: Time, Chaos, and the New Laws of Nature, Prigogine contends that determinism is no longer a viable scientific belief: "The more we know about our universe, the more difficult it becomes to believe in determinism." This is a major departure from the approach of Newton, Einstein and Schrödinger, all of whom expressed their theories in terms of deterministic equations. According to Prigogine, determinism loses its explanatory power in the face of irreversibility and instability. Prigogine traces the dispute over determinism back to Darwin, whose attempt to explain individual variability according to evolving populations inspired Ludwig Boltzmann to explain the behavior of gases in terms of populations of particles rather than individual particles. This led to the field of statistical mechanics and the realization that gases undergo irreversible processes. In deterministic physics, all processes are time-reversible, meaning that they can proceed backward as well as forward through time. As Prigogine explains, determinism is fundamentally a denial of the arrow of time. With no arrow of time, there is no longer a privileged moment known as the "present," which follows a determined "past" and precedes an undetermined "future." All of time is simply given, with the future as determined or as undetermined as the past. With irreversibility, the arrow of time is reintroduced to physics. Prigogine notes numerous examples of irreversibility, including diffusion, radioactive decay, solar radiation, weather and the emergence and evolution of life. Like weather systems, organisms are unstable systems existing far from thermodynamic equilibrium. Instability resists standard deterministic explanation. Instead, due to sensitivity to initial conditions, unstable systems can only be explained statistically, that is, in terms of probability. Prigogine asserts that Newtonian physics has now been "extended" three times: first with the introduction of spacetime in general relativity, then with the use of the wave function in quantum mechanics, and finally with the recognition of indeterminism in the study of unstable systems (chaos theory). Publications Defay, R. & Prigogine, I. (1966). Surface tension and adsorption. Longmans, Green & Co. LTD. Prigogine, I. The Behavior of Matter under Nonequilibrium Conditions: Fundamental Aspects and Applications in Energy-oriented Problems, United States Department of Energy, Progress Reports: September 1984 – November 1987, (7 October 1987). Department of Physics at the University of Texas-Austin 15 April 1988 – 14 April 1989, (January 1989), Center for Studies in Statistical Mathematics at the University of Texas-Austin. 15 April 1990 – 14 April 1991, (December 1990), Center for Studies in Statistical Mechanics and Complex Systems at the University of Texas-Austin. 
Prigogine, I. "Time, Dynamics and Chaos: Integrating Poincare's 'Non-Integrable Systems'", Center for Studies in Statistical Mechanics and Complex Systems at the University of Texas-Austin, United States Department of Energy-Office of Energy Research, Commission of the European Communities (October 1990). Petrosky, T. & Prigogine, I. (1997). The Liouville space extension of quantum mechanics. In: Advances in Chemical Physics, 99, 1-120. Editor (with Stuart A. Rice) of the Advances in Chemical Physics book series published by John Wiley & Sons (presently over 140 volumes) Prigogine I, (papers and interviews) Is future given?, World Scientific, 2003. (145p.) Ilya Prigogine Prize for Thermodynamics The Ilya Prigogine Prize for Thermodynamics was initialized in 2001 and patronized by Ilya Prigogine himself until his death in 2003. It is awarded on a biennial basis during the Joint European Thermodynamics Conference (JETC) and considers all branches of thermodynamics (applied, theoretical, and experimental as well as quantum thermodynamics and classical thermodynamics). See also Autocatalytic reactions and order creation List of Jewish Nobel laureates Schismatrix Systems theory Prigogine's theorem Process philosophy References Further reading External links including the Nobel Lecture, 8 December 1977 Time, Structure and Fluctuations The Center for Complex Quantum Systems Emergent computation Interview with Prigogine (Belgian VRT, 1977) 1917 births 2003 deaths Nobel laureates in Chemistry Belgian Nobel laureates Soviet emigrants to Germany German emigrants to Belgium Belgian physicists Jewish physicists Jewish Nobel laureates Belgian physical chemists Free University of Brussels (1834–1969) alumni Belgian systems scientists Jewish scientists Complex systems scientists Theoretical chemists Thermodynamicists Academic staff of the Free University of Brussels (1834–1969) University of Texas at Austin faculty Foreign members of the USSR Academy of Sciences Foreign members of the Russian Academy of Sciences Foreign associates of the National Academy of Sciences Jewish chemists Belgian Jews Belgian people of Russian-Jewish descent Naturalised citizens of Belgium Viscounts of Belgium Emigrants from the Russian Empire to Belgium Computational chemists Members of the German Academy of Sciences at Berlin Presidents of the International Society for the Systems Sciences Russian scientists Recipients of the Cothenius Medal
Ilya Prigogine
[ "Physics", "Chemistry" ]
2,705
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists", "Thermodynamics", "Thermodynamicists" ]
238,680
https://en.wikipedia.org/wiki/Avogadro%27s%20law
Avogadro's law (sometimes referred to as Avogadro's hypothesis or Avogadro's principle) or Avogadro-Ampère's hypothesis is an experimental gas law relating the volume of a gas to the amount of substance of gas present. The law is a specific case of the ideal gas law. A modern statement is: Avogadro's law states that "equal volumes of all gases, at the same temperature and pressure, have the same number of molecules." For a given mass of an ideal gas, the volume and amount (moles) of the gas are directly proportional if the temperature and pressure are constant. The law is named after Amedeo Avogadro who, in 1812, hypothesized that two given samples of an ideal gas, of the same volume and at the same temperature and pressure, contain the same number of molecules. As an example, equal volumes of gaseous hydrogen and nitrogen contain the same number of molecules when they are at the same temperature and pressure, and display ideal gas behavior. In practice, real gases show small deviations from the ideal behavior and the law holds only approximately, but is still a useful approximation for scientists. Mathematical definition The law can be written as: V ∝ n, or V/n = k, where V is the volume of the gas; n is the amount of substance of the gas (measured in moles); k is a constant for a given temperature and pressure. This law describes how, under the same condition of temperature and pressure, equal volumes of all gases contain the same number of molecules. For comparing the same substance under two different sets of conditions, the law can be usefully expressed as follows: V1/n1 = V2/n2. The equation shows that, as the number of moles of gas increases, the volume of the gas also increases in proportion. Similarly, if the number of moles of gas is decreased, then the volume also decreases. Thus, the number of molecules or atoms in a specific volume of ideal gas is independent of their size or the molar mass of the gas. Derivation from the ideal gas law The derivation of Avogadro's law follows directly from the ideal gas law, i.e. PV = nRT, where R is the gas constant, T is the Kelvin temperature, and P is the pressure (in pascals). Solving for V/n, we thus obtain V/n = RT/P. Compare that to V/n = k, which is a constant for a fixed pressure and a fixed temperature. An equivalent formulation of the ideal gas law can be written using Boltzmann constant kB, as PV = N kB T, where N is the number of particles in the gas, and the ratio of R over kB is equal to the Avogadro constant. In this form, V/N is a constant, and we have V/N = k′ = kB T/P. If T and P are taken at standard conditions for temperature and pressure (STP), then k′ = 1/n0, where n0 is the Loschmidt constant. Historical account and influence Avogadro's hypothesis (as it was known originally) was formulated in the same spirit as earlier empirical gas laws like Boyle's law (1662), Charles's law (1787) and Gay-Lussac's law (1808). The hypothesis was first published by Amedeo Avogadro in 1811, and it reconciled Dalton's atomic theory with the "incompatible" idea of Joseph Louis Gay-Lussac that some gases were composed of different fundamental substances (molecules) in integer proportions. In 1814, independently from Avogadro, André-Marie Ampère published the same law with similar conclusions. As Ampère was better known in France, the hypothesis was usually referred to there as Ampère's hypothesis, and later also as Avogadro–Ampère hypothesis or even Ampère–Avogadro hypothesis.
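Referring back to the derivation in the Mathematical definition section, the relation V/n = RT/P reduces to a one-line numerical check. The following Python sketch is an illustration only, using standard constants rather than any article data; it shows that the molar volume depends on T and P but not on the amount of gas.

```python
# Sketch: V/n = R*T/P from the ideal gas law; the ratio is independent of n.
R = 8.314462618  # gas constant, J/(mol*K)

def molar_volume(T, P):
    """Molar volume V/n in m3/mol for an ideal gas at temperature T (K) and pressure P (Pa)."""
    return R * T / P

if __name__ == "__main__":
    # Same T and P give the same V/n, whatever the amount of substance.
    print(f"{molar_volume(273.15, 100_000.0) * 1000:.2f} L/mol at 100 kPa, 273.15 K")  # ~22.71
    print(f"{molar_volume(273.15, 101_325.0) * 1000:.2f} L/mol at 1 atm, 273.15 K")    # ~22.41
```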
Experimental studies carried out by Charles Frédéric Gerhardt and Auguste Laurent on organic chemistry demonstrated that Avogadro's law explained why the same quantities of molecules in a gas have the same volume. Nevertheless, related experiments with some inorganic substances showed seeming exceptions to the law. This apparent contradiction was finally resolved by Stanislao Cannizzaro, as announced at the Karlsruhe Congress in 1860, four years after Avogadro's death. He explained that these exceptions were due to molecular dissociations at certain temperatures, and that Avogadro's law determined not only molecular masses, but atomic masses as well. Ideal gas law Boyle's, Charles's and Gay-Lussac's laws, together with Avogadro's law, were combined by Émile Clapeyron in 1834, giving rise to the ideal gas law. At the end of the 19th century, later developments from scientists like August Krönig, Rudolf Clausius, James Clerk Maxwell and Ludwig Boltzmann gave rise to the kinetic theory of gases, a microscopic theory from which the ideal gas law can be derived as a statistical result of the movement of atoms/molecules in a gas. Avogadro constant Avogadro's law provides a way to calculate the quantity of gas in a receptacle. Thanks to this discovery, Johann Josef Loschmidt, in 1865, was able for the first time to estimate the size of a molecule. His calculation gave rise to the concept of the Loschmidt constant, a ratio between macroscopic and atomic quantities. In 1910, Millikan's oil drop experiment determined the charge of the electron; using it with the Faraday constant (derived by Michael Faraday in 1834), one is able to determine the number of particles in a mole of substance. At the same time, precision experiments by Jean Baptiste Perrin led to the definition of the Avogadro number as the number of molecules in one gram-molecule of oxygen. Perrin named the number to honor Avogadro for his discovery of the namesake law. Later standardization of the International System of Units led to the modern definition of the Avogadro constant. Molar volume At standard temperature and pressure (100 kPa and 273.15 K), we can use Avogadro's law to find the molar volume of an ideal gas: Vm = V/n = RT/P ≈ 22.71 L/mol. Similarly, at standard atmospheric pressure (101.325 kPa) and 0 °C (273.15 K): Vm ≈ 22.41 L/mol. Notes References Gas laws Amount of substance
Avogadro's law
[ "Physics", "Chemistry", "Mathematics" ]
1,275
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Chemical quantities", "Amount of substance", "Gas laws", "Wikipedia categories named after physical quantities" ]
238,689
https://en.wikipedia.org/wiki/Meselson%E2%80%93Stahl%20experiment
The Meselson–Stahl experiment is an experiment by Matthew Meselson and Franklin Stahl in 1958 which supported Watson and Crick's hypothesis that DNA replication was semiconservative. In semiconservative replication, when the double-stranded DNA helix is replicated, each of the two new double-stranded DNA helices consisted of one strand from the original helix and one newly synthesized. It has been called "the most beautiful experiment in biology". Meselson and Stahl decided the best way to trace the parent DNA would be to tag them by changing one of its atoms. Since nitrogen is present in all of the DNA bases, they generated parent DNA containing a heavier isotope of nitrogen than would be present naturally. This altered mass allowed them to determine how much of the parent DNA was present in the DNA after successive cycles of replication. Hypothesis Three hypotheses had been previously proposed for the method of replication of DNA. In the semiconservative hypothesis, proposed by Watson and Crick, the two strands of a DNA molecule separate during replication. Each strand then acts as a template for synthesis of a new strand. The conservative hypothesis proposed that the entire DNA molecule acted as a template for the synthesis of an entirely new one. According to this model, histone proteins bind to the DNA, revolving the strand and exposing the nucleotide bases (which normally line the interior) for hydrogen bonding. The dispersive hypothesis is exemplified by a model proposed by Max Delbrück, which attempts to solve the problem of unwinding the two strands of the double helix by a mechanism that breaks the DNA backbone every 10 nucleotides or so, untwists the molecule, and attaches the old strand to the end of the newly synthesized one. This would synthesize the DNA in short pieces alternating from one strand to the other. Each of these three models makes a different prediction about the distribution of the "old" DNA in molecules formed after replication. In the conservative hypothesis, after replication, one molecule is the entirely conserved "old" molecule, and the other is all newly synthesized DNA. The semiconservative hypothesis predicts that each molecule after replication will contain one old and one new strand. The dispersive model predicts that each strand of each new molecule will contain a mixture of old and new DNA. Experimental procedure and results Nitrogen is a major constituent of DNA. 14N is by far the most abundant isotope of nitrogen, but DNA with the heavier (but non-radioactive) 15N isotope is also functional. E. coli was grown for several generations in a medium containing NH4Cl with 15N. When DNA is extracted from these cells and made to undergo buoyant density centrifugation on a salt (CsCl) density gradient, the DNA separates out at the point at which its density equals that of the salt solution. The DNA of the cells grown in 15N medium had a higher density than cells grown in normal 14N medium. After that, E. coli cells with only 15N in their DNA were transferred to a 14N medium and were allowed to divide; the progress of cell division was monitored by microscopic cell counts and by colony assay. DNA was extracted periodically and was compared to pure 14N DNA and 15N DNA. After one replication, the DNA was found to have intermediate density. Since conservative replication would result in equal amounts of DNA of the higher and lower densities (but no DNA of an intermediate density), conservative replication was excluded. 
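The predictions of the three hypotheses can be made concrete by tracking the fraction of duplexes that are fully heavy (15N/15N), hybrid (15N/14N), or fully light (14N/14N) in each generation of growth in 14N medium. The Python sketch below is an illustrative model only, not taken from the original paper; it covers the semiconservative and conservative cases, while the dispersive model distributes label within strands and is not captured by these three classes.

```python
# Sketch: fractions of fully heavy, hybrid, and fully light duplexes after n
# generations in 14N medium, under the semiconservative and conservative models
# described above. Illustrative model only.
def semiconservative(n):
    if n == 0:
        return {"heavy": 1.0, "hybrid": 0.0, "light": 0.0}
    total = 2 ** n                      # duplexes per starting duplex
    return {"heavy": 0.0, "hybrid": 2 / total, "light": (total - 2) / total}

def conservative(n):
    total = 2 ** n
    return {"heavy": 1 / total, "hybrid": 0.0, "light": (total - 1) / total}

if __name__ == "__main__":
    for gen in range(3):
        print(gen, semiconservative(gen), conservative(gen))
```

After one generation the semiconservative model predicts a single hybrid band, while the conservative model predicts separate heavy and light bands of equal intensity, which is the outcome excluded by the observed intermediate density described above.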
However, this result was consistent with both semiconservative and dispersive replication. Semiconservative replication would result in double-stranded DNA with one strand of 15N DNA, and one of 14N DNA, while dispersive replication would result in double-stranded DNA with both strands having mixtures of 15N and 14N DNA, either of which would have appeared as DNA of an intermediate density. The authors continued to sample cells as replication continued. DNA from cells after two replications had been completed was found to consist of equal amounts of DNA with two different densities, one corresponding to the intermediate density of DNA of cells grown for only one division in 14N medium, the other corresponding to DNA from cells grown exclusively in 14N medium. This was inconsistent with dispersive replication, which would have resulted in a single density, lower than the intermediate density of the one-generation cells, but still higher than cells grown only in 14N DNA medium, as the original 15N DNA would have been split evenly among all DNA strands. The result was consistent with the semiconservative replication hypothesis. References External links Matthew Meselson's Short Talk: "The Semi-Conservative Replication of DNA" DNA From The Beginning An animation which explains the experiment. The Meselson–Stahl Experiment Another useful animation. Meselson and Stahl Experiment English Animation Description of the Meselson-Stahl Experiment written by Nathan H. Lents, including original data from Visionlearning DNA DNA replication Genetics experiments History of genetics 1958 in biology
Meselson–Stahl experiment
[ "Biology" ]
1,036
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
238,706
https://en.wikipedia.org/wiki/Alpha-fetoprotein
Alpha-fetoprotein (AFP, α-fetoprotein; also sometimes called alpha-1-fetoprotein, alpha-fetoglobulin, or alpha fetal protein) is a protein that in humans is encoded by the AFP gene. The AFP gene is located on the q arm of chromosome 4 (4q13.3). Maternal AFP serum level is used to screen for Down syndrome, neural tube defects, and other chromosomal abnormalities. AFP is a major plasma protein produced by the yolk sac and the fetal liver during fetal development. It is thought to be the fetal analog of serum albumin. AFP binds to copper, nickel, fatty acids and bilirubin and is found in monomeric, dimeric and trimeric forms. Structure AFP is a glycoprotein of 591 amino acids and a carbohydrate moiety. Function The function of AFP in adult humans is unknown. AFP is the most abundant plasma protein found in the human fetus. In the fetus, AFP is produced by both the liver and the yolk sac. It is believed to function as a carrier protein (similar to albumin) that transports materials such as fatty acids to cells. Maternal plasma levels peak near the end of the first trimester, and begin decreasing prenatally at that time, then decrease rapidly after birth. Normal adult levels in the newborn are usually reached by the age of 8 to 12 months. While the function in humans is unknown, in rodents it binds estradiol to prevent the transport of this hormone across the placenta to the fetus. The main function of this is to prevent the virilization of female fetuses. As human AFP does not bind estrogen, its function in humans is less clear. In human liver cancer, AFP is found to bind glypican-3 (GPC3), another oncofetal antigen. The rodent AFP system can be overridden with massive injections of estrogen, which overwhelm the AFP system and will masculinize the fetus. The masculinizing effect of estrogens may seem counter-intuitive since estrogens are critical for the proper development of female secondary characteristics during puberty. However, this is not the case prenatally. Gonadal hormones from the testes, such as testosterone and anti-Müllerian hormone, are required to cause development of a phenotypic male. Without these hormones, the fetus will develop into a phenotypic female even if genetically XY. The conversion of testosterone into estradiol by aromatase in many tissues may be an important step in masculinization of that tissue. Masculinization of the brain is thought to occur both by conversion of testosterone into estradiol by aromatase, but also by de novo synthesis of estrogens within the brain. Thus, AFP may protect the fetus from maternal estradiol that would otherwise have a masculinizing effect on the fetus, but its exact role is still controversial. Serum levels Maternal In pregnant women, fetal AFP levels can be monitored in the urine of the pregnant woman. Since AFP is quickly cleared from the mother's serum via her kidneys, maternal urine AFP correlates with fetal serum levels, although the maternal urine level is much lower than the fetal serum level. AFP levels rise until about week 32. Maternal serum alpha-fetoprotein (MSAFP) screening is performed at 16 to 18 weeks of gestation. If MSAFP levels indicate an anomaly, amniocentesis may be offered to the patient. Infants The normal range of AFP for adults and children is variously reported as under 50, under 10, or under 5 ng/mL. At birth, normal infants have AFP levels four or more orders of magnitude above this normal range, that decreases to a normal range over the first year of life. 
During this time, the normal range of AFP levels spans approximately two orders of magnitude. Correct evaluation of abnormal AFP levels in infants must take into account these normal patterns. Very high AFP levels may be subject to hooking (see Tumor marker), which results in the level being reported significantly lower than the actual concentration. This is important for analysis of a series of AFP tumor marker tests, e.g. in the context of post-treatment early surveillance of cancer survivors, where the rate of decrease of AFP has diagnostic value. Clinical significance Measurement of AFP is generally used in two clinical contexts. First, it is measured in pregnant women through the analysis of maternal blood or amniotic fluid as a screening test for certain developmental abnormalities, such as aneuploidy. Second, serum AFP level is elevated in people with certain tumors, and so it is used as a biomarker to follow these diseases. Some of these diseases are listed below: Developmental birth defects associated with elevated AFP Omphalocele Gastroschisis Neural tube defects: ↑ α-fetoprotein in amniotic fluid and maternal serum Tumors associated with elevated AFP Hepatocellular carcinoma Metastatic disease affecting the liver Nonseminomatous germ cell tumors Yolk sac tumor Other conditions associated with elevated AFP Ataxia telangiectasia: elevated AFP is used as one factor in diagnosis A peptide derived from AFP that is referred to as AFPep is claimed to possess anti-cancer properties. In the treatment of testicular cancer it is paramount to differentiate seminomatous and nonseminomatous tumors. This is typically done pathologically after removal of the testicle and confirmed by tumor markers. However, if the pathology is pure seminoma, if the AFP is elevated, the tumor is treated as a nonseminomatous tumor because it contains yolk sac (nonseminomatous) components. See also Tumor marker AFP-L3 Triple test Advanced maternal age References Further reading External links Glycoproteins Tumor markers Obstetrics Midwifery
Alpha-fetoprotein
[ "Chemistry", "Biology" ]
1,261
[ "Biomarkers", "Tumor markers", "Glycoproteins", "Glycobiology", "Chemical pathology" ]
238,790
https://en.wikipedia.org/wiki/Methylene%20blue
Methylthioninium chloride, commonly called methylene blue, is a salt used as a dye and as a medication. As a medication, it is mainly used to treat methemoglobinemia by chemically reducing the ferric iron in hemoglobin to ferrous iron. Specifically, it is used to treat methemoglobin levels that are greater than 30% or in which there are symptoms despite oxygen therapy. It has previously been used for treating cyanide poisoning and urinary tract infections, but this use is no longer recommended. Methylene blue is typically given by injection into a vein. Common side effects include headache, nausea, and vomiting. While use during pregnancy may harm the fetus, not using it in methemoglobinemia is likely more dangerous. Methylene blue was first prepared in 1876 by Heinrich Caro. It is on the World Health Organization's List of Essential Medicines. Uses Methemoglobinemia Methylene blue is employed as a medication for the treatment of methemoglobinemia, which can arise from ingestion of certain pharmaceuticals, toxins, or broad beans in those susceptible. Normally, through the NADH- or NADPH-dependent methemoglobin reductase enzymes, methemoglobin is reduced back to hemoglobin. When large amounts of methemoglobin occur secondary to toxins, methemoglobin reductases are overwhelmed. Methylene blue, when injected intravenously as an antidote, is itself first reduced to leucomethylene blue, which then reduces the heme group from methemoglobin to hemoglobin. Methylene blue can reduce the half-life of methemoglobin from hours to minutes. At high doses, however, methylene blue actually induces methemoglobinemia, reversing this pathway. Cyanide poisoning Since its reduction potential is similar to that of oxygen and it can be reduced by components of the electron transport chain, large doses of methylene blue are sometimes used as an antidote to potassium cyanide poisoning, a method first successfully tested in 1933 by Matilda Moldenhauer Brooks in San Francisco, although first demonstrated by Bo Sahlin of Lund University, in 1926. Dye or stain Methylene blue is used in endoscopic polypectomy as an adjunct to saline or epinephrine, and is used for injection into the submucosa around the polyp to be removed. This allows the submucosal tissue plane to be identified after the polyp is removed, which is useful in determining if more tissue needs to be removed, or if there has been a high risk for perforation. Methylene blue is also used as a dye in chromoendoscopy, and is sprayed onto the mucosa of the gastrointestinal tract in order to identify dysplasia, or pre-cancerous lesions. Intravenously injected methylene blue is readily released into the urine and thus can be used to test the urinary tract for leaks or fistulas. In surgeries such as sentinel lymph node dissections, methylene blue can be used to visually trace the lymphatic drainage of tested tissues. Similarly, methylene blue is added to bone cement in orthopedic operations to provide easy discrimination between native bone and cement. Additionally, methylene blue accelerates the hardening of bone cement, increasing the speed at which bone cement can be effectively applied. Methylene blue is used as an aid to visualisation/orientation in a number of medical devices, including a surgical sealant film, TissuePatch. In fistulas and pilonidal sinuses it is used to identify the tract for complete excision. It can also be used during gastrointestinal surgeries (such as bowel resection or gastric bypass) to test for leaks. 
It is sometimes used in cytopathology, in mixtures including Wright-Giemsa and Diff-Quik. It confers a blue color to both nuclei and cytoplasm, and makes the nuclei more visible. When methylene blue is "polychromed" (oxidized in solution or "ripened" by fungal metabolism, as originally noted in the thesis of Dr. D. L. Romanowsky in the 1890s), it gets serially demethylated and forms all the tri-, di-, mono- and non-methyl intermediates, which are Azure B, Azure A, Azure C, and thionine, respectively. This is the basis of the basophilic part of the spectrum of the Romanowsky-Giemsa effect. If only synthetic Azure B and Eosin Y are used, the mixture may serve as a standardized Giemsa stain; but, without methylene blue, the normal neutrophilic granules tend to overstain and look like toxic granules. On the other hand, if methylene blue is used, it might help to give the normal look of neutrophil granules and may also enhance the staining of nucleoli and polychromatophilic RBCs (reticulocytes). A traditional application of methylene blue is the intravital or supravital staining of nerve fibers, an effect first described by Paul Ehrlich in 1887. A dilute solution of the dye is either injected into tissue or applied to small freshly removed pieces. The selective blue coloration develops with exposure to air (oxygen) and can be fixed by immersion of the stained specimen in an aqueous solution of ammonium molybdate. Vital methylene blue was formerly much used for examining the innervation of muscle, skin and internal organs. The mechanism of selective dye uptake is incompletely understood; vital staining of nerve fibers in skin is prevented by ouabain, a drug that inhibits the Na/K-ATPase of cell membranes. Placebo Methylene blue has been used as a placebo; physicians would tell their patients to expect their urine to change color and view this as a sign that their condition had improved. This same side effect makes methylene blue difficult to use in traditional placebo-controlled clinical studies, including those testing for its efficacy as a treatment. Isobutyl nitrite toxicity Isobutyl nitrite is one of the compounds used as poppers, an inhalant drug that induces a brief euphoria. Isobutyl nitrite is known to cause methemoglobinemia. Severe methemoglobinemia may be treated with methylene blue. Ifosfamide toxicity Another use of methylene blue is to treat ifosfamide neurotoxicity. Methylene blue was first reported for treatment and prophylaxis of ifosfamide neuropsychiatric toxicity in 1994. A toxic metabolite of ifosfamide, chloroacetaldehyde (CAA), disrupts the mitochondrial respiratory chain, leading to an accumulation of nicotinamide adenine dinucleotide hydrogen (NADH). Methylene blue acts as an alternative electron acceptor, and reverses the NADH inhibition of hepatic gluconeogenesis while also inhibiting the transformation of chloroethylamine into chloroacetaldehyde, and inhibits multiple amine oxidase activities, preventing the formation of CAA. The dosing of methylene blue for treatment of ifosfamide neurotoxicity varies, depending upon its use simultaneously as an adjuvant in ifosfamide infusion, versus its use to reverse psychiatric symptoms that manifest after completion of an ifosfamide infusion. Reports suggest that up to six doses of methylene blue a day have resulted in improvement of symptoms within 10 minutes to several days. 
Alternatively, it has been suggested that intravenous methylene blue be given every six hours for prophylaxis during ifosfamide treatment in people with a history of ifosfamide neuropsychiatric toxicity. Prophylactic administration of methylene blue the day before initiation of ifosfamide, and three times daily during ifosfamide chemotherapy, has been recommended to lower the occurrence of ifosfamide neurotoxicity. Shock It has also been used in septic shock and anaphylaxis. Methylene blue consistently increases blood pressure in people with vasoplegic syndrome (redistributive shock), but has not been shown to improve delivery of oxygen to tissues or to decrease mortality. Methylene blue has been used in calcium channel blocker toxicity as a rescue therapy for distributive shock unresponsive to first-line agents. Evidence for its use in this circumstance is very poor and limited to a handful of case reports. Side effects Methylene blue is a monoamine oxidase inhibitor (MAOI) and, if infused intravenously at doses exceeding 5 mg/kg, may result in serotonin syndrome if combined with any selective serotonin reuptake inhibitors (SSRIs) or other serotonergic drugs (e.g., duloxetine, sibutramine, venlafaxine, clomipramine, imipramine). It causes hemolytic anemia in carriers of the G6PD (favism) enzymatic deficiency. Chemistry Methylene blue is a formal derivative of phenothiazine. It is a dark green powder that yields a blue solution in water. The hydrated form has 3 molecules of water per unit of methylene blue. Preparation This compound is prepared by oxidation of 4-aminodimethylaniline in the presence of sodium thiosulfate to give the quinonediiminothiosulfonic acid, reaction with dimethylaniline, oxidation to the indamine, and cyclization to give the thiazine. A green electrochemical procedure, using only dimethyl-4-phenylenediamine and sulfide ions, has been proposed. Light absorption properties The maximum absorption of light is near 670 nm. The specifics of absorption depend on a number of factors, including protonation, adsorption to other materials, and metachromasy, the formation of dimers and higher-order aggregates depending on concentration and other interactions. Other uses Redox indicator Methylene blue is widely used as a redox indicator in analytical chemistry. Solutions of this substance are blue when in an oxidizing environment, but will turn colorless if exposed to a reducing agent. The redox properties can be seen in a classical demonstration of chemical kinetics in general chemistry, the "blue bottle" experiment. Typically, a solution is made of glucose (dextrose), methylene blue, and sodium hydroxide. Upon shaking the bottle, oxygen oxidizes methylene blue, and the solution turns blue. The dextrose will gradually reduce the methylene blue to its colorless, reduced form. Hence, when the dissolved dextrose is entirely consumed, the solution will turn blue again. The redox midpoint potential E' is +0.01 V. Peroxide generator Methylene blue is also a photosensitizer used to create singlet oxygen when exposed to both oxygen and light. It is used in this regard to make organic peroxides by a Diels-Alder reaction which is spin-forbidden with normal atmospheric triplet oxygen. Sulfide analysis The formation of methylene blue after the reaction of hydrogen sulfide with dimethyl-p-phenylenediamine and iron(III) at pH 0.4 – 0.7 is used to determine sulfide concentrations photometrically in the range 0.020 to 1.50 mg/L (20 ppb to 1.5 ppm). 
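As a brief illustrative aside (a generic sketch of the photometric step, not the documented procedure of any particular kit): such measurements rest on the Beer–Lambert law, with the standard symbols below (absorbance A, molar absorptivity ε, optical path length l, concentration c) introduced here purely for illustration:

\[
A = \varepsilon \, l \, c
\qquad\Longrightarrow\qquad
c = \frac{A}{\varepsilon \, l}
\]

In practice the sulfide concentration is usually read from a calibration curve prepared with sulfide standards rather than from a tabulated ε. Note also that for dilute aqueous solutions 1 mg/L corresponds to roughly 1 ppm by mass, which is why the quoted range of 0.020–1.50 mg/L is equivalently stated as 20 ppb to 1.5 ppm.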
The test is very sensitive and the blue coloration developing upon contact of the reagents with dissolved H2S is stable for 60 min. Ready-to-use kits such as the Spectroquant sulfide test facilitate routine analyses. The methylene blue sulfide test is a convenient method often used in soil microbiology to quickly detect in water the metabolic activity of sulfate-reducing bacteria (SRB). In this colorimetric test, methylene blue is a product formed by the reaction and not a reagent added to the system. The addition of a strong reducing agent, such as ascorbic acid, to a sulfide-containing solution is sometimes used to prevent sulfide oxidation from atmospheric oxygen. Although it is certainly a sound precaution for the determination of sulfide with an ion selective electrode, it might however hamper the development of the blue color if the freshly formed methylene blue is also reduced, as described above in the paragraph on redox indicators. Test for milk freshness Methylene blue is a dye behaving as a redox indicator that is commonly used in the food industry to test the freshness of milk and dairy products. A few drops of methylene blue solution added to a sample of milk should remain blue (the oxidized form, in the presence of enough dissolved O2); otherwise (discoloration caused by the reduction of methylene blue into its colorless reduced form) the dissolved O2 concentration in the milk sample is low, indicating that the milk is not fresh (already abiotically oxidized by O2, whose concentration in solution decreases) or could be contaminated by bacteria also consuming the atmospheric O2 dissolved in the milk. In other words, aerobic conditions should prevail in fresh milk, and methylene blue is simply used as an indicator of the dissolved oxygen remaining in the milk. Water testing The adsorption of methylene blue serves as an indicator defining the adsorptive capacity of granular activated carbon in water filters. Adsorption of methylene blue is very similar to adsorption of pesticides from water; this quality makes methylene blue serve as a good predictor for the filtration qualities of carbon. It is also a quick method of comparing different batches of activated carbon of the same quality. A color reaction in an acidified, aqueous methylene blue solution containing chloroform can detect anionic surfactants in a water sample. Such a test is known as an MBAS assay (methylene blue active substances assay). The MBAS assay cannot distinguish between specific surfactants, however. Some examples of anionic surfactants are carboxylates, phosphates, sulfates, and sulfonates. Methylene blue value of fine aggregate The methylene blue value is defined as the number of milliliters of standard methylene blue solution decolorized by 0.1 g of activated carbon (dry basis). The methylene blue value reflects the amount of clay minerals in aggregate samples. In materials science, methylene blue solution is successively added to fine aggregate which is being agitated in water. The presence of free dye solution can be checked with a stain test on filter paper. Biological staining In biology, methylene blue is used as a dye for a number of different staining procedures, such as Wright's stain and Jenner's stain. Since it is a temporary staining technique, methylene blue can also be used to examine RNA or DNA under the microscope or in a gel: as an example, a solution of methylene blue can be used to stain RNA on hybridization membranes in northern blotting to verify the amount of nucleic acid present. 
While methylene blue is not as sensitive as ethidium bromide, it is less toxic and it does not intercalate in nucleic acid chains, thus avoiding interference with nucleic acid retention on hybridization membranes or with the hybridization process itself. It can also be used as an indicator to determine whether eukaryotic cells such as yeast are alive or dead. The methylene blue is reduced in viable cells, leaving them unstained. However dead cells are unable to reduce the oxidized methylene blue and the cells are stained blue. Methylene blue can interfere with the respiration of the yeast as it picks up hydrogen ions made during the process. Aquaculture Methylene blue is used in aquaculture and by tropical fish hobbyists as a treatment for fungal infections. It can also be effective in treating fish infected with ich although a combination of malachite green and formaldehyde is far more effective against the parasitic protozoa Ichthyophthirius multifiliis. It is usually used to protect newly laid fish eggs from being infected by fungus or bacteria. This is useful when the hobbyist wants to artificially hatch the fish eggs. Methylene blue is also very effective when used as part of a "medicated fish bath" for treatment of ammonia, nitrite, and cyanide poisoning as well as for topical and internal treatment of injured or sick fish as a "first response". History Methylene blue has been described as "the first fully synthetic drug used in medicine." Methylene blue was first prepared in 1876 by German chemist Heinrich Caro. Its use in the treatment of malaria was pioneered by Paul Guttmann and Paul Ehrlich in 1891. During this period before the first World War, researchers like Ehrlich believed that drugs and dyes worked in the same way, by preferentially staining pathogens and possibly harming them. Changing the cell membrane of pathogens is in fact how various drugs work, so the theory was partially correct although far from complete. Methylene blue continued to be used in the second World War, where it was not well liked by soldiers, who observed, "Even at the loo, we see, we pee, navy blue." Antimalarial use of the drug has recently been revived. It was discovered to be an antidote to carbon monoxide poisoning and cyanide poisoning in 1933 by Matilda Brooks. References External links Antidotes Genetics techniques Histology Redox indicators Thiazine dyes Vital stains World Health Organization essential medicines Monoamine oxidase inhibitors Phenothiazines Chlorides Wikipedia medicine articles ready to translate Dimethylamino compounds
Methylene blue
[ "Chemistry", "Engineering", "Biology" ]
3,731
[ "Genetics techniques", "Chlorides", "Inorganic compounds", "Genetic engineering", "Salts", "Redox indicators", "Histology", "Electrochemistry", "Microscopy" ]
238,839
https://en.wikipedia.org/wiki/Tetrodotoxin
Tetrodotoxin (TTX) is a potent neurotoxin. Its name derives from Tetraodontiformes, an order that includes pufferfish, porcupinefish, ocean sunfish, and triggerfish; several of these species carry the toxin. Although tetrodotoxin was discovered in these fish, it is found in several other animals (e.g., in blue-ringed octopuses, rough-skinned newts, and moon snails). It is also produced by certain infectious or symbiotic bacteria like Pseudoalteromonas, Pseudomonas, and Vibrio as well as other species found in symbiotic relationships with animals and plants. Although it produces thousands of intoxications annually and several deaths, it has shown efficacy for the treatment of cancer-related pain in phase II and III clinical trials. Tetrodotoxin is a sodium channel blocker. It inhibits the firing of action potentials in neurons by binding to the voltage-gated sodium channels in nerve cell membranes and blocking the passage of sodium ions (responsible for the rising phase of an action potential) into the neuron. This prevents the nervous system from carrying messages and thus muscles from contracting in response to nervous stimulation. Its mechanism of action, the selective blocking of the sodium channel, was shown definitively in 1964 by Toshio Narahashi and John W. Moore at Duke University, using the sucrose gap voltage clamp technique. Sources in nature Apart from their bacterial species of most likely ultimate biosynthetic origin (see below), tetrodotoxin has been isolated from widely differing animal species, including: all octopuses and cuttlefish in small amounts, but specifically several species of the blue-ringed octopus, including Hapalochlaena maculosa (where it was called "maculotoxin"), various pufferfish species, certain angelfish, species of Nassarius gastropods, species of Naticidae (moon snails), several starfish, including Astropecten species, several species of xanthid crabs, species of Chaetognatha (arrow worms), species of Nemertea (ribbon worms), a polyclad flatworm, land planarians of the genus Bipalium, toads of the genus Atelopus, toads of the genus Brachycephalus, the eastern newt (Notophthalmus viridescens), and the western or rough-skinned newts (Taricha; wherein it was originally termed "tarichatoxin"). Tarichatoxin was shown to be identical to TTX in 1964 by Mosher et al., and the identity of maculotoxin and TTX was reported in Science in 1978; the synonymity of these two toxins is supported in modern reports (e.g., at Pubchem and in modern toxicology textbooks), though historic monographs questioning this continue in reprint. The toxin is variously used by animals as a defensive biotoxin to ward off predation, or as both a defensive and predatory venom (e.g., in octopuses, chaetognaths, and ribbon worms). Even though the toxin acts as a defense mechanism, some predators such as the common garter snake have developed insensitivity to TTX, which allows them to prey upon toxic newts. The association of TTX with consumed, infecting, or symbiotic bacterial populations within the animal species from which it is isolated is relatively clear; the presence of TTX-producing bacteria within an animal's microbiome is determined by culture methods, the presence of the toxin by chemical analysis, and the association of the bacteria with TTX production by toxicity assay of media in which suspected bacteria are grown. As Lago et al. note, "there is good evidence that uptake of bacteria producing TTX is an important element of TTX toxicity in marine animals that present this toxin." 
TTX-producing bacteria include Actinomyces, Aeromonas, Alteromonas, Bacillus, Pseudomonas, and Vibrio species, and specific bacterial species have been implicated in a number of the animal groups described above. The association of bacterial species with the production of the toxin is unequivocal – Lago and coworkers state, "[e]ndocellular symbiotic bacteria have been proposed as a possible source of eukaryotic TTX by means of an exogenous pathway", and Chau and coworkers note that the "widespread occurrence of TTX in phylogenetically distinct organisms... strongly suggests that symbiotic bacteria play a role in TTX biosynthesis" – although the correlation has been extended to most but not all animals in which the toxin has been identified. To the contrary, there has been a failure in a single case, that of newts (Taricha granulosa), to detect TTX-producing bacteria in the tissues with the highest toxin levels (skin, ovaries, muscle), using PCR methods, although technical concerns about the approach have been raised. Critically for the general argument, Takifugu rubripes puffers captured and raised in the laboratory on controlled, TTX-free diets "lose toxicity over time", while cultured, TTX-free Takifugu niphobles puffers fed on TTX-containing diets saw TTX in the livers of the fishes increase to toxic levels. Hence, as bacterial species that produce TTX are broadly present in aquatic sediments, a strong case is made for ingestion of TTX and/or TTX-producing bacteria, with accumulation and possible subsequent colonization and production. Nevertheless, without clear biosynthetic pathways (not yet found in animals, but shown for bacteria), it remains uncertain whether it is simply via bacteria that each animal accumulates TTX; the question remains as to whether the quantities can be sufficiently explained by ingestion, ingestion plus colonization, or some other mechanism. Biochemistry Tetrodotoxin binds to what is known as site 1 of the fast voltage-gated sodium channel. Site 1 is located at the extracellular pore opening of the ion channel. Any molecule bound to this site will block sodium ions from going into the nerve cell through this channel (which is ultimately necessary for nerve conduction). Saxitoxin, neosaxitoxin, and several of the conotoxins also bind the same site. The use of this toxin as a biochemical probe has elucidated two distinct types of voltage-gated sodium channels (VGSCs) present in mammals: tetrodotoxin-sensitive voltage-gated sodium channels (TTX-s Na+ channels) and tetrodotoxin-resistant voltage-gated sodium channels (TTX-r Na+ channels). Tetrodotoxin inhibits TTX-s Na+ channels at concentrations of around 1–10 nM, whereas micromolar concentrations of tetrodotoxin are required to inhibit TTX-r Na+ channels. Nerve cells containing TTX-r Na+ channels are located primarily in cardiac tissue, while nerve cells containing TTX-s Na+ channels dominate the rest of the body. TTX and its analogs have historically been important chemical tool compounds, used in channel characterization and in fundamental studies of channel function. The prevalence of TTX-s Na+ channels in the central nervous system makes tetrodotoxin a valuable agent for the silencing of neural activity within a cell culture. Biosynthesis The biosynthetic route to TTX is only partially understood. It has long been known that the molecule is related to saxitoxin, and as of 2011 it is believed that there are separate routes for aquatic (bacterial) and terrestrial (newt) TTX. 
In 2020, new intermediates found in newts suggest that the synthesis starts with geranyl guanidine in the amphibian; these intermediates were not found in aquatic TTX-containing animals, supporting the separate-route theory. In 2021, the first genome of a TTX-producing bacterium was produced. This "Bacillus sp. 1839" was identified as Cytobacillus gottheilii using its rRNA sequence. The researcher responsible for this study has not yet identified a coherent pathway but hopes to do so in the future. Resistance Animals that accumulate TTX as a defense mechanism as well as their predators must evolve to be resistant to the effects of TTX. Mutations in the VGSC genes, especially the genes for Nav 1.4 (skeletal muscle VGSC, "TTX-s"), are found in many such animals. These mutations have independently arisen several times, even multiple times in different populations of the same species as seen in the garter snake. They consist of different amino acid substitutions in similar positions, a weak example of convergent evolution caused by how TTX binds to the unmutated VGSC. Another path to TTX resistance is toxin-binding proteins that hold onto TTX tightly enough to prevent it reaching the vulnerable VGSCs. Various proteins that bind TTX have been found in pufferfish, crabs, and gastropods. There are also proteins that bind saxitoxin (STX), a toxin with a similar mode of action. Chemical synthesis In 1964, a team of scientists led by Robert B. Woodward elucidated the structure of tetrodotoxin. The structure was confirmed by X-ray crystallography in 1970. Yoshito Kishi and coworkers reported the first total synthesis of racemic tetrodotoxin in 1972. M. Isobe and coworkers and J. Du Bois reported the asymmetric total synthesis of tetrodotoxin in 2003. The two 2003 syntheses used very different strategies, with Isobe's route based on a Diels-Alder approach and Du Bois's work using C–H bond activation. Since then, methods have rapidly advanced, with several new strategies for the synthesis of tetrodotoxin having been developed. Poisoning Toxicity TTX is extremely toxic. The material safety data sheet for TTX lists the oral median lethal dose (LD50) for mice as 334 μg per kg. For comparison, the oral LD50 of potassium cyanide for mice is 8,500 μg per kg, demonstrating that even orally, TTX is more poisonous than cyanide. TTX is even more dangerous if administered intravenously; the amount needed to reach a lethal dose by injection is 8 μg per kg in mice. The toxin can enter the body of a victim by ingestion, injection, or inhalation, or through abraded skin. Poisoning occurring as a consequence of consumption of fish from the order Tetraodontiformes is extremely serious. The organs (e.g., liver) of the pufferfish can contain levels of tetrodotoxin sufficient to produce the described paralysis of the diaphragm and corresponding death due to respiratory failure. Toxicity varies between species and at different seasons and geographic localities, and the flesh of many pufferfish may not be dangerously toxic. The mechanism of toxicity is through the blockage of fast voltage-gated sodium channels, which are required for the normal transmission of signals between the body and brain. As a result, TTX causes loss of sensation, and paralysis of voluntary muscles including the diaphragm and intercostal muscles, stopping breathing. 
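To put the figures quoted above in proportion (a back-of-the-envelope comparison using only the murine LD50 values given in this section, not a clinical dose calculation):

\[
\frac{\mathrm{LD_{50}}(\text{KCN, oral})}{\mathrm{LD_{50}}(\text{TTX, oral})} = \frac{8500\ \mu\mathrm{g/kg}}{334\ \mu\mathrm{g/kg}} \approx 25,
\qquad
\frac{\mathrm{LD_{50}}(\text{TTX, oral})}{\mathrm{LD_{50}}(\text{TTX, i.v.})} = \frac{334\ \mu\mathrm{g/kg}}{8\ \mu\mathrm{g/kg}} \approx 40
\]

On these figures, tetrodotoxin is roughly 25 times more potent than potassium cyanide by the oral route in mice, and roughly 40 times more potent again when given intravenously rather than orally.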
History The therapeutic uses of puffer fish (tetraodon) eggs were mentioned in the first Chinese pharmacopoeia Pen-T’so Ching (The Book of Herbs, allegedly 2838–2698 BC by Shennong; but a later date is more likely), where they were classified as having "medium" toxicity, but could have a tonic effect when used at the correct dose. The principal use was "to arrest convulsive diseases". In the Pen-T’so Kang Mu (Index Herbacea or The Great Herbal by Li Shih-Chen, 1596) some types of the fish Ho-Tun (the current Chinese name for tetraodon) were also recognized as toxic yet, at the right dose, useful as part of a tonic. Increased toxicity in Ho-Tun was noted in fish caught at sea (rather than river) after the month of March. It was recognized that the most poisonous parts were the liver and eggs, but that toxicity could be reduced by soaking the eggs. (Tetrodotoxin is slightly water-soluble, and soluble at 1 mg/ml in slightly acidic solutions.) The German physician Engelbert Kaempfer, in his "A History of Japan" (translated and published in English in 1727), described how well known the toxic effects of the fish were, to the extent that it would be used for suicide and that the Emperor specifically decreed that soldiers were not permitted to eat it. There is also evidence from other sources that knowledge of such toxicity was widespread throughout southeast Asia and India. The first recorded cases of TTX poisoning affecting Westerners are from the logs of Captain James Cook from 7 September 1774. On that date Cook recorded his crew eating some local tropical fish (pufferfish), then feeding the remains to the pigs kept on board. The crew experienced numbness and shortness of breath, while the pigs were all found dead the next morning. In hindsight, it is clear that the crew survived a mild dose of tetrodotoxin, while the pigs ate the pufferfish body parts that contain most of the toxin, thus being fatally poisoned. The toxin was first isolated and named in 1909 by Japanese scientist Dr. Yoshizumi Tahara. It was one of the agents studied by Japan's Unit 731, which evaluated biological weapons on human subjects in the 1930s. Symptoms and treatment The diagnosis of pufferfish poisoning is based on the observed symptomatology and recent dietary history. Symptoms typically develop within 30 minutes of ingestion, but may be delayed by up to four hours; however, if the dose is fatal, symptoms are usually present within 17 minutes of ingestion. Paraesthesia (pins and needles) of the lips and tongue is followed by paraesthesia in the extremities, hypersalivation, sweating, headache, weakness, lethargy, incoordination, tremor, paralysis, bluish skin, loss of voice, difficulty swallowing, and seizures. The gastrointestinal symptoms are often severe and include nausea, vomiting, diarrhoea, and abdominal pain; death is usually secondary to respiratory failure. There is increasing respiratory distress, speech is affected, and the victim usually exhibits shortness of breath, excess pupil dilation, and abnormally low blood pressure. Paralysis increases, and convulsions, mental impairment, and irregular heartbeats may occur. The victim, although completely paralysed, may be conscious and in some cases completely lucid until shortly before death, which generally occurs within 4 to 6 hours (range ~20 minutes to ~8 hours). However, some victims enter a coma. If the patient survives 24 hours, recovery without any aftereffects will usually occur over a few days. 
Therapy is supportive and based on symptoms, with aggressive early airway management. If consumed, treatment can consist of emptying the stomach, feeding the victim activated charcoal to bind the toxin, and taking standard life-support measures to keep the victim alive until the effect of the poison has worn off. Alpha adrenergic agonists are recommended in addition to intravenous fluids to increase the blood pressure; anticholinesterase agents "have been proposed as a treatment option but have not been tested adequately". No antidote has been developed and approved for human use, but a primary research report (preliminary result) indicates that a monoclonal antibody specific to tetrodotoxin is in development by USAMRIID that was effective, in the one study, for reducing toxin lethality in tests on mice. Worldwide distribution of toxicity Poisonings from tetrodotoxin have been almost exclusively associated with the consumption of pufferfish from waters of the Indo-Pacific Ocean regions, primarily because equally toxic pufferfishes from other regions are much less commonly eaten. Several reported cases of poisonings, including fatalities, nonetheless involved pufferfish from the Atlantic Ocean, Gulf of Mexico, and Gulf of California. There have been no confirmed cases of tetrodotoxicity from the Atlantic pufferfish, Sphoeroides maculatus, but three studies found extracts from fish of this species highly toxic in mice. Several recent intoxications from these fishes in Florida were due to saxitoxin, which causes paralytic shellfish poisoning with very similar symptoms and signs. The trumpet shell Charonia sauliae has been implicated in food poisonings, and evidence suggests it contains a tetrodotoxin derivative. There have been several reported poisonings from mislabelled pufferfish, and at least one report of a fatal episode in Oregon when an individual swallowed a rough-skinned newt Taricha granulosa on a dare. In 2009, a major scare in the Auckland Region of New Zealand was sparked after several dogs died eating Pleurobranchaea maculata (grey side-gilled seaslug) on beaches. Children and pet owners were asked to avoid beaches, and recreational fishing was also interrupted for a time. After exhaustive analysis, it was found that the sea slugs must have ingested tetrodotoxin. Statistical factors Statistics from the Tokyo Bureau of Social Welfare and Public Health indicate 20–44 incidents of fugu poisoning per year between 1996 and 2006 in the entire country, leading to 34–64 hospitalizations and 0–6 deaths per year, for an average fatality rate of 6.8%. Of the 23 incidents recorded within Tokyo between 1993 and 2006, only one took place in a restaurant, while the others all involved fishermen eating their catch. From 2006 through 2009 in Japan there were 119 incidents involving 183 people but only seven people died. Only a few cases have been reported in the United States, and outbreaks in countries outside the Indo-Pacific area are rare. In Haiti, tetrodotoxin was thought to have been used in voodoo preparations, in so-called zombie poisons. Subsequent careful analysis has however repeatedly called early studies into question on technical grounds, and failed to identify the toxin in any preparation. Discussion of the matter has therefore all but disappeared from the primary literature since the early 1990s. 
Kao and Yasumoto concluded in the first of their papers in 1986 that "the widely circulated claim in the lay press to the effect that tetrodotoxin is the causal agent in the initial zombification process is without factual foundation." Genetic background is not a factor in susceptibility to tetrodotoxin poisoning. This toxicosis may be avoided by not consuming animal species known to contain tetrodotoxin, principally pufferfish; other tetrodotoxic species are not usually consumed by humans. Fugu as a food Poisoning from tetrodotoxin is of particular public health concern in Japan, where fugu is a traditional delicacy. It is prepared and sold in special restaurants where trained and licensed chefs carefully remove the viscera to reduce the danger of poisoning. There is potential for misidentification and mislabelling, particularly of prepared, frozen fish products. Food analysis The mouse bioassay developed for paralytic shellfish poisoning (PSP) can be used to monitor tetrodotoxin in pufferfish and is the current method of choice. An HPLC method with post-column reaction with alkali and fluorescence has been developed to determine tetrodotoxin and its associated toxins. The alkali degradation products can be confirmed as their trimethylsilyl derivatives by gas chromatography/mass spectrometry. Detection in body fluids Tetrodotoxin may be quantified in serum, whole blood or urine to confirm a diagnosis of poisoning in hospitalized patients or to assist in the forensic investigation of a case of fatal overdosage. Most analytical techniques involve mass spectrometric detection following gas or liquid chromatographic separation. Modern therapeutic research Tetrodotoxin has been investigated as a possible treatment for cancer-associated pain. Early clinical trials demonstrate significant pain relief in some patients. It has also been studied in relation to migraine headaches. Mutations in one particular TTX-sensitive Na+ channel are associated with some migraine headaches, although it is unclear as to whether this has any therapeutic relevance for most people with migraine. Tetrodotoxin has been used clinically to relieve negative affects associated with heroin withdrawal. Regulation In the U.S., tetrodotoxin appears on the select agents list of the Department of Health and Human Services, and scientists must register with HHS to use tetrodotoxin in their research. However, investigators possessing less than 500 mg are exempt from regulation. Popular culture Tetrodotoxin serves as a plot device for characters to fake death, as in the films Hello Again (1987), The Serpent and the Rainbow (1988), The A-Team (2010), Captain America: The Winter Soldier (2014), and War (2019), and in episodes of "Jane the Virgin", Miami Vice (1985), Nikita, MacGyver (season 7, episode 6, where the antidote is Datura stramonium leaf), CSI: NY (season 4, episode 9, "Boo"), and Chuck. In Law Abiding Citizen (2009) and Alex Cross (2012), its paralysis is presented as a method of assisting torture. The toxin was also referenced in "synthetic form" in season 1, episode 2, of the series "FBI". The toxin is used as a weapon in both the second season of Archer, in Covert Affairs and in the Inside No. 9 episode "The Riddle of the Sphinx". In Columbo, episode 2 of season 7, fugu is used to kill the antagonists victim. In The Apothecary Diaries light novel, as well as the respective manga and anime adaptations, fugu toxin is encountered across multiple mystery arcs. 
Based on the presumption that tetrodotoxin is not always fatal, but at near-lethal doses can leave a person extremely unwell with the person remaining conscious, tetrodotoxin has been alleged to result in zombieism, and has been suggested as an ingredient in Haitian Vodou preparations. This idea first appeared in the 1938 non-fiction book Tell My Horse by Zora Neale Hurston in which there were multiple accounts of purported tetrodotoxin poisoning in Haiti by a voodoo sorcerer called the bokor. These stories were later popularized by Harvard-trained ethnobotanist Wade Davis in his 1985 book and Wes Craven's 1988 film, both titled The Serpent and the Rainbow. James Ellroy includes "blowfish toxin" as an ingredient in Haitian Vodou preparations to produce zombieism and poisoning deaths in his dark, disturbing, violent novel Blood's a Rover. But this theory has been questioned by the scientific community since the 1990s based on analytical chemistry-based tests of multiple preparations and review of earlier reports (see above). See also Clairvius Narcisse, Haitian man allegedly buried alive under the effect of TTX Tetrodocain, North Korean medical injection derived from tetrodotoxin 4-Aminopyridine Brevetoxin Ciguatoxin Conotoxin Domoic acid Neosaxitoxin Neurotoxin Okadaic acid Saxitoxin Tectin References Further reading External links Tetrodotoxin: essential data (1999) Tetrodotoxin from the Bad Bug Book at the U.S. Food and Drug Administration website New York Times, "Whatever Doesn't Kill Some Animals Can Make Them Deadly" U.S. National Library of Medicine: Hazardous Substances Databank – Tetrodotoxin Marine neurotoxins Ion channel toxins Guanidine alkaloids Alcohols Ichthyotoxins Sodium channel blockers Orthoesters Adamantane-like molecules Secondary metabolites Analgesics Non-protein ion channel toxins Zwitterions Voltage-gated sodium channel blockers Octopus toxins Amphibian toxins
Tetrodotoxin
[ "Physics", "Chemistry" ]
5,048
[ "Matter", "Chemical ecology", "Secondary metabolites", "Guanidine alkaloids", "Alkaloids by chemical classification", "Zwitterions", "Metabolism", "Ions" ]
239,038
https://en.wikipedia.org/wiki/Construction
Construction is a general term meaning the art and science of forming objects, systems, or organizations. It comes from the Latin word constructio (from com- "together" and struere "to pile up") and Old French construction. To 'construct' is a verb: the act of building, and the noun is construction: how something is built or the nature of its structure. In its most widely used context, construction covers the processes involved in delivering buildings, infrastructure, industrial facilities, and associated activities through to the end of their life. It typically starts with planning, financing, and design that continues until the asset is built and ready for use. Construction also covers repairs and maintenance work, any works to expand, extend and improve the asset, and its eventual demolition, dismantling or decommissioning. The construction industry contributes significantly to many countries' gross domestic products (GDP). Global expenditure on construction activities was about $4 trillion in 2012. In 2022, expenditure on the construction industry exceeded $11 trillion a year, equivalent to about 13 percent of global GDP. This spending was forecasted to rise to around $14.8 trillion in 2030. The construction industry promotes economic development and brings many non-monetary benefits to many countries, but it is one of the most hazardous industries. For example, about 20% (1,061) of US industry fatalities in 2019 happened in construction. History The first huts and shelters were constructed by hand or with simple tools. As cities grew during the Bronze Age, a class of professional craftsmen, like bricklayers and carpenters, appeared. Occasionally, slaves were used for construction work. In the Middle Ages, the artisan craftsmen were organized into guilds. In the 19th century, steam-powered machinery appeared, and later, diesel- and electric-powered vehicles such as cranes, excavators and bulldozers. Fast-track construction has been increasingly popular in the 21st century. Some estimates suggest that 40% of construction projects are now fast-track construction. Construction industry sectors Broadly, there are three sectors of construction: buildings, infrastructure and industrial: Building construction is usually further divided into residential and non-residential. Infrastructure, also called 'heavy civil' or 'heavy engineering', includes large public works, dams, bridges, highways, railways, water or wastewater and utility distribution. Industrial construction includes offshore construction (mainly of energy installations), mining and quarrying, refineries, chemical processing, mills and manufacturing plants. The industry can also be classified into sectors or markets. For example, Engineering News-Record (ENR), a US-based construction trade magazine, has compiled and reported data about the size of design and construction contractors. In 2014, it split the data into nine market segments: transportation, petroleum, buildings, power, industrial, water, manufacturing, sewage/waste, telecom, hazardous waste, and a tenth category for other projects. ENR used data on transportation, sewage, hazardous waste and water to rank firms as heavy contractors. The Standard Industrial Classification and the newer North American Industry Classification System classify companies that perform or engage in construction into three subsectors: building construction, heavy and civil engineering construction, and specialty trade contractors. 
There are also categories for professional services firms (e.g., engineering, architecture, surveying, project management). Building construction Building construction is the process of adding structures to areas of land, also known as real property sites. Typically, a project is instigated by or with the owner of the property (who may be an individual or an organisation); occasionally, land may be compulsorily purchased from the owner for public use. Residential construction Residential construction may be undertaken by individual land-owners (self-built), by specialist housebuilders, by property developers, by general contractors, or by providers of public or social housing (e.g.: local authorities, housing associations). Where local zoning or planning policies allow, mixed-use developments may comprise both residential and non-residential construction (e.g.: retail, leisure, offices, public buildings, etc.). Residential construction practices, technologies, and resources must conform to local building authority's regulations and codes of practice. Materials readily available in the area generally dictate the construction materials used (e.g.: brick versus stone versus timber). Costs of construction on a per square meter (or per square foot) basis for houses can vary dramatically based on site conditions, access routes, local regulations, economies of scale (custom-designed homes are often more expensive to build) and the availability of skilled tradespeople. Non-residential construction Depending upon the type of building, non-residential building construction can be procured by a wide range of private and public organisations, including local authorities, educational and religious bodies, transport undertakings, retailers, hoteliers, property developers, financial institutions and other private companies. Most construction in these sectors is undertaken by general contractors. Infrastructure construction Civil engineering covers the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, tunnels, airports, water and sewerage systems, pipelines, and railways. Some general contractors have expertise in civil engineering; civil engineering contractors are firms dedicated to work in this sector, and may specialise in particular types of infrastructure. Industrial construction Industrial construction includes offshore construction (mainly of energy installations: oil and gas platforms, wind power), mining and quarrying, refineries, breweries, distilleries and other processing plants, power stations, steel mills, warehouses and factories. Construction processes Some construction projects are small renovations or repair jobs, like repainting or fixing leaks, where the owner may act as designer, paymaster and laborer for the entire project. However, more complex or ambitious projects usually require additional multi-disciplinary expertise and manpower, so the owner may commission one or more specialist businesses to undertake detailed planning, design, construction and handover of the work. 
Often the owner will appoint one business to oversee the project (this may be a designer, a contractor, a construction manager, or other advisors); such specialists are normally appointed for their expertise in project delivery and construction management and will help the owner define the project brief, agree on a budget and schedule, liaise with relevant public authorities, and procure materials and the services of other specialists (the supply chain, comprising subcontractors and materials suppliers). Contracts are agreed for the delivery of services by all businesses, alongside other detailed plans aimed at ensuring legal, timely, on-budget and safe delivery of the specified works. Design, finance, and legal aspects overlap and interrelate. The design must be not only structurally sound and appropriate for the use and location, but must also be financially possible to build, and legal to use. The financial structure must be adequate to build the design provided and must pay amounts that are legally owed. Legal structures integrate design with other activities and enforce financial and other construction processes. These processes also affect procurement strategies. Clients may, for example, appoint a business to design the project, after which a competitive process is undertaken to appoint a lead contractor to construct the asset (design–bid–build); they may appoint a business to lead both design and construction (design-build); or they may directly appoint a designer, contractor and specialist subcontractors (construction management). Some forms of procurement emphasize collaborative relationships (partnering, alliancing) between the client, the contractor, and other stakeholders within a construction project, seeking to ameliorate often highly competitive and adversarial industry practices. DfMA (design for manufacture and assembly) approaches also emphasize early collaboration with manufacturers and suppliers regarding products and components. Construction or refurbishment work in a "live" environment (where residents or businesses remain living in or operating on the site) requires particular care, planning and communication. Planning When applicable, a proposed construction project must comply with local land-use planning policies including zoning and building code requirements. A project will normally be assessed (by the 'authority having jurisdiction', AHJ, typically the municipality where the project will be located) for its potential impacts on neighbouring properties, and upon existing infrastructure (transportation, social infrastructure, and utilities including water supply, sewerage, electricity, telecommunications, etc.). Data may be gathered through site analysis, site surveys and geotechnical investigations. Construction normally cannot start until planning permission has been granted, and may require preparatory work to ensure relevant infrastructure has been upgraded before building work can commence. Preparatory works will also include surveys of existing utility lines to avoid damage-causing outages and other hazardous situations. Some legal requirements come from malum in se considerations, or the desire to prevent indisputably bad phenomena, e.g. explosions or bridge collapses. Other legal requirements come from malum prohibitum considerations, or factors that are a matter of custom or expectation, such as isolating businesses from a business district or residences from a residential district. 
An attorney may seek changes or exemptions in the law that governs the land where the building will be built, either by arguing that a rule is inapplicable (the bridge design will not cause a collapse), or that the custom is no longer needed (acceptance of live-work spaces has grown in the community). During the construction of a building, a municipal building inspector usually inspects the ongoing work periodically to ensure that construction adheres to the approved plans and the local building code. Once construction is complete, any later changes made to a building or other asset that affect safety, including its use, expansion, structural integrity, and fire protection, usually require municipality approval. Finance Depending on the type of project, mortgage bankers, accountants, and cost engineers may participate in creating an overall plan for the financial management of a construction project. The presence of the mortgage banker is highly likely, even in relatively small projects since the owner's equity in the property is the most obvious source of funding for a building project. Accountants act to study the expected monetary flow over the life of the project and to monitor the payouts throughout the process. Professionals including cost engineers, estimators and quantity surveyors apply expertise to relate the work and materials involved to a proper valuation. Financial planning ensures adequate safeguards and contingency plans are in place before the project is started, and ensures that the plan is properly executed over the life of the project. Construction projects can suffer from preventable financial problems. Underbids happen when builders ask for too little money to complete the project. Cash flow problems exist when the present amount of funding cannot cover the current costs for labour and materials; such problems may arise even when the overall budget is adequate, presenting a temporary issue. Cost overruns with government projects have occurred when the contractor identified change orders or project changes that increased costs, which are not subject to competition from other firms as they have already been eliminated from consideration after the initial bid. Fraud is also an issue of growing significance within construction. Large projects can involve highly complex financial plans and often start with a conceptual cost estimate performed by a building estimator. As portions of a project are completed, they may be sold, supplanting one lender or owner for another, while the logistical requirements of having the right trades and materials available for each stage of the building construction project carry forward. Public–private partnerships (PPPs) or private finance initiatives (PFIs) may also be used to help deliver major projects. According to McKinsey in 2019, the "vast majority of large construction projects go over budget and take 20% longer than expected". Legal A construction project is a complex net of construction contracts and other legal obligations, each of which all parties must carefully consider. A contract is the exchange of a set of obligations between two or more parties, and provides structures to manage issues. For example, construction delays can be costly, so construction contracts set out clear expectations and clear paths to manage delays. Poorly drafted contracts can lead to confusion and costly disputes. 
At the start of a project, legal advisors seek to identify ambiguities and other potential sources of trouble in the contract structures, and to present options for preventing problems. During projects, they work to avoid and resolve conflicts that arise. In each case, the lawyer facilitates an exchange of obligations that matches the reality of the project. Procurement Traditional or Design-bid-build Design-bid-build is the most common and well-established method of construction procurement. In this arrangement, the architect, engineer or builder acts for the client as the project coordinator. They design the works, prepare specifications and design deliverables (models, drawings, etc.), administer the contract, tender the works, and manage the works from inception to completion. In parallel, there are direct contractual links between the client and the main contractor, who, in turn, has direct contractual relationships with subcontractors. The arrangement continues until the project is ready for handover. Design-build Design-build became more common from the late 20th century, and involves the client contracting a single entity to provide design and construction. In some cases, the design-build package can also include finding the site, arranging funding and applying for all necessary statutory consents. Typically, the client invites several Design & Build (D&B) contractors to submit proposals to meet the project brief and then selects a preferred supplier. Often this will be a consortium involving a design firm and a contractor (sometimes more than one of each). In the United States, departments of transportation usually use design-build contracts as a way of progressing projects where states lack the skills or resources, particularly for very large projects. Construction management In a construction management arrangement, the client enters into separate contracts with the designer (architect or engineer), a construction manager, and individual trade contractors. The client takes on the contractual role, while the construction or project manager provides the active role of managing the separate trade contracts, and ensuring that they complete all work smoothly and effectively together. This approach is often used to speed up procurement processes, to allow the client greater flexibility in design variation throughout the contract, to enable the appointment of individual work contractors, to separate contractual responsibility on each individual throughout the contract, and to provide greater client control. Design In the industrialized world, construction usually involves the translation of designs into reality. Most commonly (i.e.: in a design-bid-build project), the design team is employed by (i.e. in contract with) the property owner. Depending upon the type of project, a design team may include architects, civil engineers, mechanical engineers, electrical engineers, structural engineers, fire protection engineers, planning consultants, architectural consultants, and archaeological consultants. A 'lead designer' will normally be identified to help coordinate different disciplinary inputs to the overall design. This may be aided by integration of previously separate disciplines (often undertaken by separate firms) into multi-disciplinary firms with experts from all related fields, or by firms establishing relationships to support design-build processes. 
The increasing complexity of construction projects creates the need for design professionals who are trained in all phases of a project's life-cycle and who develop an appreciation of the asset as an advanced technological system requiring close integration of many sub-systems and their individual components, including sustainability. For buildings, building engineering is an emerging discipline that attempts to meet this new challenge. Traditionally, design has involved the production of sketches, architectural and engineering drawings, and specifications. Until the late 20th century, drawings were largely hand-drafted; adoption of computer-aided design (CAD) technologies then improved design productivity, while the 21st-century introduction of building information modeling (BIM) processes has involved the use of computer-generated models that can be used in their own right or to generate drawings and other visualisations as well as capturing non-geometric data about building components and systems. On some projects, work on-site will not start until design work is largely complete; on others, some design work may be undertaken concurrently with the early stages of on-site activity (for example, work on a building's foundations may commence while designers are still working on the detailed designs of the building's internal spaces). Some projects may include elements that are designed for off-site construction (see also prefabrication and modular building) and are then delivered to the site ready for erection, installation or assembly. On-site construction Once contractors and other relevant professionals have been appointed and designs are sufficiently advanced, work may commence on the project site. Typically, a construction site will include a secure perimeter to restrict unauthorised access, site access control points, office and welfare accommodation for personnel from the main contractor and other firms involved in the project team, and storage areas for materials, machinery and equipment. According to the McGraw-Hill Dictionary of Architecture and Construction's definition, construction may be said to have started when the first feature of the permanent structure has been put in place, such as pile driving, or the pouring of slabs or footings. Commissioning and handover Commissioning is the process of verifying that all subsystems of a new building (or other assets) work as intended to achieve the owner's project requirements and as designed by the project's architects and engineers. Defects liability period A period after handover (or practical completion) during which the owner may identify any shortcomings in relation to the building specification ('defects'), with a view to the contractor correcting the defects. Maintenance, repair and improvement Maintenance involves functional checks, servicing, repairing or replacing of necessary devices, equipment, machinery, building infrastructure, and supporting utilities in industrial, business, governmental, and residential installations. Demolition Demolition is the discipline of safely and efficiently tearing down buildings and other artificial structures. Demolition contrasts with deconstruction, which involves taking a building apart while carefully preserving valuable elements for reuse purposes (recycling – see also circular economy). 
Industry scale and characteristics Economic activity The output of the global construction industry was worth an estimated $10.8 trillion in 2017; in 2018 it was forecast to rise to $12.9 trillion by 2022 and to around $14.8 trillion by 2030. As a sector, construction accounts for more than 10% of global GDP (in developed countries, construction comprises 6–9% of GDP), and employs around 7% of the total employed workforce around the globe (accounting for over 273 million full- and part-time jobs in 2014). Since 2010, China has been the world's largest single construction market. The United States is the second largest construction market, with a 2018 output of $1.581 trillion. In the United States in February 2020, around $1.4 trillion worth of construction work was in progress, according to the Census Bureau, of which just over $1.0 trillion was for the private sector (split roughly 55:45% between residential and nonresidential); the remainder was public sector, predominantly for state and local government. In Armenia, the construction sector experienced growth during the late 2000s. According to the National Statistical Service, Armenia's construction sector generated approximately 20% of Armenia's GDP during the first and second quarters of 2007. In 2009, according to the World Bank, 30% of Armenia's economy came from the construction sector. In Vietnam, the construction industry plays an important role in the national economy. The Vietnamese construction industry has been one of the fastest growing in the Asia-Pacific region in recent years. The market was valued at nearly $60 billion in 2021. In the first half of 2022, Vietnam's construction industry growth rate reached 5.59%. In 2022, Vietnam's construction industry accounted for more than 6% of the country's GDP, equivalent to over 589.7 billion Vietnamese dong. The combined industry and construction sector accounts for 38.26% of Vietnam's GDP. At the same time, construction has been one of the most attractive sectors for foreign direct investment (FDI) in recent years. Construction is a major source of employment in most countries; high reliance on small businesses and under-representation of women are common traits. For example: In the US, construction employed around 11.4m people in 2020, with a further 1.8m employed in architectural, engineering, and related professional services – equivalent to just over 8% of the total US workforce. These construction workers were employed by over 843,000 organisations, of which 838,000 were privately held businesses. In March 2016, 60.4% of construction workers were employed by businesses with fewer than 50 staff. Women are substantially underrepresented (relative to their share of total employment), comprising 10.3% of the US construction workforce, and 25.9% of professional services workers, in 2019. The United Kingdom construction sector contributed £117 billion (6%) to UK GDP in 2018, and in 2019 employed 2.4m workers (6.6% of all jobs). These worked either for 343,000 'registered' construction businesses or for 'unregistered' businesses, typically self-employed contractors; just over one million small/medium-sized businesses, mainly self-employed individuals, worked in the sector in 2019, comprising about 18% of all UK businesses. Women comprised 12.5% of the UK construction workforce. According to McKinsey research, productivity growth per worker in construction has lagged behind many other industries across different countries, including in the United States and in European countries. 
In the United States, construction productivity per worker has declined by half since the 1960s. Construction GVA by country Employment Some workers may be engaged in manual labour as unskilled or semi-skilled workers; they may be skilled tradespeople; or they may be supervisory or managerial personnel. Under safety legislation in the United Kingdom, for example, construction workers are defined as people "who work for or under the control of a contractor on a construction site"; in Canada, this can include people whose work includes ensuring conformance with building codes and regulations, and those who supervise other workers. Laborers comprise a large grouping in most national construction industries. In the United States, for example, in May 2023, the construction sector employed just over 7.9 million people, of whom 859,000 were laborers, while 3.7 million were construction trades workers (including 603,000 carpenters, 559,000 electricians, 385,000 plumbers, and 321,000 equipment operators). As in most business sectors, there is also substantial white-collar employment in construction: out of 7.9 million US construction sector workers, 681,000 were recorded by the United States Department of Labor in May 2023 as in 'office and administrative support occupations', 620,000 in 'management occupations' and 480,000 in 'business and financial operations occupations'. Large-scale construction requires collaboration across multiple disciplines. A project manager normally manages the budget on the job, and a construction manager, design engineer, construction engineer or architect supervises it. Those involved with the design and execution must consider zoning requirements and legal issues, the environmental impact of the project, scheduling, budgeting and bidding, construction site safety, availability and transportation of building materials, logistics, and inconvenience to the public, including that caused by construction delays. Some models and policy-making organisations promote the engagement of local labour in construction projects as a means of tackling social exclusion and addressing skill shortages. In the UK, the Joseph Rowntree Foundation reported in 2000 on 25 projects which had aimed to offer training and employment opportunities for locally based school leavers and unemployed people. The Foundation published "a good practice resource book" in this regard at the same time (Macfarlane, R., Using local labour in construction: A good practice resource book, The Policy Press/Joseph Rowntree Foundation, 2000). Use of local labour and local materials was specified for the construction of the Danish Storebaelt bridge, but there were legal issues which were challenged in court and addressed by the European Court of Justice in 1993. The court held that a contract condition requiring use of local labour and local materials was incompatible with EU treaty principles. Later UK guidance noted that social and employment clauses, where used, must be compatible with relevant EU regulation. Employment of local labour was identified as one of several social issues which could potentially be incorporated in a sustainable procurement approach, although the interdepartmental Sustainable Procurement Group recognised that "there is far less scope to incorporate [such] social issues in public procurement than is the case with environmental issues". There are many routes to the different careers within the construction industry. 
There are three main tiers of construction workers based on educational background and training, which vary by country: Unskilled and semi-skilled workers Unskilled and semi-skilled workers provide general site labor, often have few or no construction qualifications, and may receive basic site training. Skilled tradespeople Skilled tradespeople have typically served apprenticeships (sometimes in labor unions) or received technical training; this group also includes on-site managers who possess extensive knowledge and experience in their craft or profession. Skilled manual occupations include carpenters, electricians, plumbers, ironworkers, heavy equipment operators and masons, as well as those involved in project management. In the UK these require further education qualifications, often in vocational subject areas, undertaken either directly after completing compulsory education or through "on the job" apprenticeships. Professional, technical or managerial personnel Professional, technical and managerial personnel often have higher education qualifications, usually graduate degrees, and are trained to design and manage construction processes. These roles require more training as they demand greater technical knowledge, and involve more legal responsibility. Example roles (and qualification routes) include: Architect – Will usually have studied architecture to degree level, and then undertaken further study and gained professional experience. In many countries, the title of "architect" is protected by law, strictly limiting its use to qualified people. Civil engineer – Typically holds a degree in a related subject and may only be eligible for membership of a professional institution (such as the UK's ICE) following completion of additional training and experience. In some jurisdictions, a new university graduate must hold a master's degree to become chartered, and persons with bachelor's degrees may become Incorporated Engineers. Building services engineer – May also be referred to as an "M&E" or "mechanical, electrical, and plumbing (MEP) engineer" and typically holds a degree in mechanical or electrical engineering. Project manager – Typically holds a 4-year or greater higher education qualification, but are often also qualified in another field such as architecture, civil engineering or quantity surveying. Structural engineer – Typically holds a bachelor's or master's degree in structural engineering. Quantity surveyor – Typically holds a bachelor's degree in quantity surveying. UK chartered status is gained from the Royal Institution of Chartered Surveyors. Safety Construction is one of the most dangerous occupations in the world, incurring more occupational fatalities than any other sector in both the United States and in the European Union. In the US in 2019, 1,061, or about 20%, of worker fatalities in private industry occurred in construction. In 2017, more than a third of US construction fatalities (366 out of 971 total fatalities) were the result of falls; in the UK, half of the average 36 fatalities per annum over a five-year period to 2021 were attributed to falls from height. Proper safety equipment such as harnesses, hard hats and guardrails and procedures such as securing ladders and inspecting scaffolding can curtail the risk of occupational injuries in the construction industry. Other major causes of fatalities in the construction industry include electrocution, transportation accidents, and trench cave-ins. 
Other safety risks for workers in construction include hearing loss due to high noise exposure, musculoskeletal injury, chemical exposure, and high levels of stress. In addition, the high turnover of workers in the construction industry makes it difficult to restructure work practices in individual workplaces or with individual workers. Construction has been identified by the National Institute for Occupational Safety and Health (NIOSH) as a priority industry sector in the National Occupational Research Agenda (NORA) to identify and provide intervention strategies regarding occupational health and safety issues. A study conducted in 2022 found a "significant effect of air pollution exposure on construction-related injuries and fatalities", especially exposure to nitrogen dioxide. Sustainability Sustainability is an aspect of "green building", defined by the United States Environmental Protection Agency (EPA) as "the practice of creating structures and using processes that are environmentally responsible and resource-efficient throughout a building's life-cycle from siting to design, construction, operation, maintenance, renovation and deconstruction." Decarbonising construction The construction industry may require transformation at pace and at scale if it is to contribute successfully to achieving the target set out in the Paris Agreement of limiting global temperature rise to 1.5°C above pre-industrial levels. The World Green Building Council has stated that buildings and infrastructure around the world can achieve 40% less embodied carbon emissions, but that this can only be achieved through urgent transformation. Conclusions from industry leaders have suggested that the net zero transformation is likely to be challenging for the construction industry, but that it also presents an opportunity. Action is demanded from governments, standards bodies, the construction sector, and the engineering profession to meet the decarbonisation targets. In 2021, the National Engineering Policy Centre published its report Decarbonising Construction: Building a new net zero industry, which outlined key areas to decarbonise the construction sector and the wider built environment. This report set out around 20 different recommendations to transform and decarbonise the construction sector, including recommendations for engineers, the construction industry and decision makers, and outlined six overarching 'system levers' where action taken now will result in rapid decarbonisation of the construction sector. These levers are: Setting and stipulating progressive targets for carbon reduction Embedding quantitative whole-life carbon assessment into public procurement Increasing design efficiency, materials reuse and retrofit of buildings Improving whole-life carbon performance Improving skills for net zero Adopting a joined-up, systems approach to decarbonisation across the construction sector and with other sectors Progress is being made internationally to decarbonise the sector, including improvements to sustainable procurement practice such as the CO2 performance ladder in the Netherlands and the Danish Partnership for Green Public Procurement. There are also demonstrations of circular economy principles applied in practice, such as Circl, ABN AMRO's sustainable pavilion, and the Brighton Waste House. See also Notes References
Construction
[ "Engineering" ]
6,276
[ "Construction" ]
239,050
https://en.wikipedia.org/wiki/Project%20manager
A project manager is a professional in the field of project management. Project managers are responsible for the planning, procurement and execution of a project, in any undertaking that has a defined scope, a defined start and a defined finish, regardless of industry. As the project representative, a project manager is the first point of contact for any issues or discrepancies arising from the heads of the various departments in an organization before the problem escalates to higher authorities. Project management is the responsibility of a project manager. This individual seldom participates directly in the activities that produce the result, but rather strives to maintain the progress, mutual interaction and tasks of various parties in such a way that reduces the risk of overall failure, maximizes benefits, and minimizes costs. Overview A project manager is the person responsible for accomplishing the project objectives. Key project management responsibilities include defining and communicating project objectives that are clear, useful and attainable procuring the project requirements like workforce, required information, various agreements and material or technology needed to accomplish project objectives managing the constraints of the project management triangle, which are cost, time, scope and quality A project manager is a client representative and has to determine and implement the exact needs of the client, based on knowledge of the organization they are representing. Expertise is required in the domain in which the project manager is working in order to handle all aspects of the project efficiently. The ability to adapt to the various internal procedures of the client and to form close links with the nominated representatives is essential in ensuring that the key issues of cost, time, quality and, above all, client satisfaction can be realized. Project management key topics Important areas of project management may include: specifying reasons for the importance of a project specifying the quality of deliverables resource-estimation estimating timescales negotiating investment, corporate agreement and funding implementation of a management plan in a project team-building and motivation risk assessments and changes in a project maintaining sustaining projects monitoring progress against plans stakeholder management provider-management closing the project George Roth and Hilary Bradbury identify a desire for more non-authoritarian leadership in project work. Project tools Some tools, knowledge and techniques for managing projects may be unique to project management - for example: work-breakdown structures, critical-path analysis and earned-value management (a brief illustrative calculation appears at the end of this passage). Understanding and applying the tools and techniques which are generally recognized as good practices is not sufficient on its own for effective project management. Effective project management requires that the project manager understands and uses the knowledge and skills from at least four areas of expertise. Examples are PMBOK, Application Area Knowledge (standards and regulations set forth by ISO for project management), General Management Skills and Project Environment Management. There are many options for project-management software to assist in executing projects for project managers and any associated teams. Project teams When recruiting and building an effective team, the manager must consider not only the technical skills of each team member, but also the critical roles of and chemistry between workers. 
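To make the earned-value idea mentioned under "Project tools" above concrete, the short sketch below computes the two standard earned-value indices from planned value, earned value and actual cost. It is purely illustrative: the figures are invented for the example and the helper function is not taken from any particular project-management tool.

def earned_value_indices(pv: float, ev: float, ac: float) -> tuple[float, float]:
    """Return (CPI, SPI) from planned value, earned value and actual cost."""
    cpi = ev / ac  # cost performance index; < 1 means the work done has cost more than budgeted
    spi = ev / pv  # schedule performance index; < 1 means less work completed than planned
    return cpi, spi

# Hypothetical status three months into a project (all figures invented):
cpi, spi = earned_value_indices(pv=120_000, ev=100_000, ac=110_000)
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")  # CPI = 0.91, SPI = 0.83

Indices below 1, as here, would flag exactly the kind of cost and schedule pressure that the project manager is expected to monitor and manage.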
A project team has mainly three separate components: project manager, core team and contracted team. Risk Most of the project-management issues that influence a project arise from risk, which in turn arises from uncertainty. Successful project managers focus on this as their main concern and attempt to reduce risk significantly, often by adhering to a policy of open communication, ensuring that project participants can voice their opinions and concerns. Responsibilities The project manager is accountable for ensuring that everyone on the team knows and executes his or her role, feels empowered and supported in the role, knows the roles of the other team members and acts upon the belief that those roles will be performed. The specific responsibilities of the project manager may vary depending on the industry, the company size, the company maturity, and the company culture. However, some responsibilities are common to all project managers, including: Developing the project plans Managing the project stakeholders Managing communication Managing the project team Managing the project risks Managing the project schedule Managing the project budget Managing the project conflicts Managing the project delivery Contract administration Types Architectural project manager Architectural project managers are project managers in the field of architecture. They have many of the same skills as their counterparts in the construction industry, and will often work closely with the construction project manager in the office of the general contractor (GC) while, at the same time, coordinating the work of the design team and the numerous consultants who contribute to a construction project, and managing communication with the client. The issues of budget, scheduling, and quality control are the responsibility of the project manager in an architect's office. Construction manager Construction managers are primarily involved in the areas of design, bidding, contract management and construction of a project, as well as the in-between phases and post-construction. Until recently, the American construction industry lacked any level of standardization, with individual states determining the eligibility requirements within their jurisdiction. However, several trade associations based in the United States have made strides in creating a commonly accepted set of qualifications and tests to determine a project manager's competency. The Construction Management Association of America (CMAA) maintains the Certified Construction Manager (CCM) designation. The purpose of the CCM is to standardize the education, experience and professional understanding needed to practice construction management at the highest level. The Project Management Institute has made some headway toward being a standardizing body with its creation of the Project Management Professional (PMP) designation. The Constructor Certification Commission of the American Institute of Constructors holds semiannual nationwide tests. Eight American Construction Management programs require that students take these exams before they may receive their Bachelor of Science in construction management degree, and 15 other universities actively encourage their students to consider the exams. The Associated Colleges of Construction Education and the Associated Schools of Construction have made considerable progress in developing national standards for construction education programs. 
The profession has recently grown to accommodate several dozen construction management Bachelor of Science programs. Many universities have also begun offering a master's degree in project management. These programs generally are tailored to working professionals who have project management experience or project related experience; they provide a more intense and in depth education surrounding the knowledge areas within the project management body of knowledge. The United States Navy construction battalions, nicknamed the SeaBees, puts their command through strenuous training and certifications at every level. To become a chief petty officer in the SeaBees is equivalent to a BS in construction management with the added benefit of several years of experience to their credit. See ACE accreditation. Engineering project manager In engineering, project management involves seeing a product or device through the developing and manufacturing stages, working with various professionals in different fields of engineering and manufacturing to go from concept to finished product. Optionally, this can include different versions and standards as required by different countries, requiring knowledge of laws, requirements and infrastructure. Insurance claim project manager In the insurance industry project managers often oversee and manage the restoration of a client's home/office after a fire, flood, or other disaster, covering the fields from electronics through to the demolition and construction contractors. IT project manager IT project management generally falls into two categories, namely software (development) project manager and infrastructure project manager. Software project manager A software project manager has many of the same skills as their counterparts in other industries. Beyond the skills normally associated with traditional project management in industries such as construction and manufacturing, a software project manager will typically have an extensive background in software development. Many software project managers hold a degree in computer science, information technology, management of information systems or another related field. In traditional project management a heavyweight, predictive methodology such as the waterfall model is often employed, but software project managers must also be skilled in more lightweight, adaptive methodologies such as DSDM, Scrum and XP. These project management methodologies are based on the uncertainty of developing a new software system and advocate smaller, incremental development cycles. These incremental or iterative cycles are time boxed (constrained to a known period of time, typically from one to four weeks) and produce a working subset of the entire system deliverable at the end of each iteration. The increasing adoption of lightweight approaches is due largely to the fact that software requirements are very susceptible to change, and it is extremely difficult to illuminate all the potential requirements in a single project phase before the software development commences. The software project manager is also expected to be familiar with the software development life cycle (SDLC). This may require in-depth knowledge of requirements solicitation, application development, logical and physical database design and networking. This knowledge is typically the result of the aforementioned education and experience. 
There is no widely accepted certification for software project managers, but many will hold the Project Management Professional (PMP) designation offered by the Project Management Institute, PRINCE2 or an advanced degree in project management, such as an MSPM or another graduate degree in technology management. IT infrastructure project management An infrastructure IT PM is concerned with the nuts and bolts of the IT department, including computers, servers, storage, networking, and such aspects of them as backup, business continuity, upgrades, replacement, and growth. Often, a secondary data center will be constructed in a remote location to help protect the business from outages caused by natural disasters or weather. Recently, cyber security has become a significant growth area within IT infrastructure management. The infrastructure PM usually has an undergraduate degree in engineering or computer science, while a master's degree in project management is required for senior-level positions. Along with the formal education, most senior-level PMs are certified by the Project Management Institute as Project Management Professionals. PMI also has several additional certification options, but PMP is by far the most popular. Infrastructure PMs are responsible for managing projects that have budgets from a few thousand dollars up to many millions of dollars. They must understand the business and the business goals of the sponsor and the capabilities of the technology in order to reach the desired goals of the project. The most difficult part of the infrastructure PM's job may be this translation of business needs and wants into technical specifications. Oftentimes, business analysts are engaged to help with this requirement. The team size of a large infrastructure project may run into several hundred engineers and technicians, many of whom have strong personalities and require strong leadership if the project goals are to be met. Due to the high operations expense of maintaining a large staff of highly skilled IT engineering talent, many organizations outsource their infrastructure implementations and upgrades to third-party companies. Many of these companies have strong project management organizations with the ability not only to manage their clients' projects, but also to generate high-quality revenue at the same time. Social science research project manager Project managers in the field of social science have many of the same skills as their counterparts in the IT industry. For example, project managers for the 2020 United States Census followed program and project management policies, frameworks, and control processes for all projects established within the program. They managed projects designed as part of the program to produce official statistics, such as projects in systems engineering, questionnaire design, sampling, data collection, and public communications. Project managers of qualitative research studies must also manage scope, schedule, and cost related to research design, participant recruitment, interviewing and reporting, as well as stakeholder engagement. See also Event planning and production Master of Science in Project Management Project engineer Project management Project portfolio management Project planning Product management Construction manager References Further reading Project Management Institute (PMI), USA US DoD (2003). Interpretive Guidance for Project Manager Positions. August 2003. Open source handbook for project managers. 
July 2006. Project management Management occupations Building engineering Product lifecycle management Computer occupations Managers
Project manager
[ "Technology", "Engineering" ]
2,343
[ "Building engineering", "Computer occupations", "Civil engineering", "Architecture" ]
239,126
https://en.wikipedia.org/wiki/William%20Jones%20%28mathematician%29
William Jones, FRS (1675 – 1 July 1749) was a Welsh mathematician best known for his use of the symbol π (the Greek letter pi) to represent the ratio of the circumference of a circle to its diameter. He was a close friend of Sir Isaac Newton and Sir Edmund Halley. In November 1711, Jones became a fellow of the Royal Society, and later served as the Royal Society's vice-president. Early life William Jones was born as the son of Siôn Siôr (John George Jones) and Elizabeth Rowland in the parish of Llanfihangel Tre'r Beirdd, west of Benllech on the Isle of Anglesey in Wales. He attended a charity school at Llanfechell, also on the Isle of Anglesey, where his mathematical talents were spotted by the local landowner Lord Bulkeley, who arranged for him to work in a merchant's counting-house in London. His main patrons were the Bulkeley family of north Wales, and later the Earl of Macclesfield. Early mathematical career Jones initially served at sea, teaching mathematics on board Navy ships between 1695 and 1702, where he became very interested in navigation and published A New Compendium of the Whole Art of Navigation in 1702, dedicated to a benefactor, John Harris. In this work he applied mathematics to navigation, studying methods of calculating position at sea. After his voyages were over he became a mathematics teacher in London, both in coffee houses and as a private tutor to George Parker, the son of the future Earl of Macclesfield, and also to the future Baron Hardwicke. He also held a number of undemanding posts in government offices with the help of his former pupils. Later career Jones published Synopsis Palmariorum Matheseos in 1706, a work which was intended for beginners and which included theorems on differential calculus and infinite series. This used π for the ratio of circumference to diameter, following earlier abbreviations for the Greek word periphery (περιφέρεια) by William Oughtred and others. His 1711 work Analysis per quantitatum series, fluxiones ac differentias introduced the dot notation for differentiation in calculus. He was noticed and befriended by two of Britain's foremost mathematicians – Edmund Halley and Sir Isaac Newton – and was elected a fellow of the Royal Society in 1711. He later became the editor and publisher of many of Newton's manuscripts and built up an extraordinary library that was one of the greatest collections of books on science and mathematics ever known, and only recently fully dispersed. Jones' Last Will and Testament left his library, along with his gold watch, to the Earl of Macclesfield in acknowledgement of his support. Personal life He married twice, firstly the widow of his counting-house employer, whose property he inherited on her death, and secondly, in 1731, Mary, the 22-year-old daughter of cabinet-maker George Nix, with whom he had two surviving children. His son, also named William Jones and born in 1746, was a renowned philologist who established links between Latin, Greek and Sanskrit, leading to the concept of the Indo-European language group. References External links William Jones and other important Welsh mathematicians William Jones and his Circle: The Man who invented Pi Pi Day 2015: meet the man who invented π 1675 births 1749 deaths People from Anglesey 17th-century English mathematicians 18th-century British mathematicians Fellows of the Royal Society 17th-century Welsh scientists 18th-century Welsh scientists 18th-century Welsh mathematicians Pi-related people
William Jones (mathematician)
[ "Mathematics" ]
720
[ "Pi-related people", "Pi" ]
239,462
https://en.wikipedia.org/wiki/Phosphodiesterase
A phosphodiesterase (PDE) is an enzyme that breaks a phosphodiester bond. Usually, phosphodiesterase refers to cyclic nucleotide phosphodiesterases, which have great clinical significance and are described below. However, there are many other families of phosphodiesterases, including phospholipases C and D, autotaxin, sphingomyelin phosphodiesterase, DNases, RNases, and restriction endonucleases (which all break the phosphodiester backbone of DNA or RNA), as well as numerous less-well-characterized small-molecule phosphodiesterases. The cyclic nucleotide phosphodiesterases comprise a group of enzymes that degrade the phosphodiester bond in the second messenger molecules cAMP and cGMP. They regulate the localization, duration, and amplitude of cyclic nucleotide signaling within subcellular domains. PDEs are therefore important regulators of signal transduction mediated by these second messenger molecules. History These multiple forms (isoforms or subtypes) of phosphodiesterase were isolated from rat brain using polyacrylamide gel electrophoresis in the early 1970s by Weiss and coworkers, and were soon afterward shown to be selectively inhibited by a variety of drugs in brain and other tissues, also by Weiss and coworkers. The potential for selective phosphodiesterase inhibitors to be used as therapeutic agents was predicted in the 1970s by Weiss and coworkers. This prediction has now come to pass in a variety of fields (e.g. sildenafil as a PDE5 inhibitor and Rolipram as a PDE4 inhibitor). Nomenclature and classification The PDE nomenclature signifies the PDE family with an Arabic numeral, then a capital letter denotes the gene in that family, and a second and final Arabic numeral then indicates the splice variant derived from a single gene (e.g., PDE1C3: family 1, gene C, splicing variant 3). The superfamily of PDE enzymes is classified into 11 families, namely PDE1-PDE11, in mammals. The classification is based on: amino acid sequences substrate specificities regulatory properties pharmacological properties tissue distribution Different PDEs of the same family are functionally related despite the fact that their amino acid sequences can show considerable divergence. PDEs have different substrate specificities. Some are cAMP-selective hydrolases (PDE4, 7 and 8); others are cGMP-selective (PDE5, 6, and 9). Others can hydrolyse both cAMP and cGMP (PDE1, 2, 3, 10, and 11). PDE3 is sometimes referred to as cGMP-inhibited phosphodiesterase. Although PDE2 can hydrolyze both cyclic nucleotides, binding of cGMP to the regulatory GAF-B domain will increase cAMP affinity and hydrolysis to the detriment of cGMP. This mechanism, as well as others, allows for cross-regulation of the cAMP and cGMP pathways. PDE12 cleaves 2',5'-phosphodiester bond linking adenosines of the 5'-triphosphorylated oligoadenylates. PDE12 is not a member of the cyclic nucleotide phosphodiesterase superfamily that contains PDE1 through PDE11. Clinical significance Phosphodiesterase enzymes have been shown to be different in different types of cells, including normal and leukemic lymphocytes and are often targets for pharmacological inhibition due to their unique tissue distribution, structural properties, and functional properties. Inhibitors of PDE can prolong or enhance the effects of physiological processes mediated by cAMP or cGMP by inhibition of their degradation by PDE. 
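The naming rule described under "Nomenclature and classification" above is mechanical enough to capture in a few lines of code. The sketch below only illustrates that rule; the function name and regular expression are invented for this example and are not part of any standard bioinformatics library.

import re

# Illustrative parser for cyclic nucleotide PDE isoform names such as "PDE1C3":
# an Arabic numeral identifies the family, a capital letter the gene within that
# family, and a final numeral the splice variant derived from that gene.
PDE_NAME = re.compile(r"^PDE(?P<family>\d{1,2})(?P<gene>[A-Z])(?P<variant>\d+)$")

def parse_pde_name(name: str) -> dict:
    match = PDE_NAME.match(name)
    if match is None:
        raise ValueError(f"not a fully qualified PDE isoform name: {name!r}")
    return {
        "family": int(match.group("family")),    # e.g. 1 -> the PDE1 family
        "gene": match.group("gene"),             # e.g. "C" -> gene C of that family
        "variant": int(match.group("variant")),  # e.g. 3 -> splice variant 3
    }

print(parse_pde_name("PDE1C3"))  # {'family': 1, 'gene': 'C', 'variant': 3}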
Sildenafil (Viagra) is an inhibitor of cGMP-specific phosphodiesterase type 5, which enhances the vasodilatory effects of cGMP in the corpus cavernosum and is used to treat erectile dysfunction. Sildenafil is also currently being investigated for its myo- and cardioprotective effects, with particular interest being given to the compound's therapeutic value in the treatment of Duchenne muscular dystrophy and benign prostatic hyperplasia. Paraxanthine, the main metabolite of caffeine, is another cGMP-specific phosphodiesterase inhibitor which inhibits PDE9, a cGMP-preferring phosphodiesterase. PDE9 is expressed as highly as PDE5 in the corpus cavernosum. Pharmacological effect of PDE inhibitors PDE inhibitors have been identified as new potential therapeutics in areas such as pulmonary arterial hypertension, coronary heart disease, dementia, depression, asthma, COPD, protozoal infections (including malaria) and schizophrenia. PDEs are also important in seizure incidence. For example, PDE compromised the antiepileptic activity of adenosine. In addition, use of a PDE inhibitor (pentoxifylline) in pentylenetetrazole-induced seizures indicated an antiepileptic effect, increasing the time latency to seizure incidence and decreasing the seizure duration in vivo. Cilostazol (Pletal) inhibits PDE3. This inhibition makes red blood cells better able to bend. This is useful in conditions such as intermittent claudication, as the cells can maneuver through constricted veins and arteries more easily. Dipyridamole inhibits PDE-3 and PDE-5. This leads to intraplatelet accumulation of cAMP and/or cGMP, inhibiting platelet aggregation. Zaprinast inhibits the growth of asexual blood-stage malaria parasites (Plasmodium falciparum) in vitro with an ED50 value of 35 μM, and inhibits PfPDE1, a P. falciparum cGMP-specific phosphodiesterase, with an IC50 value of 3.8 μM. Xanthines such as caffeine and theobromine are cAMP-phosphodiesterase inhibitors. However, the inhibitory effect of xanthines on phosphodiesterases is only seen at dosages higher than those people normally consume. Sildenafil, Tadalafil and Vardenafil are PDE-5 inhibitors and are widely used in the treatment of erectile dysfunction. Other applications Recently a PDE was found to break down and release human body grime found on laundry. With the help of this newly discovered nuclease, the yellow stains and odors that normally remain on clothes washed with classical detergents can easily be removed. References External links EC 3.1.4 Molecular biology
Phosphodiesterase
[ "Chemistry", "Biology" ]
1,454
[ "Biochemistry", "Molecular biology" ]
240,060
https://en.wikipedia.org/wiki/Regulatory%20sequence
A regulatory sequence is a segment of a nucleic acid molecule which is capable of increasing or decreasing the expression of specific genes within an organism. Regulation of gene expression is an essential feature of all living organisms and viruses. Description In DNA, regulation of gene expression normally happens at the level of RNA biosynthesis (transcription). It is accomplished through the sequence-specific binding of proteins (transcription factors) that activate or inhibit transcription. Transcription factors may act as activators, repressors, or both. Repressors often act by preventing RNA polymerase from forming a productive complex with the transcriptional initiation region (promoter), while activators facilitate formation of a productive complex. Furthermore, DNA motifs have been shown to be predictive of epigenomic modifications, suggesting that transcription factors play a role in regulating the epigenome. In RNA, regulation may occur at the level of protein biosynthesis (translation), RNA cleavage, RNA splicing, or transcriptional termination. Regulatory sequences are frequently associated with messenger RNA (mRNA) molecules, where they are used to control mRNA biogenesis or translation. A variety of biological molecules may bind to the RNA to accomplish this regulation, including proteins (e.g., translational repressors and splicing factors), other RNA molecules (e.g., miRNA) and small molecules, in the case of riboswitches. Activation and implementation A regulatory DNA sequence does not regulate unless it is activated. Different regulatory sequences are activated and then implement their regulation by different mechanisms. Enhancer activation and implementation Expression of genes in mammals can be upregulated when signals are transmitted to the promoters associated with the genes. Cis-regulatory DNA sequences that are located in DNA regions distant from the promoters of genes can have very large effects on gene expression, with some genes undergoing up to 100-fold increased expression due to such a cis-regulatory sequence. These cis-regulatory sequences include enhancers, silencers, insulators and tethering elements. Among this constellation of sequences, enhancers and their associated transcription factor proteins have a leading role in the regulation of gene expression. Enhancers are sequences of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to promoters. Multiple enhancers, each often at tens or hundred of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control expression of their common target gene. The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function specific transcription factor proteins (in 2018 Lambert et al. 
indicated there were about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer, and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, governs the level of transcription of the target gene. Mediator (coactivator) (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (RNAP II) enzyme bound to the promoter. Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the Figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it, and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of a transcription factor bound to an enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating a promoter to initiate transcription of messenger RNA from its target gene. CpG island methylation and demethylation 5-Methylcytosine (5-mC) is a methylated form of the DNA base cytosine (see figure). 5-mC is an epigenetic marker found predominantly on cytosines within CpG dinucleotides, which consist of a cytosine followed by a guanine, reading in the 5′ to 3′ direction along the DNA strand (CpG sites). About 28 million CpG dinucleotides occur in the human genome. In most tissues of mammals, on average, 70% to 80% of CpG cytosines are methylated (forming 5-methyl-CpG, or 5-mCpG). Methylated cytosines within CpG sequences often occur in groups, called CpG islands. About 59% of promoter sequences have a CpG island while only about 6% of enhancer sequences have a CpG island. CpG islands constitute regulatory sequences, since methylation of the CpG islands in the promoter of a gene can reduce or silence expression of that gene. DNA methylation regulates gene expression through interaction with methyl binding domain (MBD) proteins, such as MeCP2, MBD1 and MBD2. These MBD proteins bind most strongly to highly methylated CpG islands. These MBD proteins have both a methyl-CpG-binding domain and a transcriptional repression domain. They bind to methylated DNA and guide or direct protein complexes with chromatin remodeling and/or histone modifying activity to methylated CpG islands. MBD proteins generally repress local chromatin by means such as catalyzing the introduction of repressive histone marks or creating an overall repressive chromatin environment through nucleosome remodeling and chromatin reorganization. Transcription factors are proteins that bind to specific DNA sequences in order to regulate the expression of a given gene. The binding sequence for a transcription factor in DNA is usually about 10 or 11 nucleotides long. There are approximately 1,400 different transcription factors encoded in the human genome and they constitute about 6% of all human protein coding genes. About 94% of transcription factor binding sites that are associated with signal-responsive genes occur in enhancers while only about 6% of such sites occur in promoters. EGR1 is a transcription factor important for regulation of methylation of CpG islands. An EGR1 transcription factor binding site is frequently located in enhancer or promoter sequences. 
There are about 12,000 binding sites for EGR1 in the mammalian genome and about half of EGR1 binding sites are located in promoters and half in enhancers. The binding of EGR1 to its target DNA binding site is insensitive to cytosine methylation in the DNA. While only small amounts of EGR1 protein are detectable in cells that are un-stimulated, EGR1 translation into protein at one hour after stimulation is markedly elevated. Expression of EGR1 in various types of cells can be stimulated by growth factors, neurotransmitters, hormones, stress and injury. In the brain, when neurons are activated, EGR1 proteins are upregulated, and they bind to (recruit) pre-existing TET1 enzymes, which are highly expressed in neurons. TET enzymes can catalyze demethylation of 5-methylcytosine. When EGR1 transcription factors bring TET1 enzymes to EGR1 binding sites in promoters, the TET enzymes can demethylate the methylated CpG islands at those promoters. Upon demethylation, these promoters can then initiate transcription of their target genes. Hundreds of genes in neurons are differentially expressed after neuron activation through EGR1 recruitment of TET1 to methylated regulatory sequences in their promoters. Activation by double- or single-strand breaks About 600 regulatory sequences in promoters and about 800 regulatory sequences in enhancers appear to depend on double-strand breaks initiated by topoisomerase 2β (TOP2B) for activation. The induction of particular double-strand breaks is specific with respect to the inducing signal. When neurons are activated in vitro, just 22 TOP2B-induced double-strand breaks occur in their genomes. However, when contextual fear conditioning is carried out in a mouse, this conditioning causes hundreds of gene-associated DSBs in the medial prefrontal cortex and hippocampus, which are important for learning and memory. Such TOP2B-induced double-strand breaks are accompanied by at least four enzymes of the non-homologous end joining (NHEJ) DNA repair pathway (DNA-PKcs, KU70, KU80 and DNA LIGASE IV) (see figure). These enzymes repair the double-strand breaks within about 15 minutes to 2 hours. The double-strand breaks in the promoter are thus associated with TOP2B and at least these four repair enzymes. These proteins are present simultaneously on a single promoter nucleosome (there are about 147 nucleotides in the DNA sequence wrapped around a single nucleosome) located near the transcription start site of their target gene. The double-strand break introduced by TOP2B apparently frees the part of the promoter at an RNA polymerase–bound transcription start site to physically move to its associated enhancer. This allows the enhancer, with its bound transcription factors and mediator proteins, to directly interact with the RNA polymerase that had been paused at the transcription start site to start transcription. Similarly, topoisomerase I (TOP1) enzymes appear to be located at many enhancers, and those enhancers become activated when TOP1 introduces a single-strand break. TOP1 causes single-strand breaks in particular enhancer DNA regulatory sequences when signaled by a specific enhancer-binding transcription factor. Topoisomerase I breaks are associated with different DNA repair factors than those surrounding TOP2B breaks. In the case of TOP1, the breaks are associated most immediately with DNA repair enzymes MRE11, RAD50 and ATR. Examples Genomes can be analyzed systematically to identify regulatory regions. 
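As a toy illustration of such systematic analysis, the sketch below scans a DNA string for windows that are unusually rich in CpG dinucleotides, a crude proxy for the CpG-island-like regions discussed earlier. The window size, threshold and test sequence are arbitrary choices made for this example; real CpG-island finders also consider GC content, observed/expected CpG ratios and minimum island lengths.

def cpg_rich_windows(seq: str, window: int = 200, min_cpg: int = 15):
    # Yield (start, end, count) for windows containing many CpG sites,
    # i.e. a cytosine immediately followed by a guanine, read 5' to 3'.
    seq = seq.upper()
    for start in range(max(len(seq) - window + 1, 0)):
        chunk = seq[start:start + window]
        count = chunk.count("CG")
        if count >= min_cpg:
            yield start, start + window, count

example = "ACGCGTACGCG" * 40  # artificial CpG-dense test sequence
print(sum(1 for _ in cpg_rich_windows(example)), "windows flagged")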
Conserved non-coding sequences often contain regulatory regions, and so they are often the subject of these analyses. CAAT box CCAAT box Operator (biology) Pribnow box TATA box SECIS element, mRNA Polyadenylation signal, mRNA A-box Z-box C-box E-box G-box Insulin gene Regulatory sequences for the insulin gene are: A5 Z negative regulatory element (NRE) C2 E2 A3 cAMP response element A2 CAAT enhancer binding (CEB) C1 E1 G1 See also Regulator gene Regulation of gene expression Cis-acting element Gene regulatory network Open Regulatory Annotation Database Operon DNA binding site Promoter Trans-acting factor ORegAnno References External links ORegAnno - Open Regulatory Annotation Database ReMap - database of transcriptional regulators Gene expression
Regulatory sequence
[ "Chemistry", "Biology" ]
2,329
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Regulatory sequences" ]
240,123
https://en.wikipedia.org/wiki/Plasticity%20%28physics%29
In physics and materials science, plasticity (also known as plastic deformation) is the ability of a solid material to undergo permanent deformation, a non-reversible change of shape in response to applied forces. For example, a solid piece of metal being bent or pounded into a new shape displays plasticity as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is known as yielding. Plastic deformation is observed in most materials, particularly metals, soils, rocks, concrete, and foams. However, the physical mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations. Such defects are relatively rare in most crystalline materials, but are numerous in some and part of their crystal structure; in such cases, plastic crystallinity can result. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks. In cellular materials such as liquid foams or biological tissues, plasticity is mainly a consequence of bubble or cell rearrangements, notably T1 processes. For many ductile metals, tensile loading applied to a sample will cause it to behave in an elastic manner. Each increment of load is accompanied by a proportional increment in extension. When the load is removed, the piece returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region; now when the load is removed, some degree of extension will remain. Elastic deformation, however, is an approximation and its quality depends on the time frame considered and loading speed. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is also often referred to as "elasto-plastic deformation" or "elastic-plastic deformation". Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads. Plastic materials that have been hardened by prior deformation, such as cold forming, may need increasingly higher stresses to deform further. Generally, plastic deformation is also dependent on the deformation speed, i.e. higher stresses usually have to be applied to increase the rate of deformation. Such materials are said to deform visco-plastically. Contributing properties The plasticity of a material is directly proportional to the ductility and malleability of the material. Physical mechanisms In metals Plasticity in a crystal of pure metal is primarily caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is the plastic deformation which takes place along two planes due to a set of forces applied to a given metal piece. Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot. This property is of importance in forming, shaping and extruding operations on metals. Most metals are rendered plastic by heating and hence shaped hot. Slip systems Crystalline materials contain uniform planes of atoms organized with long-range order. Planes may slip past each other along their close-packed directions, as is shown on the slip systems page. 
The result is a permanent change of shape within the crystal and plastic deformation. The presence of dislocations increases the likelihood of planes slipping. Reversible plasticity On the nanoscale the primary plastic deformation in simple face-centered cubic metals is reversible, as long as there is no material transport in the form of cross-slip. Shape-memory alloys such as Nitinol wire also exhibit a reversible form of plasticity which is more properly called pseudoelasticity. Shear banding The presence of other defects within a crystal may entangle dislocations or otherwise prevent them from gliding. When this happens, plasticity is localized to particular regions in the material. For crystals, these regions of localized plasticity are called shear bands. Microplasticity Microplasticity is a local phenomenon in metals. It occurs for stress values where the metal is globally in the elastic domain while some local areas are in the plastic domain. Amorphous materials Crazing In amorphous materials, the discussion of "dislocations" is inapplicable, since the entire material lacks long-range order. These materials can still undergo plastic deformation. Since amorphous materials, like polymers, are not well-ordered, they contain a large amount of free volume, or wasted space. Pulling these materials in tension opens up these regions and can give materials a hazy appearance. This haziness is the result of crazing, where fibrils are formed within the material in regions of high hydrostatic stress. The material may go from an ordered appearance to a "crazy" pattern of strain and stretch marks. Cellular materials These materials plastically deform when the bending moment exceeds the fully plastic moment. This applies to open-cell foams where the bending moment is exerted on the cell walls. The foams can be made of any material with a plastic yield point, which includes rigid polymers and metals. This method of modeling the foam as beams is only valid if the ratio of the density of the foam to the density of the matter is less than 0.3. This is because beams yield axially instead of bending. In closed-cell foams, the yield strength is increased if the material is under tension because of the membrane that spans the face of the cells. Soils and sand Soils, particularly clays, display a significant amount of inelasticity under load. The causes of plasticity in soils can be quite complex and are strongly dependent on the microstructure, chemical composition, and water content. Plastic behavior in soils is caused primarily by the rearrangement of clusters of adjacent grains. Rocks and concrete Inelastic deformations of rocks and concrete are primarily caused by the formation of microcracks and sliding motions relative to these cracks. At high temperatures and pressures, plastic behavior can also be affected by the motion of dislocations in individual grains in the microstructure. Time-independent yielding and plastic flow in crystalline materials Time-independent plastic flow in both single crystals and polycrystals is defined by a critical/maximum resolved shear stress (τCRSS), initiating dislocation migration along parallel slip planes of a single slip system, thereby defining the transition from elastic to plastic deformation behavior in crystalline materials. Time-independent yielding and plastic flow in single crystals The critical resolved shear stress for single crystals is defined by Schmid’s law τCRSS = σy/m, where σy is the yield strength of the single crystal and m is the Schmid factor. 
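As a quick numerical illustration of this relation (the values below are invented, and the angle-based make-up of the Schmid factor is elaborated just afterwards): a single crystal that yields in uniaxial tension at σy = 60 MPa, with m = 2 for its most favorably oriented slip system, has τCRSS = 60/2 = 30 MPa.

def critical_resolved_shear_stress(sigma_y_mpa: float, m: float) -> float:
    # Schmid's law in the form used above: tau_CRSS = sigma_y / m.
    return sigma_y_mpa / m

print(critical_resolved_shear_stress(60.0, 2.0), "MPa")  # 30.0 MPa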
The Schmid factor comprises two variables λ and φ, defining the angle between the slip plane direction and the tensile force applied, and the angle between the slip plane normal and the tensile force applied, respectively. Notably, because m > 1, σy > τCRSS. Critical resolved shear stress dependence on temperature, strain rate, and point defects There are three characteristic regions of the critical resolved shear stress as a function of temperature. In the low temperature region 1 (T ≤ 0.25Tm), the strain rate must be high to achieve high τCRSS which is required to initiate dislocation glide and equivalently plastic flow. In region 1, the critical resolved shear stress has two components: athermal (τa) and thermal (τ*) shear stresses, arising from the stress required to move dislocations in the presence of other dislocations, and the resistance of point defect obstacles to dislocation migration, respectively. At T = T*, the moderate temperature region 2 (0.25Tm < T < 0.7Tm) is defined, where the thermal shear stress component τ* → 0, representing the elimination of point defect impedance to dislocation migration. Thus the temperature-independent critical resolved shear stress τCRSS = τa remains so until region 3 is defined. Notably, in region 2 moderate temperature time-dependent plastic deformation (creep) mechanisms such as solute-drag should be considered. Furthermore, in the high temperature region 3 (T ≥ 0.7Tm) έ can be low, contributing to low τCRSS, however plastic flow will still occur due to thermally activated high temperature time-dependent plastic deformation mechanisms such as Nabarro–Herring (NH) and Coble diffusional flow through the lattice and along the single crystal surfaces, respectively, as well as dislocation climb-glide creep. Stages of time-independent plastic flow, post yielding During the easy glide stage 1, the work hardening rate, defined by the change in shear stress with respect to shear strain (dτ/dγ) is low, representative of a small amount of applied shear stress necessary to induce a large amount of shear strain. Facile dislocation glide and corresponding flow is attributed to dislocation migration along parallel slip planes only (i.e. one slip system). Moderate impedance to dislocation migration along parallel slip planes is exhibited according to the weak stress field interactions between these dislocations, which heightens with smaller interplanar spacing. Overall, these migrating dislocations within a single slip system act as weak obstacles to flow, and a modest rise in stress is observed in comparison to the yield stress. During the linear hardening stage 2 of flow, the work hardening rate becomes high as considerable stress is required to overcome the stress field interactions of dislocations migrating on non-parallel slip planes (i.e. multiple slip systems), acting as strong obstacles to flow. Much stress is required to drive continual dislocation migration for small strains. The shear flow stress is directly proportional to the square root of the dislocation density (τflow ~ρ½), irrespective of the evolution of dislocation configurations, displaying the reliance of hardening on the number of dislocations present. Regarding this evolution of dislocation configurations, at small strains the dislocation arrangement is a random 3D array of intersecting lines. 
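As a numerical illustration of Schmid's law, the resolved shear stress on a slip system can be computed from the applied tensile stress and the two orientation angles and compared against τCRSS. The minimal sketch below uses the conventional form τ = σ·cosφ·cosλ (the article's m is the reciprocal of cosφ·cosλ, which is why m > 1 and σy > τCRSS); the applied stress, angles, and τCRSS value are hypothetical, chosen only for illustration.

```python
import math

def resolved_shear_stress(sigma_applied, phi_deg, lambda_deg):
    """Resolved shear stress on a slip system: tau = sigma * cos(phi) * cos(lambda).

    phi_deg    : angle between the slip-plane normal and the tensile axis
    lambda_deg : angle between the slip direction and the tensile axis
    Note: the article's Schmid factor m is the reciprocal of cos(phi)*cos(lambda).
    """
    phi = math.radians(phi_deg)
    lam = math.radians(lambda_deg)
    return sigma_applied * math.cos(phi) * math.cos(lam)

# Hypothetical numbers: 100 MPa tensile stress on a slip system oriented at
# phi = lambda = 45 degrees (the most favourably oriented case).
tau = resolved_shear_stress(100.0, 45.0, 45.0)
tau_crss = 40.0  # hypothetical critical resolved shear stress, MPa
print(f"resolved shear stress = {tau:.1f} MPa")
print("slip initiates" if tau >= tau_crss else "still elastic")
```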
Moderate strains correspond to cellular dislocation structures of heterogeneous dislocation distribution with large dislocation density at the cell boundaries, and small dislocation density within the cell interior. At even larger strains the cellular dislocation structure reduces in size until a minimum size is achieved. Finally, the work hardening rate becomes low again in the exhaustion/saturation of hardening stage 3 of plastic flow, as small shear stresses produce large shear strains. Notably, instances when multiple slip systems are oriented favorably with respect to the applied stress, the τCRSS for these systems may be similar and yielding may occur according to dislocation migration along multiple slip systems with non-parallel slip planes, displaying a stage 1 work-hardening rate typically characteristic of stage 2. Lastly, distinction between time-independent plastic deformation in body-centered cubic transition metals and face centered cubic metals is summarized below. Time-independent yielding and plastic flow in polycrystals Plasticity in polycrystals differs substantially from that in single crystals due to the presence of grain boundary (GB) planar defects, which act as very strong obstacles to plastic flow by impeding dislocation migration along the entire length of the activated slip plane(s). Hence, dislocations cannot pass from one grain to another across the grain boundary. The following sections explore specific GB requirements for extensive plastic deformation of polycrystals prior to fracture, as well as the influence of microscopic yielding within individual crystallites on macroscopic yielding of the polycrystal. The critical resolved shear stress for polycrystals is defined by Schmid’s law as well (τCRSS=σy/ṁ), where σy is the yield strength of the polycrystal and ṁ is the weighted Schmid factor. The weighted Schmid factor reflects the least favorably oriented slip system among the most favorably oriented slip systems of the grains constituting the GB. Grain boundary constraint in polycrystals The GB constraint for polycrystals can be explained by considering a grain boundary in the xz plane between two single crystals A and B of identical composition, structure, and slip systems, but misoriented with respect to each other. To ensure that voids do not form between individually deforming grains, the GB constraint for the bicrystal is as follows: εxxA = εxxB (the x-axial strain at the GB must be equivalent for A and B), εzzA = εzzB (the z-axial strain at the GB must be equivalent for A and B), and εxzA = εxzB (the xz shear strain along the xz-GB plane must be equivalent for A and B). In addition, this GB constraint requires that five independent slip systems be activated per crystallite constituting the GB. Notably, because independent slip systems are defined as slip planes on which dislocation migrations cannot be reproduced by any combination of dislocation migrations along other slip system’s planes, the number of geometrical slip systems for a given crystal system - which by definition can be constructed by slip system combinations - is typically greater than that of independent slip systems. Significantly, there is a maximum of five independent slip systems for each of the seven crystal systems, however, not all seven crystal systems acquire this upper limit. In fact, even within a given crystal system, the composition and Bravais lattice diversifies the number of independent slip systems (see the table below). 
In cases for which crystallites of a polycrystal do not obtain five independent slip systems, the GB condition cannot be met, and thus the time-independent deformation of individual crystallites results in cracks and voids at the GBs of the polycrystal, and soon fracture is realized. Hence, for a given composition and structure, a single crystal with fewer than five independent slip systems is stronger (exhibiting a greater extent of plasticity) than its polycrystalline form. Implications of the grain boundary constraint in polycrystals Although the two crystallites A and B discussed in the above section have identical slip systems, they are misoriented with respect to each other, and therefore misoriented with respect to the applied force. Thus, microscopic yielding within a crystallite interior may occur according to the rules governing single crystal time-independent yielding. Eventually, the activated slip planes within the grain interiors will permit dislocation migration to the GB, where many dislocations then pile up as geometrically necessary dislocations. This pile-up corresponds to strain gradients across individual grains, as the dislocation density near the GB is greater than that in the grain interior, imposing a stress on the adjacent grain in contact. When considering the AB bicrystal as a whole, the most favorably oriented slip system in A will not be that in B, and hence τACRSS ≠ τBCRSS. Paramount is the fact that macroscopic yielding of the bicrystal is prolonged until the higher value of τCRSS between grains A and B is achieved, according to the GB constraint. Thus, for a given composition and structure, a polycrystal with five independent slip systems is stronger (greater extent of plasticity) than its single crystalline form. Correspondingly, the work hardening rate will be higher for the polycrystal than the single crystal, as more stress is required in the polycrystal to produce strains. Importantly, just as with the single crystal flow stress, τflow ~ ρ½, but the polycrystal flow stress is also inversely proportional to the square root of the average grain diameter (τflow ~ d−½). Therefore, the flow stress of a polycrystal, and hence the polycrystal’s strength, increases with small grain size. The reason for this is that smaller grains have a relatively smaller number of slip planes to be activated, corresponding to fewer dislocations migrating to the GBs, and therefore less stress induced on adjacent grains due to dislocation pile-up. In addition, for a given volume of polycrystal, smaller grains present more grain boundaries, which act as strong obstacles. These two factors provide an understanding as to why the onset of macroscopic flow in fine-grained polycrystals occurs at larger applied stresses than in coarse-grained polycrystals. Mathematical descriptions Deformation theory There are several mathematical descriptions of plasticity. One is deformation theory (see e.g. Hooke's law) where the Cauchy stress tensor (of order d-1 in d dimensions) is a function of the strain tensor. Although this description is accurate when a small part of matter is subjected to increasing loading (such as strain loading), this theory cannot account for irreversibility. Ductile materials can sustain large plastic deformations without fracture. However, even ductile metals will fracture when the strain becomes large enough—this is as a result of work hardening of the material, which causes it to become brittle. Heat treatment such as annealing can restore the ductility of a worked piece, so that shaping can continue. 
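The two proportionalities noted above (τflow ~ ρ½ for dislocation density and τflow ~ d−½ for grain size) can be combined into a simple flow-stress estimate. The sketch below is a scaling illustration only, assuming a Taylor-type dislocation term plus a grain-size term; the base stress, coefficient α, shear modulus, Burgers vector, and grain-size coefficient are hypothetical placeholder values rather than fitted constants for any particular metal.

```python
import math

def flow_stress(rho, d, tau0=10e6, alpha=0.3, G=45e9, b=0.25e-9, k=0.15e6):
    """Illustrative flow stress (Pa) combining the two scalings in the text:
    a Taylor-type term proportional to sqrt(dislocation density rho, m^-2) and
    a grain-size term proportional to d^-1/2 (d in metres). All constants are
    hypothetical placeholders."""
    return tau0 + alpha * G * b * math.sqrt(rho) + k / math.sqrt(d)

# Quadrupling the dislocation density or quartering the grain size both raise the flow stress.
for rho, d in [(1e12, 100e-6), (4e12, 100e-6), (1e12, 25e-6)]:
    print(f"rho={rho:.0e} m^-2, d={d*1e6:.0f} um -> tau_flow ~ {flow_stress(rho, d)/1e6:.0f} MPa")
```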
Flow plasticity theory In 1934, Egon Orowan, Michael Polanyi and Geoffrey Ingram Taylor, roughly simultaneously, realized that the plastic deformation of ductile materials could be explained in terms of the theory of dislocations. The mathematical theory of plasticity, flow plasticity theory, uses a set of non-linear, non-integrable equations to describe the set of changes on strain and stress with respect to a previous state and a small increase of deformation. Yield criteria If the stress exceeds a critical value, as was mentioned above, the material will undergo plastic, or irreversible, deformation. This critical stress can be tensile or compressive. The Tresca and the von Mises criteria are commonly used to determine whether a material has yielded. However, these criteria have proved inadequate for a large range of materials and several other yield criteria are also in widespread use. Tresca criterion The Tresca criterion is based on the notion that when a material fails, it does so in shear, which is a relatively good assumption when considering metals. Given the principal stress state, we can use Mohr's circle to solve for the maximum shear stresses our material will experience and conclude that the material will fail if where σ1 is the maximum normal stress, σ3 is the minimum normal stress, and σ0 is the stress under which the material fails in uniaxial loading. A yield surface may be constructed, which provides a visual representation of this concept. Inside of the yield surface, deformation is elastic. On the surface, deformation is plastic. It is impossible for a material to have stress states outside its yield surface. Huber–von Mises criterion The Huber–von Mises criterion is based on the Tresca criterion but takes into account the assumption that hydrostatic stresses do not contribute to material failure. M. T. Huber was the first who proposed the criterion of shear energy. Von Mises solves for an effective stress under uniaxial loading, subtracting out hydrostatic stresses, and states that all effective stresses greater than that which causes material failure in uniaxial loading will result in plastic deformation. Again, a visual representation of the yield surface may be constructed using the above equation, which takes the shape of an ellipse. Inside the surface, materials undergo elastic deformation. Reaching the surface means the material undergoes plastic deformations. See also Yield (engineering) Atterberg limits Deformation (mechanics) Deformation (engineering) Plastometer Poisson's ratio References Further reading Solid mechanics Deformation (mechanics)
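The Tresca and von Mises criteria described above can be evaluated directly from a principal stress state. The following sketch assumes a hypothetical stress state and uniaxial yield stress and simply checks each criterion; it is illustrative only and not tied to any particular material.

```python
def tresca_yields(s1, s2, s3, sigma0):
    """Tresca: yielding when the largest principal stress difference reaches
    sigma0, the yield stress measured in uniaxial loading."""
    return (max(s1, s2, s3) - min(s1, s2, s3)) >= sigma0

def von_mises_yields(s1, s2, s3, sigma0):
    """Von Mises: yielding when the equivalent (deviatoric) stress reaches sigma0;
    hydrostatic stress drops out of this measure."""
    sigma_eq = (0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2)) ** 0.5
    return sigma_eq >= sigma0

# Hypothetical principal stresses (MPa) checked against a 250 MPa yield stress.
state = (230.0, 0.0, -20.0)
print("Tresca predicts yield:   ", tresca_yields(*state, sigma0=250.0))   # True
print("von Mises predicts yield:", von_mises_yields(*state, sigma0=250.0)) # False
# For this state Tresca is the more conservative of the two criteria.
```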
Plasticity (physics)
[ "Physics", "Materials_science", "Engineering" ]
4,078
[ "Solid mechanics", "Deformation (mechanics)", "Materials science", "Plasticity (physics)", "Mechanics" ]
240,224
https://en.wikipedia.org/wiki/Standard%20state
The standard state of a material (pure substance, mixture or solution) is a reference point used to calculate its properties under different conditions. A degree sign (°) or a superscript Plimsoll symbol (⦵) is used to designate a thermodynamic quantity in the standard state, such as change in enthalpy (ΔH°), change in entropy (ΔS°), or change in Gibbs free energy (ΔG°). The degree symbol has become widespread, although the Plimsoll is recommended in standards, see discussion about typesetting below. In principle, the choice of standard state is arbitrary, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a conventional set of standard states for general use. The standard state should not be confused with standard temperature and pressure (STP) for gases, nor with the standard solutions used in analytical chemistry. STP is commonly used for calculations involving gases that approximate an ideal gas, whereas standard state conditions are used for thermodynamic calculations. For a given material or substance, the standard state is the reference state for the material's thermodynamic state properties such as enthalpy, entropy, Gibbs free energy, and for many other material standards. The standard enthalpy change of formation for an element in its standard state is zero, and this convention allows a wide range of other thermodynamic quantities to be calculated and tabulated. The standard state of a substance does not have to exist in nature: for example, it is possible to calculate values for steam at 298.15 K and , although steam does not exist (as a gas) under these conditions. The advantage of this practice is that tables of thermodynamic properties prepared in this way are self-consistent. Conventional standard states Many standard states are non-physical states, often referred to as "hypothetical states". Nevertheless, their thermodynamic properties are well-defined, usually by an extrapolation from some limiting condition, such as zero pressure or zero concentration, to a specified condition (usually unit concentration or pressure) using an ideal extrapolating function, such as ideal solution or ideal gas behavior, or by empirical measurements. Strictly speaking, temperature is not part of the definition of a standard state. However, most tables of thermodynamic quantities are compiled at specific temperatures, most commonly room temperature (), or, somewhat less commonly, the freezing point of water (). Gases The standard state for a gas is the hypothetical state it would have as a pure substance obeying the ideal gas equation at standard pressure. IUPAC recommends using a standard pressure p⦵ or P° equal to , or 1 bar. No real gas has perfectly ideal behavior, but this definition of the standard state allows corrections for non-ideality to be made consistently for all the different gases. Liquids and solids The standard state for liquids and solids is simply the state of the pure substance subjected to a total pressure of (or 1 bar). For most elements, the reference point of ΔfH⦵ = 0 is defined for the most stable allotrope of the element, such as graphite in the case of carbon, and the β-phase (white tin) in the case of tin. An exception is white phosphorus, the most common allotrope of phosphorus, which is defined as the standard state despite the fact that it is only metastable. This is because the thermodynamically stable black allotrope is difficult to prepare pure. 
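Standard-state quantities such as ΔH°, ΔS°, and ΔG° are typically combined at a stated temperature, most commonly 298.15 K. A minimal sketch, using hypothetical reaction values, of how a tabulated standard enthalpy and entropy change give ΔG° and an equilibrium constant referenced to the chosen standard state:

```python
import math

R = 8.314    # gas constant, J mol^-1 K^-1
T = 298.15   # K; the temperature most tables use (not itself part of the standard state)

# Hypothetical standard-state reaction values, purely for illustration.
dH = -50_000.0   # standard enthalpy change, J mol^-1
dS = -100.0      # standard entropy change, J mol^-1 K^-1

dG = dH - T * dS               # standard Gibbs free energy change
K = math.exp(-dG / (R * T))    # equilibrium constant referenced to the standard state

print(f"dG° = {dG/1000:.1f} kJ/mol, K = {K:.2e}")
```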
Solutes For a substance in solution (solute), the standard state is usually chosen as the hypothetical state it would have at the standard state molality or amount concentration but exhibiting infinite-dilution behavior (where there are no solute-solute interactions, but solute-solvent interactions are present). The reason for this unusual definition is that the behavior of a solute at the limit of infinite dilution is described by equations which are very similar to the equations for ideal gases. Hence taking infinite-dilution behavior to be the standard state allows corrections for non-ideality to be made consistently for all the different solutes. The standard state molality is , while the standard state molarity is . Other choices are possible. For example, the use of a standard state concentration of 10−7 mol/L for the hydrogen ion in a real, aqueous solution is common in the field of biochemistry. In other application areas such as electrochemistry, the standard state is sometimes chosen as the actual state of the real solution at a standard concentration (often ). The activity coefficients will not transfer from convention to convention and so it is very important to know and understand what conventions were used in the construction of tables of standard thermodynamic properties before using them to describe solutions. Adsorbates For molecules adsorbed on surfaces there have been various conventions proposed based on hypothetical standard states. For adsorption that occurs on specific sites (Langmuir adsorption isotherm) the most common standard state is a relative coverage of , as this choice results in a cancellation of the configurational entropy term and is also consistent with neglecting to include the standard state (which is a common error). The advantage of using is that the configurational term cancels and the entropy extracted from thermodynamic analyses is thus reflective of intra-molecular changes between the bulk phase (such as gas or liquid) and the adsorbed state. There may be benefit to tabulating values based on both the relative coverage based standard state and in an additional column the absolute coverage based standard state. For 2D gas states, the complication of discrete states does not arise and an absolute density base standard state has been proposed, similar for the 3D gas phase. Typesetting At the time of development in the nineteenth century, the superscript Plimsoll symbol (⦵) was adopted to indicate the non-zero nature of the standard state. IUPAC recommends in the 3rd edition of Quantities, Units and Symbols in Physical Chemistry a symbol which seems to be a degree sign (°) as a substitute for the plimsoll mark. In the very same publication the plimsoll mark appears to be constructed by combining a horizontal stroke with a degree sign. A range of similar symbols are used in the literature: a stroked lowercase letter O (o), a superscript zero (0) or a circle with a horizontal bar either where the bar extends beyond the boundaries of the circle () or is enclosed by the circle, dividing the circle in half (). Compared to the plimsoll symbol used in 1800s text, the U+29B5 glyph is too large and its horizontal line does not sufficiently extend beyond the boundaries of the circle. It is easily confused with the Greek letter theta (uppercase Θ or , lowercase θ). As of 2024, the character has been proposed for Unicode. It is a regular-sized Unicode symbol meant to be used in superscripted form when denoting standard state, replacing U+29B5 for this purpose. 
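The biochemical convention mentioned above, with a hydrogen-ion standard concentration of 10−7 mol/L rather than the usual standard value, shifts tabulated Gibbs energies. A minimal sketch of that convention change, assuming a hypothetical reaction that releases one proton; real biochemical tables may also fold in further corrections such as ionic strength or Mg2+ binding:

```python
import math

R, T = 8.314, 298.15  # J mol^-1 K^-1, K

def delta_g_biochemical(dG_standard, nu_H_produced):
    """Shift a standard Gibbs energy (1 M H+ convention) to the biochemical
    convention ([H+] = 1e-7 M), assuming nu_H_produced protons appear on the
    product side. A sketch of the convention change only."""
    return dG_standard + nu_H_produced * R * T * math.log(1e-7)

# Hypothetical example: a reaction releasing one proton with dG° = +10 kJ/mol.
print(f"dG°' = {delta_g_biochemical(10_000.0, 1)/1000:.1f} kJ/mol")  # about -30 kJ/mol
```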
Ian M. Mills, who was involved in producing a revision of Quantities, Units and Symbols in Physical Chemistry, suggested that a superscript zero () is an equal alternative to indicate "standard state", though a degree symbol (°) is used in the same article. The degree symbol has come into widespread use in general, inorganic, and physical chemistry textbooks in recent years. When read out loud, the symbol is pronounced "naught". See also Standard conditions for temperature and pressure Standard molar entropy References Thermodynamics
Standard state
[ "Physics", "Chemistry", "Mathematics" ]
1,603
[ "Thermodynamics", "Dynamical systems" ]
240,228
https://en.wikipedia.org/wiki/Biological%20pump
The biological pump (or ocean carbon biological pump or marine biological carbon pump) is the ocean's biologically driven sequestration of carbon from the atmosphere and land runoff to the ocean interior and seafloor sediments. In other words, it is a biologically mediated process which results in the sequestering of carbon in the deep ocean away from the atmosphere and the land. The biological pump is the biological component of the "marine carbon pump" which contains both a physical and biological component. It is the part of the broader oceanic carbon cycle responsible for the cycling of organic matter formed mainly by phytoplankton during photosynthesis (soft-tissue pump), as well as the cycling of calcium carbonate (CaCO3) formed into shells by certain organisms such as plankton and mollusks (carbonate pump). Budget calculations of the biological carbon pump are based on the ratio between sedimentation (carbon export to the ocean floor) and remineralization (release of carbon to the atmosphere). The biological pump is not so much the result of a single process, but rather the sum of a number of processes each of which can influence biological pumping. Overall, the pump transfers about 10.2 gigatonnes of carbon every year into the ocean's interior and a total of 1300 gigatonnes carbon over an average 127 years. This takes carbon out of contact with the atmosphere for several thousand years or longer. An ocean without a biological pump would result in atmospheric carbon dioxide levels about 400 ppm higher than the present day. Overview The element carbon plays a central role in climate and life on Earth. It is capable of moving among and between the geosphere, cryosphere, atmosphere, biosphere and hydrosphere. This flow of carbon is referred to as the Earth's carbon cycle. It is also intimately linked to the cycling of other elements and compounds. The ocean plays a fundamental role in Earth's carbon cycle, helping to regulate atmospheric CO2 concentration. The biological pump is a set of processes that transfer organic carbon from the surface to the deep ocean, and is at the heart of the ocean carbon cycle. The biological pump depends on the fraction of primary produced organic matter that survives degradation in the euphotic zone and that is exported from surface water to the ocean interior, where it is mineralized to inorganic carbon, with the result that carbon is transported against the gradient of dissolved inorganic carbon (DIC) from the surface to the deep ocean. This transfer occurs through physical mixing and transport of dissolved and particulate organic carbon (POC), vertical migrations of organisms (zooplankton, fish) and through gravitational settling of particulate organic carbon. The biological pump can be divided into three distinct phases, the first of which is the production of fixed carbon by planktonic phototrophs in the euphotic (sunlit) surface region of the ocean. In these surface waters, phytoplankton use carbon dioxide (CO2), nitrogen (N), phosphorus (P), and other trace elements (barium, iron, zinc, etc.) during photosynthesis to make carbohydrates, lipids, and proteins. Some plankton, (e.g. coccolithophores and foraminifera) combine calcium (Ca) and dissolved carbonates (carbonic acid and bicarbonate) to form a calcium carbonate (CaCO3) protective coating. 
Once this carbon is fixed into soft or hard tissue, the organisms either stay in the euphotic zone to be recycled as part of the regenerative nutrient cycle or once they die, continue to the second phase of the biological pump and begin to sink to the ocean floor. The sinking particles will often form aggregates as they sink, which greatly increases the sinking rate. It is this aggregation that gives particles a better chance of escaping predation and decomposition in the water column and eventually making it to the sea floor. The fixed carbon that is decomposed by bacteria either on the way down or once on the sea floor then enters the final phase of the pump and is remineralized to be used again in primary production. The particles that escape these processes entirely are sequestered in the sediment and may remain there for millions of years. It is this sequestered carbon that is responsible for ultimately lowering atmospheric CO2. Biology, physics and gravity interact to pump organic carbon into the deep sea. The processes of fixation of inorganic carbon in organic matter during photosynthesis, its transformation by food web processes (trophodynamics), physical mixing, transport and gravitational settling are referred to collectively as the biological pump. The biological pump is responsible for transforming dissolved inorganic carbon (DIC) into organic biomass and pumping it in particulate or dissolved form into the deep ocean. Inorganic nutrients and carbon dioxide are fixed during photosynthesis by phytoplankton, which both release dissolved organic matter (DOM) and are consumed by herbivorous zooplankton. Larger zooplankton - such as copepods - egest fecal pellets which can be reingested and sink or collect with other organic detritus into larger, more-rapidly-sinking aggregates. DOM is partially consumed by bacteria (black dots) and respired; the remaining refractory DOM is advected and mixed into the deep sea. DOM and aggregates exported into the deep water are consumed and respired, thus returning organic carbon into the enormous deep ocean reservoir of DIC. About 1% of the particles leaving the surface ocean reach the seabed and are consumed, respired, or buried in the sediments. There, carbon is stored for millions of years. The net effect of these processes is to remove carbon in organic form from the surface and return it to DIC at greater depths, maintaining the surface-to-deep ocean gradient of DIC. Thermohaline circulation returns deep-ocean DIC to the atmosphere on millennial timescales. Primary production The first step in the biological pump is the synthesis of both organic and inorganic carbon compounds by phytoplankton in the uppermost, sunlit layers of the ocean. Organic compounds in the form of sugars, carbohydrates, lipids, and proteins are synthesized during the process of photosynthesis: CO2 + H2O + light → CH2O + O2 In addition to carbon, organic matter found in phytoplankton is composed of nitrogen, phosphorus and various trace metals. The ratio of carbon to nitrogen and phosphorus varies from place to place, but has an average ratio near 106C:16N:1P, known as the Redfield ratio. Trace metals such as magnesium, cadmium, iron, calcium, barium and copper are orders of magnitude less prevalent in phytoplankton organic material, but necessary for certain metabolic processes and therefore can be limiting nutrients in photosynthesis due to their lower abundance in the water column. 
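The Redfield ratio quoted above (106C:16N:1P) allows a rough estimate of the nitrogen and phosphorus demand that accompanies a given amount of carbon fixation. A minimal sketch, treating the ratio as fixed even though it varies from place to place:

```python
# Redfield-ratio bookkeeping: for every 106 mol of carbon fixed, roughly
# 16 mol of N and 1 mol of P are incorporated into phytoplankton biomass.
REDFIELD = {"C": 106, "N": 16, "P": 1}

def nutrient_demand(carbon_fixed_mol):
    """Moles of N and P needed to build biomass containing the given moles of C."""
    return {el: carbon_fixed_mol * REDFIELD[el] / REDFIELD["C"] for el in ("N", "P")}

# Illustrative only: nutrient demand accompanying 1 mol of fixed carbon.
print(nutrient_demand(1.0))   # roughly 0.15 mol N and 0.009 mol P
```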
Oceanic primary production accounts for about half of the carbon fixation carried out on Earth. Approximately 50–60 Pg of carbon are fixed by marine phytoplankton each year despite the fact that they account for less than 1% of the total photosynthetic biomass on Earth. The majority of this carbon fixation (~80%) is carried out in the open ocean while the remaining amount occurs in the very productive upwelling regions of the ocean. Despite these productive regions producing 2 to 3 times as much fixed carbon per area, the open ocean accounts for greater than 90% of the ocean area and therefore is the larger contributor. Forms of carbon Dissolved and particulate carbon Phytoplankton supports all life in the ocean as it converts inorganic compounds into organic constituents. This autotrophically produced biomass presents the foundation of the marine food web. In the diagram immediately below, the arrows indicate the various production (arrowhead pointing toward DOM pool) and removal processes of DOM (arrowhead pointing away), while the dashed arrows represent dominant biological processes involved in the transfer of DOM. Due to these processes, the fraction of labile DOM decreases rapidly with depth, whereas the refractory character of the DOM pool considerably increases during its export to the deep ocean. DOM, dissolved organic matter. Ocean carbon pools The marine biological pump depends on a number of key pools, components and processes that influence its functioning. There are four main pools of carbon in the ocean. Dissolved inorganic carbon (DIC) is the largest pool. It constitutes around 38,000 Pg C and includes dissolved carbon dioxide (CO2), bicarbonate (), carbonate (), and carbonic acid (). The equilibrium between carbonic acid and carbonate determines the pH of the seawater. Carbon dioxide dissolves easily in water and its solubility is inversely related to temperature. Dissolved CO2 is taken up in the process of photosynthesis, and can reduce the partial pressure of CO2 in the seawater, favouring drawdown from the atmosphere. The reverse process respiration, releases CO2 back into the water, can increase partial pressure of CO2 in the seawater, favouring release back to the atmosphere. The formation of calcium carbonate by organisms such as coccolithophores has the effect of releasing CO2 into the water. Dissolved organic carbon (DOC) is the next largest pool at around 662 Pg C. DOC can be classified according to its reactivity as refractory, semi-labile or labile. The labile pool constitutes around 0.2 Pg C, is bioavailable, and has a high production rate (~ 15−25 Pg C y−1). The refractory component is the biggest pool (~642 Pg C ± 32; but has a very low turnover rate (0.043 Pg C y−1). The turnover time for refractory DOC is thought to be greater than 1000 years. Particulate organic carbon (POC) constitutes around 2.3 Pg C, and is relatively small compared with DIC and DOC. Though small in size, this pool is highly dynamic, having the highest turnover rate of any organic carbon pool on the planet. Driven by primary production, it produces around 50 Pg C y−1 globally. It can be separated into living (e.g. phytoplankton, zooplankton, bacteria) and non-living (e.g. detritus) material. Of these, the phytoplankton carbon is particularly important, because of its role in marine primary production, and also because it serves as the food resource for all the larger organisms in the pelagic ecosystem. Particulate inorganic carbon (PIC) is the smallest of the pools at around 0.03 Pg C. 
It is present in the form of calcium carbonate (CaCO3) in particulate form, and impacts the carbonate system and pH of the seawater. Estimates for PIC production are in the region of 0.8–1.4 Pg C y−1, with at least 65% of it being dissolved in the upper water column, the rest contributing to deep sediments. Coccolithophores and foraminifera are estimated to be the dominant sources of PIC in the open ocean. The PIC pool is of particular importance due to its role in the ocean carbonate system, and in facilitating the export of carbon to the deep ocean through the carbonate pump, whereby PIC is exported out of the photic zone and deposited in the bottom sediments. Calcium carbonate Particulate inorganic carbon (PIC) usually takes the form of calcium carbonate (CaCO3), and plays a key part in the ocean carbon cycle. This biologically fixed carbon is used as a protective coating for many planktonic species (coccolithophores, foraminifera) as well as larger marine organisms (mollusk shells). Calcium carbonate is also excreted at high rates during osmoregulation by fish, and can form in whiting events. While this form of carbon is not directly taken from the atmospheric budget, it is formed from dissolved forms of carbonate which are in equilibrium with CO2 and then responsible for removing this carbon via sequestration. CO2 + H2O → H2CO3 → H+ + HCO3− Ca2+ + 2HCO3− → CaCO3 + CO2 + H2O While this process does manage to fix a large amount of carbon, two units of alkalinity are sequestered for every unit of sequestered carbon. The formation and sinking of CaCO3 therefore drives a surface to deep alkalinity gradient which serves to raise the pH of surface waters, shifting the speciation of dissolved carbon to raise the partial pressure of dissolved CO2 in surface waters, which actually raises atmospheric levels. In addition, the burial of CaCO3 in sediments serves to lower overall oceanic alkalinity, tending to raise pH and thereby atmospheric CO2 levels if not counterbalanced by the new input of alkalinity from weathering. The portion of carbon that is permanently buried at the sea floor becomes part of the geologic record. Calcium carbonate often forms remarkable deposits that can then be raised onto land through tectonic motion as in the case with the White Cliffs of Dover in Southern England. These cliffs are made almost entirely of the plates of buried coccolithophores. Oceanic carbon cycle Three main processes (or pumps) that make up the marine carbon cycle bring atmospheric carbon dioxide (CO2) into the ocean interior and distribute it through the oceans. These three pumps are: (1) the solubility pump, (2) the carbonate pump, and (3) the biological pump. The total active pool of carbon at the Earth's surface for durations of less than 10,000 years is roughly 40,000 gigatons C (Gt C, a gigaton is one billion tons, or the weight of approximately 6 million blue whales), and about 95% (~38,000 Gt C) is stored in the ocean, mostly as dissolved inorganic carbon. The speciation of dissolved inorganic carbon in the marine carbon cycle is a primary controller of acid-base chemistry in the oceans. Solubility pump The biological pump is accompanied by a physico-chemical counterpart known as the solubility pump. This pump transports significant amounts of carbon in the form of dissolved inorganic carbon (DIC) from the ocean's surface to its interior. It involves physical and chemical processes only, and does not involve biological processes. 
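The calcification reaction given above removes one unit of dissolved inorganic carbon and two units of alkalinity for every unit of CaCO3 formed. A minimal bookkeeping sketch of that stoichiometry, using hypothetical surface-water values for DIC and total alkalinity:

```python
def calcification_effect(mol_caco3_formed, dic, alk):
    """Bookkeeping for Ca2+ + 2HCO3- -> CaCO3 + CO2 + H2O: each mole of CaCO3
    formed removes 1 mole of dissolved inorganic carbon (2 HCO3- consumed,
    1 CO2 returned) and 2 equivalents of alkalinity. Units are arbitrary but
    consistent (e.g. micromol per kg of seawater)."""
    return dic - 1 * mol_caco3_formed, alk - 2 * mol_caco3_formed

# Hypothetical surface-water values (umol/kg): DIC 2000, total alkalinity 2300.
print(calcification_effect(10.0, 2000.0, 2300.0))  # -> (1990.0, 2280.0)
```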
The solubility pump is driven by the coincidence of two processes in the ocean: The solubility of carbon dioxide is a strong inverse function of seawater temperature (i.e. solubility is greater in cooler water) The thermohaline circulation is driven by the formation of deep water at high latitudes where seawater is usually cooler and denser Since deep water (that is, seawater in the ocean's interior) is formed under the same surface conditions that promote carbon dioxide solubility, it contains a higher concentration of dissolved inorganic carbon than might be expected from average surface concentrations. Consequently, these two processes act together to pump carbon from the atmosphere into the ocean's interior. One consequence of this is that when deep water upwells in warmer, equatorial latitudes, it strongly outgasses carbon dioxide to the atmosphere because of the reduced solubility of the gas. Carbonate pump The carbonate pump is sometimes referred to as the "hard tissue" component of the biological pump. Some surface marine organisms, like coccolithophores, produce hard structures out of calcium carbonate, a form of particulate inorganic carbon, by fixing bicarbonate. This fixation of DIC is an important part of the oceanic carbon cycle. Ca2+ + 2 HCO3− → CaCO3 + CO2 + H2O While the biological carbon pump fixes inorganic carbon (CO2) into particulate organic carbon in the form of sugar (C6H12O6), the carbonate pump fixes inorganic bicarbonate and causes a net release of CO2. In this way, the carbonate pump could be termed the carbonate counter pump. It works counter to the biological pump by counteracting the CO2 flux into the biological pump. Continental shelf pump The continental shelf pump is proposed as operating in the shallow waters of the continental shelves as a mechanism transporting carbon (dissolved or particulate) from the continental waters to the interior of the adjacent deep ocean. As originally formulated, the pump is thought to occur where the solubility pump interacts with cooler, and therefore denser water from the shelf floor which feeds down the continental slope into the neighbouring deep ocean. The shallowness of the continental shelf restricts the convection of cooling water, so the cooling can be greater for continental shelf waters than for neighbouring open ocean waters. These cooler waters promote the solubility pump and lead to an increased storage of dissolved inorganic carbon. This extra carbon storage is further augmented by the increased biological production characteristic of shelves. The dense, carbon-rich shelf waters then sink to the shelf floor and enter the sub-surface layer of the open ocean via isopycnal mixing. As the sea level rises in response to global warming, the surface area of the shelf seas will grow and in consequence the strength of the shelf sea pump should increase. Processes in the biological pump In the diagram on the right, phytoplankton convert CO2, which has dissolved from the atmosphere into the surface oceans (90 Gt yr−1), into particulate organic carbon (POC) during primary production (~ 50 Gt C yr−1). Phytoplankton are then consumed by copepods, krill and other small zooplankton grazers, which in turn are preyed upon by higher trophic levels. Any unconsumed phytoplankton form aggregates, and along with zooplankton faecal pellets, sink rapidly and are exported out of the mixed layer (< 12 Gt C yr−1 14). 
Krill, copepods, zooplankton and microbes intercept phytoplankton in the surface ocean and sinking detrital particles at depth, consuming and respiring this POC to CO2 (dissolved inorganic carbon, DIC), such that only a small proportion of surface-produced carbon sinks to the deep ocean (i.e., depths > 1000 m). As krill and smaller zooplankton feed, they also physically fragment particles into small, slower- or non-sinking pieces (via sloppy feeding, coprorhexy if fragmenting faeces), retarding POC export. This releases dissolved organic carbon (DOC) either directly from cells or indirectly via bacterial solubilisation (yellow circle around DOC). Bacteria can then remineralise the DOC to DIC (CO2, microbial gardening). The biological carbon pump is one of the chief determinants of the vertical distribution of carbon in the oceans and therefore of the surface partial pressure of CO2 governing air-sea CO2 exchange. It comprises phytoplankton cells, their consumers and the bacteria that assimilate their waste and plays a central role in the global carbon cycle by delivering carbon from the atmosphere to the deep sea, where it is concentrated and sequestered for centuries. Photosynthesis by phytoplankton lowers the partial pressure of CO2 in the upper ocean, thereby facilitating the absorption of CO2 from the atmosphere by generating a steeper CO2 gradient. It also results in the formation of particulate organic carbon (POC) in the euphotic layer of the epipelagic zone (0–200 m depth). The POC is processed by microbes, zooplankton and their consumers into fecal pellets, organic aggregates ("marine snow") and other forms, which are thereafter exported to the mesopelagic (200–1000 m depth) and bathypelagic zones by sinking and vertical migration by zooplankton and fish. Although primary production includes both dissolved and particulate organic carbon (DOC and POC respectively), only POC leads to efficient carbon export to the ocean interior, whereas the DOC fraction in surface waters is mostly recycled by bacteria. However, a more biologically resistant DOC fraction produced in the euphotic zone (accounting for 15–20% of net community productivity), is not immediately mineralized by microbes and accumulates in the ocean surface as biologically semi-labile DOC. This semi-labile DOC undergoes net export to the deep ocean, thus constituting a dynamic part of the biological carbon pump. The efficiency of DOC production and export varies across oceanographic regions, being more prominent in the oligotrophic subtropical oceans. The overall efficiency of the biological carbon pump is mostly controlled by the export of POC. Marine snow Most carbon incorporated in organic and inorganic biological matter is formed at the sea surface where it can then start sinking to the ocean floor. The deep ocean gets most of its nutrients from the higher water column when they sink down in the form of marine snow. This is made up of dead or dying animals and microbes, fecal matter, sand and other inorganic material. A single phytoplankton cell has a sinking rate around one metre per day. Given that the average depth of the ocean is about four kilometres, it can take over ten years for these cells to reach the ocean floor. However, through processes such as coagulation and expulsion in predator fecal pellets, these cells form aggregates. These aggregates, known as marine snow, have sinking rates orders of magnitude greater than individual cells and complete their journey to the deep in a matter of days. 
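The sinking rates quoted above translate directly into transit times to the seafloor. A minimal sketch using the figures in the text (about 1 m per day for a single cell and an average ocean depth of roughly 4 km) and an assumed 100 m per day for a marine-snow aggregate, since the text states only that aggregates sink orders of magnitude faster:

```python
def days_to_seafloor(sinking_rate_m_per_day, depth_m=4000.0):
    """Time for a particle to reach the (average ~4 km deep) seafloor,
    ignoring remineralisation, fragmentation, and currents."""
    return depth_m / sinking_rate_m_per_day

for label, rate in [("single phytoplankton cell (1 m/day)", 1.0),
                    ("marine-snow aggregate (assumed 100 m/day)", 100.0)]:
    t = days_to_seafloor(rate)
    print(f"{label}: {t:.0f} days (~{t/365:.1f} years)")
```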
In the diagram on the right, phytoplankton fix CO2 in the euphotic zone using solar energy and produce particulate organic carbon (POC). POC formed in the euphotic zone is processed by microbes, zooplankton and their consumers into organic aggregates (marine snow), which is thereafter exported to the mesopelagic (200–1000 m depth) and bathypelagic zones by sinking and vertical migration by zooplankton and fish. Export flux is defined as the sedimentation out of the surface layer (at approximately 100 m depth) and sequestration flux is the sedimentation out of the mesopelagic zone (at approximately 1000 m depth). A portion of the POC is respired back to CO2 in the oceanic water column at depth, mostly by heterotrophic microbes and zooplankton, thus maintaining a vertical gradient in concentration of dissolved inorganic carbon (DIC). This deep-ocean DIC returns to the atmosphere on millennial timescales through thermohaline circulation. Between 1% and 40% of the primary production is exported out of the euphotic zone, which attenuates exponentially towards the base of the mesopelagic zone and only about 1% of the surface production reaches the sea floor. Of the 50–60 Pg of carbon fixed annually, roughly 10% leaves the surface mixed layer of the oceans, while less than 0.5% of eventually reaches the sea floor. Most is retained in regenerated production in the euphotic zone and a significant portion is remineralized in midwater processes during particle sinking. The portion of carbon that leaves the surface mixed layer of the ocean is sometimes considered "sequestered", and essentially removed from contact with the atmosphere for many centuries. However, work also finds that, in regions such as the Southern Ocean, much of this carbon can quickly (within decades) come back into contact with the atmosphere. Budget calculations of the biological carbon pump are based on the ratio between sedimentation (carbon export) and remineralization (release to the atmosphere). It has been estimated that sinking particles export up to 25% of the carbon captured by phytoplankton in the surface ocean to deeper water layers. About 20% of this export (5% of surface values) is buried in the ocean sediments mainly due to their mineral ballast. During the sinking process, these organic particles are hotspots of microbial activity and represent important loci for organic matter mineralization and nutrient redistribution in the water column. Biomineralization Ballast minerals Observations have shown that fluxes of ballast minerals (calcium carbonate, opal, and lithogenic material) and organic carbon fluxes are closely correlated in the bathypelagic zones of the ocean. A large fraction of particulate organic matter occurs in the form of marine snow aggregates (>0.5 mm) composed of phytoplankton, detritus, inorganic mineral grains, and fecal pellets in the ocean. Formation and sinking of these aggregates drive the biological carbon pump via export and sedimentation of organic matter from the surface mixed layer to the deep ocean and sediments. The fraction of organic matter that leaves the upper mixed layer of the ocean is, among other factors, determined by the sinking velocity and microbial remineralisation rate of these aggregates. Recent observations have shown that the fluxes of ballast minerals (calcium carbonate, opal, and lithogenic material) and the organic carbon fluxes are closely correlated in the bathypelagic zones of the ocean. 
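The strong attenuation of export flux with depth described above is often parameterized empirically. A minimal sketch using the classic Martin power-law curve as an assumed parameterization; the exponent b = 0.858 is the standard open-ocean fit and is not a value given in this article, which states only that the flux attenuates strongly below the euphotic zone:

```python
def poc_flux(depth_m, flux_at_100m=1.0, b=0.858):
    """Empirical 'Martin curve' attenuation of sinking POC flux:
    F(z) = F(100 m) * (z / 100)^(-b). Used here purely as an illustration."""
    return flux_at_100m * (depth_m / 100.0) ** (-b)

for z in (100, 500, 1000, 4000):
    print(f"{z:>5} m: {poc_flux(z)*100:.1f}% of the 100 m export flux remains")
```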
This has led to the hypothesis that organic carbon export is determined by the presence of ballast minerals within settling aggregates. Mineral ballasting is associated with about 60% of the flux of particulate organic carbon (POC) in the high-latitude North Atlantic, and with about 40% of the flux in the Southern Ocean. Strong correlations exist also in the deep ocean between the presence of ballast minerals and the flux of POC. This suggests ballast minerals enhance POC flux by increasing the sink rate of ballasted aggregates. Ballast minerals could additionally provide aggregated organic matter some protection from degradation. It has been proposed that organic carbon is better preserved in sinking particles due to increased aggregate density and sinking velocity when ballast minerals are present and/or via protection of the organic matter due to quantitative association to ballast minerals. In 2002, Klaas and Archer observed that about 83% of the global particulate organic carbon (POC) fluxes were associated with carbonate, and suggested carbonate was a more efficient ballast mineral as compared to opal and terrigenous material. They hypothesized that the higher density of calcium carbonate compared to that of opal and the higher abundance of calcium carbonate relative to terrigenous material might be the reason for the efficient ballasting by calcium carbonate. However, the direct effects of ballast minerals on sinking velocity and degradation rates in sinking aggregates are still unclear. A 2008 study demonstrated copepod fecal pellets produced on a diet of diatoms or coccolithophorids show higher sinking velocities as compared to pellets produced on a nanoflagellate diet. Carbon-specific respiration rates in pellets, however, were similar and independent of mineral content. These results suggest differences in mineral composition do not lead to differential protection of POC against microbial degradation, but the enhanced sinking velocities may result in up to 10-fold higher carbon preservation in pellets containing biogenic minerals as compared to that of pellets without biogenic minerals Minerals seem to enhance the flocculation of phytoplankton aggregates and may even act as a catalyst in aggregate formation. However, it has also been shown that incorporation of minerals can cause aggregates to fragment into smaller and denser aggregates. This can potentially lower the sinking velocity of the aggregated organic material due to the reduced aggregate sizes, and, thus, lower the total export of organic matter. Conversely, if the incorporation of minerals increases the aggregate density, its size-specific sinking velocity may also increase, which could potentially increase the carbon export. Therefore, there is still a need for better quantitative investigations of how the interactions between minerals and organic aggregates affect the degradation and sinking velocity of the aggregates and, hence, carbon sequestration in the ocean. Remineralisation Remineralisation refers to the breakdown or transformation of organic matter (those molecules derived from a biological source) into its simplest inorganic forms. These transformations form a crucial link within ecosystems as they are responsible for liberating the energy stored in organic molecules and recycling matter within the system to be reused as nutrients by other organisms. What fraction does escape remineralisation varies depending on the location. 
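One way to see why ballast minerals raise sinking velocity is through the density dependence of Stokes' law for a small sphere. The sketch below is a scaling illustration only; the particle radius and densities are hypothetical, and real marine aggregates are porous and irregular, so measured velocities deviate from this estimate:

```python
def stokes_sinking_velocity(radius_m, particle_density, fluid_density=1027.0,
                            viscosity=1.4e-3, g=9.81):
    """Stokes' law terminal velocity (m/s) for a small sphere: a rough guide to
    how excess density from ballast minerals raises sinking speed."""
    return 2.0 * (particle_density - fluid_density) * g * radius_m**2 / (9.0 * viscosity)

DAY = 86400.0
# Hypothetical 250-um-radius aggregates: lightly vs heavily ballasted.
for rho_p in (1030.0, 1060.0):
    v = stokes_sinking_velocity(250e-6, rho_p)
    print(f"particle density {rho_p:.0f} kg/m3 -> ~{v*DAY:.0f} m/day")
```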
For example, in the North Sea, values of carbon deposition are ~1% of primary production while that value is <0.5% in the open oceans on average. Therefore, most of nutrients remain in the water column, recycled by the biota. Heterotrophic organisms will utilize the materials produced by the autotrophic (and chemotrophic) organisms and via respiration will remineralise the compounds from the organic form back to inorganic, making them available for primary producers again. For most areas of the ocean, the highest rates of carbon remineralisation occur at depths between in the water column, decreasing down to about where remineralisation rates remain pretty constant at 0.1 μmol kg−1 yr−1. This provides the most nutrients available for primary producers within the photic zone, though it leaves the upper surface waters starved of inorganic nutrients. Most remineralisation is done with dissolved organic carbon (DOC). Studies have shown that it is larger sinking particles that transport matter down to the sea floor while suspended particles and dissolved organics are mostly consumed by remineralisation. This happens in part due to the fact that organisms must typically ingest nutrients smaller than they are, often by orders of magnitude. With the microbial community making up 90% of marine biomass, it is particles smaller than the microbes (on the order of ) that will be taken up for remineralisation. Key role of phytoplankton Marine phytoplankton perform half of all photosynthesis on Earth and directly influence global biogeochemical cycles and the climate, yet how they will respond to future global change is unknown. Carbon dioxide is one of the principal drivers of global change and has been identified as one of the major challenges in the 21st century. Carbon dioxide (CO2) generated during anthropogenic activities such as deforestation and burning of fossil fuels for energy generation rapidly dissolves in the surface ocean and lowers seawater pH, while CO2 remaining in the atmosphere increases global temperatures and leads to increased ocean thermal stratification. While CO2 concentration in the atmosphere is estimated to be about 270 ppm before the industrial revolution, it has currently increased to about 400 ppm and is expected to reach 800–1000 ppm by the end of this century according to the "business as usual" CO2 emission scenario. Marine ecosystems are a major sink for atmospheric CO2 and take up similar amount of CO2 as terrestrial ecosystems, currently accounting for the removal of nearly one third of anthropogenic CO2 emissions from the atmosphere. The net transfer of CO2 from the atmosphere to the oceans and then sediments, is mainly a direct consequence of the combined effect of the solubility and the biological pump. While the solubility pump serves to concentrate dissolved inorganic carbon (CO2 plus bicarbonate and carbonate ions) in the deep oceans, the biological carbon pump (a key natural process and a major component of the global carbon cycle that regulates atmospheric CO2 levels) transfers both organic and inorganic carbon fixed by primary producers (phytoplankton) in the euphotic zone to the ocean interior and subsequently to the underlying sediments. Thus, the biological pump takes carbon out of contact with the atmosphere for several thousand years or longer and maintains atmospheric CO2 at significantly lower levels than would be the case if it did not exist. 
An ocean without a biological pump, which transfers roughly 11 Gt C yr−1 into the ocean's interior, would result in atmospheric CO2 levels ~400 ppm higher than present day. Passow and Carlson defined sedimentation out of the surface layer (at approximately 100 m depth) as the "export flux" and that out of the mesopelagic zone (at approximately 1000 m depth) as the "sequestration flux". Once carbon is transported below the mesopelagic zone, it remains in the deep sea for 100 years or longer, hence the term "sequestration" flux. According to the modelling results of Buesseler and Boyd between 1% and 40% of the primary production is exported out of the euphotic zone, which attenuates exponentially towards the base of the mesopelagic zone and only about 1% of the surface production reaches the sea floor. The export efficiency of particulate organic carbon (POC) shows regional variability. For instance, in the North Atlantic, over 40% of net primary production is exported out of the euphotic zone as compared to only 10% in the South Pacific, and this is driven in part by the composition of the phytoplankton community including cell size and composition (see below). Exported organic carbon is remineralized, that is, respired back to CO2 in the oceanic water column at depth, mainly by heterotrophic microbes and zooplankton. Thus, the biological carbon pump maintains a vertical gradient in the concentration of dissolved inorganic carbon (DIC), with higher values at increased ocean depth. This deep-ocean DIC returns to the atmosphere on millennial timescales through thermohaline circulation. In 2001, Hugh et al. expressed the efficiency of the biological pump as the amount of carbon exported from the surface layer (export production) divided by the total amount produced by photosynthesis (overall production). Modelling studies by Buesseler and Boyd revealed that the overall transfer efficiency of the biological pump is determined by a combination of factors: seasonality; the composition of phytoplankton species; the fragmentation of particles by zooplankton; and the solubilization of particles by microbes. In addition, the efficiency of the biological pump is also dependent on the aggregation and disaggregation of organic-rich aggregates and interaction between POC aggregates and suspended "ballast" minerals. Ballast minerals (silicate and carbonate biominerals and dust) are the major constituents of particles that leave the ocean surface via sinking. They are typically denser than seawater and most organic matter, thus, providing a large part of the density differential needed for sinking of the particles. Aggregation of particles increases vertical flux by transforming small suspended particles into larger, rapidly-sinking ones. It plays an important role in the sedimentation of phytodetritus from surface layer phytoplankton blooms. As illustrated by Turner in 2015, the vertical flux of sinking particles is mainly due to a combination of fecal pellets, marine snow and direct sedimentation of phytoplankton blooms, which are typically composed of diatoms, coccolithophorids, dinoflagellates and other plankton. Marine snow comprises macroscopic organic aggregates >500 μm in size and originates from clumps of aggregated phytoplankton (phytodetritus), discarded appendicularian houses, fecal matter and other miscellaneous detrital particles, Appendicularians secrete mucous feeding structures or "houses" to collect food particles and discard and renew them up to 40 times a day . 
Discarded appendicularian houses are highly abundant (thousands per m3 in surface waters) and are microbial hotspots with high concentrations of bacteria, ciliates, flagellates and phytoplankton. These discarded houses are therefore among the most important sources of aggregates directly produced by zooplankton in terms of carbon cycling potential. The composition of the phytoplankton community in the euphotic zone largely determines the quantity and quality of organic matter that sinks to depth. The main functional groups of marine phytoplankton that contribute to export production include nitrogen fixers (diazotrophic cyanobacteria), silicifiers (diatoms) and calcifiers (coccolithophores). Each of these phytoplankton groups differ in the size and composition of their cell walls and coverings, which influence their sinking velocities. For example, autotrophic picoplankton (0.2–2 μm in diameter)—which include taxa such as cyanobacteria (e.g., Prochlorococcus spp. and Synechococcus spp.) and prasinophytes (various genera of eukaryotes <2 μm)—are believed to contribute much less to carbon export from surface layers due to their small size, slow sinking velocities (<0.5 m/day) and rapid turnover in the microbial loop. In contrast, larger phytoplankton cells such as diatoms (2–500 μm in diameter) are very efficient in transporting carbon to depth by forming rapidly sinking aggregates. They are unique among phytoplankton, because they require Si in the form of silicic acid (Si(OH)4) for growth and production of their frustules, which are made of biogenic silica (bSiO2) and act as ballast. According to the reports of Miklasz and Denny, the sinking velocities of diatoms can range from 0.4 to 35 m/day. Analogously, coccolithophores are covered with calcium carbonate plates called 'coccoliths', which are central to aggregation and ballasting, producing sinking velocities of nearly 5 m/day. Although it has been assumed that picophytoplankton, characterizing vast oligotrophic areas of the ocean, do not contribute substantially to the particulate organic carbon (POC) flux, in 2007 Richardson and Jackson suggested that all phytoplankton, including picoplankton cells, contribute equally to POC export. They proposed alternative pathways for picoplankton carbon cycling, which rely on aggregation as a mechanism for both direct sinking (the export of picoplankton as POC) and mesozooplankton- or large filter feeder-mediated sinking of picoplankton-based production. Zooplankton grazing Sloppy feeding In addition to linking primary producers to higher trophic levels in marine food webs, zooplankton also play an important role as "recyclers" of carbon and other nutrients that significantly impact marine biogeochemical cycles, including the biological pump. This is particularly the case with copepods and krill, and is especially important in oligotrophic waters of the open ocean. Through sloppy feeding, excretion, egestion, and leaching of fecal pellets, zooplankton release dissolved organic matter (DOM) which controls DOM cycling and supports the microbial loop. Absorption efficiency, respiration, and prey size all further complicate how zooplankton are able to transform and deliver carbon to the deep ocean. Excretion and sloppy feeding (the physical breakdown of food source) make up 80% and 20% of crustacean zooplankton-mediated DOM release respectively. In the same study, fecal pellet leaching was found to be an insignificant contributor. 
For protozoan grazers, DOM is released primarily through excretion and egestion and gelatinous zooplankton can also release DOM through the production of mucus. Leaching of fecal pellets can extend from hours to days after initial egestion and its effects can vary depending on food concentration and quality. Various factors can affect how much DOM is released from zooplankton individuals or populations. Fecal pellets The fecal pellets of zooplankton can be important vehicles for the transfer of particulate organic carbon (POC) to the deep ocean, often making large contributions to carbon sequestration. The size distribution of the copepod community indicates high numbers of small fecal pellets are produced in the epipelagic. However, small fecal pellets are rare in the deeper layers, suggesting they are not transferred efficiently to depth. This means small fecal pellets make only minor contributions to fecal pellet fluxes in the meso- and bathypelagic, particularly in terms of carbon. In a study focused on the Scotia Sea, which contains some of the most productive regions in the Southern Ocean, the dominant fecal pellets in the upper mesopelagic were cylindrical and elliptical, while ovoid fecal pellets were dominant in the bathypelagic. The change in fecal pellet morphology, as well as size distribution, points to the repacking of surface fecal pellets in the mesopelagic and in situ production in the lower meso- and bathypelagic, which may be augmented by inputs of fecal pellets via zooplankton vertical migrations. This suggests the flux of carbon to the deeper layers within the Southern Ocean is strongly modulated by meso- and bathypelagic zooplankton, meaning that the community structure in these zones has a major impact on the efficiency of the fecal pellet transfer to ocean depths. Absorption efficiency (AE) is the proportion of food absorbed by plankton that determines how available the consumed organic materials are in meeting the required physiological demands. Depending on the feeding rate and prey composition, variations in AE may lead to variations in fecal pellet production, and thus regulate how much organic material is recycled back to the marine environment. Low feeding rates typically lead to high AE and small, dense pellets, while high feeding rates typically lead to low AE and larger pellets with more organic content. Another contributing factor to DOM release is respiration rate. Physical factors such as oxygen availability, pH, and light conditions may affect overall oxygen consumption and how much carbon is lost from zooplankton in the form of respired CO2. The relative sizes of zooplankton and prey also mediate how much carbon is released via sloppy feeding. Smaller prey are ingested whole, whereas larger prey may be fed on more "sloppily", that is, more biomatter is released through inefficient consumption. There is also evidence that diet composition can impact nutrient release, with carnivorous diets releasing more dissolved organic carbon (DOC) and ammonium than omnivorous diets. Microbial loop Bacterial lysis The microbial loop describes a trophic pathway in the marine microbial food web where dissolved organic carbon (DOC) is returned to higher trophic levels via its incorporation into bacterial biomass, and then coupled with the classic food chain formed by phytoplankton-zooplankton-nekton. The term microbial loop was coined by Farooq Azam, Tom Fenchel et al. 
in 1983 to include the role played by bacteria in the carbon and nutrient cycles of the marine environment. In general, dissolved organic carbon is introduced into the ocean environment from bacterial lysis, the leakage or exudation of fixed carbon from phytoplankton (e.g., mucilaginous exopolymer from diatoms), sudden cell senescence, sloppy feeding by zooplankton, the excretion of waste products by aquatic animals, or the breakdown or dissolution of organic particles from terrestrial plants and soils. Bacteria in the microbial loop decompose this particulate detritus to utilize this energy-rich matter for growth. Since more than 95% of organic matter in marine ecosystems consists of polymeric, high molecular weight (HMW) compounds (e.g., protein, polysaccharides, lipids), only a small portion of total dissolved organic matter (DOM) is readily utilizable to most marine organisms at higher trophic levels. This means that dissolved organic carbon is not available directly to most marine organisms; marine bacteria introduce this organic carbon into the food web, resulting in additional energy becoming available to higher trophic levels. Viral shunt As much as 25% of the primary production from phytoplankton in the global oceans may be recycled within the microbial loop through viral shunting. The viral shunt is a mechanism whereby marine viruses prevent microbial particulate organic matter (POM) from migrating up trophic levels by recycling it into dissolved organic matter (DOM), which can be readily taken up by microorganisms. The DOM recycled by the viral shunt pathway is comparable to the amount generated by the other main sources of marine DOM. Viruses can easily infect microorganisms in the microbial loop due to their relative abundance compared to microbes. Prokaryotic and eukaryotic mortality contribute to carbon nutrient recycling through cell lysis. There is evidence as well of nitrogen (specifically ammonium) regeneration. This nutrient recycling helps stimulate microbial growth. Macroorganisms Jelly fall Jelly-falls are marine carbon cycling events whereby gelatinous zooplankton, primarily cnidarians, sink to the seafloor and enhance carbon and nitrogen fluxes via rapidly sinking particulate organic matter. These events provide nutrition to benthic megafauna and bacteria. Jelly-falls have been implicated as a major "gelatinous pathway" for the sequestration of labile biogenic carbon through the biological pump. These events are common in protected areas with high levels of primary production and water quality suitable to support cnidarian species. These areas include estuaries, and several studies have been conducted in fjords of Norway. Whale pump Whales and other marine mammals also enhance primary productivity in their feeding areas by concentrating nitrogen near the surface through the release of flocculent fecal plumes. For example, whales and seals may be responsible for replenishing more nitrogen in the Gulf of Maine's euphotic zone than the input of all rivers combined. This upward whale pump played a much larger role before industrial fishing devastated marine mammal stocks, when recycling of nitrogen was likely more than three times the atmospheric nitrogen input. The biological pump mediates the removal of carbon and nitrogen from the euphotic zone through the downward flux of aggregates, feces, and vertical migration of invertebrates and fish. 
Copepods and other zooplankton produce sinking fecal pellets and contribute to downward transport of dissolved and particulate organic matter by respiring and excreting at depth during migration cycles, thus playing an important role in the export of nutrients (N, P, and Fe) from surface waters. Zooplankton feed in the euphotic zone and export nutrients via sinking fecal pellets and vertical migration. Fish typically release nutrients at the same depth at which they feed. Excretion by marine mammals, tethered to the surface for respiration, is expected to be shallower in the water column than where they feed. Marine mammals provide important ecosystem services. On a global scale, they can influence climate through fertilization events and the export of carbon from surface waters to the deep sea through sinking whale carcasses. In coastal areas, whales retain nutrients locally, increasing ecosystem productivity and perhaps raising the carrying capacity for other marine consumers, including commercial fish species. It has been estimated that, in terms of carbon sequestration, one whale is equivalent to thousands of trees. Vertical migrations Diel vertically migrating krill, salps, smaller zooplankton and fish can actively transport carbon to depth by consuming POC in the surface layer at night, and metabolising it at their daytime, mesopelagic residence depths. Depending on species life history, active transport may occur on a seasonal basis as well. Without vertical migration, the biological pump would not be nearly as efficient. Organisms migrate up to feed at night, so when they migrate back to depth during the day they defecate large sinking fecal pellets. Whilst some larger fecal pellets can sink quite fast, organisms return to depth faster still. At night organisms are in the top 100 metres of the water column, but during the day they move down to between 800 and 1000 metres. If organisms were to defecate at the surface, it would take the fecal pellets days to reach the depth that they reach in a matter of hours. Therefore, by releasing fecal pellets at depth they have almost 1000 metres less to travel to get to the deep ocean. This is known as active transport: the organisms play a more active role in moving organic matter down to depth. Because a large majority of deep-sea life, especially marine microbes, depends on nutrients falling from above, the quicker these nutrients reach the ocean floor the better. Zooplankton and salps play a large role in the active transport of fecal pellets. An estimated 15–50% of zooplankton biomass migrates, accounting for the transport of 5–45% of particulate organic nitrogen to depth. Salps are large gelatinous plankton that can vertically migrate 800 metres and eat large amounts of food at the surface. They have a very long gut retention time, so fecal pellets usually are released at maximum depth. Salps are also known for having some of the largest fecal pellets. Because of this they have a very fast sinking rate; small detritus particles are known to aggregate on them, making them sink even faster. So, while much research is still being done on why organisms vertically migrate, it is clear that vertical migration plays a large role in the active transport of dissolved organic matter to depth. Lipid pump The lipid pump sequesters carbon from the ocean's surface to deeper waters via lipids associated with overwintering vertically migratory zooplankton. 
Lipids are a class of hydrocarbon rich, nitrogen and phosphorus deficient compounds essential for cellular structures. The lipid associated carbon enters the deep ocean as carbon dioxide produced by respiration of lipid reserves and as organic matter from the mortality of zooplankton. Compared to the more general biological pump, the lipid pump also results in a lipid shunt, where other nutrients like nitrogen and phosphorus that are consumed in excess must be excreted back to the surface environment, and thus are not removed from the surface mixed layer of the ocean. This means that the carbon transported by the lipid pump does not limit the availability of essential nutrients in the ocean surface. Carbon sequestration via the lipid pump is therefore decoupled from nutrient removal, allowing carbon uptake by oceanic primary production to continue. In the Biological Pump, nutrient removal is always coupled to carbon sequestration; primary production is limited as carbon and nutrients are transported to depth together in the form of organic matter. The contribution of the lipid pump to the sequestering of carbon in the deeper waters of the ocean can be substantial: the carbon transported below 1,000 metres (3,300 ft) by copepods of the genus Calanus in the Arctic Ocean almost equals that transported below the same depth annually by particulate organic carbon (POC) in this region. A significant fraction of this transported carbon would not return to the surface due to respiration and mortality. Research is ongoing to more precisely estimate the amount that remains at depth. The export rate of the lipid pump may vary from 1–9.3 g C m−2 y−1 across temperate and subpolar regions containing seasonally-migrating zooplankton. The role of zooplankton, and particularly copepods, in the food web is crucial to the survival of higher trophic level organisms whose primary source of nutrition is copepods. With warming oceans and increasing melting of ice caps due to climate change, the organisms associated with the lipid pump may be affected, thus influencing the survival of many commercially important fish and endangered marine mammals. As a new and previously unquantified component of oceanic carbon sequestration, further research on the lipid pump can improve the accuracy and overall understanding of carbon fluxes in global oceanic systems. Bioluminescent shunt Luminous bacteria in light organ symbioses are successively acquired by host (squid, fish) from the seawater while they are juveniles, then regularly released into the ocean. In the diagram on the right, depending on the light organ position, luminous bacteria are released from their guts into fecal pellets or directly into the seawater (step 1). Motile luminous bacteria colonize organic matter sinking along the water column. Bioluminescent bacteria colonising fecal pellets and particles influence zooplankton consumption rates. Such visual markers increase detection ("bait hypothesis"), attraction and finally predation by upper trophic levels (step 2). In the mesopelagic, zooplankton and their predators feed on sinking luminous particles and fecal pellets, which form either aggregates (repackaging) of faster sinking rates or fragment organic matter (due to sloppy feeding) with slower sinking rates (step 3). Filter feeders also aggregate sinking organic matter without particular visual detection and selection of luminous matter. 
Diel (and seasonal) vertical migrators feeding on luminous food metabolize and release glowing fecal pellets from the surface to the mesopelagic zone (step 4). This implies bioluminescent bacteria dispersion at large spatial scales, for zooplankton or even some fish actively swimming long distances. Luminous bacteria attached to particles sink down to the seafloor, and sediment can be resuspended by oceanographic physical conditions (step 5) and consumed by epi-benthic organisms. Instruments are (a) plankton net, (b) fish net, (c) Niskin water sampler, (d) bathyphotometer, (e) sediment traps, (f) autonomous underwater vehicles, (g) photomultiplier module, (h) astrophysics optical modules ANTARES and (i–j) remotely operated vehicles. Quantification The geologic component of the carbon cycle operates slowly in comparison to the other parts of the global carbon cycle. It is one of the most important determinants of the amount of carbon in the atmosphere, and thus of global temperatures. As the biological pump plays an important role in the Earth's carbon cycle, significant effort is spent quantifying its strength. However, because they occur as a result of poorly constrained ecological interactions usually at depth, the processes that form the biological pump are difficult to measure. A common method is to estimate primary production fuelled by nitrate and ammonium as these nutrients have different sources that are related to the remineralisation of sinking material. From these it is possible to derive the so-called f-ratio, a proxy for the local strength of the biological pump. Applying the results of local studies to the global scale is complicated by the role the ocean's circulation plays in different ocean regions. Effects of climate change Changes in land use, the combustion of fossil fuels, and the production of cement have led to an increase in CO2 concentration in the atmosphere. At present, about one third (approximately 2 Pg C y−1 = 2 × 1015 grams of carbon per year) of anthropogenic emissions of CO2 may be entering the ocean, but this is quite uncertain. Some research suggests that a link between elevated CO2 and marine primary production exists. Climate change may affect the biological pump in the future by warming and stratifying the surface ocean. It is believed that this could decrease the supply of nutrients to the euphotic zone, reducing primary production there. Also, changes in the ecological success of calcifying organisms caused by ocean acidification may affect the biological pump by altering the strength of the hard tissues pump. This may then have a "knock-on" effect on the soft tissues pump because calcium carbonate acts to ballast sinking organic material. The second diagram on the right shows some possible effects of sea ice decline and permafrost thaw on Arctic carbon fluxes. On land, plants take up carbon while microorganisms in the soil produce methane and respire CO2. Lakes are net emitters of methane, and organic and inorganic carbon (dissolved and particulate) flow into the ocean through freshwater systems. In the ocean, methane can be released from thawing subsea permafrost, and CO2 is absorbed due to an undersaturation of CO2 in the water compared with the atmosphere. In addition, multiple fluxes are closely associated to sea ice. Current best estimates of atmospheric fluxes are given in Tg C year−1, where available. 
Note that the emission estimate for lakes is for the area north of ~50° N rather than the narrower definition of arctic tundra for the other terrestrial fluxes. When available, uncertainty ranges are shown in brackets. The arrows do not represent the size of each flux. The biological pump is thought to have played significant roles in atmospheric CO2 fluctuations during past glacial-interglacial periods. However, it is not yet clear how the biological pump will respond to future climate change. For such predictions to be reasonable, it is important to first decipher the response of phytoplankton, one of the key components of the biological pump, to future changes in atmospheric CO2. Due to their phylogenetic diversity, different phytoplankton taxa will likely respond to climate change in different ways. For instance, a decrease in the abundance of diatoms is expected due to increased stratification in the future ocean. Diatoms are highly efficient in transporting carbon to depths by forming large, rapidly sinking aggregates and their reduced numbers could in turn lead to decreased carbon export. Further, decreased ocean pH due to ocean acidification may thwart the ability of coccolithophores to generate calcareous plates, potentially affecting the biological pump; however, it appears that some species are more sensitive than others. Thus, future changes in the relative abundance of these or other phytoplankton taxa could have a marked impact on total ocean productivity, subsequently affecting ocean biogeochemistry and carbon storage. A 2015 study determined that coccolithophore concentrations in the North Atlantic have increased by an order of magnitude since the 1960s, and increases in absorbed CO2 and temperature were modeled to be the most likely cause of this increase. In a 2017 study, scientists used species distribution modelling (SDM) to predict the future global distribution of two phytoplankton species important to the biological pump: the diatom Chaetoceros diadema and the coccolithophore Emiliania huxleyi. They employed environmental data described in the IPCC Representative Concentration Pathways scenario 8.5, which predicts radiative forcing in the year 2100 relative to pre-industrial values. Their modelling results predicted that the total ocean area covered by C. diadema and E. huxleyi would decline by 8% and 16%, respectively, under the examined climate scenario. They predicted that changes in the range and distribution of these two phytoplankton species under these future ocean conditions, if realized, might result in a reduced contribution to carbon sequestration via the biological pump. In 2019, a study indicated that at current rates of seawater acidification, Antarctic phytoplankton could become smaller and less effective at storing carbon before the end of the century. Monitoring Monitoring the biological pump is critical to understanding how the Earth's carbon cycle is changing. A variety of techniques are used to monitor the biological pump, which can be deployed from various platforms such as ships, autonomous vehicles, and satellites. At present, satellite remote sensing is the only tool available for viewing the entire surface ocean at high temporal and spatial scales. 
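To make the export bookkeeping described under Quantification above concrete, here is a minimal sketch of the f-ratio (new, nitrate-fuelled production divided by total production) and the resulting export and sequestration fluxes; all numbers below are illustrative assumptions, not measurements.

```python
# Illustrative bookkeeping only; the values below are assumptions, not measurements.
def f_ratio(new_production, regenerated_production):
    """New (nitrate-fuelled) production divided by total production."""
    return new_production / (new_production + regenerated_production)

npp = 50.0                      # net primary production, g C m-2 yr-1 (assumed)
new_prod = 10.0                 # nitrate-fuelled ("new") production, g C m-2 yr-1 (assumed)
regen_prod = npp - new_prod     # ammonium-fuelled ("regenerated") production

f = f_ratio(new_prod, regen_prod)
export_flux = f * npp                    # carbon leaving the euphotic zone (~100 m)
sequestration_flux = 0.05 * export_flux  # assumed fraction surviving to ~1000 m
print(f"f-ratio = {f:.2f}, export = {export_flux:.1f} g C m-2 yr-1, "
      f"sequestration = {sequestration_flux:.2f} g C m-2 yr-1")
```

At steady state the export flux is close to the new production, which is why the f-ratio serves as a proxy for the local strength of the biological pump.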
Needed research Multidisciplinary observations are still needed in the deep water column to properly understand the biological pump: Physics: stratification affects particle sinking; understanding the origin of the particles and the residence time of the DIC from particle remineralization in the deep ocean requires measurement of advection and mixing. Biogeochemistry: export/mixing down of particulate and dissolved organic matter from the surface layer determines labile organic matter arriving at the seafloor, which is either respired by seafloor biota or stored for longer times in the sediment. Biology and ecosystems: zooplankton and microorganisms break down and remineralize sinking particles in the water column. Exported organic matter feeds all water column and benthic biota (zooplankton, benthic invertebrates, microbes) sustaining their biomass, density, and biodiversity. See also f-ratio (oceanography) Lysocline Mooring (oceanography) Apparent oxygen utilisation References Aquatic ecology Biological oceanography Carbon cycle Chemical oceanography
Biological pump
[ "Chemistry", "Biology" ]
12,634
[ "Chemical oceanography", "Aquatic ecology", "Ecosystems" ]
240,244
https://en.wikipedia.org/wiki/Biological%20interaction
In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions) or of different species (interspecific interactions). These effects may be short-term or long-term, and both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct when physical contact is established or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. Such relationships can be characterized by their net effect, based on the individual effects each organism has on the other. Several recent studies have suggested non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups. History Although biological interactions, more or less individually, were studied earlier, Edward Haskell (1949) gave an integrative approach to the subject, proposing a classification of "co-actions", later adopted by biologists as "interactions". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic. The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens or in parasites that benefit themselves. This debate created two different classifications for biotic interactions, one based on duration (long-term and short-term interactions), and the other based on the magnitude of the interaction force (competition/mutualism) or the effect on individual fitness, according to the stress gradient hypothesis and the mutualism–parasitism continuum. Evolutionary game theory models such as the Red Queen Hypothesis, Red King Hypothesis and Black Queen Hypothesis have demonstrated that a classification based on the force of interaction is important. Classification based on time of interaction Short-term interactions Short-term interactions, including predation and pollination, are extremely important in ecology and evolution. These are short-lived in terms of the duration of a single interaction: a predator kills and eats a prey; a pollinator transfers pollen from one flower to another; but they are extremely durable in terms of their influence on the evolution of both partners. As a result, the partners coevolve. Predation In predation, one organism, the predator, kills and eats another organism, its prey. Predators are adapted and often highly specialized for hunting, with acute senses such as vision, hearing, or smell. Many predatory animals, both vertebrate and invertebrate, have sharp claws or jaws to grip, kill, and cut up their prey. Other adaptations include stealth and aggressive mimicry that improve hunting efficiency. Predation has a powerful selective effect on prey, causing them to develop antipredator adaptations such as warning coloration, alarm calls and other signals, camouflage and defensive spines and chemicals. Predation has been a major driver of evolution since at least the Cambrian period. 
Pollination In pollination, pollinators, including insects (entomophily), some birds (ornithophily), and some bats, transfer pollen from a male flower part to a female flower part, enabling fertilisation, in return for a reward of pollen or nectar. The partners have coevolved through geological time; in the case of insects and flowering plants, the coevolution has continued for over 100 million years. Insect-pollinated flowers are adapted with shaped structures, bright colours, patterns, scent, nectar, and sticky pollen to attract insects, guide them to pick up and deposit pollen, and reward them for the service. Pollinator insects like bees are adapted to detect flowers by colour, pattern, and scent, to collect and transport pollen (such as with bristles shaped to form pollen baskets on their hind legs), and to collect and process nectar (in the case of honey bees, making and storing honey). The adaptations on each side of the interaction match the adaptations on the other side, and have been shaped by natural selection for their effectiveness in pollination. Seed dispersal Seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their propagules, including both abiotic vectors such as the wind and living (biotic) vectors like birds. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time. The patterns of seed dispersal are determined in large part by the dispersal mechanism, and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals. Some plants are serotinous and only disperse their seeds in response to an environmental stimulus. Dispersal involves the letting go or detachment of a diaspore from the main parent plant. Long-term interactions (symbioses) The six possible types of symbiosis are mutualism, commensalism, parasitism, neutralism, amensalism, and competition. These are distinguished by the degree of benefit or harm they cause to each partner. Mutualism Mutualism is an interaction between two or more species, where species derive a mutual benefit, for example an increased carrying capacity. Similar interactions within a species are known as co-operation. Mutualism may be classified in terms of the closeness of association, the closest being symbiosis, which is often confused with mutualism. One or both species involved in the interaction may be obligate, meaning they cannot survive in the short or long term without the other species. Though mutualism has historically received less attention than other interactions such as predation, it is an important subject in ecology. Examples include cleaning symbiosis, gut flora, Müllerian mimicry, and nitrogen fixation by bacteria in the root nodules of legumes. Commensalism Commensalism benefits one organism while the other organism is neither benefited nor harmed. It occurs when one organism benefits from interacting with another organism that is itself unaffected by the interaction. A good example is a remora living with a manatee. Remoras feed on the manatee's faeces. The manatee is not affected by this interaction, as the remora does not deplete the manatee's resources. 
Parasitism Parasitism is a relationship between species, where one organism, the parasite, lives on or in another organism, the host, causing it some harm, and is adapted structurally to this way of life. The parasite either feeds on the host, or, in the case of intestinal parasites, consumes some of its food. Neutralism Neutralism (a term introduced by Eugene Odum) describes the relationship between two species that interact but do not affect each other. Examples of true neutralism are virtually impossible to prove; the term is in practice used to describe situations where interactions are negligible or insignificant. Amensalism Amensalism (a term introduced by Edward Haskell) is an interaction where an organism inflicts harm on another organism without any costs or benefits received by itself. Amensalism describes the adverse effect that one organism has on another organism. This is a unidirectional process based on the release of a specific compound by one organism that has a negative effect on another. A classic example of amensalism is the microbial production of antibiotics that can inhibit or kill other, susceptible microorganisms. A clear case of amensalism is where sheep or cattle trample grass. Whilst the presence of the grass causes negligible detrimental effects to the animal's hoof, the grass suffers from being crushed. Amensalism is often used to describe strongly asymmetrical competitive interactions, such as has been observed between the Spanish ibex and weevils of the genus Timarcha, which feed upon the same type of shrub. Whilst the presence of the weevil has almost no influence on food availability, the presence of ibex has an enormous detrimental effect on weevil numbers, as they consume significant quantities of plant matter and incidentally ingest the weevils upon it. Competition Competition can be defined as an interaction between organisms or species, in which the fitness of one is lowered by the presence of another. Competition is often for a resource such as food, water, or territory in limited supply, or for access to females for reproduction. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition. According to the competitive exclusion principle, species less suited to compete for resources should either adapt or die out. This competition within and between species for resources plays a critical role in natural selection. Classification based on effect on fitness Biotic interactions can vary in intensity (strength of interaction) and frequency (number of interactions in a given time). Interactions are direct when there is physical contact between individuals, or indirect when there is no physical contact, that is, when the interaction occurs through a resource, ecological service, toxin or growth inhibitor. The interactions can be directly determined by individuals (incidentally) or by stochastic processes (accidentally), for instance side effects that one individual has on another. They are divided into six major types: Competition, Antagonism, Amensalism, Neutralism, Commensalism and Mutualism. Non-trophic interactions Some examples of non-trophic interactions are habitat modification, mutualism and competition for space. 
It has been suggested recently that non-trophic interactions can indirectly affect food web topology and trophic dynamics by affecting the species in the network and the strength of trophic links. It is necessary to integrate trophic and non-trophic interactions in ecological network analyses. The few empirical studies that address this suggest food web structures (network topologies) can be strongly influenced by species interactions outside the trophic network. However, these studies include only a limited number of coastal systems, and it remains unclear to what extent these findings can be generalized. Whether non-trophic interactions typically affect specific species, trophic levels, or functional groups within the food web, or, alternatively, indiscriminately mediate species and their trophic interactions throughout the network has yet to be resolved. Sessile species with generally low trophic levels seem to benefit more than others from non-trophic facilitation, though facilitation benefits higher trophic and more mobile species as well. See also Altruism (biology) Animal sexual behaviour Biological pump – interaction between marine animals and carbon forms Cheating (biology) Collective animal behavior Detritivory Epibiont Evolving digital ecological network Food chain Kin selection Microbial cooperation Microbial loop Quorum sensing Spite (game theory) Swarm behaviour Notes References Further reading Snow, B. K. & Snow, D. W. (1988). Birds and berries: a study of an ecological interaction. Poyser, London External links Global Biotic Interactions (GloBI) - Open access to finding species interaction data Ecology
Biological interaction
[ "Biology" ]
2,366
[ "Behavior", "Biological interactions", "Ecology", "nan", "Ethology" ]
18,380,504
https://en.wikipedia.org/wiki/Pearson%20symbol
The Pearson symbol, or Pearson notation, is used in crystallography as a means of describing a crystal structure. It was originated by W. B. Pearson and is used extensively in Pearson's Handbook of Crystallographic Data for Intermetallic Phases. The symbol is made up of two letters followed by a number. For example: Diamond structure, cF8 Rutile structure, tP6 Construction The two letters in the Pearson symbol specify the Bravais lattice: the lower-case letter specifies the crystal family, while the upper-case letter specifies the lattice (centring) type. The number at the end of the Pearson symbol gives the number of atoms in the conventional unit cell (atoms whose fractional coordinates satisfy 0 ≤ x, y, z < 1 within the unit cell). The six letters possible for the crystal family are a (triclinic, also called anorthic), m (monoclinic), o (orthorhombic), t (tetragonal), h (hexagonal) and c (cubic); the five letters possible for the lattice type are P (primitive), S (side- or base-face centred), I (body centred), F (all-face centred) and R (rhombohedral). The letters A, B and C were formerly used instead of S. When the centred face cuts the X axis, the Bravais lattice is called A-centred. In analogy, when the centred face cuts the Y or Z axis, we have B- or C-centring respectively. The fourteen possible Bravais lattices are identified by the first two letters: aP, mP, mS, oP, oS, oI, oF, tP, tI, hP, hR, cP, cI and cF. Pearson symbol and space group The Pearson symbol does not uniquely identify the space group of a crystal structure. For example, both the NaCl structure (space group Fm3m) and diamond (space group Fd3m) have the same Pearson symbol cF8. Due to this constraint, the Pearson symbol should only be used to designate simple structures (elements, some binary compounds) where the number of atoms per unit cell equals, ideally, the number of translationally equivalent points. Confusion also arises in the rhombohedral lattice, which is alternatively described in a centred hexagonal (a = b, c, α = β = 90°, γ = 120°) or primitive rhombohedral (a = b = c, α = β = γ) setting. The more commonly used hexagonal setting has 3 translationally equivalent points per unit cell. The Pearson symbol refers to the hexagonal setting in its letter code (hR), but the number that follows gives the number of translationally equivalent points in the primitive rhombohedral setting. Examples: hR1 and hR2 are used to designate the Hg and Bi structures respectively. Because there are many possible structures that can correspond to one Pearson symbol, a prototypical compound may be useful to specify. Examples of how to write this would be hP12-MgZn2 or cF8-C. Prototypical compounds for particular structures can be found on the Inorganic Crystal Structure Database (ICSD) or on the AFLOW Library of Crystallographic Prototypes. See also Crystal structure Bravais lattice Strukturbericht designation References External links Further reading United States Naval Research Laboratory - Pearson symbol (Examples and pictures) Crystallography
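A small sketch of how the notation described above can be unpacked programmatically; the mapping tables and function name below are illustrative conveniences for the common letter codes, not part of any standard library.

```python
import re

# Letter codes follow the construction described above; A, B and C are the
# older synonyms for S (one-face centring).
CRYSTAL_FAMILY = {"a": "triclinic (anorthic)", "m": "monoclinic", "o": "orthorhombic",
                  "t": "tetragonal", "h": "hexagonal", "c": "cubic"}
LATTICE_TYPE = {"P": "primitive", "S": "side/base-face centred", "A": "A-centred",
                "B": "B-centred", "C": "C-centred", "I": "body centred",
                "F": "all-face centred", "R": "rhombohedral"}

def parse_pearson(symbol: str):
    """Split a Pearson symbol such as 'cF8' into (crystal family, lattice type, atom count)."""
    m = re.fullmatch(r"([amothc])([PSABCIFR])([0-9]+)", symbol)
    if m is None:
        raise ValueError(f"not a Pearson symbol: {symbol!r}")
    family, lattice, atoms = m.groups()
    return CRYSTAL_FAMILY[family], LATTICE_TYPE[lattice], int(atoms)

print(parse_pearson("cF8"))   # ('cubic', 'all-face centred', 8)
print(parse_pearson("tP6"))   # ('tetragonal', 'primitive', 6)
```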
Pearson symbol
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
600
[ "Crystallography", "Condensed matter physics", "Materials science" ]
18,383,939
https://en.wikipedia.org/wiki/X-ray%20Raman%20scattering
X-ray Raman scattering (XRS) is non-resonant inelastic scattering of X-rays from core electrons. It is analogous to vibrational Raman scattering, which is a widely used tool in optical spectroscopy, with the difference being that the wavelengths of the exciting photons fall in the X-ray regime and the corresponding excitations are from deep core electrons. XRS is an element-specific spectroscopic tool for studying the electronic structure of matter. In particular, it probes the excited-state density of states (DOS) of an atomic species in a sample. Description XRS is an inelastic X-ray scattering process, in which a high-energy X-ray photon gives energy to a core electron, exciting it to an unoccupied state. The process is in principle analogous to X-ray absorption (XAS), but the energy transfer plays the role of the X-ray photon energy absorbed in X-ray absorption, exactly as, in optical Raman scattering, low-energy vibrational excitations can be observed by studying the spectrum of light scattered from a molecule. Because the energy (and therefore wavelength) of the probing X-ray can be chosen freely and is usually in the hard X-ray regime, certain constraints of soft X-rays in the studies of electronic structure of the material are overcome. For example, soft X-ray studies may be surface sensitive and they require a vacuum environment. This makes studies of many substances, such as numerous liquids, impossible using soft X-ray absorption. One of the most notable applications in which X-ray Raman scattering is superior to soft X-ray absorption is the study of soft X-ray absorption edges at high pressure. Whereas high-energy X-rays may pass through a high-pressure apparatus like a diamond anvil cell and reach the sample inside the cell, soft X-rays would be absorbed by the cell itself. History In his report of the discovery of a new type of scattering, Sir Chandrasekhara Venkata Raman proposed that a similar effect should be found also in the X-ray regime. Around the same time, Bergen Davis and Dana Mitchell reported in 1928 on the fine-structure of the scattered radiation from graphite and noted lines that seemed to be in agreement with the carbon K shell energy. Several researchers attempted similar experiments in the late 1920s and early 1930s but the results could not always be confirmed. The first unambiguous observations of the XRS effect are often credited to K. Das Gupta (reported findings 1959) and Tadasu Suzuki (reported 1964). It was soon realized that the XRS peak in solids was broadened by solid-state effects and appeared as a band, with a shape similar to that of an XAS spectrum. The potential of the technique was limited until modern synchrotron light sources became available. This is due to the very small probability of XRS for incident photons, which requires radiation of very high intensity. Today, XRS techniques are rapidly growing in importance. They can be used to study near-edge X-ray absorption fine structure (NEXAFS or XANES) as well as extended X-ray absorption fine structure (EXAFS). Brief theory of XRS XRS belongs to the class of non-resonant inelastic X-ray scattering, which has a cross section of d2σ/(dΩ dω) = (dσ/dΩ)Th S(q, ω). Here, (dσ/dΩ)Th is the Thomson cross section, which signifies that the scattering is that of electromagnetic waves from electrons. The physics of the system under study is buried in the dynamic structure factor S(q, ω), which is a function of momentum transfer q and energy transfer ω. 
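As a rough numerical illustration of the kinematics behind q, the magnitude of the momentum transfer follows from the incident and scattered photon wave vectors and the scattering angle via q² = k1² + k2² − 2 k1 k2 cos(2θ). A minimal sketch follows; the photon energy, energy loss and angle used are arbitrary example values.

```python
import numpy as np

HC_KEV_ANGSTROM = 12.398  # approximate value of h*c in keV·Å

def momentum_transfer(e_in_kev, energy_loss_kev, two_theta_deg):
    """Magnitude of q (in 1/Å) for incident photon energy, energy transfer
    and scattering angle 2θ, using q² = k1² + k2² − 2 k1 k2 cos(2θ)."""
    k1 = 2 * np.pi * e_in_kev / HC_KEV_ANGSTROM
    k2 = 2 * np.pi * (e_in_kev - energy_loss_kev) / HC_KEV_ANGSTROM
    tt = np.deg2rad(two_theta_deg)
    return np.sqrt(k1**2 + k2**2 - 2 * k1 * k2 * np.cos(tt))

# e.g. 10 keV incident photons, 285 eV energy transfer (near the carbon K edge),
# 120° scattering angle
print(momentum_transfer(10.0, 0.285, 120.0))  # ≈ 8.7 1/Å
```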
The dynamic structure factor contains all non-resonant electronic excitations, including not only the core-electron excitations observed in XRS but also e.g. plasmons, the collective fluctuations of valence electrons, and Compton scattering. Similarity to X-ray absorption It was shown by Yukio Mizuno and Yoshihiro Ohmura in 1967 that at small momentum transfers the XRS contribution of the dynamic structure factor is proportional to the X-ray absorption spectrum. The main difference is that while the polarization vector of light couples to momentum of the absorbing electron in XAS, in XRS the momentum of the incident photon couples to the charge of the electron. Because of this, the momentum transfer of XRS plays the role of photon polarization of XAS. See also X-ray scattering techniques Resonant inelastic X-ray scattering (RIXS) References X-ray scattering X-ray spectroscopy Raman scattering
X-ray Raman scattering
[ "Physics", "Chemistry" ]
938
[ "Spectrum (physical sciences)", "X-ray scattering", "Scattering", "X-ray spectroscopy", "Spectroscopy" ]
18,384,964
https://en.wikipedia.org/wiki/Full%20pond
Full pond is an American phrase used to describe the water level of a lake, reservoir or other body of fresh water when the level is just below the spillway, or is otherwise at a maximum, sustainable and safe level. Technically, a body of water can have a water level higher than full pond when the inflow of water into the body greatly exceeds the outflow (such as during a heavy rain event), even if the body of water is not at full flood level. Most lakes and reservoirs can be drawn down even when the water level is already significantly below full pond. This is done during periods of high (or potentially high) inflow, both to prevent flooding and to keep water from breaching containment when the body of water is expected to exceed full pond soon. See also High water mark Hydroelectric power Hydrology
Full pond
[ "Chemistry", "Engineering", "Environmental_science" ]
174
[ "Hydrology", "Environmental engineering" ]
18,388,258
https://en.wikipedia.org/wiki/Tho-Radia
Tho-Radia was a French pharmaceutical company making cosmetics between 1932 and 1968. Tho-Radia-branded creams, toothpastes and soaps were notable for containing radium and thorium until 1937, as a scheme to exploit popular interest in radium after it was discovered by Pierre and Marie Curie, in a fad of radioactive quackery. Foundation So-called "microcurietherapy" In the early 1910s, French pharmacist Alexandre Jaboin postulated the principles of "microcurietherapy", inspired by the success of curietherapy in treating certain cancers: he assumed that very small doses of radium would stimulate living cells and increase their energy. These notions were not scientifically demonstrated, but they triggered a fad for radium-laden medicine and cosmetics. Several brands started exploiting the market in the course of the 1910s, notably Activa and Radior. Dr Alfred Curie's formula In the early 1920s, pharmacist Alexis Moussalli joined the Millot pharmaceutical laboratories in Paris. Leveraging his expertise in rare-earth elements, he invented a beauty cream laden with thorium chloride and radium bromide. In order to start his own brand and as a marketing device, he associated with Alfred Curie, a medical doctor who shared the surname of Pierre and Marie Curie but had no connection to them. Pierre and Marie Curie apparently considered legal action against the company. Alfred Curie was to register the Tho-Radia brand on 29 November 1932 and approved the mention "after Dr Alfred Curie's formula" on the packaging and advertisements. The Tho-Radia company In order to launch his company, Alexis Moussalli also associated with Secor (Société d'exportation, commission, représentation), a French-American corporation, which was to distribute Tho-Radia products on the market. The brand was officially launched in 1932 for Paris and in 1933 for the rest of France. Tho-Radia creams attracted attention through their recognisable advertising, designed by publicist Tony Burnand, depicting a young, blond woman lit from below by visible rays. This image, which became associated with the brand in the public consciousness, would serve into the 1950s. The success of Tho-Radia creams allowed Alexis Moussalli to start selling powder, toothpaste and soap, although the latter two were sold as containing only thorium. The first products by Tho-Radia did actually contain radium, as a July 1932 analysis by the Colombes scientific research laboratory certified that 100 grams of cream contained "0,223 microgram of radium bromide". From 1937, regulation on radioactive materials changed, limiting their usage to medical prescription and mandating a red label with the mention "Poison" or "Toxic" for products intended for internal use. Tho-Radia then changed its marketing, and on 23 April 1937, SECOR registered the Tho-Radia brand again, this time leaving out any mention of radium and of Alfred Curie, keeping only the name of the now successful line of products. Relocation to Vichy The Second World War forced the company to relocate from Paris to Vichy, where several other pharmaceutical companies were already installed. The Vichy Regime took an interest in the company, and attempted to despoil Jewish stakeholders. However, two of the main stakeholders were from Switzerland, and Swiss ambassador Walter Stucki managed to first delay the ploy, and eventually to derail it entirely. 
Business slowed down for Tho-Radia during the war, but from 1948 it gathered momentum again, as Alexis Moussalli and chemist Pierre Corniou developed further products such as skincare beauty milk, perfume and lipstick. At its zenith, the company had between 80 and 90 employees. Decline After Alexis Moussalli died in 1955, his heirs dismissed SECOR administrators, but the company faced increasing competition and lost market shares. In 1962, the company was sold to Lafarge laboratories, which were in turn purchased by Sanofi in 1976. Lafarge closed the Vichy factory in 1965 and relocated to Châteauroux. With declining sales, the Tho-Radia brand was finally abandoned in 1968. References Companies disestablished in 1968 Companies established in 1932 Radioactivity Pharmaceutical companies of France Radioactive quackery
Tho-Radia
[ "Physics", "Chemistry" ]
886
[ "Radioactive quackery", "Radioactivity", "Nuclear physics" ]
18,388,283
https://en.wikipedia.org/wiki/Crystal%20chemistry
Crystal chemistry is the study of the principles of chemistry behind crystals and their use in describing structure-property relations in solids, as well as the chemical properties of periodic structures. The principles that govern the assembly of crystal and glass structures are described, models of many of the technologically important crystal structures (alumina, quartz, perovskite) are studied, and the effect of crystal structure on the various fundamental mechanisms responsible for many physical properties are discussed. The objectives of the field include: identifying important raw materials and minerals as well as their names and chemical formulae. describing the crystal structure of important materials and determining their atomic details learning the systematics of crystal and glass chemistry. understanding how physical and chemical properties are related to crystal structure and microstructure. studying the engineering significance of these ideas and how they relate to foreign products: past, present, and future. Topics studied are: Chemical bonding, Electronegativity Fundamentals of crystallography: crystal systems, Miller Indices, symmetry elements, bond lengths and radii, theoretical density Crystal and glass structure prediction: Pauling's and Zachariasen’s rules Phase diagrams and crystal chemistry (including solid solutions) Imperfections (including defect chemistry and line defects) Phase transitions Structure – property relations: Neumann's law, melting point, mechanical properties (hardness, slip, cleavage, elastic moduli), wetting, thermal properties (thermal expansion, specific heat, thermal conductivity), diffusion, ionic conductivity, refractive index, absorption, color, Dielectrics and Ferroelectrics, and Magnetism Crystal structures of representative metals, semiconductors, polymers, and ceramics References Chemistry Crystallography
Crystal chemistry
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
343
[ "Crystallography", "Condensed matter physics", "Materials science" ]
6,075,294
https://en.wikipedia.org/wiki/Sulfonyl%20halide
In chemistry, a sulfonyl halide consists of a sulfonyl (−SO2−) group singly bonded to a halogen atom. They have the general formula RSO2X, where X is a halogen. The stability of sulfonyl halides decreases in the order fluorides > chlorides > bromides > iodides, all four types being well known. The sulfonyl chlorides and fluorides are of dominant importance in this series. Sulfonyl halides have tetrahedral sulfur centres attached to two oxygen atoms, an organic radical, and a halide. In a representative example, methanesulfonyl chloride, the S=O, S−C, and S−Cl bond distances are respectively 142.4, 176.3, and 204.6 pm. Sulfonyl chlorides Sulfonic acid chlorides, or sulfonyl chlorides, are sulfonyl halides with the general formula RSO2Cl. Production Arylsulfonyl chlorides are made industrially in a two-step, one-pot reaction from an arene (in this case, benzene) and chlorosulfuric acid: C6H6 + ClSO3H → C6H5SO3H + HCl, followed by C6H5SO3H + ClSO3H → C6H5SO2Cl + H2SO4. The intermediate benzenesulfonic acid can be chlorinated with thionyl chloride as well. Benzenesulfonyl chloride, the most important sulfonyl halide, can also be produced by treating sodium benzenesulfonate with phosphorus pentachloride: C6H5SO3Na + PCl5 → C6H5SO2Cl + POCl3 + NaCl. Benzenediazonium chloride reacts with sulfur dioxide and copper(I) chloride to give the sulfonyl chloride: C6H5N2Cl + SO2 → C6H5SO2Cl + N2. For alkylsulfonyl chlorides, one synthetic procedure is the Reed reaction: RH + SO2 + Cl2 → RSO2Cl + HCl. Reactions Sulfonyl chlorides react with water to give the corresponding sulfonic acid: RSO2Cl + H2O → RSO3H + HCl. These compounds react readily with many other nucleophiles as well, most notably alcohols and amines (see Hinsberg reaction). If the nucleophile is an alcohol, the product is a sulfonate ester; if it is an amine, the product is a sulfonamide: RSO2Cl + R′OH → RSO2OR′ + HCl and RSO2Cl + R′2NH → RSO2NR′2 + HCl. However, sulfonyl chlorides also react frequently as a source of RSO2− and Cl+. For example, benzenesulfonyl chloride chlorinates ketene acetals and mesyl chloride Friedel-Crafts–chlorinates para-xylene. Using sodium sulfite as the nucleophilic reagent, p-toluenesulfonyl chloride is converted to its sulfinate salt, CH3C6H4SO2Na. Chlorosulfonated alkanes are susceptible to crosslinking via reactions with various nucleophiles. Sulfonyl chlorides readily undergo Friedel–Crafts reactions with arenes giving sulfones, for example: C6H5SO2Cl + C6H6 → (C6H5)2SO2 + HCl. A readily available arylsulfonyl chloride source is tosyl chloride. The desulfonation of arylsulfonyl chlorides provides a route to aryl chlorides: ArSO2Cl → ArCl + SO2. 1,2,4-Trichlorobenzene is made industrially in this way. Treatment of alkanesulfonyl chlorides having α-hydrogens with amine bases can give sulfenes, highly unstable species that can be trapped: CH3SO2Cl + Et3N → CH2=SO2 + Et3NH+Cl−. Reduction with tetrathiotungstate ions (WS42−) induces dimerization to the disulfide. Common sulfonyl chlorides Chlorosulfonated polyethylene (CSPE) is produced industrially by chlorosulfonation of polyethylene. CSPE is noted for its toughness, hence its use for roofing shingles. An industrially important derivative is benzenesulfonyl chloride. In the laboratory, useful reagents include tosyl chloride, brosyl chloride, nosyl chloride and mesyl chloride. Sulfonyl fluorides Sulfonyl fluorides have the general formula RSO2F. They can be produced by treating sulfonic acids with sulfur tetrafluoride: RSO3H + SF4 → RSO2F + SOF2 + HF. Perfluorooctanesulfonyl derivatives, such as PFOS, are produced from their sulfonyl fluoride, which is produced by electrofluorination. In molecular biology, sulfonyl fluorides are used to label proteins. They specifically react with serine, threonine, tyrosine, lysine, cysteine, and histidine residues. The fluorides are more resistant than the corresponding chlorides and are therefore better suited to this task. 
Some sulfonyl fluorides can also be used as deoxyfluorinating reagents, such as 2-pyridinesulfonyl fluoride (PyFluor) and N-tosyl-4-chlorobenzenesulfonimidoyl fluoride (SulfoxFluor). Sulfonyl bromides Sulfonyl bromides have the general formula RSO2Br. In contrast to sulfonyl chlorides, sulfonyl bromides readily undergo light-induced homolysis affording sulfonyl radicals, which can add to alkenes, as illustrated by the use of bromomethanesulfonyl bromide, BrCH2SO2Br in Ramberg–Bäcklund reaction syntheses. Sulfonyl iodides Sulfonyl iodides, having the general formula RSO2I, are quite light-sensitive. Methanesulfonyl iodide evolves iodine in vacuum and branched-alkyl sulfonyl iodides are worse. Perfluoroalkanesulfonyl iodides, prepared by reaction between silver perfluoroalkanesulfinates and iodine in dichloromethane at −30 °C, react with alkenes to form the normal adducts, RFSO2CH2CHIR and the adducts resulting from loss of SO2, RFCH2CHIR. Arenesulfonyl iodides, prepared from reaction of arenesulfinates or arenehydrazides with iodine, are much more stable and can initiate the synthesis of poly(methyl methacrylate) containing C–I, C–Br and C–Cl chain ends. Their reduction with silver gives the disulfone: 2 ArSO2I + 2Ag → (ArSO2)2 + 2 AgI In popular culture In the episode "Encyclopedia Galactica" of his TV series Cosmos: A Personal Voyage, Carl Sagan speculates that some intelligent extraterrestrial beings might have a genetic code based on polyaromatic sulfonyl halides instead of DNA. References Functional groups
Sulfonyl halide
[ "Chemistry" ]
1,393
[ "Functional groups" ]
6,078,504
https://en.wikipedia.org/wiki/Kharitonov%27s%20theorem
Kharitonov's theorem is a result used in control theory to assess the stability of a dynamical system when the physical parameters of the system are not known precisely. When the coefficients of the characteristic polynomial are known, the Routh–Hurwitz stability criterion can be used to check if the system is stable (i.e. if all roots have negative real parts). Kharitonov's theorem can be used in the case where the coefficients are only known to be within specified ranges. It provides a test of stability for a so-called interval polynomial, while Routh–Hurwitz is concerned with an ordinary polynomial. Definition An interval polynomial is the family of all polynomials p(s) = a0 + a1s + a2s^2 + ... + ans^n where each coefficient ai can take any value in a specified interval li ≤ ai ≤ ui. It is also assumed that the leading coefficient cannot be zero: 0 ∉ [ln, un]. Theorem An interval polynomial is stable (i.e. all members of the family are stable) if and only if the four so-called Kharitonov polynomials K1(s) = l0 + l1s + u2s^2 + u3s^3 + l4s^4 + l5s^5 + ..., K2(s) = u0 + u1s + l2s^2 + l3s^3 + u4s^4 + u5s^5 + ..., K3(s) = l0 + u1s + u2s^2 + l3s^3 + l4s^4 + u5s^5 + ..., K4(s) = u0 + l1s + l2s^2 + u3s^3 + u4s^4 + l5s^5 + ... are stable. What is somewhat surprising about Kharitonov's result is that although in principle we are testing an infinite number of polynomials for stability, in fact we need to test only four. This we can do using Routh–Hurwitz or any other method. So it only takes four times more work to be informed about the stability of an interval polynomial than it takes to test one ordinary polynomial for stability. Kharitonov's theorem is useful in the field of robust control, which seeks to design systems that will work well despite uncertainties in component behavior due to measurement errors, changes in operating conditions, equipment wear and so on. References V. L. Kharitonov, "Asymptotic stability of an equilibrium position of a family of systems of differential equations", Differentsialnye uravneniya, 14 (1978), 2086-2088. Academic home page of Prof. V. L. Kharitonov (archived) Control theory Theorems about polynomials Theorems in dynamical systems Circuit theorems
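A minimal numerical sketch of the test, assuming illustrative coefficient bounds; it forms the four Kharitonov polynomials and checks each for Hurwitz stability by computing roots, used here as a simple stand-in for a Routh–Hurwitz table.

```python
import numpy as np

def kharitonov_polys(lower, upper):
    """Build the four Kharitonov polynomials of an interval polynomial.

    lower[i], upper[i] bound the coefficient of s**i (i = 0..n).
    Returns four coefficient lists, each ordered from s**0 upward.
    """
    n = len(lower)
    # For each polynomial, take the lower (0) or upper (1) bound at position i,
    # following the period-4 patterns of the four Kharitonov polynomials.
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    polys = []
    for pat in patterns:
        coeffs = [lower[i] if pat[i % 4] == 0 else upper[i] for i in range(n)]
        polys.append(coeffs)
    return polys

def is_hurwitz(coeffs_low_to_high):
    """Numerically check that all roots lie strictly in the left half-plane."""
    roots = np.roots(coeffs_low_to_high[::-1])  # np.roots expects highest power first
    return bool(np.all(roots.real < 0))

# Example: s**3 + a2*s**2 + a1*s + a0 with uncertain lower-order coefficients
lower = [0.5, 1.0, 2.0, 1.0]   # bounds for s**0, s**1, s**2, s**3
upper = [1.5, 2.0, 3.0, 1.0]
stable = all(is_hurwitz(p) for p in kharitonov_polys(lower, upper))
print("interval polynomial stable:", stable)
```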
Kharitonov's theorem
[ "Physics", "Mathematics" ]
414
[ "Theorems in dynamical systems", "Mathematical theorems", "Equations of physics", "Theorems in algebra", "Applied mathematics", "Control theory", "Circuit theorems", "Theorems about polynomials", "Dynamical systems", "Mathematical problems", "Physics theorems" ]
6,079,068
https://en.wikipedia.org/wiki/Vapour%20pressure%20of%20water
The vapor pressure of water is the pressure exerted by molecules of water vapor in gaseous form (whether pure or in a mixture with other gases such as air). The saturation vapor pressure is the pressure at which water vapor is in thermodynamic equilibrium with its condensed state. At pressures higher than saturation vapor pressure, water will condense, while at lower pressures it will evaporate or sublimate. The saturation vapor pressure of water increases with increasing temperature and can be determined with the Clausius–Clapeyron relation. The boiling point of water is the temperature at which the saturated vapor pressure equals the ambient pressure. Water supercooled below its normal freezing point has a higher vapor pressure than that of ice at the same temperature and is, thus, unstable. Calculations of the (saturation) vapor pressure of water are commonly used in meteorology. The temperature-vapor pressure relation inversely describes the relation between the boiling point of water and the pressure. This is relevant to both pressure cooking and cooking at high altitudes. An understanding of vapor pressure is also relevant in explaining high altitude breathing and cavitation. Approximation formulas There are many published approximations for calculating saturated vapor pressure over water and over ice. Some of these, in approximate order of increasing accuracy, are a simple unattributed formula, the Antoine equation, the Magnus formula, the Tetens equation, the Buck equation and the Goff–Gratch equation. Accuracy of different formulations Here is a comparison of the accuracies of these different explicit formulations, showing saturation vapor pressures for liquid water in kPa, calculated at six temperatures with their percentage error from the table values of Lide (2005):
{| class="wikitable"
|- align="center"
! (°C) !! (Lide Table) !! (Eq 1) !! (Antoine) !! (Magnus) !! (Tetens) !! (Buck) !! (Goff-Gratch)
|- align="center"
| 0 ||0.6113||0.6593 (+7.85%)||0.6056 (-0.93%)||0.6109 (-0.06%)||0.6108 (-0.09%)||0.6112 (-0.01%)||0.6089 (-0.40%)
|- align="center"
| 20 ||2.3388||2.3755 (+1.57%) ||2.3296 (-0.39%) ||2.3334 (-0.23%)||2.3382 (-0.03%)||2.3383 (-0.02%)||2.3355 (-0.14%)
|- align="center"
| 35 ||5.6267||5.5696 (-1.01%) ||5.6090 (-0.31%) ||5.6176 (-0.16%)||5.6225 (-0.07%)||5.6268 (+0.00%)||5.6221 (-0.08%)
|- align="center"
| 50 ||12.344||12.065 (-2.26%) ||12.306 (-0.31%) ||12.361 (+0.13%)||12.336 (-0.06%)||12.349 (+0.04%)||12.338 (-0.05%)
|- align="center"
| 75 ||38.563||37.738 (-2.14%) ||38.463 (-0.26%) ||39.000 (+1.13%)||38.646 (+0.21%)||38.595 (+0.08%)||38.555 (-0.02%)
|- align="center"
| 100 ||101.32||101.31 (-0.01%) ||101.34 (+0.02%) ||104.077 (+2.72%)||102.21 (+0.88%)||101.31 (-0.01%)||101.32 (0.00%)
|}
A more detailed discussion of accuracy and considerations of the inaccuracy in temperature measurements is presented in Alduchov and Eskridge (1996). The analysis here shows the simple unattributed formula and the Antoine equation are reasonably accurate at 100 °C, but quite poor for lower temperatures above freezing. Tetens is much more accurate over the range from 0 to 50 °C and very competitive at 75 °C, but Antoine's is superior at 75 °C and above. The unattributed formula must have zero error at around 26 °C, but is of very poor accuracy outside a narrow range. Tetens' equations are generally much more accurate and arguably more straightforward for use at everyday temperatures (e.g., in meteorology). As expected, Buck's equation for temperatures above 0 °C is significantly more accurate than Tetens, and its superiority increases markedly above 50 °C, though it is more complicated to use. 
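As an illustration of how two of these closed-form approximations are evaluated in practice, here is a minimal sketch comparing the Tetens and Buck expressions against the Lide table values above; the numerical constants are the commonly quoted ones for temperatures in °C and results in kPa, and should be treated as assumptions rather than authoritative values.

```python
import math

# Saturation vapor pressure over liquid water, in kPa, for temperature t_c in °C.
def tetens(t_c):
    return 0.61078 * math.exp(17.27 * t_c / (t_c + 237.3))

def buck(t_c):
    return 0.61121 * math.exp((18.678 - t_c / 234.5) * (t_c / (257.14 + t_c)))

# Compare against selected Lide (2005) table values from the table above.
for t, lide in [(0, 0.6113), (20, 2.3388), (50, 12.344), (100, 101.32)]:
    print(f"{t:3d} °C  Tetens {tetens(t):8.4f}  Buck {buck(t):8.4f}  table {lide}")
```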
The Buck equation is even superior to the more complex Goff-Gratch equation over the range needed for practical meteorology. Numerical approximations For serious computation, Lowe (1977) developed two pairs of equations for temperatures above and below freezing, with different levels of accuracy. They are all very accurate (compared to Clausius-Clapeyron and the Goff-Gratch) but use nested polynomials for very efficient computation. However, there are more recent reviews of possibly superior formulations, notably Wexler (1976, 1977), reported by Flatau et al. (1992). Examples of modern use of these formulae can additionally be found in NASA's GISS Model-E and Seinfeld and Pandis (2006). The former is an extremely simple Antoine equation, while the latter is a polynomial. In 2018 a new physics-inspired approximation formula was devised and tested by Huang who also reviews other recent attempts. Graphical pressure dependency on temperature See also Dew point Gas laws Lee–Kesler method Molar mass References Further reading External links Thermodynamic properties Atmospheric thermodynamics
Vapour pressure of water
[ "Physics", "Chemistry", "Mathematics" ]
1,352
[ "Thermodynamic properties", "Quantity", "Thermodynamics", "Physical quantities" ]
1,808,258
https://en.wikipedia.org/wiki/Photosynthetically%20active%20radiation
Photosynthetically active radiation (PAR) designates the spectral range (wave band) of solar radiation from 400 to 700 nanometers that photosynthetic organisms are able to use in the process of photosynthesis. This spectral region corresponds more or less with the range of light visible to the human eye. Photons at shorter wavelengths tend to be so energetic that they can be damaging to cells and tissues, but are mostly filtered out by the ozone layer in the stratosphere. Photons at longer wavelengths do not carry enough energy to allow photosynthesis to take place. Other living organisms, such as cyanobacteria, purple bacteria, and heliobacteria, can exploit solar light in slightly extended spectral regions, such as the near-infrared. These bacteria live in environments such as the bottom of stagnant ponds, sediment and ocean depths. Because of their pigments, they form colorful mats of green, red and purple. Chlorophyll, the most abundant plant pigment, is most efficient in capturing red and blue light. Accessory pigments such as carotenes and xanthophylls harvest some green light and pass it on to the photosynthetic process, but enough of the green wavelengths are reflected to give leaves their characteristic color. An exception to the predominance of chlorophyll is autumn, when chlorophyll is degraded (because it contains N and Mg) but the accessory pigments are not (because they only contain C, H and O) and remain in the leaf producing red, yellow and orange leaves. In land plants, leaves absorb mostly red and blue light in the first layer of photosynthetic cells because of chlorophyll absorbance. Green light, however, penetrates deeper into the leaf interior and can drive photosynthesis more efficiently than red light. Because green and yellow wavelengths can transmit through chlorophyll and the entire leaf itself, they play a crucial role in growth beneath the plant canopy. PAR measurement is used in agriculture, forestry and oceanography. One of the requirements for productive farmland is adequate PAR, so PAR is used to evaluate agricultural investment potential. PAR sensors stationed at various levels of the forest canopy measure the pattern of PAR availability and utilization. Photosynthetic rate and related parameters can be measured non-destructively using a photosynthesis system, and these instruments measure PAR and sometimes control PAR at set intensities. PAR measurements are also used to calculate the euphotic depth in the ocean. In these contexts, the reason PAR is preferred over other lighting metrics such as luminous flux and illuminance is that these measures are based on human perception of brightness, which is strongly green biased and does not accurately describe the quantity of light usable for photosynthesis. Units When measuring the irradiance of PAR, values are expressed using units of energy (W/m2), which is relevant in energy-balance considerations for photosynthetic organisms. However, photosynthesis is a quantum process and the chemical reactions of photosynthesis are more dependent on the number of photons than the energy contained in the photons. Therefore, plant biologists often quantify PAR using the number of photons in the 400-700 nm range received by a surface for a specified amount of time, or the Photosynthetic Photon Flux Density (PPFD). Values of PPFD are normally expressed using units of mol⋅m−2⋅s−1. 
In relation to plant growth and morphology, it is better to characterise the light availability for plants by means of the Daily Light Integral (DLI), which is the daily flux of photons per ground area, and includes both diurnal variation as well as variation in day length. PPFD used to sometimes be expressed using einstein units, i.e., μE⋅m−2⋅s−1, although this usage is nonstandard and is no longer used. Light fixture efficiency Yield photon flux There are two common measures of photosynthetically active radiation: photosynthetic photon flux (PPF) and yield photon flux (YPF). PPF values all photons from 400 to 700 nm equally, while YPF weights photons in the range from 360 to 760 nm based on a plant's photosynthetic response. PAR as described with PPF does not distinguish between different wavelengths between 400 and 700 nm, and assumes that wavelengths outside this range have zero photosynthetic action. If the exact spectrum of the light is known, the photosynthetic photon flux density (PPFD) values in μmol⋅s−1⋅m−2) can be modified by applying different weighting factors to different wavelengths. This results in a quantity called the yield photon flux (YPF). The red curve in the graph shows that photons around 610 nm (orange-red) have the highest amount of photosynthesis per photon. However, because short-wavelength photons carry more energy per photon, the maximum amount of photosynthesis per incident unit of energy is at a longer wavelength, around 650 nm (deep red). It has been noted that there is considerable misunderstanding over the effect of light quality on plant growth. Many manufacturers claim significantly increased plant growth due to light quality (high YPF). The YPF curve indicates that orange and red photons between 600 and 630 nm can result in 20 to 30% more photosynthesis than blue or cyan photons between 400 and 540 nm. But the YPF curve was developed from short-term measurements made on single leaves in low light. More recent longer-term studies with whole plants in higher light indicate that light quality may have a smaller effect on plant growth rate than light quantity. Blue light, while not delivering as many photons per joule, encourages leaf growth and affects other outcomes. The conversion between energy-based PAR and photon-based PAR depends on the spectrum of the light source (see Photosynthetic efficiency). The following table shows the conversion factors from watts for black-body spectra that are truncated to the range 400–700 nm. It also shows the luminous efficacy for these light sources and the fraction of a real black-body radiator that is emitted as PAR. For example, a light source of 1000 lm at a color temperature of 5800 K would emit approximately 1000/265 = 3.8 W of PAR, which is equivalent to 3.8 × 4.56 = 17.3 μmol/s. For a black-body light source at 5800 K, such as the sun is approximately, a fraction 0.368 of its total emitted radiation is emitted as PAR. For artificial light sources, that usually do not have a black-body spectrum, these conversion factors are only approximate. The quantities in the table are calculated as where is the black-body spectrum according to Planck's law, is the standard luminosity function, represent the wavelength range (400–700 nm) of PAR, and is the Avogadro constant. Second law PAR efficiency Besides the amount of radiation reaching a plant in the PAR region of the spectrum, it is also important to consider the quality of such radiation. 
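Returning to the photometric-to-quantum conversion worked above (a 1000 lm source at 5800 K giving roughly 3.8 W of PAR and about 17 μmol/s), the arithmetic can be wrapped in a small helper. This is a sketch only: the factors 265 lm per W of PAR and 4.56 μmol·s−1 per W are the 5800 K black-body values quoted in the text and would differ for other spectra.

```python
def lumens_to_par(lumens, lm_per_par_watt=265.0, umol_per_joule=4.56):
    """Convert luminous flux to PAR power (W) and photon flux (umol/s).

    Default factors are the 5800 K black-body values quoted in the text;
    they are not valid for arbitrary (e.g. LED or discharge-lamp) spectra.
    """
    par_watts = lumens / lm_per_par_watt     # lm -> W of 400-700 nm radiation
    ppf_umol_s = par_watts * umol_per_joule  # W -> umol of photons per second
    return par_watts, ppf_umol_s

watts, ppf = lumens_to_par(1000.0)
print(f"{watts:.1f} W PAR, {ppf:.1f} umol/s")  # ~3.8 W PAR, ~17.2 umol/s
```

For a real lamp spectrum the two factors would have to be recomputed from the spectral power distribution, as discussed above, rather than reused from the black-body case.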
Radiation reaching a plant contains entropy as well as energy, and combining those two concepts the exergy can be determined. This sort of analysis is known as exergy analysis or second law analysis, and the exergy represents a measure of the useful work, i.e., the useful part of radiation which can be transformed into other forms of energy. The spectral distribution of the exergy of radiation is defined as: One of the advantages of working with the exergy is that it depends not only on the temperature of the emitter (the Sun), , but also on the temperature of the receiving body (the plant), , i.e., it includes the fact that the plant is emitting radiation. Naming and , the exergy emissive power of radiation in a region is determined as: Where is a special function called the polylogarithm. By definition, the exergy obtained by the receiving body is always lower than the energy radiated by the emitting blackbody, as a consequence of the entropy content in radiation. Thus, as a consequence of the entropy content, not all the radiation reaching the Earth's surface is "useful" to produce work. Therefore, the efficiency of a process involving radiation should be measured against its exergy, not its energy. Using the expression above, the optimal efficiency or second law efficiency for the conversion of radiation to work in the PAR region (from 400 nm to 700 nm), for a blackbody at = 5800 K and an organism at = 300 K is determined as: about 8.3% lower than the value considered until now, as a direct consequence of the fact that the organisms which are using solar radiation are also emitting radiation as a consequence of their own temperature. Therefore, the conversion factor of the organism will be different depending on its temperature, and the exergy concept is more suitable than the energy one. Measurement Researchers at Utah State University compared measurements for PPF and YPF using different types of equipment. They measured the PPF and YPF of seven common radiation sources with a spectroradiometer, then compared with measurements from six quantum sensors designed to measure PPF, and three quantum sensors designed to measure YPF. They found that the PPF and YPF sensors were the least accurate for narrow-band sources (narrow spectrum of light) and most accurate for broad-band sources (fuller spectra of light). They found that PPF sensors were significantly more accurate under metal halide, low-pressure sodium and high-pressure sodium lamps than YPF sensors (>9% difference). Both YPF and PPF sensors were very inaccurate (>18% error) when used to measure light from red-light-emitting diodes. Similar measurement Photobiologically Active Radiation (PBAR) Photobiologically Active Radiation (PBAR) is a range of light energy beyond and including PAR. Photobiological Photon Flux (PBF) is the metric used to measure PBAR. See also Action spectrum Daily light integral Electromagnetic absorption by water References Gates, David M. (1980). Biophysical Ecology, Springer-Verlag, New York, 611 p. McCree, Keith J. (1981). "Photosynthetically active radiation". In: Encyclopedia of Plant Physiology, vol. 12A. Springer-Verlag, Berlin, pp. 41–55. External links The Photosynthetic Process Comparison of Quantum (PAR) Sensors with Different Spectral Sensitivities Photosynthesis
Photosynthetically active radiation
[ "Chemistry", "Biology" ]
2,189
[ "Biochemistry", "Photosynthesis" ]
1,809,324
https://en.wikipedia.org/wiki/Pycnocline
A pycnocline is the cline or layer where the density gradient (∂ρ/∂z) is greatest within a body of water. An ocean current is generated by forces such as breaking waves, temperature and salinity differences, wind, the Coriolis effect, and tides caused by the gravitational pull of celestial bodies. In addition, the physical properties in a pycnocline driven by density gradients also affect the flows and vertical profiles in the ocean. These changes can be connected to the transport of heat, salt, and nutrients through the ocean, and the pycnocline diffusion controls upwelling. Below the mixed layer, a stable density gradient (or pycnocline) separates the upper and lower water, hindering vertical transport. This separation has important biological effects on the ocean and its marine organisms. However, vertical mixing across a pycnocline is a regular phenomenon in oceans, and occurs through shear-produced turbulence. Such mixing plays a key role in the transport of nutrients. Physical function Turbulent mixing produced by winds and waves transfers heat downward from the surface. In low and mid-latitudes, this creates a surface mixed layer of water of almost uniform temperature which may be anywhere from a few meters to several hundred meters deep. Below this mixed layer, at depths of 200–300 m in the open ocean, the temperature begins to decrease rapidly down to about 1000 m. The water layer within which the temperature gradient is steepest is known as the permanent thermocline. The temperature difference through this layer may be as large as 20°C, depending on latitude. The permanent thermocline coincides with a change in water density between the warmer, low-density surface waters and the underlying cold dense bottom waters. The region of rapid density change is known as the pycnocline, and it acts as a barrier to vertical water circulation; thus it also affects the vertical distribution of certain chemicals which play a role in the biology of the seas. The sharp gradients in temperature and density also may act as a restriction to vertical movements of animals. Seasonality While the general structure of a pycnocline explained above holds true, pycnoclines can change based on the season. In the winter, sea surface temperatures are cooler, and waves tend to be larger, which increases the depth of the mixed layer, in some cases even down to the main thermocline/pycnocline. In the summer, warmer temperatures, melting sea and land ice, and increased sunlight cause the surface layer of the ocean to increase in temperature. This layer sits on top of the large winter mixed layer that was previously created and forms a seasonal pycnocline above the main pycnocline, with the winter mixed layer becoming a region of lower density gradient called a pycnostad. As the seasons begin to change again, a net loss of heat from the surface layer and continued wind mixing wear away the seasonal pycnocline until the next summer. Changes with Latitude While temperature and salinity both have an impact on density, one can have a greater effect than the other depending on latitudinal region. In the tropics and mid-latitudes, the surface density for all oceans follows surface temperature rather than surface salinity. At the highest latitudes, over 50°, surface density follows salinity more than temperature for all oceans because temperature consistently sits near the freezing point. In low and mid-latitudes, a permanent pycnocline exists at depths between 200–1000 m.
In some large but geographically restricted subtropical regions such as the Sargasso Sea in the Atlantic, two permanent thermoclines exist with a layer of lower vertical stratification called a thermostad separating them. This phenomenon is reflected in density due to the strong dependence of density on ocean temperature; two permanent pycnoclines are associated with the permanent thermoclines, and the density equivalent to the thermostad is called the pycnostad. In subpolar and polar regions, the surface waters are much colder year-round due to latitude and much fresher due to the melting of sea and land ice, high precipitation, and freshwater runoff, while deeper waters are fairly consistent across the globe. Due to this, there is no permanent thermocline present, but seasonal thermoclines can occur. In these areas, a permanent halocline exists, and this halocline is the main factor in determining the permanent pycnocline. Biological function Growth rate of phytoplankton is controlled by the nutrient concentration, and the regeneration of nutrients in the sea is a very important part of the interaction between higher and lower trophic levels. The separation due to the pycnocline formation prevents the supply of nutrients from the lower layer into the upper layer. Nutrient fluxes through the pycnocline are lower than at other surface layers. Microbial loop The microbial loop is a trophic pathway in the marine microbial food web. The term "microbial loop" was coined by Azam et al. (1983) to describe the role played by microbes in the marine ecosystem carbon and nutrient cycles where dissolved organic carbon (DOC) is returned to higher trophic levels via the incorporation into bacterial biomass, and also coupled with the classic food chain formed by phytoplankton-zooplankton-nekton. At the end of phytoplankton bloom, when the algae enter a senescent stage, there is an accumulation of phytodetritus and an increased release of dissolved metabolites. It is particularly at this time that the bacteria can utilize these energy sources to multiply and produce a sharp pulse (or bloom) that follows the phytoplankton bloom. The same relationship between phytoplankton and bacteria influences the vertical distribution of bacterioplankton. Maximum numbers of bacteria generally occur at the pycnocline, where phytodetritus accumulates by sinking from the overlying euphotic zone. There, decomposition by bacteria contributes to the formation of oxygen minimum layers in stable waters. Diel vertical migration One of the most characteristic behavioural features of plankton is a vertical migration that occurs with a 24-hour periodicity. This has often been referred to as diurnal or diel vertical migration. The vertical distance travelled over 24 hours varies, generally being greater among larger species and better swimmers. But even small copepods may migrate several hundred meters twice in a 24-hour period, and stronger swimmers like euphausiids and pelagic shrimp may travel 800 m or more. The depth range of migration may be inhibited by the presence of a thermocline or pycnocline. However, phytoplankton and zooplankton capable of diel vertical migration are often concentrated in the pycnocline. Furthermore, those marine organisms with swimming skills through thermocline or pycnocline may experience strong temperature and density gradients, as well as considerable pressure changes during the migration. Stability Pycnoclines become unstable when their Richardson number drops below 0.25. 
The Richardson number is a dimensionless value expressing the ratio of stratification (potential energy) to shear (kinetic energy). This ratio drops below 0.25 when the shear rate exceeds the stratification. This can produce Kelvin–Helmholtz instability, resulting in turbulence that leads to mixing. Changes in pycnocline depth or properties can be simulated with numerical models; a simple approach is to examine Ekman pumping based on an ocean general circulation model (OGCM). Types of clines Thermocline - A cline based on difference in water temperature. Chemocline - A cline based on difference in water chemistry. Halocline - A cline based on difference in water salinity. Lutocline - A cline based on difference in water turbidity. See also Isopycnal Oceanography Thin layers (oceanography) References Aquatic ecology Physical oceanography
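To make the two quantities this article leans on concrete — the depth of maximum density gradient that defines the pycnocline, and the Richardson number used in the stability criterion — here is a minimal numerical sketch for a discretised density and velocity profile. The profile values are invented for illustration, and the gradient Richardson number is written in its usual form Ri = N² / (du/dz)², an assumption consistent with, though not spelled out in, the text.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2
z = np.array([0, 50, 100, 150, 200, 300, 500, 1000.0])            # depth, m (illustrative)
rho = np.array([1024.00, 1024.02, 1024.30, 1025.20, 1026.40,
                1027.00, 1027.40, 1027.70])                        # density, kg/m^3 (illustrative)
u = np.array([0.80, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, 0.0])      # horizontal speed, m/s

drho_dz = np.gradient(rho, z)   # density gradient, kg/m^4 (z positive downward)
du_dz = np.gradient(u, z)       # vertical shear, 1/s

# Pycnocline: the layer where the density gradient is greatest.
i = int(np.argmax(drho_dz))
print(f"strongest stratification near {z[i]:.0f} m (drho/dz = {drho_dz[i]:.4f} kg/m^4)")

# Gradient Richardson number; values below ~0.25 indicate shear instability.
N2 = (g / rho) * drho_dz                # buoyancy frequency squared; positive sign because z is depth
Ri = N2 / np.maximum(du_dz**2, 1e-12)   # avoid division by zero where shear vanishes
for depth, ri in zip(z, Ri):
    flag = "unstable?" if ri < 0.25 else "stable"
    print(f"{depth:6.0f} m  Ri = {ri:8.2f}  ({flag})")
```

With depth taken as positive downward, N² = (g/ρ)·dρ/dz plays the role of the stratification (potential energy) term and (du/dz)² the shear (kinetic energy) term in the ratio described above.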
Pycnocline
[ "Physics", "Biology" ]
1,700
[ "Aquatic ecology", "Ecosystems", "Applied and interdisciplinary physics", "Physical oceanography" ]
1,810,027
https://en.wikipedia.org/wiki/Twine
Twine is a strong thread, light string or cord composed of two or more thinner strands that are twisted individually, and then twisted together (plied). The strands are plied in the opposite direction to that of their twist, which adds torsional strength to the cord and keeps it from unravelling. This process is sometimes called reverse wrap. The same technique used for making twine is also used to make thread (which is thinner), yarn, and rope (which is stronger and thicker, generally with three or more strands). Natural fibres used for making twine include wool, cotton, sisal, jute, hemp, henequen, paper, and coir. A variety of synthetic fibres are also used. Twine is a popular substance used in modern-day crafting. Prehistoric The invention of twine is at least as important as the development of stone tools for early humans. Indeed, Elizabeth Wayland Barber has called the development of twine, which can be made far stronger and longer than its component fibers, "the string revolution." Twine could be used to fasten points and blades to arrows, spears, harpoons and other tools and to make snares, bags, baby slings, fishing and hunting nets and marine tackle, not to mention to secure firewood, haul goods and anchor tents and shelters. Twine is the foundation of both textile and rope making. Twine has been made of animal hair (including human hair), sinews and plant material, often from the vascular tissue of a plant (known as bast), but also bark and even seed down, e.g. milkweed. However, unlike stone or metal tools, most twine is missing from the archaeological record because it is made of perishable materials that rarely survive over time. In fact, the discovery of ancient beads and the dating of sea travel to at least 60,000 years ago suggests that the "string revolution" might have occurred much earlier than the Upper Paleolithic. Plant twine was used for hafting stone tips by about 58,000 years ago in southern Africa. Paleolithic cord remnants have been discovered in a few places: Georgia's Dzudzuana Cave (30,000 years old), Israel's Ohalo II site (19,000 years old), and France's Lascaux Cave (17,000 years old). In 2016, a carved piece of mammoth ivory with three holes, dated at 40,000 years old, was unearthed at the Hohle Fels site, famous for the discovery of both Paleolithic female figurines and flutes. It has been identified as a tool for twining rope. In the Americas, cordage has been found at the Windover Bog, in Florida, dating to 8000 years ago. A small piece of cord discovered at Abri du Maras, in south-eastern France, has been dated to around 50,000 years ago. Early depictions of twine are few, but one of the around 200 Venus figurines that have been found across Eurasia is depicted as wearing a "string skirt" (the Venus of Lespugue, dated to 25,000 years ago). Barber notes that not only is each twist in the strings carved in detail, but also "the bottom end of each string [is shown] fraying out into a mass of loose fibers (not possible for e.g. a twisted piece of gut or sinew)." Other evidence for the prehistoric use of twine is provided by impressions on metal or in pottery and other ceramic artifacts. In the Fukui cave, Japan, such impressions date to 13,000 years ago. Imprints of woven material in clay found at Dolni Vestonice I and several other sites in Moravia date to 26,000 years ago, and were found along with needles and tools that were used to sew clothing and make nets for hunting small animals and birds.
Beads, as well as shells and animal teeth with man-made holes, have also been used as indirect evidence of twining, as have net sinkers and tools with the marks of cord wear. Beads have been found with the remnants of thread still trapped inside them. Historical manufacture After the technique of making twine by hand was invented, various implements to produce thread for textile production such as spindles, spinning wheels and looms for spinning and weaving and tools for twine and rope-making were developed. Process The twining process begins with cordage, which can be any form of untwisted, twisted or braided combination of fibers. A cord is formed by the twisting of at least one ply of material or the braiding together of multiple plies. The number of plies and the type of material lends itself to the naming of the type and structure of the cord. A simple ply is one that is made from a single strand or bunch of material that is spun in the same direction whereas a compound ply is created by twisting several strands or bunches of material individually and then spinning those together in opposite directions to one another. Once twine is produced, it can be used to produce other forms of function, most commonly textiles and basketry. The spun twine is then combined using a process called twining in order to produce both types of object. The primary constituents of this twining process are known as the warp and weft or the foundation and stitch. Objects created with this method using varying techniques may also host unique structural decoration. Systematic passing of the warp can create images or patterned modifications. In accompaniment of warp modifications, dyed or naturally coloured materials may be used to accumulate patterns. Textural differences may be created in twined objects by intentional spacing of strands implemented in the weave. Lastly, other auxiliary materials can be incorporated into the object for further detail such as embroidery, feathers, appliques, etc. Classifications There are several primary means of classifying objects such as threads, textiles and baskets created with twining. The way that the weft rows are spaced can be defined as open, closed or a combination of the two. These terms identify the closeness of the weft rows to one another and variation in this intentional spacing. The way that the warp and weft are interconnected creates different compositional arrangements. These arrangements can be simple, diagonal or both. The last main categorization comes from the direction that the weft is twisted. This is denoted as S-twist and Z-twist or both. In the S-twist the strands appear to come up as they are twisted left and the Z-twist appears to come up as they are twisted to the right. Additional classifications that are typically recorded by anthropologists can include the width of the strands, the number of strands being used together to form the warp or weft, the number of warp and weft rows per unit centimeter, and the width of the gaps in the weft rows. Methods of preparation, composition, and creation are also of great importance. See also Biggest ball of twine Binder Twine Festival Hair twists International Year of Natural Fibres 2009 Rope String (disambiguation) Timeline of clothing and textiles technology References Fasteners Fibers Ropework
Twine
[ "Engineering" ]
1,482
[ "Construction", "Fasteners" ]
1,810,201
https://en.wikipedia.org/wiki/Biclustering
Biclustering, block clustering, co-clustering or two-mode clustering is a data mining technique which allows simultaneous clustering of the rows and columns of a matrix. The term was first introduced by Boris Mirkin to name a technique introduced many years earlier, in 1972, by John A. Hartigan. Given a set of m samples, each represented by an n-dimensional feature vector, the entire dataset can be represented as m rows in n columns (i.e., an m × n matrix). A Biclustering algorithm generates Biclusters. A Bicluster is a subset of rows which exhibit similar behavior across a subset of columns, or vice versa. Development Biclustering was originally introduced by John A. Hartigan in 1972. The term "Biclustering" was then later used and refined by Boris G. Mirkin. This approach was not generalized until 2000, when Y. Cheng and George M. Church proposed a biclustering algorithm based on the mean squared residue score (MSR) and applied it to biological gene expression data. In 2001 and 2003, I. S. Dhillon published two algorithms applying biclustering to files and words. One version was based on bipartite spectral graph partitioning. The other was based on information theory. Dhillon assumed the loss of mutual information during biclustering was equal to the Kullback–Leibler distance (KL-distance) between P and Q, where P represents the distribution of files and feature words before Biclustering, and Q is the distribution after Biclustering. The KL-distance measures the difference between two probability distributions: KL = 0 when the two distributions are the same, and KL increases as the difference increases. Thus, the aim of the algorithm was to find the minimum KL-distance between P and Q. In 2004, Arindam Banerjee used a weighted Bregman distance instead of the KL-distance to design a Biclustering algorithm that was suitable for any kind of matrix, unlike the KL-distance algorithm. To cluster more than two types of objects, in 2005, Bekkerman expanded the mutual information in Dhillon's theorem from a single pair into multiple pairs. Complexity The complexity of the Biclustering problem depends on the exact problem formulation, and particularly on the merit function used to evaluate the quality of a given Bicluster. However, the most interesting variants of this problem are NP-complete. Two cases illustrate this. In the simple case of a binary matrix A, in which every element a(i,j) is either 0 or 1, a Bicluster is equal to a biclique in the corresponding bipartite graph, and the maximum-size Bicluster is equivalent to the maximum edge biclique in that bipartite graph. In the more general case, the elements of matrix A are used to compute the quality of a given Bicluster and to solve the more restricted version of the problem. This requires either large computational effort or the use of lossy heuristics to short-circuit the calculation. Types of Biclusters Bicluster with constant values (a) When a Biclustering algorithm tries to find a constant-value Bicluster, it reorders the rows and columns of the matrix to group together similar rows and columns, eventually grouping Biclusters with similar values. This method is sufficient when the data is normalized. A perfect constant Bicluster is a matrix (I,J) in which all values a(i,j) are equal to a given constant μ. In real data, these entries a(i,j) may be represented in the form n(i,j) + μ, where n(i,j) denotes the noise. According to Hartigan's algorithm, by splitting the original data matrix into a set of Biclusters, variance is used to compute constant Biclusters.
Hence, a perfect Bicluster may be equivalently defined as a matrix with a variance of zero. In order to prevent the partitioning of the data matrix into Biclusters with only one row and one column each, Hartigan assumes that there are, for example, K Biclusters within the data matrix. When the data matrix is partitioned into K Biclusters, the algorithm ends. Bicluster with constant values on rows (b) or columns (c) Unlike constant-value Biclusters, these types of Biclusters cannot be evaluated solely based on the variance of their values. To finish the identification, the columns and the rows should be normalized first. There are, however, other algorithms without a normalization step that can find such Biclusters using different approaches. Bicluster with coherent values (d, e) For Biclusters with coherent values on rows and columns, an overall improvement over the algorithms for Biclusters with constant values on rows or on columns should be considered. Such an algorithm may contain analysis of variance between groups, using co-variance between both rows and columns. In Cheng and Church's theorem, a Bicluster is defined as a subset of rows and columns with almost the same score; the similarity score is used to measure the coherence of rows and columns. The relationship between these cluster models and other types of clustering, such as correlation clustering, is discussed in the literature. Algorithms There are many Biclustering algorithms developed for bioinformatics, including: block clustering, CTWC (Coupled Two-Way Clustering), ITWC (Interrelated Two-Way Clustering), δ-bicluster, δ-pCluster, δ-pattern, FLOC, OPC, Plaid Model, OPSMs (Order-preserving submatrices), Gibbs, SAMBA (Statistical-Algorithmic Method for Bicluster Analysis), Robust Biclustering Algorithm (RoBA), Crossing Minimization, cMonkey, PRMs, DCC, LEB (Localize and Extract Biclusters), QUBIC (QUalitative BIClustering), BCCA (Bi-Correlation Clustering Algorithm), BIMAX, ISA and FABIA (Factor Analysis for Bicluster Acquisition), runibic, and the recently proposed hybrid method EBIC (evolutionary-based Biclustering), which was shown to detect multiple patterns with very high accuracy. More recently, IMMD-CC has been proposed, developed on the basis of an iterative complexity-reduction concept; IMMD-CC is able to identify co-cluster centroids from the highly sparse transformation obtained by iterative multi-mode discretization. Biclustering algorithms have also been proposed and used in other application fields under the names co-clustering, bi-dimensional clustering, and subspace clustering. Given the known importance of discovering local patterns in time-series data, recent proposals have addressed the Biclustering problem in the specific case of time-series gene expression data. In this case, the interesting Biclusters can be restricted to those with contiguous columns. This restriction leads to a tractable problem and enables the development of efficient exhaustive enumeration algorithms such as CCC-Biclustering and e-CCC-Biclustering. The approximate patterns in CCC-Biclustering algorithms allow a given number of errors, per gene, relative to an expression profile representing the expression pattern in the Bicluster. The e-CCC-Biclustering algorithm uses approximate expressions to find and report all maximal CCC-Biclusters from a discretized matrix A using efficient string processing techniques.
These algorithms find and report all maximal Biclusters with coherent and contiguous columns showing perfect or approximate expression patterns, in time linear or polynomial in the size of the time-series gene expression matrix, by manipulating a discretized version of the original expression matrix with efficient string processing techniques based on suffix trees. These algorithms have also been applied to related problems, together with an analysis of their computational complexity. Some recent algorithms have attempted to include additional support for Biclustering rectangular matrices in the form of other datatypes, including cMonkey. There is an ongoing debate about how to judge the results of these methods, as Biclustering allows overlap between clusters and some algorithms allow the exclusion of hard-to-reconcile columns/conditions. Not all of the available algorithms are deterministic, and the analyst must pay attention to the degree to which results represent stable minima. Because this is an unsupervised classification problem, the lack of a gold standard makes it difficult to spot errors in the results. One approach is to utilize multiple Biclustering algorithms, with majority or super-majority voting amongst them to decide the best result. Another way is to analyze the quality of shifting and scaling patterns in Biclusters. Biclustering has been used in the domain of text mining (or classification), where it is popularly known as co-clustering. Text corpora are represented in vectoral form as a matrix D whose rows denote the documents and whose columns denote the words in the dictionary. Matrix elements Dij denote the occurrence of word j in document i. Co-clustering algorithms are then applied to discover blocks in D that correspond to a group of documents (rows) characterized by a group of words (columns). Co-clustering can address the high-dimensional, sparse nature of text data by clustering texts and words at the same time: when clustering texts, one needs to consider not only the information carried by individual words, but also the information carried by clusters of words, and the feature words are in turn grouped according to their similarity across the texts. This simultaneous grouping is what is called co-clustering. Co-clustering has two main advantages. First, clustering texts on the basis of word clusters greatly reduces the dimensionality of the problem and makes it more appropriate to measure distances between texts. Second, it mines more useful information, yielding corresponding information about text clusters and word clusters; this correspondence can be used to describe the types of texts and words, and the resulting word clusters can also be reused for text mining and information retrieval. Several approaches have been proposed based on the information contents of the resulting blocks: matrix-based approaches such as SVD and BVD, and graph-based approaches. Information-theoretic algorithms iteratively assign each row to a cluster of documents and each column to a cluster of words such that the mutual information is maximized. Matrix-based methods focus on the decomposition of matrices into blocks such that the error between the original matrix and the matrices regenerated from the decomposition is minimized. Graph-based methods tend to minimize the cuts between the clusters. Given two groups of documents d1 and d2, the number of cuts can be measured as the number of words that occur in documents of groups d1 and d2.
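To make the Cheng–Church notion of coherence mentioned earlier concrete, here is a minimal sketch of the mean squared residue score for a candidate Bicluster: each entry of the chosen submatrix is compared against its row mean, column mean and overall mean, and a score of zero corresponds to a perfectly coherent (additive) Bicluster. The toy matrix is invented for illustration, and this is only the scoring step, not the full node deletion/addition algorithm.

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """Cheng-Church mean squared residue H(I, J) of the submatrix A[rows][:, cols]."""
    sub = A[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)   # a_iJ
    col_means = sub.mean(axis=0, keepdims=True)   # a_Ij
    overall = sub.mean()                          # a_IJ
    residue = sub - row_means - col_means + overall
    return float((residue ** 2).mean())

# Toy expression-like matrix: rows 0-2 / columns 0-2 follow an additive pattern,
# so their residue is zero; adding an unrelated row raises the score.
A = np.array([
    [1.0, 2.0, 3.0, 9.0],
    [2.0, 3.0, 4.0, 1.0],
    [5.0, 6.0, 7.0, 0.0],
    [4.0, 0.0, 8.0, 2.0],
])

print(mean_squared_residue(A, [0, 1, 2], [0, 1, 2]))     # 0.0 -> perfectly coherent
print(mean_squared_residue(A, [0, 1, 2, 3], [0, 1, 2]))  # > 0 -> less coherent
```

The full Cheng–Church procedure greedily removes and re-adds rows and columns to drive this score below a threshold δ; the residue above is the merit function the rest of the algorithm is built around.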
More recently (Bisson and Hussain) have proposed a new approach of using the similarity between words and the similarity between documents to co-cluster the matrix. Their method (known as χ-Sim, for cross similarity) is based on finding document-document similarity and word-word similarity, and then using classical clustering methods such as hierarchical clustering. Instead of explicitly clustering rows and columns alternately, they consider higher-order occurrences of words, inherently taking into account the documents in which they occur. Thus, the similarity between two words is calculated based on the documents in which they occur and also the documents in which "similar" words occur. The idea here is that two documents about the same topic do not necessarily use the same set of words to describe it, but a subset of the words and other similar words that are characteristic of that topic. This approach of taking higher-order similarities takes the latent semantic structure of the whole corpus into consideration with the result of generating a better clustering of the documents and words. In text databases, for a document collection defined by a document by term D matrix (of size m by n, m: number of documents, n: number of terms) the cover-coefficient based clustering methodology yields the same number of clusters both for documents and terms (words) using a double-stage probability experiment. According to the cover coefficient concept number of clusters can also be roughly estimated by the following formula where t is the number of non-zero entries in D. Note that in D each row and each column must contain at least one non-zero element. In contrast to other approaches, FABIA is a multiplicative model that assumes realistic non-Gaussian signal distributions with heavy tails. FABIA utilizes well understood model selection techniques like variational approaches and applies the Bayesian framework. The generative framework allows FABIA to determine the information content of each Bicluster to separate spurious Biclusters from true Biclusters. See also Formal concept analysis Biclique Galois connection References Others N.K. Verma, S. Bajpai, A. Singh, A. Nagrare, S. Meena, Yan Cui, "A Comparison of Biclustering Algorithms" in International conference on Systems in Medicine and Biology (ICSMB 2010)in IIT Kharagpur India, pp. 90–97, Dec. 16–18. J. Gupta, S. Singh and N.K. Verma "MTBA: MATLAB Toolbox for Biclustering Analysis", IEEE Workshop on Computational Intelligence: Theories, Applications and Future Directions", IIT Kanpur India, pp. 148–152, Jul. 2013. A. Tanay. R. Sharan, and R. Shamir, "Biclustering Algorithms: A Survey", In Handbook of Computational Molecular Biology, Edited by Srinivas Aluru, Chapman (2004) Adetayo Kasim, Ziv Shkedy, Sebastian Kaiser, Sepp Hochreiter, Willem Talloen (2016), Applied Biclustering Methods for Big and High-Dimensional Data Using R, Chapman & Hall/CRC Press Orzechowski, P., Sipper, M., Huang, X., & Moore, J. H. (2018). EBIC: an evolutionary-based parallel biclustering algorithm for pattern discovery. Bioinformatics. External links FABIA: Factor Analysis for Bicluster Acquisition, an R package —software Cluster analysis Bioinformatics NP-complete problems
Biclustering
[ "Mathematics", "Engineering", "Biology" ]
3,015
[ "Biological engineering", "Computational problems", "Bioinformatics", "Mathematical problems", "NP-complete problems" ]
1,810,909
https://en.wikipedia.org/wiki/Drag-reducing%20aerospike
A drag-reducing aerospike is a device (see nose cone design) used to reduce the forebody pressure aerodynamic drag of blunt bodies at supersonic speeds. The aerospike creates a detached shock ahead of the body. Between the shock and the forebody a zone of recirculating flow occurs which acts like a more streamlined forebody profile, reducing the drag. Development This concept was used on the UGM-96 Trident I and is estimated to have increased the range by 550 km. The Trident aerospike consists of a flat circular plate mounted on an extensible boom which is deployed shortly after the missile breaks through the surface of the water after launch from the submarine. The use of the aerospike allowed a much blunter nose shape, providing increased internal volume for payload and propulsion without increasing the drag. This was required because the Trident I C-4 was fitted with a third propulsion stage to achieve the desired increase in range over the Poseidon C-3 missile it replaced. To fit within the existing submarine launch tubes the third-stage motor had to be mounted in the center of the post-boost vehicle with the reentry vehicles arranged around the motor. At the same time (mid-1970s) an aerospike was developed in KB Mashinostroyeniya (KBM) for the 9M39 surface-to-air missile of the 9K38 Igla MANPADS (in order to diminish heating of the infrared homing seeker fairing and reduce wave drag), giving the name to the whole system (igla means 'needle'). A simplified Igla-1 version with a different kind of target seeker featured a tripod instead of a 'needle' for the same purpose. Further development of this concept has resulted in the "air-spike". This is formed by concentrated energy, either from an electric arc torch or a pulsed laser, projected forwards from the body, which produces a region of low-density hot air ahead of the body. In 1995, at the 33rd Aerospace Sciences Meeting, it was reported that tests were performed with an aerospike-protected missile dome to Mach 6, obtaining quantitative surface pressure and temperature-rise data on the feasibility of using aerospikes on hypersonic missiles. Missiles with aerospikes USSR 9K38 Igla (MANPADS) US UGM-96 Trident I UGM-133 Trident II France M51 (missile) See also Index of aviation articles References External links American Institute of Aeronautics and Astronautics National Aeronautics and Space Administration Progress in Flight Physics Drag (physics) Aircraft components
Drag-reducing aerospike
[ "Chemistry" ]
526
[ "Drag (physics)", "Fluid dynamics" ]
1,811,292
https://en.wikipedia.org/wiki/Functional%20selectivity
Functional selectivity (or “agonist trafficking”, “biased agonism”, “biased signaling”, "ligand bias" and “differential engagement”) is the ligand-dependent selectivity for certain signal transduction pathways relative to a reference ligand (often the endogenous hormone or peptide) at the same receptor. Functional selectivity can be present when a receptor has several possible signal transduction pathways. To which degree each pathway is activated thus depends on which ligand binds to the receptor. Functional selectivity, or biased signaling, is most extensively characterized at G protein coupled receptors (GPCRs). A number of biased agonists, such as those at muscarinic M2 receptors tested as analgesics or antiproliferative drugs, or those at opioid receptors that mediate pain, show potential at various receptor families to increase beneficial properties while reducing side effects. For example, pre-clinical studies with G protein biased agonists at the μ-opioid receptor show equivalent efficacy for treating pain with reduced risk for addictive potential and respiratory depression. Studies within the chemokine receptor system also suggest that GPCR biased agonism is physiologically relevant. For example, a beta-arrestin biased agonist of the chemokine receptor CXCR3 induced greater chemotaxis of T cells relative to a G protein biased agonist. Functional vs. traditional selectivity Functional selectivity has been proposed to broaden conventional definitions of pharmacology. Traditional pharmacology posits that a ligand can be either classified as an agonist (full or partial), antagonist or more recently an inverse agonist through a specific receptor subtype, and that this characteristic will be consistent with all effector (second messenger) systems coupled to that receptor. While this dogma has been the backbone of ligand-receptor interactions for decades now, more recent data indicates that this classic definition of ligand-protein associations does not hold true for a number of compounds; such compounds may be termed as mixed agonist-antagonists. Functional selectivity posits that a ligand may inherently produce a mix of the classic characteristics through a single receptor isoform depending on the effector pathway coupled to that receptor. For instance, a ligand can not easily be classified as an agonist or antagonist, because it can be a little of both, depending on its preferred signal transduction pathways. Thus, such ligands must instead be classified on the basis of their individual effects in the cell, instead of being either an agonist or antagonist to a receptor. These observations were made in a number of different expression systems, and therefore functional selectivity is not just an epiphenomenon of one particular expression system. Examples One notable example of functional selectivity occurs with the 5-HT2A receptor, as well as the 5-HT2C receptor. Serotonin, the main endogenous ligand of 5-HT receptors, is a functionally selective agonist at this receptor, activating phospholipase C (which leads to inositol triphosphate accumulation), but does not activate phospholipase A2, which would result in arachidonic acid signaling. However, the other endogenous compound dimethyltryptamine activates arachidonic acid signaling at the 5-HT2A receptor, as do many exogenous hallucinogens such as DOB and lysergic acid diethylamide (LSD). Notably, LSD does not activate IP3 signaling through this receptor to any significant extent. 
(Conversely, LSD, unlike serotonin, has negligible affinity for the 5-HT2C-VGV isoform, is unable to promote calcium release, and is, thus, functionally selective at 5-HT2C.) Oligomers, specifically 5-HT2A–mGluR2 heteromers, mediate this effect. This may explain why some direct 5-HT2 receptor agonists have psychedelic effects, whereas compounds that indirectly increase serotonin signaling at the 5-HT2 receptors generally do not, for example: selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and medications that are 5-HT2A receptor agonists but do not have constitutive activity at the mGluR2 dimer, such as lisuride. Tianeptine, an atypical antidepressant, is thought to exhibit functional selectivity at the μ-opioid receptor to mediate its antidepressant effects. Oliceridine is a μ-opioid receptor agonist that has been described as functionally selective towards G protein and away from β-arrestin2 pathways. However, recent reports highlight that, rather than functional selectivity or 'G protein bias', this agonist has low intrinsic efficacy. In vivo, it has been reported to mediate pain relief without tolerance or gastrointestinal side effects. The delta opioid receptor agonists SNC80 and ARM390 demonstrate functional selectivity that is thought to be due to their differing capacity to cause receptor internalization. While SNC80 causes delta opioid receptors to internalize, ARM390 causes very little receptor internalization. Functionally, that means that the effects of SNC80 (e.g. analgesia) do not occur when a subsequent dose follows the first, whereas the effects of ARM390 persist. However, tolerance to ARM390's analgesia still occurs eventually after multiple doses, though through a mechanism that does not involve receptor internalization. Interestingly, the other effects of ARM390 (e.g. decreased anxiety) persist after tolerance to its analgesic effects has occurred. An example of functional selectivity that biases metabolism was demonstrated for the electron transfer protein cytochrome P450 reductase (POR), where binding of small-molecule ligands was shown to alter the protein conformation and its interaction with the various redox partner proteins of POR. See also Signal transduction Second messenger system References Further reading Biased ligands Neurophysiology Pharmacodynamics Signal transduction
Functional selectivity
[ "Chemistry", "Biology" ]
1,282
[ "Pharmacology", "Pharmacodynamics", "Signal transduction", "Biased ligands", "Biochemistry", "Neurochemistry" ]
1,811,328
https://en.wikipedia.org/wiki/Canadian%20Association%20of%20Petroleum%20Producers
The Canadian Association of Petroleum Producers (CAPP), with its head office in Calgary, Alberta, is a lobby group that represents the upstream Canadian oil and natural gas industry. CAPP's members produce "90% of Canada's natural gas and crude oil" and "are an important part of a national industry with revenues of about $100 billion-a-year (CAPP 2011)." History CAPP's origins can be traced back to the Alberta Oil Operators' Association, which was founded in 1927, after the discovery of the Turner Valley Oil Field. In 1947, the Alberta Petroleum Association changed its name to the Western Canadian Petroleum Association, and in 1952, the Western Canada Petroleum Association amalgamated with the Saskatchewan Operators' Association and adopted the name Canadian Petroleum Association. At a meeting on December 9, 1952, the CPA drafted a new constitution which outlined the objectives of the organization as follows: to establish better understanding between the petroleum and natural gas industry and the public; to encourage cooperation between the petroleum and natural gas industry and federal, provincial and local governments, and other authoritative bodies; to provide a forum for the discussion of matters affecting the welfare of its members; and to foster better understanding of the Association and its purposes. On June 10, 1958 the CPA opened an office in Ottawa and became "one (of) the oldest, largest and most influential lobby groups in Canada." It provided the federal government with information pertaining to the oil industry while keeping the CPA informed about political trends, government regulations and statistics. By 1965 the CPA had more than 200 members representing roughly 97 percent of all oil and gas production in Canada. In 1981, two years after the first commercial discovery at Hibernia off the coast of Newfoundland, the CPA opened an office in St. John's in cooperation with the Eastcoast Petroleum Operators' Association. In 1992, the CPA amalgamated with the Independent Petroleum Association of Canada (IPAC) to form the Canadian Association of Petroleum Producers (CAPP), and Gerry Protti was named its founding president. According to the federal lobbyist registry, from January to September 2012, the Canadian Association of Petroleum Producers had 178 contacts with federal officials to discuss issues such as pipelines, making it the lobby group with the most contacts that year. They lobbied on greenhouse gas regulations related to the Clean Air Act, the Fisheries Act, pipeline regulation and tax credits. Oil industry advocacy Canada's estimated total oil reserves including conventional oil were approximately 180 billion barrels (29 km3), behind only Saudi Arabia and Venezuela. Canada produces approximately 2.7 million barrels (430,000 m3) of crude oil a day, and 6.4 trillion cubic feet (180 km3) of natural gas per year. In 2013, an IPSOS poll showed a majority (75%) of Canadians prioritize local crude before using imported oil from foreign sources, while just over one in ten (14%) 'disagree' (4% strongly/11% somewhat) and 11% have no opinion. CAPP has advocated for the industry as GHG emissions rose 14% in 2009 and 2010, by its own admission. However, GHG emissions per barrel of oil sands crude produced have dropped by 26% since 1990 as a result of new operating practices and technology.
According to IHS CERA, oil sands crude has similar CO2 emissions to other heavy oils and is 9% more intensive than the U.S. crude supply average on a wells-to-wheels basis. The industry employs 550,000 people and pays billions in taxes and royalties to different levels of government. Advocacy for Oil Sands CAPP's series of meetings in 2010 in eight cities in Canada and the United States, including Vancouver, Edmonton, Ottawa, Toronto, Montreal, Washington D.C., New York and Chicago, with CAPP representatives, oil sands CEOs and 160 key stakeholders, culminated in a report entitled Dialogues, published on 14 April 2011. Fracking advocacy CAPP advocates for the use of the controversial technology of hydraulic fracturing. In 2010 it released a series of voluntary Guiding Principles for Hydraulic Fracturing for Canadian natural gas producers to adhere to. The Guiding Principles of Hydraulic Fracturing were followed in 2011 by an agreed set of Six Hydraulic Fracturing Practices for: 1. Fracturing fluid additive disclosure; 2. Fracturing fluid additive risk management; 3. Baseline groundwater testing; 4. Wellbore construction; 5. Water sourcing and reuse; 6. Fluid handling, transport and disposal. Criticisms and concerns The Council of Canadians and Sierra Club Canada take a strong position against hydraulic fracturing and want it banned in Canada entirely, and have supported specific bans in Nova Scotia and New Brunswick. Advocacy for crude oil exports via Canada's west coast CAPP supports and advocates for exports of Canadian crude oil from Canada's west coast via the Northern Gateway pipeline and the Kinder Morgan Trans Mountain Expansion Project. In September 2011, the Asia Pacific Foundation of Canada (APF Canada) and the Canada West Foundation established the Canada-Asia Energy Futures Task Force, with Kathleen (Kathy) E. Sendall, C.M., FCAE, a former Governor and Board Chair of the Canadian Association of Petroleum Producers (CAPP), and Kevin G. Lynch, a Canadian economist and former Clerk of the Privy Council and Secretary to the Cabinet (Canada's most senior civil servant), as co-chairs, to investigate a long-term Canada-Asia energy relationship. One of their recommendations was the creation of a public energy transportation corridor. Criticisms and concerns Canadian opponents to the Northern Gateway, which is intended to permit shipping of high-carbon Canadian crude over ecologically sensitive rivers and waters to carbon-uncontrolled countries including India and China, include 61 First Nations in British Columbia. Advocacy for Keystone XL pipeline expansion CAPP supports and advocates for the $7-billion pipeline expansion project by the Canadian-based company TransCanada to build the Keystone XL, which would extend and expand the capacity of existing pipelines that transport crude oil from the Athabasca oil sands in northern Alberta to tidewater and to refineries on the U.S. Gulf Coast capable of refining the heavy bitumen crude oil. Criticisms and concerns Nine winners of the Nobel Peace Prize, including Archbishop Desmond Tutu and the Dalai Lama, were signatories to a letter to pressure U.S. President Barack Obama to reject the $7-billion pipeline expansion project by the Canadian-based company TransCanada to build the Keystone XL.
The position of the Nobel Peace Prize winners, essentially, is that one rich nation selling increasingly heavy high-carbon oil to another sabotages any effort to reach a deal on global carbon controls, and that moves to expand this export (like Keystone XL or Northern Gateway) cause significant and direct risks to world peace, as climate victim countries become subject to chaotic weather, fighting over scarce water (especially in Southeast Asia and Africa), flooding and rising sea levels. Advocacy regarding GHG emissions CAPP opposed the Kyoto Protocol, from which Stephen Harper withdrew Canada in December 2011. CAPP's lobbying efforts included favouring a "made in Canada" approach and advocating for a carbon pricing program. In 2007 a carbon tax was implemented in Alberta, Canada's major oil and gas producing province. Supported by CAPP and the industry, the $15/tonne carbon tax feeds a GHG emissions reduction technology fund. By 2008, the oil sands industry contributed approximately 3%–4% of Canada's GHG emissions. By 2012, oil sands contributed 0.14% of global GHG emissions. Transportation and electricity were the largest contributors of GHG, with transportation contributing 190 Mt of CO2 equivalent per year (MtCO2eq yr−1) and electricity and heat generation 125 MtCO2eq yr−1. However, Environment Canada (2007) cautioned that unrestricted development of the oil sands could increase both its emissions and this percentage. A 2008 CAPP report argued that both the Alberta and federal governments adopted "comparable industry GHG emissions targets in which large emitters must reduce their emissions by either improving their operation, purchasing emissions credits or investing in technology funds." Criticisms and concerns Canada was the first signatory nation to walk away from the Kyoto Protocol in 2012. The U.S. abandoned the Kyoto Protocol in 2001. CAPP initiative 2011 In the summer of 2011 CAPP contacted ENV to request a meeting with the Canadian Society for Unconventional Gas (CSUG) and officials from several government ministries, including Alberta Environment, Energy, and Sustainable Resource Development (SRD), as well as the Energy Resources Conservation Board (ERCB) (now the Alberta Energy Regulator), to discuss CAPP's desire to strike a committee to develop a public communications strategy focused on fracturing and water use associated with shale gas development. Senior-level government and industry officials attended the joint meeting "to develop a plan to shape public perceptions of shale gas development and water use." From Alberta Energy, participants included Director of Unconventional Gas Doug Bowes, Associate Branch Head Matthew Foss, Environment and Resource Services Audrey Murray, Executive Director of Resource Development Sharla Rauschning, and Assistant Deputy Minister, Resource Development Policy Division, Jennifer Steber. From Alberta Environment, participants included Deputy Minister Ernie Hui and Ross Nairne, former Head of Groundwater Policy within the Water Policy Branch and now the Executive Director of OH&S Policy and Program with Human Services. From Sustainable Resource Development (SRD), participants included Assistant Deputy Minister Glen Selland and Executive Director, Land Management Branch, Jeff Reynolds. Officials from CAPP included VP Operations David Pryce, Manager of BC Operations Brad Herald, and Manager of Water and Reclamation Tara Payment. From the Canadian Society for Unconventional Gas, CSUG (a.k.a.
CSUR) participants included Vice President Kevin Heffernan. A June 8, 2011, e-mail invited senior government officials from the Energy Resources Conservation Board, the arm's length regulator of the oil and gas industry in Alberta, to several meetings to produce a collaborative communications campaign on fracking strategy. On 9 June 2011 the Alberta government approved the collaborative communications campaign in the minutes of their joint meeting. Criticisms and concerns By 29 November 2011, the CBC and the Alberta Federation of Labour (AFL) were investigating the role played by CAPP in influencing Alberta Environment over public communications surrounding shale gas extraction, a controversial practice that has significant environmental concerns associated with it, especially when fracturing is employed. Questions were raised about the legality of private interests influencing government. Complaints were filed and dismissed. 2019 federal general election Prior to the 2019 Canadian federal election, CAPP registered as a political third party, which The Calgary Herald said was "breaking with tradition" "for the first time" to increase its advocacy efforts on behalf of the oil industry. As oil prices rose, the Alberta oil industry posted a $909 million profit in 2019, compared with a $678 million loss in Q4 2018, according to Statistics Canada. By Q1 2019, operating profits of the oil industry had increased by $1.6 billion. Alberta Premier Jason Kenney had said during his election campaign that he would request that the energy industry "significantly increase its advocacy efforts." COVID-19 pandemic The Canadian petroleum industry faced a major crisis during the COVID-19 pandemic, as Canadian crude oil prices fell to record lows. Facing dire economic prospects, CAPP intensified its lobbying efforts with the federal government. On March 27, the group sent a 13-page letter to Natural Resources minister Seamus O'Regan and other ministers asking them to defer or waive some of the industry's regulatory obligations, to defer the development or implementation of new policies regarding the industry, and to implement policies to support the industry directly. Specifically, the industry requested, among other things, deferral of the reporting of its greenhouse gas emissions, deferral of the implementation of the new methane regulations and carbon pricing, and a delay in the introduction of legislation that would entrench the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP) in Canadian law. Criticisms and concerns Assembly of First Nations National Chief Perry Bellegarde wrote a letter to CAPP President and CEO Tim McMillan telling him to back off from advocating for the indefinite delay of the implementation of UNDRIP in Canada. McMillan responded by affirming CAPP's support of UNDRIP, but maintained that such legislation shouldn't be adopted during the pandemic because of the government's limited ability to hold consultations during this time. Selected CAPP publications Canada's Oil Sands Fact Book (Report). Canadian Association of Petroleum Producers (CAPP). Retrieved February 2022. 
See also American Petroleum Institute (API) Notes References Further reading Additional information about the lobbying controversy can be found here: https://www.cbc.ca/news/canada/edmonton/dismissal-of-illegal-lobbying-complaint-questioned-1.1009995 External links CAPP web site Canadian lobbyists Lobbying in Canada Lobbying organizations in Canada Organizations based in Calgary Petroleum industry in Canada Petroleum industry in Alberta Trade associations based in Alberta Trade associations based in Canada Hydraulic fracturing Petrochemical industry associations
Canadian Association of Petroleum Producers
[ "Chemistry" ]
2,716
[ "Petroleum technology", "Natural gas technology", "Hydraulic fracturing" ]
20,570,809
https://en.wikipedia.org/wiki/Galoter%20process
The Galoter process (also known as TSK, UTT, or SHC; its newest modifications are called Enefit and Petroter) is a shale oil extraction technology for the production of shale oil, a type of synthetic crude oil. In this process, the oil shale is decomposed into shale oil, oil shale gas, and spent residue. Decomposition is caused by mixing raw oil shale with hot oil shale ash generated by the combustion of carbonaceous residue (semi-coke) in the spent residue. The process was developed in the 1950s, and it is used commercially for shale oil production in Estonia. There are projects for further development of this technology and expansion of its usage, e.g., in Jordan and the USA. History Research on the solid heat carrier process for pyrolysis of lignite, peat, and oil shale started in 1944 at the G. M. Krzhizhanovsky Power Engineering Institute of the Academy of Sciences of the USSR. At the laboratory scale, the Galoter process was invented and developed in 1945–1946. The process was named Galoter after the research team leader, Israel Galynker, whose name was combined with the word "thermal". Further research continued in Estonia. A pilot unit with a capacity of 2.5 tonnes of oil shale per day was built in Tallinn in 1947. The first Galoter-type commercial scale pilot retorts were built at Kiviõli, Estonia, in 1953 and 1963 (closed in 1963 and 1981, respectively), with capacities of 200 and 500 tonnes of oil shale per day, respectively. The Narva Oil Plant, annexed to the Eesti Power Plant and operating two Galoter-type 3000 tonnes per day retorts, was commissioned in Estonia in 1980. These retorts were designed by AtomEnergoProject and developed in cooperation with the Krzhizhanovsky Institute. Started as a pilot plant, the process of converting it to a commercial-scale plant took about 20 years. During this period, the company modernized more than 70% of the equipment compared to the initial design. In 1978, a 12.5-tonne pilot plant was built in Verkhne-Sinevidnoy, Ukraine. It was used for testing Lviv–Volinsk lignite, and Carpathian, Kashpir (Russia), and Rotem (Israel) oil shales. In 1996–1997, a test unit was assembled in Tver. In 2008, Estonian energy company Eesti Energia, an operator of Galoter retorts at the Narva Oil Plant, established a joint venture with the Finnish technology company Outotec called Enefit Outotec Technology to develop and commercialize a modified Galoter process, the Enefit process, which combines the current process with circulating fluidized bed technologies. In 2013, Enefit Outotec Technology opened an Enefit testing plant in Frankfurt. In 2012, Eesti Energia opened a new generation Galoter-type plant in Narva using Enefit 280 technology. In 2009–2015, VKG Oil, a subsidiary of Viru Keemia Grupp, opened in Kohtla-Järve, Estonia, three modified Galoter-type oil plants called Petroter. Technology Galoter retort The Galoter process is an above-ground oil-shale retorting technology classified as a hot recycled solids technology. The process uses a horizontal cylindrical rotating kiln-type retort, which is slightly declined. It has similarities with the TOSCO II process. Before retorting, the oil shale is crushed into fine particles. The crushed oil shale is dried in the fluidized bed drier (aerofountain drier) by contact with hot gases. After drying and pre-heating, the oil shale particles are separated from gases by cyclonic separation. 
Oil shale is transported to the mixer chamber, where it is mixed with hot ash produced by the combustion of spent oil shale in a separate furnace. The ratio of oil shale ash to raw oil shale is 2.8–3:1. The mixture is then moved to the hermetic rotating kiln. When heat transfers from the hot ash to the raw oil shale particles, pyrolysis (chemical decomposition) begins in oxygen-deficient conditions. The temperature of pyrolysis is kept at a controlled level. Produced oil vapors and gases are cleaned of solids by cyclones and moved to the condensation system (rectification column), where shale oil condenses and oil shale gas is separated in gaseous form. Spent shale (semi-coke) is then transported to the separate furnace for combustion to produce hot ash. A portion of the hot ash is separated from the furnace gas by cyclones and recycled to the rotary kiln for pyrolysis. The remaining ash is removed from the combustion gas by more cyclones, cooled, and removed for disposal by using water. The cleaned hot gas returns to the oil shale dryer. The Galoter process has high thermal and technological efficiency and a high oil recovery ratio. Oil yield reaches 85–90% of Fischer Assay and retort gas yield accounts for 48 cubic meters per tonne. Oil quality is considered good, but the equipment is sophisticated and capacity is relatively low. This process creates less pollution than internal combustion technologies, as it uses less water, but it still generates carbon dioxide as well as carbon disulfide and calcium sulfide. Enefit process The Enefit process is a modification of the Galoter process being developed by Enefit Outotec Technology. In this process, the Galoter technology is combined with proven circulating fluidized bed (CFB) combustion technology used in coal-fired power plants and mineral processing. Oil shale particles and hot oil shale ash are mixed in a rotary drum as in the classical Galoter process. The primary modification is the replacement of the Galoter semi-coke furnace with a CFB furnace. The Enefit process also incorporates a fluidized-bed ash cooler and a waste heat boiler, commonly used in coal-fired boilers, to convert waste heat to steam for power generation. Compared to the traditional Galoter, the Enefit process allows complete combustion of carbonaceous residue, improved energy efficiency through maximum utilization of waste heat, and less water use for quenching. According to promoters, the Enefit process has a lower retorting time compared to the classical Galoter process and therefore a greater throughput. Avoidance of moving parts in the retorting zones increases their durability. Commercial use Two Galoter retorts built in 1980 are used for oil production by the Narva Oil Plant, a subsidiary of the Estonian energy company Eesti Energia. Both retorts process 125 tonnes per hour of oil shale. Annual production is 135,000 tonnes of shale oil plus oil shale gas. Since 2012, the company also operates a new plant employing Enefit 280 technology with a processing capacity of 2.26 million tonnes of oil shale per year, producing 290,000 tonnes of shale oil as well as oil shale gas. In addition, Eesti Energia planned to begin construction of similar Enefit plants in Jordan and the USA. Enefit Outotec Technology has analyzed the suitability of Enefit technology for the Tarfaya oil shale deposit in Morocco, developed by San Leon Energy. VKG Oil operates three modified Galoter-type oil plants, called Petroter, in Kohtla-Järve, Estonia. 
The basic engineering of these retorts was done by Atomenergoproject of Saint Petersburg. The basic engineering of the condensation and distillation plant was done by Rintekno of Finland. The plant has a processing capacity of 1.1 million tonnes of oil shale per year and produces 100,000 tonnes of shale oil, oil shale gas, and 150 GWh of steam per year. Saudi Arabian International Corporation for Oil Shale Investment planned to utilize the Galoter (UTT-3000) process to build a shale oil plant in Jordan. Uzbekneftegaz planned to build eight UTT-3000 plants in Uzbekistan. However, in December 2015 Uzbekneftegaz announced a postponement of the project. See also Alberta Taciuk Process Petrosix process Kiviter process Fushun process Paraho process Lurgi-Ruhrgas process References Oil shale technology Oil shale in Estonia
Galoter process
[ "Chemistry" ]
1,750
[ "Petroleum technology", "Oil shale technology", "Synthetic fuel technologies" ]
20,571,589
https://en.wikipedia.org/wiki/Regulation%20of%20chemicals
The regulation of chemicals is the legislative intent of a variety of national laws or international initiatives such as agreements, strategies or conventions. These international initiatives define the policy of further regulations to be implemented locally as well as exposure or emission limits. Often, regulatory agencies oversee the enforcement of these laws. Chemicals are regulated for: environmental protection (chemical waste, and chemical pollution of water, air, subterrestrial, and terrestrial environments, such as by pesticides) human health (such as in cosmetics and foods) and drugs (recreational and pharmaceuticals) chemical weapons prohibition (such as for the Chemical Weapons Convention) International initiatives Strategic Approach to International Chemicals Management (SAICM) - This initiative was adopted at the International Conference on Chemicals Management (ICCM), which took place from 4–6 February 2006 in Dubai, gathering governments and intergovernmental and non-governmental organizations. It defines a policy framework to foster the sound worldwide management of chemicals. This initiative covers risk assessments of chemicals and harmonized labeling, up to tackling obsolete and stockpiled products. Provisions are included for national centers aimed at helping the developing world, training staff in chemical safety as well as dealing with spills and accidents. SAICM is a voluntary agreement. A second International Conference on Chemicals Management (ICCM2), held in May 2009 in Geneva, took place to enhance synergies and cost-effectiveness and to promote SAICM's multi-sectorial nature. Globally Harmonized System of Classification and Labeling of Chemicals (GHS) - The "Globally Harmonized System of Classification and Labelling of Chemicals" (GHS) proposes harmonized hazard communication elements, including labels and safety data sheets. It was adopted by the United Nations Economic Commission for Europe (UNECE) in 2002. This system aims to ensure better protection of human health and the environment during the handling of chemicals, including their transport and use. The classification of chemicals is done based on their hazards. This harmonization will facilitate trade when fully implemented. Stockholm Convention - The Stockholm Convention is a global treaty to protect human health and the environment from persistent organic pollutants (POPs). It entered into force on 17 May 2004, and over 150 countries signed the Convention. In May 2009, nine new chemicals were proposed for listing, in addition to the 12 substances the Convention originally covered. Rotterdam Convention – The objectives of the Rotterdam Convention are: to promote shared responsibility and cooperative efforts among Parties in the international trade of certain hazardous chemicals in order to protect human health and the environment from potential harm; to contribute to the environmentally sound use of those hazardous chemicals, by facilitating information exchange about their characteristics, by providing for a national decision-making process on their import and export and by disseminating these decisions to Parties. The text of the Convention was adopted on 10 September 1998 by a Conference in Rotterdam, the Netherlands. The Convention entered into force on 24 February 2004. The Convention creates legally binding obligations for the implementation of the Prior Informed Consent (PIC) procedure. 
Basel Convention – The Basel Convention on the Control of Trans-boundary Movements of Hazardous Wastes and their Disposal is a global environmental agreement on hazardous and other wastes. It came into force in 1992. The Convention has 172 Parties and aims to protect human health and the environment against the adverse effects resulting from the generation, management, transboundary movements and disposal of hazardous and other wastes. Montreal Protocol – The Montreal Protocol was a globally coordinated regulatory action that sought to regulate ozone-depleting chemicals. 191 countries have ratified the treaty. Global Framework on Chemicals - The plan was adopted on 30 September 2023 in Bonn at the fifth session of the International Conference on Chemicals Management organized by the UN Environment Programme (UNEP). Regional regulations USA: The Environmental Protection Agency (EPA) of the US announced in 2009 that the chemicals management laws would be strengthened, and that it would initiate a comprehensive approach to enhance the chemicals management program, including: New Regulatory Risk Management Actions Development of Chemical Action Plans, which will target the risk management efforts on chemicals of concern Requiring Information Needed to Understand Chemical Risks Increasing Public Access to Information About Chemicals Engaging Stakeholders in Prioritizing Chemicals for Future Risk Management Action. Chemicals are regulated under various laws including the Toxic Substances Control Act (TSCA). In 2010, Congress was considering a new law entitled the Safe Chemicals Act. Over the following several years, the Senate considered a number of legislative texts to amend the TSCA. These included the Safer Chemicals Act, several versions of which were introduced by Senator Frank Lautenberg (D-NJ), with the latest in 2013, and the Chemical Safety Improvement Act (S. 1009, CSIA) introduced by Senators Lautenberg and David Vitter (R-LA) in 2013. Senator Lautenberg died shortly after CSIA's introduction, and over time his mantle was picked up by Senator Tom Udall (D-NM), who continued to work with Senator Vitter on revisions to the CSIA. The result of that effort was the Frank R. Lautenberg Chemical Safety for the 21st Century Act, passed by the Senate on December 17, 2015. The Toxic Substances Control Act (TSCA) Modernization Act of 2015 (H.R. 2576), passed the House of Representatives on June 23, 2015. Revised legislation, which resolved differences between the House and Senate versions, was forwarded to the President on June 14, 2016. President Obama signed the bill into law on June 22, 2016. The Senator's widow, Bonnie Lautenberg, was present at the White House signing ceremony. EU: Chemicals in Europe are managed by the REACH (Registration, Evaluation and Authorization and Restriction of Chemicals) and the CLP (Classification, Labeling and Packaging) regulations. Specific regulations exist for specific families of products such as Fertilizers, Detergents, Explosives, Pyrotechnic Articles, Drug Precursors. Canada: In Canada, the Chemicals Management Plan is responsible for designating priority chemicals, gathering public information about those chemicals, and generating risk assessment and management strategies. Issues A study suggests and defines a 'planetary boundary' for novel entities such as plastic- and chemical pollution and concluded that it has been crossed, suggesting – alongside many other studies and indicators – that more and improved regulations or related changes (e.g. 
enforcement- or trade-related changes) are necessary. Using drug discovery artificial intelligence algorithms, researchers generated 40,000 potential chemical weapon candidates, which may be relevant to timely regulation of chemicals and related products that can be used to manufacture the fraction of viable candidates. According to a senior scientist author of the study, synthesizing these chemicals for real harm would be the more difficult part and certain needed molecules for doing so are known and regulated – however, some viable candidates may only require currently non-regulated compounds. Other issues include: the public and academic debate about drug prohibition or about health policy in respect to recreational drugs, nootropics and bodybuilding supplements the lack of various requirements, quality standards and lab testing for dietary supplements (various product information may also be necessary in some cases – for example in the case of the supplement C60 which in a study showed significant morbidity and mortality in mice in under 2 weeks when exposed to room-level light levels) See also Regulation of science Regulation of nanotechnology DEA list of chemicals Emergency Planning and Community Right-to-Know Act Safety data sheet Chemophobia Environmental Persistent Pharmaceutical Pollutant EPPP References External links Environmental law
Regulation of chemicals
[ "Chemistry" ]
1,502
[ "Regulation of chemicals" ]
20,572,735
https://en.wikipedia.org/wiki/Threshold%20displacement%20energy
In materials science, the threshold displacement energy (Ed) is the minimum kinetic energy that an atom in a solid needs to be permanently displaced from its site in the lattice to a defect position. It is also known as the "displacement threshold energy" or just "displacement energy". In a crystal, a separate threshold displacement energy exists for each crystallographic direction. Then one should distinguish between the minimum (Ed,min) and average (Ed,ave) threshold displacement energies over all lattice directions. In amorphous solids, it may be possible to define an effective displacement energy to describe some other average quantity of interest. Threshold displacement energies in typical solids are of the order of 10–50 eV. Theory and simulation The threshold displacement energy is a materials property relevant during high-energy particle irradiation of materials. The maximum energy Tmax that an irradiating particle can transfer in a binary collision to an atom in a material is given (including relativistic effects) by Tmax = 2 M E (E + 2 m c^2) / ((m + M)^2 c^2 + 2 M E), where E is the kinetic energy and m the mass of the incoming irradiating particle and M the mass of the material atom; c is the velocity of light. If the kinetic energy E is much smaller than the rest energy m c^2 of the irradiating particle, the equation reduces to Tmax = 4 m M E / (m + M)^2. In order for a permanent defect to be produced from an initially perfect crystal lattice, the kinetic energy that a lattice atom receives must be larger than the formation energy of a Frenkel pair. However, while the Frenkel pair formation energies in crystals are typically around 5–10 eV, the average threshold displacement energies are much higher, 20–50 eV. The reason for this apparent discrepancy is that the defect formation is a complex multi-body collision process (a small collision cascade) where the atom that receives a recoil energy can also bounce back, or kick another atom back to its lattice site. Hence, even the minimum threshold displacement energy is usually clearly higher than the Frenkel pair formation energy. Each crystal direction has in principle its own threshold displacement energy, so for a full description one should know the full threshold displacement surface for all non-equivalent crystallographic directions [hkl]. Then Ed,min = min over [hkl] of Ed([hkl]) and Ed,ave = average over [hkl] of Ed([hkl]), where the minimum and average are taken with respect to all angles in three dimensions. An additional complication is that the threshold displacement energy for a given direction is not necessarily a step function; there can be an intermediate energy region where a defect may or may not be formed depending on the random atom displacements. Then one can define a lower threshold where a defect may be formed and an upper one where it is certainly formed. The difference between these two may be surprisingly large, and whether or not this effect is taken into account may have a large effect on the average threshold displacement energy. It is not possible to write down a single analytical equation that would relate e.g. elastic material properties or defect formation energies to the threshold displacement energy. Hence theoretical study of the threshold displacement energy is conventionally carried out using either classical or quantum mechanical molecular dynamics computer simulations. 
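To make the maximum-energy-transfer relation above concrete, the short sketch below evaluates Tmax for electrons of a given kinetic energy striking a silicon atom; this is the kind of estimate used when planning electron-irradiation threshold experiments. The sketch is illustrative only (it is not from the article, and the example energies are arbitrary); it assumes the reconstructed formula given above, with all masses handled as rest energies in eV.

```python
# Illustrative sketch: maximum energy transferred by an electron to a
# lattice atom in a single relativistic elastic collision, in eV.
# Uses T_max = 2*M*E*(E + 2*m*c^2) / ((m + M)^2*c^2 + 2*M*E),
# rewritten in terms of rest energies so every quantity is in eV.

M_E_C2 = 0.511e6      # electron rest energy, eV
AMU_C2 = 931.494e6    # one atomic mass unit as rest energy, eV

def t_max_ev(e_kin_ev: float, target_mass_amu: float,
             projectile_rest_ev: float = M_E_C2) -> float:
    """Maximum recoil energy (eV) for a projectile of given kinetic energy."""
    target_rest_ev = target_mass_amu * AMU_C2
    num = 2.0 * target_rest_ev * e_kin_ev * (e_kin_ev + 2.0 * projectile_rest_ev)
    den = (projectile_rest_ev + target_rest_ev) ** 2 + 2.0 * target_rest_ev * e_kin_ev
    return num / den

if __name__ == "__main__":
    # Hypothetical example: electrons on Si (about 28.09 amu).
    for e_kev in (100, 200, 500, 1000):
        t = t_max_ev(e_kev * 1e3, 28.09)
        print(f"{e_kev:4d} keV electron -> T_max ~ {t:.1f} eV")
```

Numbers of this kind indicate roughly which electron energy is needed before Tmax exceeds a threshold of a few tens of eV, which is why threshold experiments typically use electrons with kinetic energies of hundreds of keV to a few MeV.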
Although an analytical description of the displacement is not possible, the "sudden approximation" gives fairly good approximations of the threshold displacement energies at least in covalent materials and low-index crystal directions An example molecular dynamics simulation of a threshold displacement event is available in 100_20eV.avi. The animation shows how a defect (Frenkel pair, i.e. an interstitial and vacancy) is formed in silicon when a lattice atom is given a recoil energy of 20 eV in the 100 direction. The data for the animation was obtained from density functional theory molecular dynamics computer simulations. Such simulations have given significant qualitative insights into the threshold displacement energy, but the quantitative results should be viewed with caution. The classical interatomic potentials are usually fit only to equilibrium properties, and hence their predictive capability may be limited. Even in the most studied materials such as Si and Fe, there are variations of more than a factor of two in the predicted threshold displacement energies. The quantum mechanical simulations based on density functional theory (DFT) are likely to be much more accurate, but very few comparative studies of different DFT methods on this issue have yet been carried out to assess their quantitative reliability. Experimental studies The threshold displacement energies have been studied extensively with electron irradiation experiments. Electrons with kinetic energies of the order of hundreds of keVs or a few MeVs can to a very good approximation be considered to collide with a single lattice atom at a time. Since the initial energy for electrons coming from a particle accelerator is accurately known, one can thus at least in principle determine the lower minimum threshold displacement energy by irradiating a crystal with electrons of increasing energy until defect formation is observed. Using the equations given above one can then translate the electron energy E into the threshold energy T. If the irradiation is carried out on a single crystal in a known crystallographic directions one can determine also direction-specific thresholds . There are several complications in interpreting the experimental results, however. To name a few, in thick samples the electron beam will spread, and hence the measurement on single crystals does not probe only a single well-defined crystal direction. Impurities may cause the threshold to appear lower than they would be in pure materials. Temperature dependence Particular care has to be taken when interpreting threshold displacement energies at temperatures where defects are mobile and can recombine. At such temperatures, one should consider two distinct processes: the creation of the defect by the high-energy ion (stage A), and subsequent thermal recombination effects (stage B). The initial stage A. of defect creation, until all excess kinetic energy has dissipated in the lattice and it is back to its initial temperature T0, takes < 5 ps. This is the fundamental ("primary damage") threshold displacement energy, and also the one usually simulated by molecular dynamics computer simulations. After this (stage B), however, close Frenkel pairs may be recombined by thermal processes. Since low-energy recoils just above the threshold only produce close Frenkel pairs, recombination is quite likely. Hence on experimental time scales and temperatures above the first (stage I) recombination temperature, what one sees is the combined effect of stage A and B. 
Hence the net effect often is that the threshold energy appears to increase with increasing temperature, since the Frenkel pairs produced by the lowest-energy recoils above threshold all recombine, and only defects produced by higher-energy recoils remain. Since thermal recombination is time-dependent, any stage B kind of recombination also implies that the results may have a dependence on the ion irradiation flux. In a wide range of materials, defect recombination occurs already below room temperature. E.g. in metals the initial ("stage I") close Frenkel pair recombination and interstitial migration starts to happen already around 10–20 K. Similarly, in Si major recombination of damage happens already around 100 K during ion irradiation and 4 K during electron irradiation. Even the stage A threshold displacement energy can be expected to have a temperature dependence, due to effects such as thermal expansion, temperature dependence of the elastic constants, and increased probability of recombination before the lattice has cooled down back to the ambient temperature T0. These effects are, however, likely to be much weaker than the stage B thermal recombination effects. Relation to higher-energy damage production The threshold displacement energy is often used to estimate the total amount of defects produced by higher-energy irradiation using the Kinchin-Pease or NRT equations, which say that the number of Frenkel pairs produced for a nuclear deposited energy FDn is NFP = 0.8 FDn / (2 Ed,ave) for any nuclear deposited energy above 2 Ed,ave / 0.8. However, this equation should be used with great caution for several reasons. For instance, it does not account for any thermally activated recombination of damage, nor the well-known fact that in metals the damage production is for high energies only something like 20% of the Kinchin-Pease prediction. The threshold displacement energy is also often used in binary collision approximation computer codes such as SRIM to estimate damage. However, the same caveats as for the Kinchin-Pease equation also apply for these codes (unless they are extended with a damage recombination model). Moreover, neither the Kinchin-Pease equation nor SRIM takes into account ion channeling in any way, which may in crystalline or polycrystalline materials reduce the nuclear deposited energy and thus the damage production dramatically for some ion-target combinations. For instance, keV ion implantation into the Si 110 crystal direction leads to massive channeling and thus reductions in stopping power. Similarly, light-ion (e.g. He) irradiation of a BCC metal like Fe leads to massive channeling even in a randomly selected crystal direction. See also Threshold energy Stopping power (particle radiation) Crystallographic defect Primary knock-on atom Wigner effect References Condensed matter physics Radiation effects
Threshold displacement energy
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,827
[ "Physical phenomena", "Phases of matter", "Materials science", "Radiation", "Condensed matter physics", "Radiation effects", "Matter" ]
20,575,029
https://en.wikipedia.org/wiki/Unitized%20regenerative%20fuel%20cell
A unitized regenerative fuel cell (URFC) is a fuel cell based on a proton exchange membrane that can perform the electrolysis of water in regenerative mode and, in the other mode, function as a fuel cell recombining oxygen and hydrogen gas to produce electricity. Both modes use the same fuel cell stack. By definition, the process of any fuel cell could be reversed. However, a given device is usually optimized for operating in one mode and may not be built in such a way that it can be operated backwards. Fuel cells operated backwards generally do not make very efficient systems unless they are purpose-built to do so, as in high-pressure electrolyzers, unitized regenerative fuel cells and regenerative fuel cells. History Livermore physicist Fred Mitlitsky studied the possibilities of reversible technology. In the mid-1990s Mitlitsky received some funding from NASA for development of Helios and from the Department of Energy for leveling peak and intermittent power usage with sources such as solar cells or wind turbines. By 1996 he produced a 50-watt prototype single proton-exchange membrane cell which operated for 1,700 ten-minute charge-discharge cycles, and degradation was less than a few percent at the highest current densities. A URFC with a rated power of 18.5 kW was installed in the Helios and was tested on board during test flights in 2003. The aircraft crashed on its second URFC test flight on June 26, 2003. See also Glossary of fuel cell terms Hydrogen technologies References External links 2003-Unitized regenerative fuel cell system development 2005-Development of an oxygen electrode for a URFC Fuel cells Electrolysis Hydrogen production
Unitized regenerative fuel cell
[ "Chemistry" ]
345
[ "Electrochemistry", "Physical chemistry stubs", "Electrochemistry stubs", "Electrolysis" ]
20,575,238
https://en.wikipedia.org/wiki/Synthetic%20rescue
Synthetic rescue (or synthetic recovery, or synthetic viability when a lethal phenotype is rescued) refers to a genetic interaction in which a cell that is nonviable, sensitive to a specific drug, or otherwise impaired due to the presence of a genetic mutation becomes viable when the original mutation is combined with a second mutation in a different gene. The second mutation can either be a loss-of-function mutation (equivalent to a knockout) or a gain-of-function mutation. Synthetic rescue could potentially be exploited for gene therapy, but it also provides information on the function of the genes involved in the interaction. Types of genetic suppression Dosage-mediated suppression Dosage-mediated suppression occurs when the suppression of the mutant phenotype is mediated by the overexpression of a second suppressor gene. This can occur when the initial mutations destabilize a protein-protein interaction and overexpression of the interacting protein bypasses the negative effect of the initial mutation. Interaction-mediated suppression Interaction-mediated suppression occurs when a deleterious mutation in a component of a protein complex destabilizes the complex. A compensatory mutation in another component of the protein complex can then suppress the deleterious phenotype by re-establishing the interaction between the two proteins. It usually means that the deleterious mutation and the suppressive mutation occur in two residues that are closely located in the three-dimensional structure of the multi-protein complex. As such, this kind of suppression provides indirect information on the molecular structure of the proteins involved. Experimental observation of theoretical prediction The strongest form of synthetic rescue, in which the deleterious impact of a gene knockout is mitigated by an additional genetic perturbation that is also deleterious when considered in isolation, was modeled and predicted theoretically for gene interactions mediated by the metabolic network. This strong form of synthetic rescue has recently been observed in experiments in both Saccharomyces cerevisiae and Escherichia coli. Patient survival analysis was also shown to predict synthetic rescues and other types of interactions. tRNA-mediated suppression Genetic suppression can be mediated by tRNA genes when a mutation alters their anticodon sequence. For example, a tRNA designated for the recognition of the codon TCA and the corresponding insertion of serine in the growing polypeptide chain can mutate so that it recognizes a TAA stop codon and promotes the insertion of serine instead of the termination of the polypeptide chain. This could be particularly useful when a nonsense mutation (TCA > TAA) prevents the expression of a gene by either leading to a partially completed polypeptide or degradation of the mRNA by nonsense-mediated decay. The redundancy of tRNA genes ensures that such a mutation does not prevent the normal insertion of serines when the TCA codon specifies them. See also Complex networks Gene therapy Suppressor mutation Synthetic lethality References Genetics Gene therapy
Synthetic rescue
[ "Engineering", "Biology" ]
596
[ "Gene therapy", "Genetic engineering" ]
20,575,536
https://en.wikipedia.org/wiki/Reliability%20%28semiconductor%29
Reliability of a semiconductor device is the ability of the device to perform its intended function during the life of the device in the field. There are multiple considerations that need to be accounted for when developing reliable semiconductor devices: Semiconductor devices are very sensitive to impurities and particles. Therefore, to manufacture these devices it is necessary to manage many processes while accurately controlling the level of impurities and particles. The finished product quality depends upon the many-layered relationship of each interacting substance in the semiconductor, including metallization, chip material (list of semiconductor materials) and package. The problems of micro-processes and thin films must be fully understood as they apply to metallization and wire bonding. It is also necessary to analyze surface phenomena from the aspect of thin films. Due to the rapid advances in technology, many new devices are developed using new materials and processes, and design calendar time is limited due to non-recurring engineering constraints, plus time-to-market concerns. Consequently, it is not possible to base new designs on the reliability of existing devices. To achieve economy of scale, semiconductor products are manufactured in high volume. Furthermore, repair of finished semiconductor products is impractical. Therefore, incorporation of reliability at the design stage and reduction of variation in the production stage have become essential. Reliability of semiconductor devices may depend on assembly, use, environmental, and cooling conditions. Stress factors affecting device reliability include gas, dust, contamination, voltage, current density, temperature, humidity, mechanical stress, vibration, shock, radiation, pressure, and intensity of magnetic and electrical fields. Design factors affecting semiconductor reliability include: voltage, power, and current derating; metastability; logic timing margins (logic simulation); timing analysis; temperature derating; and process control. Methods of improvement Reliability of semiconductors is kept high through several methods. Cleanrooms control impurities, process control controls processing, and burn-in (short-term operation at extremes) and probe and test reduce escapes. Probe (wafer prober) tests the semiconductor die, prior to packaging, via micro-probes connected to test equipment. Final test tests the packaged device, often pre- and post-burn-in, for a set of parameters that assure operation. Process and design weaknesses are identified by applying a set of stress tests in the qualification phase of the semiconductors before their market introduction, e.g. according to the AEC Q100 and Q101 stress qualifications. Parts Average Testing is a statistical method for recognizing and quarantining semiconductor die that have a higher probability of reliability failures. This technique identifies characteristics that are within specification but outside of a normal distribution for that population as at-risk outliers not suitable for high-reliability applications. Tester-based Parts Average Testing varieties include Parametric Parts Average Testing (P-PAT) and Geographical Parts Average Testing (G-PAT), among others. Inline Parts Average Testing (I-PAT) uses data from production process control inspection and metrology to perform the outlier recognition function. Bond strength measurement is performed in two basic types: pull testing and shear testing. 
Both can be done destructively, which is more common, or non destructively. Non destructive tests are normally used when extreme reliability is required such as in military or aerospace applications. Failure mechanisms Failure mechanisms of electronic semiconductor devices fall in the following categories Material-interaction-induced mechanisms. Stress-induced mechanisms. Mechanically induced failure mechanisms. Environmentally induced failure mechanisms. Material-interaction-induced mechanisms Field-effect transistor gate-metal sinking Ohmic contact degradation Channel degradation Surface-state effects Package molding contamination—impurities in packaging compounds cause electrical failure Stress-induced failure mechanisms Electromigration – electrically induced movement of the materials in the chip Burnout – localized overstress Hot Electron Trapping – due to overdrive in power RF circuits Electrical Stress – Electrostatic discharge, High Electro-Magnetic Fields (HIRF), Latch-up overvoltage, overcurrent Mechanically induced failure mechanisms Die fracture – due to mis-match of thermal expansion coefficients Die-attach voids – manufacturing defect—screenable with Scanning Acoustic Microscopy. Solder joint failure by creep fatigue or intermetallics cracks. Die-pad/molding compound delamination due to thermal cycling Environmentally induced failure mechanisms Humidity effects – moisture absorption by the package and circuit Hydrogen effects – Hydrogen induced breakdown of portions of the circuit (Metal) Other Temperature Effects—Accelerated Aging, Increased Electro-migration with temperature, Increased Burn-Out See also Transistor aging Failure analysis Cleanroom Burn-in List of materials-testing resources List of materials analysis methods References Bibliography MIL-HDBK-217F Reliability Prediction of Electronic Equipment MIL-HDBK-251 Reliability/Design Thermal Applications MIL-HDBK-H 108 Sampling Procedures and Tables for Life and Reliability Testing (Based on Exponential Distribution) MIL-HDBK-338 Electronic Reliability Design Handbook MIL-HDBK-344 Environmental Stress Screening of Electronic Equipment MIL-STD-690C Failure Rate Sampling Plans and Procedures MIL-STD-721C Definition of Terms for Reliability and Maintainability MIL-STD-756B Reliability Modeling and Prediction MIL-HDBK-781 Reliability Test Methods, Plans and Environments for Engineering Development, Qualification and Production MIL-STD-1543B Reliability Program Requirements for Space and Missile Systems MIL-STD-1629A Procedures for Performing a Failure Mode, Effects, and Criticality Analysis MIL-STD-1686B Electrostatic Discharge Control Program for Protection of Electrical and Electronic Parts, Assemblies and Equipment (Excluding Electrically Initiated Explosive Devices) MIL-STD-2074 Failure Classification for Reliability Testing MIL-STD-2164 Environment Stress Screening Process for Electronic Equipment Semiconductor device fabrication
Reliability (semiconductor)
[ "Materials_science" ]
1,159
[ "Semiconductor device fabrication", "Microtechnology" ]
20,579,385
https://en.wikipedia.org/wiki/Demand%20flow%20technology
Demand flow technology (DFT) is a strategy for defining and deploying business processes in a flow, driven in response to customer demand. DFT is based on a set of applied mathematical tools that are used to connect processes in a flow and link it to daily changes in demand. DFT represents a scientific approach to flow manufacturing for discrete production. It is built on principles of demand pull where customer demand is the central signal to guide factory and office activity in the daily operation. DFT is intended to provide an alternative to schedule-push manufacturing which primarily uses a sales plan and forecast to determine a production schedule. History It was created by John R. Costanza, an executive with operations management experience at Hewlett Packard and Johnson & Johnson. Costanza, who was later nominated as a Nobel Laureate in Economics for Working Capital Management, founded the John Costanza Institute of Technology in Englewood, CO in 1984 to provide consulting and education services for manufacturers to implement the methodology. DFT uses applied mathematical methods to link raw and in-process materials with units of time and production resources in order to create a continuous flow in the factory. The objective is to link factory processes together in a flow and drive it to customer demand instead of to an internal forecast that is inherently inaccurate. Early adopters of DFT included American Standard Companies General Electric and John Deere (Deere & Company). In the early years, DFT was regarded as a method for "just-in-time" (JIT), which advocated manufacturing processes driven to actual customer demand via Kanban. It was introduced as a way for American manufacturers to adopt Japanese production techniques, such as Toyota Production System (TPS), whilst avoiding some of the cultural conflicts in applying Japanese business methods in an American company. Later, it has come to be seen as a lean manufacturing method that allows factories to implement techniques such as one-piece flow, TAKT-based line design, Kanban material management and demand-driven production. Demand Flow Technology is promoted as a method for any product, any day, any volume. In 2001, Costanza was awarded a patent for this approach for mixed-model manufacturing. Principles Demand-driven manufacturing The central tenet to DFT is the primacy of customer demand in daily execution of the operation. According to Aberdeen Group, "Demand driven manufacturing involves a synchronized, closed loop between customer orders, production scheduling, and manufacturing execution; all while simultaneously coordinating the flow of materials across the supply chain." [Aberdeen Group, 2007]. DFT is a pathway to achieve demand-driven manufacturing capability. It is used as a framework to guide the design, implementation and deployment of demand driven manufacturing in a repeatable form. In this way, it is similar original concept of Just-in-Time (JIT) that was first deployed in Japanese manufacturers using a foundation of total quality management. More recently, Just-in-time has been more commonly used to describe supplier delivery methods, rather than a production philosophy. DFT assumes basic process capability that can arise from TQM and statistical process control (SPC) principles and embeds it in a framework of management that can more easily achieve demand driven in a repeatable way. 
As a result, In-Progress and Finished inventories are all but eliminated, converted permanently into cash at full market value through much faster response to customer orders. Cash released from Working Capital in this way no longer has to be reinvested in inventory. It becomes available to retire debt, fund growth and innovation. Mixed-model production Mixed-model production is the production of a wide range of product models using a certain degree of shared resources and common material. It is commonly accepted that modern manufacturing places a greater pressure on producers for more choice in the product offering. Products are increasingly assembled from standard components and sub-assemblies, using machines and automated systems as well as manual labour. DFT is designed to handle this mix and provide a way to establish mixed-model production lines. A production schedule based on MRP will tend to cope with high product mix by allocating each model to a multiple of a shift or a day. This means that the whole product mix is supplied across a scheduling cycle of a multiple of weeks. This tends to extend the lead-time or increase dependency on the forecast. DFT offers “The ability to accommodate a range of volumes for any product, any day, based on the direction of actual customer demand”. Product synchronization The first tool to be used in a DFT implementation, product synchronization is a definition of relationship of processes in a flow to build a product. It takes the form of a diagram, usually created in pen and paper or whiteboard and formalized with a visualization program such as Microsoft PowerPoint or Visio. It displays how the processes relate to each other in a flow, with the conversion of raw material to finished goods. A process is defined by "A logical grouping of value-adding work performed to a common volume". Sequence of events (SoE) Each of the processes in the product synchronization requires a standard process definition. In DFT, the sequence of events provides this definition. In The Quantum Leap, written by Costanza, the sequence of events is defined as "[t]he definition of the required work and quality criteria to build a product in a specific production process." The SoE usually takes the form of a table with the product code, process ID, task description and sequence, required work and set-up time for machines and labour, and quality check criteria. The SoE intends to define times that are reasonable, realistic and repeatable to perform to the necessary quality. Many of the strengths and criticisms of DFT as a methodology stem from the SoE. The SoEs are the foundation of process definition but are not used as work instructions. To communicate standard work at the work center, operation method sheets are used. In an MRP systems environment, the SoE represents a drill-down from the routing that provides a tabular view of the Product Synchronization at the process level. A DFT manufacturer would therefore use the SoE as the master record of process definition and derive routings and ISO documentation from it. Operation method sheets These are visual description of work in motion, materials and the required quality check. In the purest form, operation method sheets are drawn in wire-frame to show the significant contours of the product form and clearly represent work in motion and quality without visual noise. The OMS has three stages of activity: total quality check, work, and verify. 
This establishes the concept where each operator checks the output quality of the operation immediately upstream. This can contribute to a total quality culture and parts-per-million capability. Mixed-model process map The sequence of events and product synchronization define how tasks and quality checks compose the process for any given product. The mixed-model process map shows how products and processes form a requirement for resources. In such a map, the products and processes form a matrix with products as rows and processes as columns. The intersections most commonly show actual times (standard times at the process from the sequence of events), but can also display yield and optionality ratios. Demand at capacity Demand at capacity is the volume of production for a single product item at capacity. It is a fixed value that defines the maximum daily rate of supply. The Demand-at-Capacity is often confused with the daily rate of production. In contrast to the Toyota Production System, and many other lean manufacturing derivatives, a DFT line is designed for variable output rates according to daily demand. Thus, the demand data that are used for line design represent a limit quantity, not an actual rate of supply. The relationship between the Dc and the average daily demand will be driven by the required service level of the product item to market demand. A higher service level will call for capacity that can supply a higher daily rate than the average over a long range. This will likely affect the resource productivity and inventory levels. A greater mix on the line is able to provide a higher level of service for any given level of resources and inventory. Effective hours Effective hours is the time available for a given resource to produce product or perform process set-up or changeover. It is defined per shift and represents the total available time to perform tasks set in the SoE. Non-productive time such as equipment maintenance, breaks, 5S activity and continuous improvement is deducted from effective hours. Setup time is included as it is arguably a form of productive time, and calculations for batch size optimization and dynamic Kanban will require setup and run-time to be managed from a common pool of resource time. Takt & Operational Cycle-Time, OP c/t Takt-time is the ratio of time to volume at capacity and in DFT is expressed as Takt = (HE × S) / DC, where HE is the effective hours per shift, S is the number of shifts and DC is the demand at capacity, a daily rate set for design purposes at some point 2 to 5 years into the future. This ratio can be expressed for finished products at the end of the line and is referred to as Takt-Time. It can also apply at the process, where bill of material relationships, process yield and optionality can affect the dependent volume for any given Dc at the finished goods level. At the process level, this ratio is known as operational cycle-time. Takt time is typically used to calculate the "line design" or number and disposition of physical resources required to produce a given mix and volume of products that changes on a daily basis according to customer demand. Uniquely to DFT, Takt time is constant, based on a fixed mix and product volume which is set for factory design purposes 2 to 5 years into the future. This allows for a stable "line design" that does not need to change on a daily basis. Daily changes in mix and volume are accommodated in DFT by adjusting the number of people working in production. 
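As a rough numerical illustration of the Takt-time relationship just given (the sketch is not from the DFT literature; the shift pattern, demand at capacity and process time below are all hypothetical):

```python
# Sketch: Takt time and resource count for a DFT line design.
# Assumes Takt = (effective_hours * shifts) / demand_at_capacity, as
# reconstructed above; every input number is made up for illustration.
import math

def takt_time_minutes(effective_hours_per_shift: float,
                      shifts: int,
                      demand_at_capacity: int) -> float:
    """Minutes of line time available per unit at design capacity (Dc)."""
    return (effective_hours_per_shift * shifts * 60.0) / demand_at_capacity

def resources_required(process_time_minutes: float,
                       takt_minutes: float) -> int:
    """Parallel machines/operators needed so a process keeps pace with Takt."""
    return math.ceil(process_time_minutes / takt_minutes)

if __name__ == "__main__":
    takt = takt_time_minutes(effective_hours_per_shift=7.0,
                             shifts=2,
                             demand_at_capacity=420)   # units per day at Dc
    print(f"Takt time: {takt:.1f} min/unit")           # -> 2.0 min/unit
    # A process with 5.6 min of work per unit then needs ceil(5.6/2.0) = 3
    # parallel resources in the line design.
    print("Resources at process A:", resources_required(5.6, takt))
```

On any given day the actual daily rate (Dr) is at or below the design capacity Dc, so only as many operators are deployed on the line as that day's demand requires.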
Those not required to meet the Daily Rate (Dr) are free to spend quality time in training and continuous improvement activities. Weekly scheduling cycles to achieve level-loading of mix and volume, which cause significant planning delays, are eliminated. It becomes possible to produce any product on any day in response to real customer demand making possible a true Demand Flow. As a result, In-Progress and Finished inventories are all but eliminated, converted permanently into cash at full market value through much faster response to customer orders. Cash released from Working Capital in this way no longer has to be reinvested in inventory. It becomes available to retire debt, fund growth and innovation etc. Material Kanban DFT shares a conventional definition of material Kanban based on a visual signal to replenish a point of consumption with required material. A typical material Kanban system in DFT is "Single Card, Multiple Container" and enables card or container quantities to be consumed and replenished without shortages. Material Kanban provides an alternative to kitting as a way of issuing material to the production floor. A DFT environment will strive to simplify the definition of warehouse locations for material and reduce the number of transactions required to control the flow of material during production. The aim of Material Kanban is to connect the material flow with actual requirement at the process and provide a more robust availability of parts to production whilst reducing the response-time to the customer. Production Kanban Production Kanban is designed for a replenishment quantity that may be smaller than a lot size or batch. It is based on a "dual card Kanban" system where a "move" card or container represents the quantity required by the downstream point of consumption and a "produce" card is kept on a display board and accumulates to a replenishment batch. Demand-based management Demand-based management is an approach that defines tolerance capability for demand in order to unify material and production planning under conditions of demand uncertainty. It uses "flex fences" to set the upper and lower boundaries of supply against a definition of the current daily rate of demand. The current rate is usually some kind of smoothed average and will move over time. The flex-fences will be different for different product items or groups and should be calculated individually. Order policies, purchasing, inventory and production capacity will all be set against these flex fence boundaries, so these calculations will sit at the heart of operations planning. Unfortunately, this is a calculation-intensive and critical process that is largely unsupported by MRP/ERP systems. The lack of system tools and clash with conventional MRP planning routine are primary reason why demand-based management has not had the same level of adoption experienced by the rest of the DFT principles. Value and results Companies that implement DFT are typically looking for an improvement in the response to customer demand. This is reflected in the lead-time or replenishment time for finished product and will affect the level of inventory that is held to buffer response requirements. Effective response to demand can be described as a distribution curve, with some orders taking longer to fill than others. The result is variation and uncertainty in the manufacturer’s ability to serve the market. Working capital is required to hedge this response lag and uncertainty. 
DFT aims to reduce both the variation and duration of response to demand. This can be seen as a more capable fulfillment that provides a higher level of customer service at a lower level of working capital. The intended results are improvement in delivery performance together with increased cash-flow and return on working capital. Applications Demand flow technology is applicable in a wide range of product environments and has been successfully deployed in many different industries. Companies who have embraced demand flow technology include John Deere, Flextronics, American Standard Companies, Trane, AstraZeneca and many others. It has a strength in those manufacturing operations that are expected to supply a high mix to an unpredictable and volatile market. It is often seen as the science behind flow manufacturing for discrete manufacturers, whose products do not naturally flow across the manufacturing processes. Advantages and criticisms Advantages It is simple Demand flow technology provides a simple, logical method based on applied mathematics. The technique is based on simple operators of addition, subtraction, multiplication and division so it does not rely on advanced mathematics. It is repeatable DFT forms a step-by-step guide to converting production from a scheduled-push to a demand-pull and flow system. Although it is applicable to a wide range of products, the steps are consistent and work in the same way. It does not depend on the judgment of an expert in the same way as lean or Six Sigma and can be taught to a broader audience through short training workshops. It is effective At its heart, DFT formalizes the natural flow of material, processes and information required to build a product. It is not so much an invented technology as a description of the optimal way to align a factory towards customer demand. It is customer-centric DFT places the customer at the center of the operation. It enables companies to formalise a customer-centric view with practical tasks and actions that guide behaviour in the organisation. It moves the concept of customer-driven to an achievable plan of action beyond a statement of philosophy. It aligns business and customer goals The concept of maximizing shareholder value is often seen as a conflict with the quality of customer service. Demand Flow Technology, if applied correctly, can unify financial and customer objectives in a holistic approach to managing operating capital and growing a business. Criticisms It is constrained to the factory Demand flow techniques have been widely applied to the factory, yet have failed to gain widespread acceptance in corporate management. All too often, it tends to be limited to production planning whilst operations and material planning continue to be dominated by use of ERP/MRP systems. The holistic ideal of demand flow may be fractured by this conflict. It is unsupported by systems Major ERP/MRP vendors have largely ignored the advantages of Demand Flow techniques, or been acquired before their products have had a chance to gain market share. The advocates and users of Demand Flow have largely failed to challenge the inadequate logic that conventional MRP uses for planning capacity and production resources As a result, manufacturers are forced to rely on an outdated routine for planning that is largely unchanged since the 1960s. It requires process definition and discipline DFT aims to apply a standard process definition of product to daily requirements of demand. 
This favours processes that are capable and defined to the task level. This is sometimes a level of detail and discipline absent from the organisation. The creation and maintenance of Sequence of Events documentation involves extensive manual work. There are powerful advantages to quality and capability in performing this work, but success usually depends on management commitment to change beyond the narrow actions of a DFT implementation. References Supply chain management Workflow technology Lean manufacturing
Demand flow technology
[ "Engineering" ]
3,416
[ "Lean manufacturing" ]
20,579,517
https://en.wikipedia.org/wiki/Gateway%20Technology
The Gateway cloning method is a method of molecular cloning invented and commercialized by Invitrogen in the late 1990s, which makes use of the integration and excision recombination reactions that take place when bacteriophage lambda infects bacteria. This technology provides a fast and highly efficient way to transport DNA sequences into multi-vector systems for functional analysis and protein expression using Gateway att sites and two proprietary enzyme mixes called BP Clonase and LR Clonase. In vivo, these recombination reactions are facilitated by the recombination of attachment sites from the lambda phage chromosome (attP) and the bacterial chromosome (attB). As a result of recombination between the attP and attB sites, the phage integrates into the bacterial genome flanked by two new recombination sites (attLeft and attRight). The removal of the phage from the bacterial chromosome and the regeneration of attP and attB sites can both result from the attL and attR sites recombining under specific circumstances. DNA sequences of interest are added to modified versions of these special Gateway att sites. Two recombination reactions take place: the BP reaction and the LR reaction. The BP reaction occurs between the attB sites surrounding the insert and the attP sites of the donor vector. This reaction is catalyzed by the BP Clonase enzyme mixture and produces the entry clone containing the DNA of interest flanked by attL domains. As a byproduct of the reaction, the lethal ccdB gene is excised from the donor vector. The LR reaction occurs between the attL regions of the generated entry clone and the attR regions of the target vector and is catalyzed by the LR Clonase enzyme mix. As a result, an expression clone with DNA of interest flanked by attB regions is produced. As in the BP reaction, a DNA sequence containing the ccdB gene is cut from the target vector. Large archives of Gateway Entry clones, containing the vast majority of human, mouse, and rat ORFs (open reading frames), have been cloned from human cDNA libraries or chemically synthesized to support the research community using NIH (National Institutes of Health) funding (e.g. Mammalian Gene Collection, http://mgc.nci.nih.gov/ ). The availability of these gene cassettes in a standard Gateway cloning plasmid helps researchers quickly transfer these cassettes into plasmids that facilitate the analysis of gene function. Gateway cloning does take more time for initial set-up, and is more expensive than traditional restriction enzyme and ligase-based cloning methods, but it saves time and offers simpler and highly efficient cloning for downstream applications. The technology has been widely adopted by the life science research community, especially for applications that require the transfer of thousands of DNA fragments into one type of plasmid (e.g., one containing a CMV promoter for protein expression in mammalian cells), or for the transfer of one DNA fragment into many different types of plasmids (e.g., for bacterial, insect, and mammalian protein expression). Basic procedure The first step in Gateway cloning is the preparation of a Gateway Entry clone. There are a few different ways to make an entry clone. Gateway attB1 and attB2 sequences are added to the 5' and 3' end of a gene fragment, respectively, using gene-specific PCR primers and PCR amplification. The PCR amplification products are then mixed with a proprietary mixture of plasmids called Gateway "Donor vectors" (Invitrogen terminology) and proprietary "BP Clonase" enzymes. 
The enzyme mix catalyzes the recombination and insertion of the PCR product containing the attB sequence into the attP recombination sites in the Gateway Donor vector. When the cassette is part of the target plasmid, it is referred to as an "Entry clone" in Gateway nomenclature and the recombination sequences are referred to as Gateway "attL" type. Ends containing attL sites can also be added using the TOPO method, a technique in which DNA fragments are cloned into specific vectors without the need for DNA ligases. Alternatively, the desired DNA sequence can be cloned into a multicloning site containing attL using restriction enzymes. The second step in Gateway cloning is the preparation of a Gateway Destination vector. It is important to choose the target vector that best suits your application when preparing the expression clone. The gene cassette in the Gateway Entry clone can then be simply and efficiently transferred into any Gateway Destination vector (Invitrogen nomenclature for any Gateway plasmid that contains Gateway "attR" recombination sequences and elements such as promoters and epitope tags, but not ORFs) using the proprietary enzyme mix, "LR Clonase". Thousands of Gateway Destination plasmids have been made and are freely shared amongst researchers across the world. Gateway Destination vectors are analogous to classical expression vectors, which contain a multiple cloning site into which a gene of interest is inserted by restriction enzyme digestion and ligation. Gateway Destination vectors are commercially available from Invitrogen, EMD (Novagen) and Covalys. The third step in Gateway cloning is the preparation of an expression clone to express your gene of interest. Make sure to use sequencing or a restriction digest to check the integrity of your expression clone. Once your construct is working, you can transform or transfect the cells you intend to employ in your investigations. Since Gateway cloning uses patented recombination sequences and proprietary enzyme mixes available only from Invitrogen, the technology does not allow researchers to switch vendors and contributes to the lock-in effect of all such patented procedures. To summarize the different steps involved in Gateway cloning: Gateway BP reaction: PCR-product with flanking attB sites (this step can also use other methods of DNA isolation, such as restriction-digestion) + Donor vector containing attP sites + BP clonase => Gateway Entry clone, containing attL sites, flanking gene of interest. Gateway LR reaction: Entry clone containing attL sites + Destination vector containing attR sites, and promoters and tags + LR clonase => Expression clone containing attB sites, flanking gene of interest, ready for gene expression. Advantages Flexibility: Your DNA sequence of interest can be moved across any expression system in just one recombination step when you create the entry clone with it. Speed: Instead of taking two or more days with conventional restriction and ligation cloning, the Gateway approach allows for the creation of the expression construct in just one day. The attB-PCR products can also be immediately cloned into the target vectors by performing the BP and LR reactions in the same tube. There are no procedures for restriction, ligation, or gel purification during the cloning process. Multiple fragment cloning: Gateway cloning can be used to simultaneously insert several DNA pieces into a single vector in a single tube. 
To create the necessary expression clone, up to four DNA segments can be cloned into a single Gateway vector in a precise order and orientation in a single tube. The design of the Gateway vectors makes this possible. High efficiency: The Gateway Cloning Method uses positive and negative selection markers to increase the chance of successfully cloning a gene, making the process more efficient and more likely to produce correct clones. Universality: All types of DNA fragments can be cloned using PCR techniques, and cloning is available for many different kinds of organisms, from mammals to bacteria. See also Cloning Gateway cassette Subcloning References Molecular biology
Gateway Technology
[ "Chemistry", "Biology" ]
1,609
[ "Biochemistry", "Molecular biology" ]
19,388,009
https://en.wikipedia.org/wiki/WAMIT
WAMIT is a computer program for computing wave loads and motions of offshore structures in waves. It is based on linear and second-order potential theory. The velocity potential is solved by means of the boundary integral equation method, also known as the panel method. WAMIT also has the capability of representing the geometry of the structure by a higher-order method, in which the potential is represented by continuous B-splines. WAMIT was developed by researchers at the Massachusetts Institute of Technology, hence the acronym (Wave Analysis MIT). Its first version was launched in 1987. In 1999, WAMIT, Inc. was founded by Chang-Ho Lee and J. Nicholas Newman. Consortium meetings are held annually to discuss applications and new capabilities of the program. References External links WAMIT, Inc. web site Computational fluid dynamics Computer-aided engineering software
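In generic form (a standard potential-flow result, not quoted from the WAMIT documentation), the kind of body-surface integral equation that such panel methods discretize can be written as:

```latex
% Generic boundary integral equation for the velocity potential \phi on the
% wetted body surface S_b, with G the free-surface Green function
% (standard form; constants and conventions vary between references):
2\pi\,\phi(\mathbf{x})
  + \iint_{S_b} \phi(\boldsymbol{\xi})\,
      \frac{\partial G(\boldsymbol{\xi};\mathbf{x})}{\partial n_{\xi}}\,\mathrm{d}S
  = \iint_{S_b} \frac{\partial \phi(\boldsymbol{\xi})}{\partial n_{\xi}}\,
      G(\boldsymbol{\xi};\mathbf{x})\,\mathrm{d}S .
```

In a low-order panel method the body surface is divided into flat panels on which the potential is taken as piecewise constant, turning the integral equation into a linear system; in the higher-order scheme mentioned above, the potential is instead represented by continuous B-splines.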
WAMIT
[ "Physics", "Chemistry" ]
164
[ "Computational physics stubs", "Computational fluid dynamics", "Computational physics", "Fluid dynamics stubs", "Fluid dynamics" ]
19,389,837
https://en.wikipedia.org/wiki/Relativistic%20quantum%20mechanics
In physics, relativistic quantum mechanics (RQM) is any Poincaré covariant formulation of quantum mechanics (QM). This theory is applicable to massive particles propagating at all velocities up to those comparable to the speed of light c, and can accommodate massless particles. The theory has application in high energy physics, particle physics and accelerator physics, as well as atomic physics, chemistry and condensed matter physics. Non-relativistic quantum mechanics refers to the mathematical formulation of quantum mechanics applied in the context of Galilean relativity, more specifically quantizing the equations of classical mechanics by replacing dynamical variables by operators. Relativistic quantum mechanics (RQM) is quantum mechanics applied with special relativity. Although the earlier formulations, like the Schrödinger picture and Heisenberg picture were originally formulated in a non-relativistic background, a few of them (e.g. the Dirac or path-integral formalism) also work with special relativity. Key features common to all RQMs include: the prediction of antimatter, spin magnetic moments of elementary spin  fermions, fine structure, and quantum dynamics of charged particles in electromagnetic fields. The key result is the Dirac equation, from which these predictions emerge automatically. By contrast, in non-relativistic quantum mechanics, terms have to be introduced artificially into the Hamiltonian operator to achieve agreement with experimental observations. The most successful (and most widely used) RQM is relativistic quantum field theory (QFT), in which elementary particles are interpreted as field quanta. A unique consequence of QFT that has been tested against other RQMs is the failure of conservation of particle number, for example in matter creation and annihilation. Paul Dirac's work between 1927 and 1933 shaped the synthesis of special relativity and quantum mechanics. His work was instrumental, as he formulated the Dirac equation and also originated quantum electrodynamics, both of which were successful in combining the two theories. In this article, the equations are written in familiar 3D vector calculus notation and use hats for operators (not necessarily in the literature), and where space and time components can be collected, tensor index notation is shown also (frequently used in the literature), in addition the Einstein summation convention is used. SI units are used here; Gaussian units and natural units are common alternatives. All equations are in the position representation; for the momentum representation the equations have to be Fourier transformed – see position and momentum space. Combining special relativity and quantum mechanics One approach is to modify the Schrödinger picture to be consistent with special relativity. A postulate of quantum mechanics is that the time evolution of any quantum system is given by the Schrödinger equation: using a suitable Hamiltonian operator corresponding to the system. The solution is a complex-valued wavefunction , a function of the 3D position vector of the particle at time , describing the behavior of the system. Every particle has a non-negative spin quantum number . The number is an integer, odd for fermions and even for bosons. Each has z-projection quantum numbers; . This is an additional discrete variable the wavefunction requires; . Historically, in the early 1920s Pauli, Kronig, Uhlenbeck and Goudsmit were the first to propose the concept of spin. 
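For reference, in standard textbook notation (the symbols below are the conventional ones, not a quotation of this article's original formulas), the time-evolution postulate and the spin labels mentioned above read:

```latex
% Time-dependent Schroedinger equation and standard spin labels:
i\hbar\,\frac{\partial}{\partial t}\,\psi(\mathbf{r},t) = \hat{H}\,\psi(\mathbf{r},t),
\qquad
s \in \{0,\tfrac{1}{2},1,\tfrac{3}{2},\dots\},
\qquad
\sigma \in \{-s,\,-s+1,\,\dots,\,s-1,\,s\},
```

so that 2s is an integer, odd for fermions and even for bosons, and each value of s admits 2s + 1 z-projection quantum numbers σ.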
The inclusion of spin in the wavefunction incorporates the Pauli exclusion principle (1925) and the more general spin–statistics theorem (1939) due to Fierz, rederived by Pauli a year later. This is the explanation for a diverse range of subatomic particle behavior and phenomena: from the electronic configurations of atoms, nuclei (and therefore all elements on the periodic table and their chemistry), to the quark configurations and colour charge (hence the properties of baryons and mesons). A fundamental prediction of special relativity is the relativistic energy–momentum relation; for a particle of rest mass , and in a particular frame of reference with energy and 3-momentum with magnitude in terms of the dot product , it is: These equations are used together with the energy and momentum operators, which are respectively: to construct a relativistic wave equation (RWE): a partial differential equation consistent with the energy–momentum relation, and is solved for to predict the quantum dynamics of the particle. For space and time to be placed on equal footing, as in relativity, the orders of space and time partial derivatives should be equal, and ideally as low as possible, so that no initial values of the derivatives need to be specified. This is important for probability interpretations, exemplified below. The lowest possible order of any differential equation is the first (zeroth order derivatives would not form a differential equation). The Heisenberg picture is another formulation of QM, in which case the wavefunction is time-independent, and the operators contain the time dependence, governed by the equation of motion: This equation is also true in RQM, provided the Heisenberg operators are modified to be consistent with SR. Historically, around 1926, Schrödinger and Heisenberg show that wave mechanics and matrix mechanics are equivalent, later furthered by Dirac using transformation theory. A more modern approach to RWEs, first introduced during the time RWEs were developing for particles of any spin, is to apply representations of the Lorentz group. Space and time In classical mechanics and non-relativistic QM, time is an absolute quantity all observers and particles can always agree on, "ticking away" in the background independent of space. Thus in non-relativistic QM one has for a many particle system . In relativistic mechanics, the spatial coordinates and coordinate time are not absolute; any two observers moving relative to each other can measure different locations and times of events. The position and time coordinates combine naturally into a four-dimensional spacetime position corresponding to events, and the energy and 3-momentum combine naturally into the four-momentum of a dynamic particle, as measured in some reference frame, change according to a Lorentz transformation as one measures in a different frame boosted and/or rotated relative the original frame in consideration. The derivative operators, and hence the energy and 3-momentum operators, are also non-invariant and change under Lorentz transformations. Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states locally transform under some representation of the Lorentz group: where is a finite-dimensional representation, in other words a square matrix . Again, is thought of as a column vector containing components with the allowed values of . The quantum numbers and as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. 
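In standard notation, the energy–momentum relation and the operator substitutions referred to above are:

```latex
% Relativistic energy-momentum relation and the usual operator substitutions:
E^{2} = c^{2}\,\mathbf{p}\cdot\mathbf{p} + \left(m_{0}c^{2}\right)^{2},
\qquad
\hat{E} = i\hbar\,\frac{\partial}{\partial t},
\qquad
\hat{\mathbf{p}} = -i\hbar\,\nabla .
```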
One value of may occur more than once depending on the representation. Non-relativistic and relativistic Hamiltonians The classical Hamiltonian for a particle in a potential is the kinetic energy plus the potential energy , with the corresponding quantum operator in the Schrödinger picture: and substituting this into the above Schrödinger equation gives a non-relativistic QM equation for the wavefunction: the procedure is a straightforward substitution of a simple expression. By contrast this is not as easy in RQM; the energy–momentum equation is quadratic in energy and momentum leading to difficulties. Naively setting: is not helpful for several reasons. The square root of the operators cannot be used as it stands; it would have to be expanded in a power series before the momentum operator, raised to a power in each term, could act on . As a result of the power series, the space and time derivatives are completely asymmetric: infinite-order in space derivatives but only first order in the time derivative, which is inelegant and unwieldy. Again, there is the problem of the non-invariance of the energy operator, equated to the square root which is also not invariant. Another problem, less obvious and more severe, is that it can be shown to be nonlocal and can even violate causality: if the particle is initially localized at a point so that is finite and zero elsewhere, then at any later time the equation predicts delocalization everywhere, even for which means the particle could arrive at a point before a pulse of light could. This would have to be remedied by the additional constraint . There is also the problem of incorporating spin in the Hamiltonian, which isn't a prediction of the non-relativistic Schrödinger theory. Particles with spin have a corresponding spin magnetic moment quantized in units of , the Bohr magneton: where is the (spin) g-factor for the particle, and the spin operator, so they interact with electromagnetic fields. For a particle in an externally applied magnetic field , the interaction term has to be added to the above non-relativistic Hamiltonian. On the contrary; a relativistic Hamiltonian introduces spin automatically as a requirement of enforcing the relativistic energy-momentum relation. Relativistic Hamiltonians are analogous to those of non-relativistic QM in the following respect; there are terms including rest mass and interaction terms with externally applied fields, similar to the classical potential energy term, as well as momentum terms like the classical kinetic energy term. A key difference is that relativistic Hamiltonians contain spin operators in the form of matrices, in which the matrix multiplication runs over the spin index , so in general a relativistic Hamiltonian: is a function of space, time, and the momentum and spin operators. The Klein–Gordon and Dirac equations for free particles Substituting the energy and momentum operators directly into the energy–momentum relation may at first sight seem appealing, to obtain the Klein–Gordon equation: and was discovered by many people because of the straightforward way of obtaining it, notably by Schrödinger in 1925 before he found the non-relativistic equation named after him, and by Klein and Gordon in 1927, who included electromagnetic interactions in the equation. 
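The naive square-root Hamiltonian and the resulting Klein–Gordon equation discussed above have the standard forms:

```latex
% Problematic square-root Hamiltonian (free particle):
\hat{H} = \sqrt{\hat{\mathbf{p}}^{2}c^{2} + m_{0}^{2}c^{4}} ,
% Klein-Gordon equation, obtained by substituting the operators directly
% into the quadratic energy-momentum relation:
\frac{1}{c^{2}}\frac{\partial^{2}\psi}{\partial t^{2}} - \nabla^{2}\psi
  + \frac{m_{0}^{2}c^{2}}{\hbar^{2}}\,\psi = 0 .
```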
This is relativistically invariant, yet this equation alone isn't a sufficient foundation for RQM for at least two reasons: one is that negative-energy states are solutions, another is the density (given below), and this equation as it stands is only applicable to spinless particles. This equation can be factored into the form: where and are not simply numbers or vectors, but 4 × 4 Hermitian matrices that are required to anticommute for : and square to the identity matrix: so that terms with mixed second-order derivatives cancel while the second-order derivatives purely in space and time remain. The first factor: is the Dirac equation. The other factor is also the Dirac equation, but for a particle of negative mass. Each factor is relativistically invariant. The reasoning can be done the other way round: propose the Hamiltonian in the above form, as Dirac did in 1928, then pre-multiply the equation by the other factor of operators , and comparison with the KG equation determines the constraints on and . The positive mass equation can continue to be used without loss of continuity. The matrices multiplying suggest it isn't a scalar wavefunction as permitted in the KG equation, but must instead be a four-component entity. The Dirac equation still predicts negative energy solutions, so Dirac postulated that negative energy states are always occupied, because according to the Pauli principle, electronic transitions from positive to negative energy levels in atoms would be forbidden. See Dirac sea for details. Densities and currents In non-relativistic quantum mechanics, the square modulus of the wavefunction gives the probability density function . This is the Copenhagen interpretation, circa 1927. In RQM, while is a wavefunction, the probability interpretation is not the same as in non-relativistic QM. Some RWEs do not predict a probability density or probability current (really meaning probability current density) because they are not positive-definite functions of space and time. The Dirac equation does: where the dagger denotes the Hermitian adjoint (authors usually write for the Dirac adjoint) and is the probability four-current, while the Klein–Gordon equation does not: where is the four-gradient. Since the initial values of both and may be freely chosen, the density can be negative. Instead, what appears look at first sight a "probability density" and "probability current" has to be reinterpreted as charge density and current density when multiplied by electric charge. Then, the wavefunction is not a wavefunction at all, but reinterpreted as a field. The density and current of electric charge always satisfy a continuity equation: as charge is a conserved quantity. Probability density and current also satisfy a continuity equation because probability is conserved, however this is only possible in the absence of interactions. Spin and electromagnetically interacting particles Including interactions in RWEs is generally difficult. Minimal coupling is a simple way to include the electromagnetic interaction. For one charged particle of electric charge in an electromagnetic field, given by the magnetic vector potential defined by the magnetic field , and electric scalar potential , this is: where is the four-momentum that has a corresponding 4-momentum operator, and the four-potential. 
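The standard forms of the Dirac equation, its conserved current, and the minimal-coupling prescription referred to in this section are:

```latex
% Dirac equation in Hamiltonian (alpha-beta) form:
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left( c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta\, m_{0}c^{2} \right)\psi ,
% positive-definite density and probability current, with continuity equation:
\rho = \psi^{\dagger}\psi , \qquad
\mathbf{j} = c\,\psi^{\dagger}\boldsymbol{\alpha}\,\psi , \qquad
\frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{j} = 0 ,
% minimal coupling to the electromagnetic four-potential A^mu = (phi/c, A):
\hat{p}^{\mu} \;\to\; \hat{p}^{\mu} - qA^{\mu},
\quad\text{i.e.}\quad
E \to E - q\phi , \qquad \mathbf{p} \to \mathbf{p} - q\mathbf{A} .
```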
In the following, the non-relativistic limit refers to the limiting cases: that is, the total energy of the particle is approximately the rest energy for small electric potentials, and the momentum is approximately the classical momentum. Spin 0 In RQM, the KG equation admits the minimal coupling prescription; In the case where the charge is zero, the equation reduces trivially to the free KG equation so nonzero charge is assumed below. This is a scalar equation that is invariant under the irreducible one-dimensional scalar representation of the Lorentz group. This means that all of its solutions will belong to a direct sum of representations. Solutions that do not belong to the irreducible representation will have two or more independent components. Such solutions cannot in general describe particles with nonzero spin since spin components are not independent. Other constraint will have to be imposed for that, e.g. the Dirac equation for spin , see below. Thus if a system satisfies the KG equation only, it can only be interpreted as a system with zero spin. The electromagnetic field is treated classically according to Maxwell's equations and the particle is described by a wavefunction, the solution to the KG equation. The equation is, as it stands, not always very useful, because massive spinless particles, such as the π-mesons, experience the much stronger strong interaction in addition to the electromagnetic interaction. It does, however, correctly describe charged spinless bosons in the absence of other interactions. The KG equation is applicable to spinless charged bosons in an external electromagnetic potential. As such, the equation cannot be applied to the description of atoms, since the electron is a spin  particle. In the non-relativistic limit the equation reduces to the Schrödinger equation for a spinless charged particle in an electromagnetic field: Spin Non relativistically, spin was phenomenologically introduced in the Pauli equation by Pauli in 1927 for particles in an electromagnetic field: by means of the 2 × 2 Pauli matrices, and is not just a scalar wavefunction as in the non-relativistic Schrödinger equation, but a two-component spinor field: where the subscripts ↑ and ↓ refer to the "spin up" () and "spin down" () states. In RQM, the Dirac equation can also incorporate minimal coupling, rewritten from above; and was the first equation to accurately predict spin, a consequence of the 4 × 4 gamma matrices . There is a 4 × 4 identity matrix pre-multiplying the energy operator (including the potential energy term), conventionally not written for simplicity and clarity (i.e. treated like the number 1). Here is a four-component spinor field, which is conventionally split into two two-component spinors in the form: The 2-spinor corresponds to a particle with 4-momentum and charge and two spin states (, as before). The other 2-spinor corresponds to a similar particle with the same mass and spin states, but negative 4-momentum and negative charge , that is, negative energy states, time-reversed momentum, and negated charge. This was the first interpretation and prediction of a particle and corresponding antiparticle. See Dirac spinor and bispinor for further description of these spinors. In the non-relativistic limit the Dirac equation reduces to the Pauli equation (see Dirac equation for how). 
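The standard form of the Pauli equation, to which the Dirac equation reduces in the non-relativistic limit, is:

```latex
% Pauli equation for a spin-1/2 particle of charge q and mass m in an
% electromagnetic field (standard textbook form):
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left[ \frac{\bigl(\boldsymbol{\sigma}\cdot(\hat{\mathbf{p}} - q\mathbf{A})\bigr)^{2}}{2m}
           + q\phi \right]\psi ,
\qquad
\psi = \begin{pmatrix} \psi_{\uparrow} \\ \psi_{\downarrow} \end{pmatrix} .
```

Expanding the square produces the magnetic-moment interaction term proportional to σ·B explicitly, which is how spin enters the non-relativistic description.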
When applied a one-electron atom or ion, setting and to the appropriate electrostatic potential, additional relativistic terms include the spin–orbit interaction, electron gyromagnetic ratio, and Darwin term. In ordinary QM these terms have to be put in by hand and treated using perturbation theory. The positive energies do account accurately for the fine structure. Within RQM, for massless particles the Dirac equation reduces to: the first of which is the Weyl equation, a considerable simplification applicable for massless neutrinos. This time there is a 2 × 2 identity matrix pre-multiplying the energy operator conventionally not written. In RQM it is useful to take this as the zeroth Pauli matrix which couples to the energy operator (time derivative), just as the other three matrices couple to the momentum operator (spatial derivatives). The Pauli and gamma matrices were introduced here, in theoretical physics, rather than pure mathematics itself. They have applications to quaternions and to the SO(2) and SO(3) Lie groups, because they satisfy the important commutator [ , ] and anticommutator [ , ]+ relations respectively: where is the three-dimensional Levi-Civita symbol. The gamma matrices form bases in Clifford algebra, and have a connection to the components of the flat spacetime Minkowski metric in the anticommutation relation: (This can be extended to curved spacetime by introducing vierbeins, but is not the subject of special relativity). In 1929, the Breit equation was found to describe two or more electromagnetically interacting massive spin  fermions to first-order relativistic corrections; one of the first attempts to describe such a relativistic quantum many-particle system. This is, however, still only an approximation, and the Hamiltonian includes numerous long and complicated sums. Helicity and chirality The helicity operator is defined by; where p is the momentum operator, S the spin operator for a particle of spin s, E is the total energy of the particle, and m0 its rest mass. Helicity indicates the orientations of the spin and translational momentum vectors. Helicity is frame-dependent because of the 3-momentum in the definition, and is quantized due to spin quantization, which has discrete positive values for parallel alignment, and negative values for antiparallel alignment. An automatic occurrence in the Dirac equation (and the Weyl equation) is the projection of the spin  operator on the 3-momentum (times c), , which is the helicity (for the spin  case) times . For massless particles the helicity simplifies to: Higher spins The Dirac equation can only describe particles of spin . Beyond the Dirac equation, RWEs have been applied to free particles of various spins. In 1936, Dirac extended his equation to all fermions, three years later Fierz and Pauli rederived the same equation. The Bargmann–Wigner equations were found in 1948 using Lorentz group theory, applicable for all free particles with any spin. Considering the factorization of the KG equation above, and more rigorously by Lorentz group theory, it becomes apparent to introduce spin in the form of matrices. The wavefunctions are multicomponent spinor fields, which can be represented as column vectors of functions of space and time: where the expression on the right is the Hermitian conjugate. 
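Collecting, in standard notation, the algebraic relations and the helicity operator referred to in this section (sign conventions depend on the chosen metric signature):

```latex
% Pauli matrix commutation and anticommutation relations:
[\sigma_{a},\sigma_{b}] = 2i\,\varepsilon_{abc}\,\sigma_{c}, \qquad
\{\sigma_{a},\sigma_{b}\} = 2\,\delta_{ab}\,I_{2},
% gamma-matrix Clifford-algebra relation with the Minkowski metric:
\{\gamma^{\mu},\gamma^{\nu}\} = 2\,\eta^{\mu\nu}\,I_{4},
% helicity operator and its massless limit (using |p|c = sqrt(E^2 - (m_0 c^2)^2)):
\hat{h} = \hat{\mathbf{S}}\cdot\frac{c\,\hat{\mathbf{p}}}{\sqrt{E^{2}-\left(m_{0}c^{2}\right)^{2}}},
\qquad
\hat{h}\Big|_{m_{0}=0} = \hat{\mathbf{S}}\cdot\frac{c\,\hat{\mathbf{p}}}{E}.
```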
For a massive particle of spin , there are components for the particle, and another for the corresponding antiparticle (there are possible values in each case), altogether forming a -component spinor field: with the + subscript indicating the particle and − subscript for the antiparticle. However, for massless particles of spin s, there are only ever two-component spinor fields; one is for the particle in one helicity state corresponding to +s and the other for the antiparticle in the opposite helicity state corresponding to −s: According to the relativistic energy-momentum relation, all massless particles travel at the speed of light, so particles traveling at the speed of light are also described by two-component spinors. Historically, Élie Cartan found the most general form of spinors in 1913, prior to the spinors revealed in the RWEs following the year 1927. For equations describing higher-spin particles, the inclusion of interactions is nowhere near as simple minimal coupling, they lead to incorrect predictions and self-inconsistencies. For spin greater than , the RWE is not fixed by the particle's mass, spin, and electric charge; the electromagnetic moments (electric dipole moments and magnetic dipole moments) allowed by the spin quantum number are arbitrary. (Theoretically, magnetic charge would contribute also). For example, the spin  case only allows a magnetic dipole, but for spin 1 particles magnetic quadrupoles and electric dipoles are also possible. For more on this topic, see multipole expansion and (for example) Cédric Lorcé (2009). Velocity operator The Schrödinger/Pauli velocity operator can be defined for a massive particle using the classical definition , and substituting quantum operators in the usual way: which has eigenvalues that take any value. In RQM, the Dirac theory, it is: which must have eigenvalues between ±c. See Foldy–Wouthuysen transformation for more theoretical background. Relativistic quantum Lagrangians The Hamiltonian operators in the Schrödinger picture are one approach to forming the differential equations for . An equivalent alternative is to determine a Lagrangian (really meaning Lagrangian density), then generate the differential equation by the field-theoretic Euler–Lagrange equation: For some RWEs, a Lagrangian can be found by inspection. For example, the Dirac Lagrangian is: and Klein–Gordon Lagrangian is: This is not possible for all RWEs; and is one reason the Lorentz group theoretic approach is important and appealing: fundamental invariance and symmetries in space and time can be used to derive RWEs using appropriate group representations. The Lagrangian approach with field interpretation of is the subject of QFT rather than RQM: Feynman's path integral formulation uses invariant Lagrangians rather than Hamiltonian operators, since the latter can become extremely complicated, see (for example) Weinberg (1995). Relativistic quantum angular momentum In non-relativistic QM, the angular momentum operator is formed from the classical pseudovector definition . In RQM, the position and momentum operators are inserted directly where they appear in the orbital relativistic angular momentum tensor defined from the four-dimensional position and momentum of the particle, equivalently a bivector in the exterior algebra formalism: which are six components altogether: three are the non-relativistic 3-orbital angular momenta; , , , and the other three , , are boosts of the centre of mass of the rotating object. 
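The standard forms of the Lagrangian densities and of the orbital angular momentum tensor mentioned above are, up to conventional constant factors and metric-signature choices:

```latex
% Dirac Lagrangian density:
\mathcal{L}_{\text{Dirac}} = \bar{\psi}\left( i\hbar c\,\gamma^{\mu}\partial_{\mu} - m_{0}c^{2} \right)\psi ,
% Klein-Gordon Lagrangian density:
\mathcal{L}_{\text{KG}} = \hbar^{2}\,\partial^{\mu}\psi^{*}\,\partial_{\mu}\psi - m_{0}^{2}c^{2}\,\psi^{*}\psi ,
% orbital angular momentum tensor built from four-position and four-momentum:
M^{\mu\nu} = X^{\mu}P^{\nu} - X^{\nu}P^{\mu}.
```

Applying the field-theoretic Euler–Lagrange equation to these densities reproduces the Dirac and Klein–Gordon equations respectively.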
An additional relativistic-quantum term has to be added for particles with spin. For a particle of rest mass , the total angular momentum tensor is: where the star denotes the Hodge dual, and is the Pauli–Lubanski pseudovector. For more on relativistic spin, see (for example) Troshin & Tyurin (1994). Thomas precession and spin–orbit interactions In 1926, the Thomas precession is discovered: relativistic corrections to the spin of elementary particles with application in the spin–orbit interaction of atoms and rotation of macroscopic objects. In 1939 Wigner derived the Thomas precession. In classical electromagnetism and special relativity, an electron moving with a velocity through an electric field but not a magnetic field , will in its own frame of reference experience a Lorentz-transformed magnetic field : In the non-relativistic limit : so the non-relativistic spin interaction Hamiltonian becomes: where the first term is already the non-relativistic magnetic moment interaction, and the second term the relativistic correction of order , but this disagrees with experimental atomic spectra by a factor of . It was pointed out by L. Thomas that there is a second relativistic effect: An electric field component perpendicular to the electron velocity causes an additional acceleration of the electron perpendicular to its instantaneous velocity, so the electron moves in a curved path. The electron moves in a rotating frame of reference, and this additional precession of the electron is called the Thomas precession. It can be shown that the net result of this effect is that the spin–orbit interaction is reduced by half, as if the magnetic field experienced by the electron has only one-half the value, and the relativistic correction in the Hamiltonian is: In the case of RQM, the factor of is predicted by the Dirac equation. History The events which led to and established RQM, and the continuation beyond into quantum electrodynamics (QED), are summarized below [see, for example, R. Resnick and R. Eisberg (1985), and P.W Atkins (1974)]. More than half a century of experimental and theoretical research from the 1890s through to the 1950s in the new and mysterious quantum theory as it was up and coming revealed that a number of phenomena cannot be explained by QM alone. SR, found at the turn of the 20th century, was found to be a necessary component, leading to unification: RQM. Theoretical predictions and experiments mainly focused on the newly found atomic physics, nuclear physics, and particle physics; by considering spectroscopy, diffraction and scattering of particles, and the electrons and nuclei within atoms and molecules. Numerous results are attributed to the effects of spin. Relativistic description of particles in quantum phenomena Albert Einstein in 1905 explained of the photoelectric effect; a particle description of light as photons. In 1916, Sommerfeld explains fine structure; the splitting of the spectral lines of atoms due to first order relativistic corrections. The Compton effect of 1923 provided more evidence that special relativity does apply; in this case to a particle description of photon–electron scattering. de Broglie extends wave–particle duality to matter: the de Broglie relations, which are consistent with special relativity and quantum mechanics. By 1927, Davisson and Germer and separately G. Thomson successfully diffract electrons, providing experimental evidence of wave-particle duality. Experiments 1897 J. J. 
Thomson discovers the electron and measures its mass-to-charge ratio. Discovery of the Zeeman effect: the splitting of a spectral line into several components in the presence of a static magnetic field. 1908 Millikan measures the charge on the electron and finds experimental evidence of its quantization, in the oil drop experiment. 1911 Alpha particle scattering in the Geiger–Marsden experiment, led by Rutherford, showed that atoms possess an internal structure: the atomic nucleus. 1913 The Stark effect is discovered: splitting of spectral lines due to a static electric field (compare with the Zeeman effect). 1922 Stern–Gerlach experiment: experimental evidence of spin and its quantization. 1924 Stoner studies splitting of energy levels in magnetic fields. 1932 Experimental discovery of the neutron by Chadwick, and of the positron by Anderson, confirming the theoretical prediction of positrons. 1958 Discovery of the Mössbauer effect: resonant and recoil-free emission and absorption of gamma radiation by atomic nuclei bound in a solid, useful for accurate measurements of gravitational redshift and time dilation, and in the analysis of nuclear electromagnetic moments in hyperfine interactions. Quantum non-locality and relativistic locality In 1935, Einstein, Podolsky and Rosen published a paper concerning quantum entanglement of particles, questioning quantum nonlocality and the apparent violation of causality upheld in SR: particles can appear to interact instantaneously at arbitrary distances. This was a misconception since information is not and cannot be transferred in the entangled states; rather the information transmission is in the process of measurement by two observers (one observer has to send a signal to the other, which cannot exceed c). QM does not violate SR. In 1959, Bohm and Aharonov publish a paper on the Aharonov–Bohm effect, questioning the status of electromagnetic potentials in QM. The EM field tensor and EM 4-potential formulations are both applicable in SR, but in QM the potentials enter the Hamiltonian (see above) and influence the motion of charged particles even in regions where the fields are zero. In 1964, Bell's theorem was published in a paper on the EPR paradox, showing that QM cannot be derived from local hidden-variable theories if locality is to be maintained. The Lamb shift In 1947, the Lamb shift was discovered: a small difference in the 2S and 2P levels of hydrogen, due to the interaction between the electron and the vacuum. Lamb and Retherford experimentally measure stimulated radio-frequency transitions between the 2S and 2P hydrogen levels by microwave radiation. An explanation of the Lamb shift is presented by Bethe. Papers on the effect were published in the early 1950s. Development of quantum electrodynamics 1927 Dirac establishes the field of QED, also coining the term "quantum electrodynamics". 1943 Tomonaga begins work on renormalization, influential in QED. 1947 Schwinger calculates the anomalous magnetic moment of the electron. Kusch measures the anomalous magnetic moment of the electron, confirming one of QED's great predictions. 
See also Atomic physics and chemistry Relativistic quantum chemistry Breit equation Electron spin resonance Fine-structure constant Mathematical physics Quantum spacetime Spin connection Spinor bundle Dirac equation in the algebra of physical space Casimir invariant Casimir operator Wigner D-matrix Particle physics and quantum field theory Zitterbewegung Two-body Dirac equations Relativistic Heavy Ion Collider Symmetry (physics) Parity CPT invariance Chirality (physics) Standard model Gauge theory Tachyon Modern searches for Lorentz violation Footnotes References Selected books Group theory in quantum physics Selected papers Further reading Relativistic quantum mechanics and field theory Quantum theory and applications in general External links Quantum mechanics Mathematical physics Electromagnetism Particle physics Atomic physics Theory of relativity
Relativistic quantum mechanics
[ "Physics", "Chemistry", "Mathematics" ]
6,471
[ "Electromagnetism", "Physical phenomena", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Atomic", "Atomic physics", "Fundamental interactions", "Theory of relativity", "Particle physics", " molecular", "Mathematical physics", " and optical physics" ]
19,393,933
https://en.wikipedia.org/wiki/Esfenvalerate
Esfenvalerate is a synthetic pyrethroid insecticide marketed under the brand Asana. It is the (S)-enantiomer of fenvalerate. In the United States, a limit of 0.05 ppm of the chemical's residue is permissible in food. References Insecticides Nitriles 4-Chlorophenyl compounds
Esfenvalerate
[ "Chemistry" ]
76
[ "Functional groups", "Organic compounds", "Nitriles", "Organic compound stubs", "Organic chemistry stubs" ]
19,394,645
https://en.wikipedia.org/wiki/Ned%20Kock
Nereu Florencio "Ned" Kock is a Brazilian-American philosopher. He is a Texas A&M Regents Professor of Information Systems at Texas A&M International University. Background Kock holds a B.E.E. in Electrical Engineering from the Federal Technological University of Parana at Curitiba, Brazil, an M.Sc. in computer science from the Institute of Aeronautical Technology, Brazil, and a Ph.D. in management with a concentration in information systems from the School of Management Studies, University of Waikato, New Zealand. Work Kock is best known for employing biological evolution ideas to the understanding of human behavior toward technologies, particularly information technologies. He developed media naturalness theory, an evolutionary communication media theory. Kock writes a popular blog on the intersection of evolution, statistics, and health. He developed WarpPLS, a nonlinear variance-based structural equation modeling software tool. The underlying mathematics employed in WarpPLS builds on the method of path analysis, developed by the evolutionary biologist Sewall Wright. WarpPLS has been used to study a variety of topics, including nursing education, password security risks, software testing, customer satisfaction, accounting education, and web-based homework. He has conducted research and written on the topic of academic plagiarism. His research and writings in this area have been discussed in The Chronicle of Higher Education, and contributed to considerable debate on the topic within the Association for Computing Machinery, and to the establishment of an ethics committee within the Association for Information Systems. He was Founding Editor-in-Chief of the International Journal of e-Collaboration from 2004 to 2017. Kock has also been a proponent of the use of action research in the study of human behavior toward technologies, arguing that it can be used in investigations aimed at testing hypotheses in a postpositivist fashion. As a result of his action research investigations, he developed a method for systems analysis and business process redesign that places emphasis on the optimization of communication interactions in business processes. Selected publications Kock, N., Avison, D., & Malaurent, J. (2017). Positivist information systems action research: Methodological issues. Journal of Management Information Systems, 34(3), 754–767. Kock, N., Jung, Y., & Syn, T. (2016). Wikipedia and e-collaboration research: Opportunities and challenges. International Journal of e-Collaboration, 12(2), 1–8. Kock, N. (2009). Information systems theorizing based on evolutionary psychology: an interdisciplinary review and theory integration framework. MIS Quarterly, 33(2), 395–418. Kock, N. (Ed.). (2007). Information systems action research: An applied view of emerging concepts and methods. New York, NY: Springer-Verlag. Kock, N. (2006). Systems analysis & design fundamentals: A business process redesign approach. Thousand Oaks, CA: Sage Publications. Kock, N. (2005). Media richness or media naturalness? The evolution of our biological communication apparatus and its influence on our behavior toward e-communication tools. IEEE Transactions on Professional Communication, 48(2), 117–130. Kock, N. (2004). The psychobiological model: Towards a new theory of computer-mediated communication based on Darwinian evolution. Organization Science, 15(3), 327–348. Kock, N., & Davison, R. (2003). 
Dealing with plagiarism in the IS research community: A look at factors that drive plagiarism and ways to address them. MIS Quarterly, 27(4), 511–532. Kock, N. (1999). A case of academic plagiarism. Communications of the ACM, 42(7), 96–104. See also American philosophy List of American philosophers References Texas A&M International University faculty American sociologists 21st-century American philosophers Brazilian emigrants to the United States 21st-century Brazilian philosophers People in educational technology Brazilian scientists Living people University of Waikato alumni Information systems researchers Year of birth missing (living people)
Ned Kock
[ "Technology" ]
877
[ "Information systems", "Information systems researchers" ]
17,188,258
https://en.wikipedia.org/wiki/Disc%20shedding
Disc shedding is the process by which photoreceptor cells in the retina are renewed. The disc formations in the outer segment of photoreceptors, which contain the photosensitive opsins, are completely renewed every ten days. Photoreceptors The retina contains two types of photoreceptor – rod cells and cone cells. There are about 6–7 million cones that mediate photopic vision, and they are concentrated in the macula at the center of the retina. There are about 120 million rods that are more sensitive than the cones and therefore mediate scotopic vision. A vertebrate's photoreceptors are divided into three parts: an outer segment that contains the photosensitive opsins an inner segment that contains the cell's metabolic machinery (endoplasmic reticulum, Golgi complex, ribosomes, mitochondria) a synaptic terminal at which contacts with second-order neurons of the retina are made Discs The photosensitive outer segment consists of a series of discrete membranous discs. While in the rod these discs lack any direct connection to the surface membrane (with the exception of a few recently formed basal discs that remain in continuity with the surface), the cone's photosensitive membrane is continuous with the surface membrane. The outer segment (OS) discs are densely packed with rhodopsin for high-sensitivity light detection. These discs are completely replaced once every ten days and this continuous renewal continues throughout the lifetime of the sighted animal. After the opsins are synthesized, they fuse to the plasma membrane, which then invaginates, with discs budding off internally to form the tightly packed stacks of outer segment discs. From translation of opsin to formation of the discs takes just a couple of hours. Shedding Disc shedding was first described by RW Young in 1967. Discs mature along with their distal migration; aged discs are shed at the distal tip and are engulfed by the neighboring retinal pigment epithelial (RPE) cells for degradation. One early study suggested that cones may not shed their discs as rods do and may instead renew by replacing molecular constituents individually. However, other studies do show that at least some mammalian cones shed their discs as a normal ongoing process. Each day about one tenth of the length of the outer segment is lost, so that after ten days the entire outer segment has been replaced. Regulating factors are involved at each step. While disc assembly is mostly genetically controlled, disc shedding and the subsequent RPE phagocytosis appear to be regulated by environmental factors like light and temperature. The timing of shedding follows a circadian rhythm governed by neuromodulators, namely dopamine and melatonin. Melatonin is synthesized by the photoreceptors at night and is inhibited by light and dopamine, so it triggers cone disc shedding. Dopamine production is stimulated by light and inhibited by dark and melatonin, so it triggers rod disc shedding. Importantly, rod discs are shed during the day and cone discs are shed during the night. Mechanism Traditional theories One grey area in the entire mechanism of outer segment disc shedding is what exactly triggers the detachment of the discs and how they are transported out of the OS and phagocytosed by the RPE cells. Some studies suggest that disc detachment precedes engulfment by the RPE cells, and that an active process in the rod outer segment severs the disc. However, other studies observed RPE cell processes intruding into the OS during disc detachment. 
These processes are structurally similar to processes formed by macrophages during phagocytosis and were accordingly referred to as pseudopodia. The study suggested that these pseudopodia were the organelles of phagocytosis and that they may play a direct role in disc detachment. Recent research A 2007 paper offers a third theory that builds on recent evidence that suggests that rhodopsin-deficient mice fail to develop OSS. Researchers at Cornell hypothesized that rhodopsin itself has a role in OS biogenesis, in addition to its role as a phototransduction receptor. While the molecular basis underlying rhodopsin's participation in OS development is unknown, emerging evidence suggests that rhodopsin's cytoplasmic C-terminal tail bears an “address signal” for its transport from its site of synthesis in the rod cell body to the OS. References Eye Histology Photoreceptor cells
Disc shedding
[ "Chemistry" ]
943
[ "Histology", "Microscopy" ]
17,190,371
https://en.wikipedia.org/wiki/Tetrakis%28hydroxymethyl%29phosphonium%20chloride
Tetrakis(hydroxymethyl)phosphonium chloride (THPC) is an organophosphorus compound with the chemical formula [P(CH2OH)4]Cl. It is a white water-soluble salt with applications as a precursor to fire-retardant materials and as a microbiocide in commercial and industrial water systems. Synthesis, structure and reactions THPC can be synthesized in high yield by treating phosphine with formaldehyde in the presence of hydrochloric acid. PH3 + 4 H2C=O + HCl → [P(CH2OH)4]Cl The cation P(CH2OH)4+ features four-coordinate phosphorus, as is typical for phosphonium salts. THPC converts to tris(hydroxymethyl)phosphine upon treatment with aqueous sodium hydroxide: [P(CH2OH)4]Cl + NaOH → P(CH2OH)3 + H2O + H2C=O + NaCl Application in textiles THPC has industrial importance in the production of crease-resistant and flame-retardant finishes on cotton textiles and other cellulosic fabrics. A flame-retardant finish can be prepared from THPC by the Proban process, in which THPC is treated with urea. The urea condenses with the hydroxymethyl groups on THPC. The phosphonium structure is converted to a phosphine oxide as a result of this reaction. [P(CH2OH)4]Cl + NH2CONH2 → (HOCH2)2P(O)CH2NHC(O)NH2 + HCl + HCHO + H2 + H2O This reaction proceeds rapidly, forming insoluble high-molecular-weight polymers. The resulting product is applied to the fabrics in a "pad-dry" process. The treated material is then exposed to ammonia and ammonium hydroxide to produce fibers that are flame-retardant. THPC can condense with many other types of monomers in addition to urea. These monomers include amines, phenols and polybasic acids and anhydrides. Tris(hydroxymethyl)phosphine and its uses Tris(hydroxymethyl)phosphine, which is derived from tetrakis(hydroxymethyl)phosphonium chloride, is an intermediate in the preparation of the water-soluble ligand 1,3,5-triaza-7-phosphaadamantane (PTA). This conversion is achieved by treating hexamethylenetetramine with formaldehyde and tris(hydroxymethyl)phosphine. Tris(hydroxymethyl)phosphine can also be used to synthesize the heterocycle N-Boc-3-pyrroline by ring-closing metathesis using Grubbs' catalyst (bis(tricyclohexylphosphine)benzylidene ruthenium dichloride). N-Boc-diallylamine is treated with Grubbs' catalyst, followed by tris(hydroxymethyl)phosphine. The carbon–carbon double bonds undergo ring closure, releasing ethene gas and giving N-Boc-3-pyrroline. The hydroxymethyl groups on THPC undergo replacement reactions when THPC is treated with α,β-unsaturated nitriles, acids, amides and epoxides. For example, base induces condensation between THPC and acrylamide with displacement of the hydroxymethyl groups (Z = CONH2): [P(CH2OH)4]Cl + NaOH + 3CH2=CHZ → P(CH2CH2Z)3 + 4CH2O + H2O + NaCl Similar reactions occur when THPC is treated with acrylic acid; only one hydroxymethyl group is displaced, however. References Quaternary phosphonium compounds Flame retardants Organophosphorus compounds
Tetrakis(hydroxymethyl)phosphonium chloride
[ "Chemistry" ]
867
[ "Organophosphorus compounds", "Organic compounds", "Functional groups" ]
17,192,084
https://en.wikipedia.org/wiki/Whatman%20plc
Whatman plc is a Cytiva brand specialising in laboratory filtration products and separation technologies. Whatman products cover a range of laboratory applications that require filtration, sample collection (cards and kits), blotting, lateral flow components and flow-through assays and other general laboratory accessories. Formerly Whatman plc, the company was originally acquired in 2008 by GE Healthcare, which became Cytiva in April 2020. History Founder's innovation and impact The papermaker James Whatman the Elder (1702–1759) founded the Whatman papermaking enterprise in 1740 in Maidstone, Kent, England. He made revolutionary advances to the craft in England and is credited as the inventor of wove paper (or Vélin), an innovation used for high-quality art and printing. His son, James Whatman the Younger (1741–1798), further developed the company's techniques. At a time when the craft was based in smaller paper mills, Whatman innovations led to the large-scale and widespread industrialisation of paper manufacturing. John Baskerville (1707-1775), who needed paper that would take a light impression of the printing plate, approached Whatman; the resultant paper was used for the edition of Virgil's poetry, embellished with Baskerville's typography and designs. The earliest examples of wove paper, bearing his watermark, appeared after 1740. The Whatman business is credited with the invention of the wove wire mesh used to mould and align pulp fibres. This is the principal method used in the mass production of most modern paper. The Whatmans held a part interest in the establishment at Turkey Mill, near Maidstone, after 1740; this was wholly acquired through the elder Whatman's marriage to Ann Harris. "Handmade" paper bearing the Whatman's mark continued in production for special editions and art books until 2002. Acquisition On 4 February 2008 GE Healthcare, a unit of General Electric, acquired Whatman plc at 270p per share in cash for each Whatman share, valuing Whatman at approximately £363 million (approximately $713 million.) Last production at Maidstone (Springfield Mill) occurred on 17 June 2014. Key products and technologies The Whatman product range covers Laboratory filtration products: filter papers, membrane filters, syringe filters, syringeless filters, microbiology, microplates, and capsule filters Sample collection cards and kits: FTA, FTA Elute, and 903 ranges Blotting: blotting membranes, blotting papers, and equipment Components for lateral flow and flow-through assays: membranes for immunoassays, conjugate release, blood separators, absorbents, and sample pads General laboratory accessories: extraction thimbles, weighing papers, test and chromatography papers, lens-cleaning tissue, and Benchkote papers References Former General Electric subsidiaries Paper products Filters 1740 establishments in England
Whatman plc
[ "Chemistry", "Engineering" ]
607
[ "Chemical equipment", "Filtration", "Filters" ]